Presentation transcript: "Chemical Bonds and Compounds" (authorSTREAM upload by rangerblue, June 14, 2008)

Unit D, Chapter 2: Chemical Bonds and Compounds. By Jerry Mullins.

Principle: Matter changes form and moves from place to place. The properties of compounds depend on their atoms and chemical bonds.

Sections: 2.1 Elements Combine to Form Compounds. Compounds have different properties from the elements that make them. Atoms combine in predictable numbers.

Section 2.1 Objective: The student will: define key vocabulary; describe how compounds are made from combinations of atoms; explain how chemical formulas represent compounds; and model a compound in an experiment.

Warm-Up: Draw a diagram of a neutral carbon atom. A neutral carbon atom has six protons in its nucleus. On your diagram, label the nucleus and the electron cloud and indicate the total positive or negative charge on each. (T12 Overhead)

Vocabulary: chemical formula, subscript, compound, chemical bond, bond.

Explore Lab "Compounds": Question: How are compounds different from elements? Materials: carbon, water, sugar, test tubes, test-tube holder, candle. Procedure: Examine the lump of carbon, the beaker of water, and the sugar. Record your observations. Pour some sugar into a test tube and heat it over a candle for several minutes. Record your observations. What do you think? The sugar is made up of atoms of the same elements that are in the carbon and water. How are sugar, carbon, and water different from one another? Does heating the sugar give you any clue that sugar contains more than one element?

Compounds: A compound is a combination of two or more elements. What makes a compound different from a mixture is that the atoms of the elements in a compound are held together by chemical bonds. Chemical bonds can hold atoms together in large networks or in small groups (think of them as "glue"). Bonds help determine the properties of a compound. The proportions of the atoms are always fixed.

Compounds: Most elements do not exist by themselves; they readily combine with other elements in a predictable fashion.

Compounds: The properties of a compound depend not only on which atoms the compound contains, but also on how the atoms are arranged. Example: atoms of carbon (C, element 6) and hydrogen (H, element 1) can combine to form many thousands of different compounds, such as natural gas, components of automobile gasoline, the hard waxes in candles, and many plastics.

Compounds: Remember: the properties of compounds are often very different from the properties of the elements that make them. Another example: water is made from two atoms of hydrogen bonded to one atom of oxygen.
At room temperature, hydrogen (H) and oxygen (O) are both colorless, odorless gases, and they remain gases down to extremely low temperatures. Water (H2O) is a liquid at temperatures up to 100°C (212°F) and a solid below 0°C (32°F).

What melts the ice on our roads? Calcium chloride is a nonpoisonous white solid that melts the ice that forms on streets in the wintertime. This compound is made up of the elements calcium (Ca, element 20), a soft, silvery metallic solid, and chlorine (Cl, element 17), a greenish-yellow gas that is extremely reactive and poisonous to humans. Calcium + chlorine = calcium chloride.

Atoms combine in predictable numbers: A given compound always contains atoms of elements in a specific RATIO. Example: ammonia always has 3 hydrogen atoms for every 1 nitrogen atom, a 3:1 ratio of hydrogen to nitrogen. However, if we change that ratio we get hydrazoic acid, which also contains atoms of hydrogen and nitrogen but in a different ratio: 1 hydrogen atom to 3 nitrogen atoms (1:3).

Chemical Formulas: A chemical formula shows the kind and proportion of atoms of each element that occur in a particular compound. Remember that atoms of elements can be represented by their chemical symbols. A chemical formula therefore uses these chemical symbols to represent the atoms of the elements and their ratios in a chemical compound.

What is a "subscript"? Simply put, it is a number written to the right of a chemical symbol and slightly below it, used in writing chemical formulas. A subscript of "1" is never written; only "2" or greater.

Writing a chemical formula: Carbon dioxide, for example, consists of 1 atom of carbon attached by chemical bonds to 2 atoms of oxygen. This is how we would write it: Find the symbols for carbon (C) and oxygen (O) on the periodic table. Write these symbols side by side. To indicate that there are two oxygen atoms for every carbon atom, place the subscript "2" to the right of the oxygen symbol. Because there is only one carbon atom in carbon dioxide, you need no subscript for carbon. CO2 means a 1:2 ratio of carbon to oxygen.

Investigative Lab "Element Ratios": Question: How can you model a compound? Materials: nuts and bolts. Procedure: Collect a number of nuts and bolts. The nuts represent hydrogen atoms; the bolts represent carbon atoms. Connect the nuts to the bolts to model the compound methane. Methane contains four hydrogen atoms attached to one carbon atom. Make as many of these models as you can. Count the nuts and bolts left over. What do you think? What ratio of nuts to bolts did you use to make a model of a methane molecule? How many methane models did you make? Why couldn't you make more? Challenge: The compound ammonia has one nitrogen atom and three hydrogen atoms. How could you use the nuts and bolts to model this compound? (See the worked sketch below.)

What are the three main types of chemical bonds? Metallic, ionic, and covalent.

Section 2.2: Chemical Bonds Hold Compounds Together. Chemical bonds between atoms involve electrons. Atoms can transfer electrons. Atoms can share electrons. Chemical bonds give all materials their structures.
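The ratio arithmetic behind the "Element Ratios" lab above can be sketched in a few lines. This is an illustrative aside rather than part of the lab handout, and the starting counts of nuts and bolts are made-up example values.

```python
# Ratio arithmetic for the nuts-and-bolts methane models:
# nuts stand in for hydrogen atoms, bolts for carbon atoms, and each
# CH4 model needs 4 nuts and 1 bolt. The starting counts are arbitrary.

def count_models(nuts, bolts, h_per_model=4, c_per_model=1):
    """Return (complete models, leftover nuts, leftover bolts)."""
    models = min(nuts // h_per_model, bolts // c_per_model)
    return models, nuts - models * h_per_model, bolts - models * c_per_model

models, spare_nuts, spare_bolts = count_models(nuts=18, bolts=5)
print(models, spare_nuts, spare_bolts)  # 4 CH4 models, with 2 nuts and 1 bolt left over
```

For the ammonia challenge, the same function works with h_per_model=3 (three nuts per model), with the bolts now standing in for nitrogen atoms.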
Section 2.2 Objective: The student will: define key vocabulary; explain how electrons are involved in chemical bonding; describe the different types of chemical bonds; determine how chemical bonds affect structure; and observe how a crystal grows in an experiment.

Warm-Up: Match each definition to a term. (T12 Overhead)

Teacher Demo: Demonstrate how opposite charges attract. (TE 49)

Misconceptions: TE 48.

Vocabulary: ionic bond, covalent bond, molecule, polar covalent bond.

Explore Lab "Think About It": Question: How do you keep things together? (TE p. 47)

Electrons' role in forming compounds: The tendency of elements to combine and form compounds depends on the number and arrangement of electrons in their atoms. Atoms are most stable when their outermost energy level is filled.

Why do atoms naturally combine into compounds? Most atoms are not stable in their natural state. They tend to react (combine) with other atoms in order to become more stable (undergo chemical reactions). In chemical reactions, bonds are broken, atoms are rearranged, and new chemical bonds are formed.

Remember why chemical bonds form? Atoms form bonds in order to become more stable. According to the octet rule, atoms will form bonds by gaining, losing, or sharing valence electrons in order to obtain an octet (8 valence electrons).

Atoms transfer electrons: Ions form when atoms gain or lose electrons. Gaining electrons changes an atom into a negative ion; losing electrons changes an atom into a positive ion. Individual atoms do not form ions by themselves; ions are typically formed in pairs when one atom transfers one or more electrons to another. The periodic table can give us clues as to the type of ions atoms will form.

Metal Ions: All metals lose electrons to form positive ions (cations). Group 1 metals commonly lose only one electron to form ions with a single positive charge (Na+). Group 2 metals commonly lose two electrons to form ions with two positive charges (Ca2+). Transition metals (Groups 3-12) also form positive ions, but the number of electrons given away varies.

Nonmetal Ions: Nonmetals form ions by gaining electrons to form negative ions (anions). Group 17 nonmetals gain one electron to form ions with a 1- charge (Cl-). Group 16 nonmetals gain two electrons to form ions with a 2- charge (O2-).

Noble gases: Noble gases do not normally gain or lose electrons, and so do not normally form ions.

What are the three main types of chemical bonds? Ionic, covalent, and metallic.

Ionic Bonds: An ionic bond is the force of attraction between positive and negative ions. An ionic bond is formed when an electron is transferred from a metal atom to a nonmetal atom. Once the ions are created, they are drawn toward one another by electrical attraction. Because electrical forces act in all directions, each ion attracts every nearby ion of opposite charge.

Sodium chloride, an example of an ionic bond: Notice that both ions now have 8 valence electrons. Each positive ion is surrounded by six negative ions, and each negative ion is surrounded by six positive ions. This regular arrangement gives sodium chloride crystals their characteristic cubic shape (rock salt viewed through a magnifying glass).
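The group-number rules on the slides above can be summed up in a small lookup. This is an illustrative aside, not part of the presentation, and it covers only the main groups named on the slides.

```python
# Typical ion charge predicted from the periodic-table group rules above.
# Only the main groups named on the slides are covered; transition metals
# (Groups 3-12) vary, and noble gases (Group 18) normally form no ion.
TYPICAL_ION_CHARGE = {
    1: +1,   # Group 1 metals, e.g. Na+
    2: +2,   # Group 2 metals, e.g. Ca2+
    16: -2,  # Group 16 nonmetals, e.g. O2-
    17: -1,  # Group 17 nonmetals, e.g. Cl-
}

for group, charge in sorted(TYPICAL_ION_CHARGE.items()):
    kind = "loses" if charge > 0 else "gains"
    print(f"Group {group}: {kind} {abs(charge)} electron(s), typical ion charge {charge:+d}")
```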
Rules for naming ionic compounds: The name is based on the names of the ions the compound is made of. The name for a positive ion is the same as the name of the atom from which it is formed (ammonia, NH3; water, H2O). The name of a negative ion is formed by dropping the last part of the name of the atom and adding the suffix "-ide" (sodium chloride; hydrogen peroxide, H2O2).

Rules for naming ionic compounds: The name of an ionic compound always has the positive ion name first, followed by the name of the negative ion. Example: the chemical name of table salt is sodium chloride.

Example: Naming the chemical formula BaI2: take the name of the positive metal element, barium; take the name of the negative nonmetal element, iodine, and give it the ending "-ide": iodide; combine the two names: barium iodide. As a lab group, write out the names for these: KBr, MgF2. (See the short sketch of this naming rule below.)

Covalent Bonding: A covalent bond is formed when electrons are shared between two nonmetals. Neither atom gains nor loses an electron, so no ion is formed. The overlapping orbitals create the chemical bond. Covalent bonds are often represented with models: the electron cloud model and the ball-and-stick model. Notice that both atoms now have 8 valence electrons.

Determining the number of covalent bonds an atom can form: The number of covalent bonds an atom can form depends on the number of electrons it has available for sharing. Hydrogen and the halogen group can contribute only one electron to a covalent bond, so they can form only one covalent bond. Group 16 elements can form two covalent bonds. Group 15 elements can form three covalent bonds. Carbon and silicon, in Group 14, can form four bonds; for example, carbon forms four covalent bonds with four hydrogen atoms to produce CH4 (methane).

How do we represent covalent bonds? In a ball-and-stick model, the lines help to indicate the type of bond: single bond, double bond, or triple bond. A space-filling model helps to show the general shape of the bonded atoms and takes up less space.

Ball-and-stick diagrams of covalent bonds: double bond, triple bond.

What is a molecule? A molecule is a group of atoms held together by covalent bonds. A molecule can contain from two to many thousands of atoms. Most contain atoms of two or more elements: water (H2O), ammonia (NH3), methane (CH4). However, some molecules contain only one kind of atom; these elements exist as two-atom molecules: hydrogen (H2), nitrogen (N2), oxygen (O2), fluorine (F2), chlorine (Cl2), bromine (Br2), iodine (I2).

Polar covalent bonds: A polar covalent bond is a covalent bond in which the electrons are shared unequally. The word "polar" refers to anything that has two extremes, like a magnet. In a water molecule (H2O), the oxygen atom attracts electrons more strongly than the hydrogen atoms do: the oxygen nucleus has 8 protons, while a hydrogen nucleus has only 1 proton, so the oxygen atom pulls the shared electrons more strongly toward itself. The oxygen side of the molecule has a slightly negative charge, and the hydrogen side has a slightly positive charge.
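As promised after the BaI2 example above, here is a minimal sketch, not from the presentation, of the binary ionic naming rule; the small symbol-to-name tables are only enough for the slide's practice formulas.

```python
# Binary ionic-compound naming, as described on the slide: the positive
# (metal) ion keeps its element name, and the negative (nonmetal) ion takes
# the element root plus "-ide". The lookup tables below cover only the
# slide's practice examples (BaI2, KBr, MgF2, NaCl).
METAL_NAME = {"Ba": "barium", "K": "potassium", "Mg": "magnesium", "Na": "sodium"}
NONMETAL_IDE = {"I": "iodide", "Br": "bromide", "F": "fluoride", "Cl": "chloride"}

def name_binary_ionic(metal_symbol, nonmetal_symbol):
    """Positive ion name first, then the '-ide' name of the negative ion."""
    return f"{METAL_NAME[metal_symbol]} {NONMETAL_IDE[nonmetal_symbol]}"

print(name_binary_ionic("Ba", "I"))   # barium iodide
print(name_binary_ionic("K", "Br"))   # potassium bromide
print(name_binary_ionic("Mg", "F"))   # magnesium fluoride
```

Applying the same rule by hand gives the answers to the practice problems: KBr is potassium bromide and MgF2 is magnesium fluoride.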
Ionic COMPOUNDS: Chemical bonds give all materials their structure. Look around you at the different properties. Ionic compounds have regular crystal structures, such as NaCl (salt). Crystals bend rays of light, metals shine, and medications attack certain diseases in the body because their atoms are arranged in specific ways. One consequence of such rigid structures is that, when enough force is applied to a crystal, it shatters rather than bends.

Ionic COMPOUNDS: Ionic compounds dissolve easily in water, separating into positive ions and negative ions. The separated ions can move freely, so solutions of ionic compounds are good conductors of electricity. Your body uses ionic solutions to transmit impulses between nerve and muscle cells. Exercise rapidly depletes the body of these ionic solutions, so a good sports drink contains ionic compounds like potassium chloride.

Investigative Lab "Crystals": Question: How does a crystal grow? Materials: crystal-growing substance, 2 glass beakers, hot tap water, stirring stick, cotton string, paper clip, pencil, hand lens. Procedure: Add a small amount of the crystal-growing substance to a beaker of hot tap water. Stir until it mixes completely with the water. Keep adding the substance and stirring until no more will dissolve. Pour the mixture into another beaker. Tie one end of the string to the paper clip and the other end to a pencil. Lower the paper clip into the solution and lay the pencil across the top of the beaker; the paper clip should hang at about the middle of the beaker. Use a hand lens to observe the paper clip several times a week for three weeks. What do you think? Describe the crystals you see forming on the paper clip. Do the crystals look different as they get larger? Compare your crystals to those of other groups. What similarities do you see among them? What differences?

Covalent COMPOUNDS: Covalent compounds exist as individual molecules. Chemical bonds give each molecule a specific three-dimensional shape called its molecular structure. Molecular structure can influence everything from how a substance feels to the touch to how well it interacts with other substances.

Basic structures of covalent compounds: simple linear shape (I2), bent shape (H2O), pyramid shape (NH3, ammonia), and more complex shapes (CH4, methane).

Covalent COMPOUNDS: Covalent compounds have almost the exact opposite properties of ionic compounds. Because the atoms are organized as individual molecules, melting or boiling a covalent compound does not require breaking chemical bonds, so covalent compounds often melt and boil at lower temperatures. The molecules stay together when dissolved in water, so covalent compounds are poor conductors of electricity. Table sugar is an example.

Section 2.3: Substances' Properties Depend on Their Bonds. Metals have unique bonds. Ionic and covalent bonds give compounds certain properties. Bonds can make the same element look different.

Section 2.3 Objective: The student will: define key vocabulary; describe how metal atoms form chemical bonds with one another; analyze how ionic and covalent bonds influence substances' properties; and identify different forms of the same element.

Warm-Up: Decide whether these statements are true. If they are not true, correct them. (T13 Overhead)

Vocabulary: metallic bond.

Explore Lab "Bonds in Metals": Question: What objects conduct electricity?
Materials: masking tape, 3 pieces of copper wire (15 cm), D-cell battery, light bulb and holder, objects to test (paper clip, penny, pencil, eraser, etc.). Procedure: Tape one end of a copper wire to one terminal of the battery. Attach the other end of the copper wire to the light bulb holder. Attach a second wire to the holder. Tape the third wire to the other terminal of the battery. Touch the ends of both wires to objects around the classroom. Notice whether or not the bulb lights. What do you think? Which objects make the bulb light? How are these objects similar?

Metallic Bond: A metallic bond is formed when metal atoms share all of their valence electrons to form an "electron sea" (example: magnesium).

Metallic Bond: The attraction between the loose electrons and the positively charged metal cations creates the chemical bond (example: magnesium).

Properties of metallic bonds: The properties of metals are determined by metallic bonds. One common property is that metals are good conductors of electric current, because their electrons are able to flow through the material and carry the current. The free movement of electrons also means that metals are good conductors of heat. Metals typically have high melting points (mercury is the exception) and are solid at room temperature. They are easily shaped by pounding and can be drawn into wire.

Chapter Investigative Lab "Chemical Bonds": TE/PE pp. 60-61.
http://www.authorstream.com/Presentation/rangerblue-71859-chemical-bonds-compounds-education-ppt-powerpoint/
The Protestant Reformation was a reform movement in Europe that began in 1517, though its roots lie further back in time. It began with Martin Luther and may be considered to have ended with the Peace of Westphalia in 1648. The movement began as an attempt to reform the Catholic Church. Many western Catholics were troubled by what they saw as false doctrines and malpractices within the Church, particularly involving the teaching and sale of indulgences. Another major contention was the practice of buying and selling church positions (simony) and what was seen at the time as considerable corruption within the Church's hierarchy. This corruption was seen by many at the time as systemic, even reaching the position of the Pope. Martin Luther's spiritual predecessors included men such as John Wycliffe and Jan Hus, who had attempted to reform the church along similar lines, though their efforts had been largely unsuccessful. The Reformation can be said to have begun in earnest on October 31, 1517, in Wittenberg, Saxony (in present-day Germany). There, Luther nailed his Ninety-Five Theses to the door of the All Saints' Church, which served as a notice board for university-related announcements. These were points for debate that criticized the Church and the Pope. The most controversial points centered on the practice of selling indulgences and the Church's policy on purgatory. Other reformers, such as Ulrich Zwingli, soon followed Luther's lead. Church beliefs and practices under attack by Protestant reformers included purgatory, particular judgment, devotion to Mary (Mariology), the intercession of and devotion to the saints, most of the sacraments, the mandatory celibacy requirement of its clergy (including monasticism), and the authority of the Pope. The reform movement soon split along certain doctrinal lines. Spiritual disagreements between Luther and Zwingli, and later between Luther and John Calvin, led to the emergence of rival Protestant churches. The most important denominations to emerge directly from the Reformation were the Lutherans, and the Reformed/Calvinists/Presbyterians. The process of reform had decidedly different causes and effects in other countries. In England, where it gave rise to Anglicanism, the period became known as the English Reformation. Subsequent Protestant denominations generally trace their roots back to the initial reforming movements. The reformers also accelerated the Catholic or Counter Reformation within the Catholic Church. The Protestant Reformation is also referred to as the German Reformation, Protestant Revolution, Protestant Revolt, and, in Germany, as the Lutheran Reformation. The Council of Constance confirmed and strengthened the traditional medieval conception of Church and Empire. It did not address the national tensions, or the theological tensions which had been stirred up during the previous century. The council could not prevent schism and the Hussite Wars in Bohemia. Historical upheaval usually yields much new thinking as to how society should be organized. This was the case leading up to the Protestant Reformation. Following the breakdown of monastic institutions and scholasticism in late medieval Europe, accentuated by the "Babylonian Captivity" of the Avignon Papacy, the Great Schism, and the failure of the Conciliar movement, the sixteenth century saw the fomenting of a great cultural debate about religious reforms and later fundamental religious values (See German mysticism). 
Historians would generally assume that the failure to reform (too many vested interests, lack of coordination in the reforming coalition) would eventually lead to a greater upheaval or even revolution, since the system must eventually be adjusted or disintegrate, and the failure of the Conciliar movement helped lead to the Protestant Reformation in Europe. These frustrated reformist movements ranged from nominalism, devotio moderna (modern devotion), to humanism occurring in conjunction with economic, political and demographic forces that contributed to a growing disaffection with the wealth and power of the elite clergy, sensitizing the population to the financial and moral corruption of the secular Renaissance church. The outcome of the Black Death encouraged a radical reorganization of the economy, and eventually of European society. In the emerging urban centers, however, the calamities of the fourteenth and early fifteenth century, and the resultant labor shortages, provided a strong impetus for economic diversification and technological innovations. Following the Black Death, the initial loss of life due to famine, plague, and pestilence contributed to an intensification of capital accumulation in the urban areas, and thus a stimulus to trade, industry, and burgeoning urban growth in fields as diverse as banking (the Fugger banking family in Augsburg and the Medici family of Florence being the most prominent); textiles, armaments, especially stimulated by the Hundred Years' War, and mining of iron ore due, in large part, to the booming armaments industry. Accumulation of surplus, competitive overproduction, and heightened competition to maximize economic advantage, contributed to civil war, aggressive militarism, and thus to centralization. As a direct result of the move toward centralization, leaders like Louis XI of France (1461–1483), the "spider king", sought to remove all constitutional restrictions on the exercise of their authority. In England, France, and Spain the move toward centralization begun in the thirteenth century was carried to a successful conclusion. But as recovery and prosperity progressed, enabling the population to reach its former levels in the late 15th and 16th centuries, the combination of both a newly-abundant labor supply as well as improved productivity, were 'mixed blessings' for many segments of Western European society. Despite tradition, landlords started the move to exclude peasants from "common lands". With trade stimulated, landowners increasingly moved away from the manorial economy. Woolen manufacturing greatly expanded in France, Germany, and the Netherlands and new textile industries began to develop. The invention of movable type would lead to the Protestant zeal for translating the Bible and getting it into the hands of the laity. This would advance the culture of Biblical literacy. The "humanism" of the Renaissance period stimulated unprecedented academic ferment, and a concern for academic freedom. Ongoing, earnest theoretical debates occurred in the universities about the nature of the church, and the source and extent of the authority of the papacy, of councils, and of princes. The protests against Rome began in earnest when Martin Luther, an Augustinian monk and professor at the university of Wittenberg, called in 1517 for a reopening of the debate on the sale of indulgences. Luther's dissent marked a sudden outbreak of a new and irresistible force of discontent which had been pushed underground but not resolved. 
The quick spread of discontent occurred to a large degree because of the printing press and the resulting swift movement of both ideas and documents, including the 95 Theses. Information was also widely disseminated in manuscript form, as well as by cheap prints and woodcuts amongst the poorer sections of society. Parallel to events in Germany, a movement began in Switzerland under the leadership of Ulrich Zwingli. These two movements quickly agreed on most issues, as the recently introduced printing press spread ideas rapidly from place to place, but some unresolved differences kept them separate. Some followers of Zwingli believed that the Reformation was too conservative, and moved independently toward more radical positions, some of which survive among modern day Anabaptists. Other Protestant movements grew up along lines of mysticism or humanism (cf. Erasmus), sometimes breaking from Rome or from the Protestants, or forming outside of the churches. After this first stage of the Reformation, following the excommunication of Luther and condemnation of the Reformation by the Pope, the work and writings of John Calvin were influential in establishing a loose consensus among various groups in Switzerland, Scotland, Hungary, Germany and elsewhere. The Reformation foundations engaged with Augustinianism. Both Luther and Calvin thought along lines linked with the theological teachings of Augustine of Hippo. The Augustinianism of the Reformers struggled against Pelagianism, a heresy that they perceived in the Catholic church of their day. In the course of this religious upheaval, the Peasants' War of 1524–1525 swept through the Bavarian, Thuringian and Swabian principalities, leaving scores of Catholics slaughtered at the hands of Protestant bands, including the Black Company of Florian Geier,a knight from Giebelstadt who joined the peasants in the general outrage against the Catholic hierarchy. Even though Luther and Calvin had very similar theological teachings, the relationship between their followers turned quickly to conflict. Frenchman Michel de Montaigne told a story of a Lutheran pastor who once claimed that he would rather celebrate the mass of Rome than participate in a Calvinist service. The political separation of the Church of England from Rome under Henry VIII, beginning in 1529 and completed in 1536, brought England alongside this broad Reformed movement. However, religious changes in the English national church proceeded more conservatively than elsewhere in Europe. Reformers in the Church of England alternated, for centuries, between sympathies for Catholic traditions and Protestantism, progressively forging a stable compromise between adherence to ancient tradition and Protestantism, which is now sometimes called the via media. Martin Luther, John Calvin, and Ulrich Zwingli are considered Magisterial Reformers because their reform movements were supported by ruling authorities or "magistrates". Frederick the Wise not only supported Luther, who was a professor at the university he founded, but also protected him by hiding Luther in Wartburg Castle in Eisenach. Zwingli and Calvin were supported by the city councils in Zurich and Geneva. Since the term "magister" also means "teacher", the Magisterial Reformation is also characterized by an emphasis on the authority of a teacher. This is made evident in the prominence of Luther, Calvin, and Zwingli as leaders of the reform movements in their respective areas of ministry. 
Because of their authority, they were often criticized by Radical Reformers as being too much like the Roman Popes. For example, Radical Reformer Andreas von Bodenstein Karlstadt referred to the Wittenberg theologians as the "new papists". The major individualistic reform movements that revolted against medieval scholasticism and the institutions that underpinned it were: humanism, devotionalism, (see for example, the Brothers of the Common Life and Jan Standonck) and the observatine tradition. In Germany, "the modern way" or devotionalism caught on in the universities, requiring a redefinition of God, who was no longer a rational governing principle but an arbitrary, unknowable will that cannot be limited. God was now a ruler, and religion would be more fervent and emotional. Thus, the ensuing revival of Augustinian theology, stating that man cannot be saved by his own efforts but only by the grace of God, would erode the legitimacy of the rigid institutions of the church meant to provide a channel for man to do good works and get into heaven. Humanism, however, was more of an educational reform movement with origins in the Renaissance's revival of classical learning and thought. A revolt against Aristotelian logic, it placed great emphasis on reforming individuals through eloquence as opposed to reason. The European Renaissance laid the foundation for the Northern humanists in its reinforcement of the traditional use of Latin as the great unifying cultural language. The polarization of the scholarly community in Germany over the Reuchlin (1455–1522) affair, attacked by the elite clergy for his study of Hebrew and Jewish texts, brought Luther fully in line with the humanist educational reforms who favored academic freedom. At the same time, the impact of the Renaissance would soon backfire against traditional Catholicism, ushering in an age of reform and a repudiation of much of medieval Latin tradition. Led by Erasmus, the humanists condemned various forms of corruption within the Church, forms of corruption that might not have been any more prevalent than during the medieval zenith of the church. Erasmus held that true religion was a matter of inward devotion rather than outward symbols of ceremony and ritual. Going back to ancient texts, scriptures, from this viewpoint the greatest culmination of the ancient tradition, are the guides to life. Favoring moral reforms and de-emphasizing didactic ritual, Erasmus laid the groundwork for Luther. Humanism's intellectual anti-clericalism would profoundly influence Luther. The increasingly well-educated middle sectors of Northern Germany, namely the educated community and city dwellers would turn to Luther's rethinking of religion to conceptualize their discontent according to the cultural medium of the era. The great rise of the burghers, the desire to run their new businesses free of institutional barriers or outmoded cultural practices, contributed to the appeal of humanist individualism. To many, papal institutions were rigid, especially regarding their views on just price and usury. In the North, burghers and monarchs were united in their frustration for not paying any taxes to the nation, but collecting taxes from subjects and sending the revenues disproportionately to the Pope in Italy. These trends heightened demands for significant reform and revitalization along with anticlericalism. New thinkers began noticing the divide between the priests and the flock. The clergy, for instance, were not always well-educated. 
Parish priests often did not know Latin and rural parishes often did not have great opportunities for theological education for many at the time. Due to its large landholdings and institutional rigidity, a rigidity to which the excessively large ranks of the clergy contributed, many bishops studied law, not theology, being relegated to the role of property managers trained in administration. While priests emphasized works of religiosity, the respectability of the church began diminishing, especially among well educated urbanites, and especially considering the recent strings of political humiliation, such as the apprehension of Pope Boniface VIII by Philip IV of France, the "Babylonian Captivity", the Great Schism, and the failure of Conciliar reformism. In a sense, the campaign by Pope Leo X to raise funds to rebuild St. Peter's Basilica was too much of an excess by the secular Renaissance church, prompting high-pressure indulgences that rendered the clergy establishments even more disliked in the cities. Luther borrowed from the humanists the sense of individualism, that each man can be his own priest (an attitude likely to find popular support considering the rapid rise of an educated urban middle class in the North), and that the only true authority is the Bible, echoing the reformist zeal of the Conciliar movement and opening up the debate once again on limiting the authority of the Pope. While his ideas called for the sharp redefinition of the dividing lines between the laity and the clergy, his ideas were still, by this point, reformist in nature. Luther's contention that the human will was incapable of following good, however, resulted in his rift with Erasmus finally distinguishing Lutheran reformism from humanism. While there were some parallels between certain movements within humanism and teachings later common among the Reformers, the Reformation's principal arguments were based on "direct" Biblical interpretation. The Catholic Church had for several centuries been the main purveyor in Europe of non-secular humanism: the Neoplatonism of the scholastics and the neo-Aristotelianism of Thomas Aquinas and his followers had made humanism a part of Church dogma. This was of course due to the Catholic Church's use of historic, religious tradition (including the Canonization of Saints) in the forming of its liturgy. Thus, when Luther and the other reformers adopted the standard of sola scriptura, making the Bible the sole measure of theology, they made the Reformation a reaction against the humanism of that time. Previously, the Scriptures had been seen by some as the pinnacle of a hierarchy of sacred texts, and on par with the oral traditions of the Church. The Protestants emphasized such concepts as justification by "faith alone" (not faith and good works or infused righteousness), "Scripture alone" (the Bible as the sole inspired rule of faith, rather than the Bible plus tradition), "the priesthood of all believers" (eschewing the special authority and power of the Catholic sacramental priesthood), that all people are individually responsible for their status before God such that talk of mediation through any but Christ alone is unbiblical. Because they saw these teachings as stemming from the Bible, they encouraged publication of the Bible in the common language and universal education. Part of the revolt was an iconoclasm, seen in Huldrych Zwingli, but particularly amongst the radical reformers. 
Iconoclastic riots took place in Zürich (in 1523), Copenhagen (1530), Münster (1534), Geneva (1535), Augsburg (1537), and Scotland (1559). John Calvin took a more moderate stance to Zwingli and the Anabaptists, but preferred a more simple aesthetic, to the excesses of the Middle Ages. The Reformation did not happen in a vacuum, as there were movements for centuries calling for a return to Biblical teachings, the most famous being from Wycliffe, Jan Hus, and the Waldensians. It is no surprise that their teachings were later found in the Reformation, as they imbibed from the same source. While it is true that there were calls for religious, doctrinal, and moral reformation within and without the institutional church for centuries, apparently it was the invention of the printing press which allowed quick broadcasting of ideas, the rise in nationalistic fervor, the increasing availability of the Bible to the public, and popular discontent at the moral corruption in the church to coalesce in support for a reformation as never before. Many unskilled laborers had been squeezed from the countryside into the cities and suffered from the over-crowding and high prices that can follow such a quick and voluminous influx of new citizens. Discontented and morally righteous, the lower classes embraced the most radical theological options opened up by the religious revolution and were ready to follow leaders rising within their ranks, who urged them to band together against immorality and decadence. The Drummer of Niklashausen and later the Anabaptist preachers railed against landowners who took control of increasing areas, kings centralizing control, and princes looking for increased tax revenues to fund their growing states. The Anabaptists and other radical leaders were condemned by the Lutherans and nationalistic Germans. Nearly every country in Europe saw a flare-up of failed peasant revolts motivated by religious concerns and executed according to religious doctrine. The Hungarian Peasants' War (1514), the revolt against Charles V in Spain (1520), the discontent of the lower classes in France with the excessive taxes levied by Louis XI, and the secret associations which prepared the way for the great Peasants' War of the lower classes in Germany (1524), show that discontent was not confined to any one country in Europe. A Lutheran understanding of the Eucharist is distinct from the Reformed doctrine of the Eucharist in that Lutherans affirm a real, physical presence of Christ in the Eucharist (as opposed to either a "spiritual presence" or a "memorial") and Lutherans affirm that the presence of Christ does not depend on the faith of the recipient; the repentant receive Christ in the Eucharist worthily, the unrepentant who receive the Eucharist risk the wrath of Christ. Luther, along with his colleague Philipp Melanchthon, emphasized this point in his plea for the Reformation at the Reichstag in 1529 amid charges of heresy. But the changes he proposed were of such a fundamental nature that by their own logic they would automatically overthrow the old order; neither the Emperor nor the Church could possibly accept them, as Luther well knew. As was only to be expected, the edict by the Diet of Worms (1521) prohibited all innovations. 
Meanwhile, in these efforts to retain the guise of a Catholic reformer as opposed to a heretical revolutionary, and to appeal to German princes with his religious condemnation of the peasant revolts backed up by the Doctrine of the Two Kingdoms, Luther's growing conservatism would provoke more radical reformers. At a religious conference with the Zwinglians in 1529, Melanchthon joined with Luther in opposing a union with Zwingli. There would finally be a schism in the reform movement due to Luther's belief in real presence—the real (as opposed to symbolic) presence of Christ at the Eucharist. His original intention was not schism, but with the Reichstag of Augsburg (1530) and its rejection of the Lutheran "Augsburg Confession", a separate Lutheran church finally emerged. In a sense, Luther would take theology further in its deviation from established Catholic dogma, forcing a rift between the humanist Erasmus and Luther. Similarly, Zwingli would further repudiate ritualism, and break with the increasingly conservative Luther. Aside from the enclosing of the lower classes, the middle sectors of Northern Germany, namely the educated community and city dwellers, would turn to religion to conceptualize their discontent according to the cultural medium of the era. The great rise of the burghers, the desire to run their new businesses free of institutional barriers or outmoded cultural practices contributed to the appeal of individualism. To many, papal institutions were rigid, especially regarding their views on just price and usury. In the North, burghers and monarchs were united in their frustration for not paying any taxes to the nation, but collecting taxes from subjects and sending the revenues disproportionately to Italy. In Northern Europe Luther appealed to the growing national consciousness of the German states because he denounced the Pope for involvement in politics as well as religion. Moreover, he backed the nobility, which was now justified to crush the Great Peasant Revolt of 1525 and to confiscate church property by Luther's Doctrine of the Two Kingdoms. This explains the attraction of some territorial princes to Lutheranism, especially its Doctrine of the Two Kingdoms. However, the Elector of Brandenburg, Joachim I, blamed Lutheranism for the revolt and so did others. In Brandenburg, it was only under his successor Joachim II that Lutheranism was established, and the old religion was not formally extinct in Brandenburg until the death of the last Catholic bishop there, Georg von Blumenthal, who was Bishop of Lebus and sovereign Prince-Bishop of Ratzeburg. With the church subordinate to and the agent of civil authority and peasant rebellions condemned on strict religious terms, Lutheranism and German nationalist sentiment were ideally suited to coincide. Though Charles V fought the Reformation, it is no coincidence either that the reign of his nationalistic predecessor Maximilian I saw the beginning of the Reformation. While the centralized states of western Europe had reached accords with the Vatican permitting them to draw on the rich property of the church for government expenditures, enabling them to form state churches that were greatly autonomous of Rome, similar moves on behalf of the Reich were unsuccessful so long as princes and prince bishops fought reforms to drop the pretension of the secular universal empire. In Sweden the Reformation was spearheaded by Gustav Vasa, elected king in 1523. 
Friction with the pope over the latter's interference in Swedish ecclesiastical affairs led to the discontinuance of any official connection between Sweden and the papacy from 1523. Four years later, at the Diet of Västerås, the king succeeded in forcing the diet to accept his dominion over the national church. The king was given possession of all church property, church appointments required royal approval, the clergy were subject to the civil law, and the "pure Word of God" was to be preached in the churches and taught in the schools—effectively granting official sanction to Lutheran ideas. Under the reign of Frederick I (1523–33), Denmark remained officially Catholic. But though Frederick initially pledged to persecute Lutherans, he soon adopted a policy of protecting Lutheran preachers and reformers, of whom the most famous was Hans Tausen. During his reign, Lutheranism made significant inroads among the Danish population. Frederick's son, Christian, was openly Lutheran, which prevented his election to the throne upon his father's death. However, after his victory in the civil war that followed, in 1537 he became Christian III and began a reformation of the official state church. In England, the Reformation followed a different course than elsewhere in Europe. There had long been a strong strain of anti-clericalism, and England had already given rise to the Lollard movement of John Wycliffe, which played an important part in inspiring the Hussites in Bohemia. By the 1520s, however, the Lollards were not an active force, or, at least, certainly not a mass movement. The different character of the English Reformation came rather from the fact that it was driven initially by the political necessities of Henry VIII. Henry had once been a sincere Catholic and had even authored a book strongly criticizing Luther, but he later found it expedient and profitable to break with the Papacy. His wife, Catherine of Aragon, bore him only a single child, Mary. As England had recently gone through a lengthy dynastic conflict (see Wars of the Roses), Henry feared that his lack of a male heir might jeopardize his descendants' claim to the throne. However, Pope Clement VII, concentrating more on Charles V's "sack of Rome", denied his request for an annulment. Had Clement granted the annulment and thereby admitted that his predecessor, Julius II, had erred, Clement would have given support to the Lutheran assertion that Popes substituted their own judgement for the will of God. King Henry decided to remove the Church of England from the authority of Rome. In 1534, the Act of Supremacy made Henry the Supreme Head of the Church of England. Between 1535 and 1540, under Thomas Cromwell, the policy known as the Dissolution of the Monasteries was put into effect. The veneration of some saints, certain pilgrimages and some pilgrim shrines were also attacked. Huge amounts of church land and property passed into the hands of the crown and ultimately into those of the nobility and gentry. The vested interest thus created made for a powerful force in support of the dissolutions. There were some notable opponents to the Henrician Reformation, such as Thomas More and Bishop John Fisher, who were executed for their opposition. There was also a growing party of reformers who were imbued with the Zwinglian and Calvinistic doctrines now current on the Continent.
When Henry died he was succeeded by his Protestant son Edward VI, who, through his empowered councilors (with the King being only nine years old at his succession and not yet sixteen at his death) the Duke of Somerset and the Duke of Northumberland, ordered the destruction of images in churches, and the closing of the chantries. Under Edward VI the reform of the Church of England was established unequivocally in doctrinal terms. Yet, at a popular level, religion in England was still in a state of flux. Following a brief Roman Catholic restoration during the reign of Mary 1553–1558, a loose consensus developed during the reign of Elizabeth I, though this point is one of considerable debate among historians. Yet it is the so-called "Elizabethan Religious Settlement" to which the origins of Anglicanism are traditionally ascribed. The compromise was uneasy and was capable of veering between extreme Calvinism on the one hand and Catholicism on the other, but compared to the bloody and chaotic state of affairs in contemporary France, it was relatively successful until the Puritan Revolution or English Civil War in the seventeenth century. The success of the Counter-Reformation on the Continent and the growth of a Puritan party dedicated to further Protestant reform polarized the Elizabethan Age, although it was not until the 1640s that England underwent religious strife comparable to that which its neighbours had suffered some generations before. The early Puritan movement (late 16th century-17th century) was Reformed or Calvinist and was a movement for reform in the Church of England. Its origins lay in the discontent with the Elizabethan Religious Settlement. The desire was for the Church of England to resemble more closely the Protestant churches of Europe, especially Geneva. The Puritans objected to ornaments and ritual in the churches as idolatrous (vestments, surplices, organs, genuflection), which they castigated as "popish pomp and rags". (See Vestments controversy.) They also objected to ecclesiastical courts. They refused to endorse completely all of the ritual directions and formulas of the Book of Common Prayer; the imposition of its liturgical order by legal force and inspection sharpened Puritanism into a definite opposition movement. The Reformation Parliament of 1560, which repudiated the pope's authority, forbade the celebration of the mass and approved a Protestant Confession of Faith, was made possible by a revolution against French hegemony under the regime of the regent Mary of Guise, who had governed Scotland in the name of her absent daughter Mary Queen of Scots (then also Queen of France). Harsh persecution of Protestants by the Spanish government of Phillip II contributed to a desire for independence in the provinces, which led to the Eighty Years' War and eventually, the separation of the largely Protestant Dutch Republic from the Catholic-dominated Southern Netherlands, the present-day Belgium. In the more independent northwest the rulers and priests, protected now by the Habsburg Monarchy which had taken the field to fight the Turks, defended the old Catholic faith. They dragged the Protestants to prison and the stake wherever they could. Such strong measures only fanned the flames of protest, however. Leaders of the Protestants included Matthias Biro Devai, Michael Sztarai, and Stephen Kis Szegedi. 
Protestants likely formed a majority of Hungary's population at the close of the sixteenth century, but Counter-Reformation efforts in the seventeenth century reconverted a majority of the kingdom to Catholicism. A significant Protestant minority remained, most of it adhering to the Calvinist faith. Though he was not personally interested in religious reform, Francis I of France (1515–47) initially maintained an attitude of tolerance, arising from his interest in the humanist movement. This changed in 1534 with the Affair of the Placards. In this incident, Protestants denounced the mass in placards that appeared across France, even reaching the royal apartments. The issue of religious faith having been thrown into the arena of politics, Francis was prompted to view the movement as a threat to the kingdom's stability. This led to the first major phase of anti-Protestant persecution in France, in which the Chambre Ardente ("Burning Chamber") was established within the Parlement of Paris to handle the rise in prosecutions for heresy. Several thousand French Protestants fled the country during this time, most notably John Calvin, who settled in Geneva. Calvin continued to take an interest in the religious affairs of his native land and, from his base in Geneva, beyond the reach of the French king, regularly trained pastors to lead congregations in France. Despite heavy persecution by Henry II, the Reformed Church of France, largely Calvinist in direction, made steady progress across large sections of the nation, in the urban bourgeoisie and parts of the aristocracy, appealing to people alienated by the obduracy and the complacency of the Catholic establishment. French Protestantism, though its appeal increased under persecution, came to acquire a distinctly political character, made all the more obvious by the noble conversions of the 1550s. This had the effect of creating the preconditions for a series of destructive and intermittent conflicts, known as the Wars of Religion. The civil wars were helped along by the sudden death of Henry II in 1559, which saw the beginning of a prolonged period of weakness for the French crown. Atrocity and outrage became the defining characteristic of the time, illustrated at its most intense in the St. Bartholomew's Day massacre of August 1572, when between 30,000 and 100,000 Huguenots were killed across France. The wars only concluded when Henry IV, himself a former Huguenot, issued the Edict of Nantes, promising official toleration of the Protestant minority, but under highly restricted conditions. Catholicism remained the official state religion, and the fortunes of French Protestants gradually declined over the next century, culminating in Louis XIV's Edict of Fontainebleau—which revoked the Edict of Nantes and made Catholicism the sole legal religion of France. In response to the Edict of Fontainebleau, Frederick William of Brandenburg declared the Edict of Potsdam, giving free passage to French Huguenot refugees, and tax-free status to them for 10 years. The Reformation led to a series of religious wars that culminated in the Thirty Years War. From 1618 to 1648 the Catholic Habsburgs and their allies fought against the Protestant princes of Germany, supported by Denmark and Sweden. The Habsburgs, who ruled Spain, Austria, the Spanish Netherlands and most of Germany and Italy, were the staunchest defenders of the Catholic Church.
The Reformation Era came to a close when Catholic France allied herself, first in secret and later on the battlefields, with the Protestants against the Habsburgs. For the first time since the days of Luther, political and national convictions again outweighed religious convictions in Europe. Following the Peace of Westphalia, the major denominations now lived in relative peace on the continent. The treaty also effectively ended the Pope's pan-European political power. Fully aware of the loss, Pope Innocent X declared the treaty "null, void, invalid, iniquitous, unjust, damnable, reprobate, inane, empty of meaning and effect for all times." European sovereigns, Catholic and Protestant alike, ignored his verdict.
http://www.reference.com/browse/Protestant%20Reformation
The money system used in Victorian England had existed for several hundred years. It wasn't until 1971 that the currency was decimalized into 100 smaller units, allowing precise division into halves, quarters, fifths, tenths, twentieths, twenty-fifths, and fiftieths. Until that time, however, the British pound sterling consisted of 240 parts, allowing division into halves, thirds, quarters, fifths, sixths, eighths, tenths, twelfths, fifteenths, sixteenths, twentieths, twenty-fourths, thirtieths, fortieths, forty-eighths, sixtieths, eightieths, and one-hundred-and-twentieths.

The symbols used to express Victorian money were £ for pounds, s for shillings, and d for pence. Written in descending order, an amount is simple: £2-4s-6d. If the amount was below a pound it might be written as 4/6, 4s-6d, or 10/ (ten shillings). The amount 4/6 would be pronounced "four and six". A shilling was also called a "bob", but only for whole shillings. Another monetary expression is the "guinea", which was £1-1s-0d, or £1.05 in decimal money. Tradesmen were paid in pounds, but gentlemen, such as artists, were paid in guineas. In the legal profession, a barrister was paid in guineas, kept the pounds, and gave his male clerks (a "Bob Cratchit") the shillings. A guinea could also be divided into many different amounts; a third of a guinea was exactly seven shillings.

In 1817, coins were minted in gold and silver. A gold coin worth £1 was also called a "sovereign"; the half sovereign, worth ten shillings, was also gold. The "crown" was a silver coin worth 5s; the half-crown was worth 2/6, or 1/8 of a pound. The shilling was also silver, as were the sixpence, threepence, and fourpence (also known as a "groat"). Half-groats and silver pennies were not in general circulation during the Victorian era; however, they were minted for a tradition called Maundy Money, whereby the monarch would give poor people in the parish (equal in number to the monarch's age in years) a groat, threepence, half-groat, and penny. Smaller-valued coins, originally made of copper, were changed to bronze after 1860. With the penny came the halfpenny (sometimes referred to as the ha'penny) and the farthing, a quarter of a penny. One can well imagine that purchasing goods with currency broken down into so many "odd" values could prove a bit cumbersome, and one needed to be very good at math, especially if one didn't want to be cheated.

One pound (£) = 20 shillings (s)
One shilling = 12 pence (d)
One penny = two halfpennies or four farthings
One guinea = 21 shillings
Gold sovereign = one pound
Half crown = 2s 6d

Moving towards a system of decimalization, the Victorians did produce a new two-shilling coin in 1849 called the "florin". This coin was minted until 1968 and then redesigned to form the new ten pence coin. The half-crown was taken out of circulation in 1970. Guineas had disappeared early in the period, not being minted after 1813.

Banknotes (paper money): Paper money was first issued by the Bank of England in the 1690s, but it was not accepted by many, and being hand-signed did not encourage confidence in this form of currency. As the result of an economic crisis in 1797, the Bank stopped making payments in coin for sums of more than £1, thus increasing the quantity and circulation of banknotes. These lasted until 1828, when the Bank began issuing £5 notes. In 1853 the Bank began to print notes with printed signatures, eliminating the need for hand-written ones.

The change to decimalized coinage was made on February 15, 1971. The pound was then divided into 100 new pence, each worth 2.4 old pence.
Ten and five pence coins were phased in from 1968, and the fifty pence followed in October 1969, replacing the ten shilling note.
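The old relationships lend themselves to simple arithmetic. The following is a minimal illustrative sketch in Python, not part of the original page; the function names are invented, and the code simply encodes the values stated above (240 old pence to the pound, 12 pence to the shilling, 21 shillings to the guinea, and the 1971 rate of 2.4 old pence to one new penny).

```python
def to_old_pence(pounds=0, shillings=0, pence=0):
    """Collapse a pounds-shillings-pence (£sd) amount into old pence (240 to the pound)."""
    return pounds * 240 + shillings * 12 + pence

def to_new_pence(old_pence):
    """Convert old pence to decimal new pence (1 new penny = 2.4 old pence)."""
    return old_pence / 2.4

# The example amount from the text, written £2-4s-6d:
amount = to_old_pence(pounds=2, shillings=4, pence=6)
print(amount)                # 534 old pence
print(to_new_pence(amount))  # 222.5 new pence, i.e. £2.225 in decimal money

# A guinea (21 shillings) divides into thirds of exactly seven shillings:
print(to_old_pence(shillings=21) / 3 == to_old_pence(shillings=7))  # True
```

Run as written, the sketch confirms the guinea's convenient division into thirds and shows why sums expressed in pounds, shillings and pence demanded constant mental arithmetic.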
http://www.logicmgmt.com/1876/living/money.htm
13
15
The Rocky Mountains developed in a period of mountain building in the Tertiary Age (66 million years ago) as sedimentary rock was thrust up. The area is rich in coal deposits and, in the interior of British Columbia, in metal deposits. As early as the 1860s (the 1864 gold rush, with deposits at Mount Fisher), it therefore became an area of intense economic activity, first with the establishment of British Columbia and then with the establishment of Alberta and Saskatchewan, the prairie provinces. Besides the east/west linkages (Alberta/BC), there were also important north/south trails connecting with the US (for example, Montana) that allowed miners access from the south. For example, the Wild Horse gold workings were developed by the American Robert C. Dore, and his first claim, exhausted in three years, produced $521,700. Before the coming of the railways, steamboats connecting Jennings, Montana, and Fort Steele carried miners during the mining boom of 1893-98. By 1900, their usefulness had ended as rail became the dominant means of transport.

According to historians Howard and Tamara Palmer, this north/south linkage so concerned the CPR that they decided to push for a railroad from Lethbridge to the Crow's Nest Pass and obtained a subsidy from Sir Wilfrid Laurier's government in 1897 to do so. The railroad was completed in 1898 and signaled major economic development and settlement in the region. Calgary, as the closest southern Alberta urban centre, benefited from these developments. With the building of the more northern rail route in the early part of the 20th century, Edmonton developed as the hub and became the destination for immigrants. The Grand Trunk Pacific Railway saw its line as the means of opening up the agricultural land around Grande Prairie, as well as a link to Jasper Park as a draw for tourism.

Mining was thus instrumental in the development of communities in both the BC and Alberta portions of the Rockies. The railways needed fuel to run, and coal mines were developed to supply it, as well as to meet industrial needs (for example, the smelters in Trail, BC) and domestic needs. The largest deposits are found in Alberta and BC, and their exploitation paralleled the settlement of the West. From the beginning, these developments were characterized by cycles of boom and bust, particularly with the gold mines. Entrepreneurs and miners were mobile, moving from California to Dawson City following gold strikes. Diggings were begun and abandoned, and remnants of mine works and cemeteries can still be seen by the visitor to the region. According to the Palmers in Alberta: A New History, "Coal production increased more than tenfold from 242,000 tons in 1897 to almost three million tons in 1910, and then to over four million tons in 1913. By 1911 coal mining employed 6 per cent of the non-agricultural workforce in Alberta." As well, western Canada, by 1911, was the largest coal producing area of the country.
Key mining communities in the Rockies include:

- Fernie: 50 miles from Bellevue, it was the first settlement to result from the arrival of the railway in the 1890s, and the site of many mine disasters, including the Coal Creek explosion on May 22nd, 1902, which killed 128 of the 800 men on shift, and another explosion on July 31st, 1908, which trapped 23 miners; at the same time, a major fire at the Cedar Valley Lumber Company burned out of control and destroyed the town of 6,000; the only surviving buildings were the Crow's Nest Pass Coal Company offices, the Western Canada Warehouse and the Great Northern's depot and water tank.
- Michel and Natal: significant mine communities also affected by disasters: in 1902, a fire in the mine destroyed half of Michel; on January 9th, 1904, another gas explosion killed seven men; on August 1st, 1908, the Fernie fire threatened Michel but did no serious damage; finally, on July 5th, 1938, a thunderstorm appears to have caused an explosion in mine No. 3, with three fatalities.
- Crowsnest: mining began in 1899, after the coming of the railroad, the Crow's Nest branch of the CPR; the town was also known for bootlegging after the July 1915 provincial election that brought in prohibition. Not only locals were served but also the American market, making bootlegging "big business." Emilio Picariello ("Emperor Pic") was one of the Italian immigrants who profited from the trade and was known in the 1920s as the "Al Capone of the Pass," though he was also known for his generosity to the poor. The town of Crowsnest, no longer in existence, was situated on the Alberta-British Columbia border.
- Coleman: founded in 1903 and designed as an ideal community; its mining tragedies included April 3, 1907 (three deaths) and November 23, 1926 (10 deaths).
- Blairmore: begun as the community of Tenth Siding, renamed Springs and then renamed Blairmore in 1898; the Greenhill Coal Mines have been disaster free and the community prospered.
- Frank: renowned for the Frank Slide, which happened at 4:10 am on April 29th, 1903, when 90 million tons of rock broke away from the side of Turtle Mountain and crashed to the valley floor, destroying the local coal company plant and houses; 76 people were killed and their bodies were never found.
- Hillcrest: halfway between Frank and Bellevue was the village of Hillcrest and its mine, which became infamous on Friday, June 19th, 1914, when 228 miners started the morning shift at 7 am; the mine had been opened in January 1905 by a Montreal syndicate headed by Charles Plummer Hill; at 9:30 am a series of blasts occurred in the No. 1 tunnel, 500 feet below the surface; besides the strength of the explosion and the damage done to the tunnels, poison gas (black damp) spread to adjoining tunnels and rooms; 188 (by some accounts 189) men were killed.
- Bellevue: east of the Frank Slide, founded about 1900; on December 9th, 1910, a mine disaster killed 30 men; miners were unhappy with the way the insurance company, the Trust and Guarantee Company, settled their claims.
- Leitch Collieries: an established and initially prosperous coal mining operation.
- Canmore: the Canadian Pacific Railway's first divisional point, 68 miles west of Calgary; the depot was completed in 1884. The first train went through Canmore on its way to Craigellachie at Eagle Pass for the historic driving of the last spike on November 7th, 1885. In 1889, its population, at 450, exceeded that of Banff (270) and Anthracite (167). The No. 1 Mine started operation soon after.
- Anthracite: coal was discovered in 1886 and a town developed in 1887; the mine was located 10 miles west of Canmore; when sales declined, the mine closed; it was rescued by an American, H.W. McNeil, and in 1892 he also controlled the No. 1 Mine and the Cochrane Mines in Canmore. The Anthracite Mine later closed for good.
- Bankhead: the town of Bankhead was built in 1904 within Rocky Mountain Park (later Banff National Park), and the town consisted of a coal mine and 900 residents. The town derived its name from the tipple, also called a bankhead. The Bankhead mine eventually closed.

The mines at Canmore and Bankhead developed on the CPR line. According to Ben Gadd, writing in Bankhead: The Twenty Year Town (Ottawa: Minister of Supply and Services Canada, 1989), the town was planned carefully and was designed by the CPR as a model town. Houses had indoor plumbing and, in 1905, electricity. Neither Banff nor Canmore had these luxuries. By 1908, there were 114 buildings, of which about 100 were houses. According to Gadd, the Italians came primarily from northern Italy, Turin and Milan. These included the D'Amico family, the Morello family and Mike Perotti. But the model town did not guarantee that things went well in the mine. Narcissus Morello died in April 1920 in one of the various mining accidents that killed 15 men over the years. He is entombed in a coal chute deep inside Cascade Mountain as a result of the collapse of the coal face the men were working on. The Dominion Parks Commissioner ordered the CPR to remove the entire town from the Park in 1922. Opinion is mixed as to whether mining was no longer considered appropriate within the Park or whether this was the result of strikes in 1919 and 1922. According to a Calgary Herald article in 1926, a local contractor moved 38 houses six miles in 40. The beneficiary was the townsite of Banff, which was becoming a tourism attraction. The miners moved on to other mining communities, including Coleman, Bellevue, Hillcrest and Blairmore in the Crow's Nest Pass, Nordegg and the Coal Branch, as well as Drumheller and Lethbridge.

The mines in the southern part of the Rockies developed as a result of the coming of the railways to the southern part of Alberta/British Columbia. Two major railways were being constructed along the northern route toward Jasper, and both intended to be transcontinental. These were the Grand Trunk Pacific Railway (GTPR) and the Canadian Northern Railway (CNR) of Mackenzie and Mann. Both had intended to use the Yellowhead Pass, although the GTPR attempted to change its originally filed plans with the federal Department of Railways and Canals to prevent the CNR from using the Pass. As the railways expanded, new mines had to be developed. D. B. Dowling of the Geological Survey of Canada, in 1906, reported on a number of coal deposits in the Rockies. Dowling's work also included assessment of the commercial value of mineral seams, and he produced extremely detailed maps of the areas he covered. In 1909, John Gregg discovered coal in the Athabasca Valley near Brulé and Hinton.

Martin Cohn (later known as Martin Nordegg) was a brilliant entrepreneur. Born in Berlin, he trained as a scientist (photochemistry) but was dissatisfied with the economic potential of Germany and traveled abroad. According to Anne Belliveau, Nordegg historian, Martin Cohn was sent to Canada by the Deutsches Kanada Syndikat, a group consisting largely of bankers and influential businessmen.
In 1906, he came to Ottawa, where he connected with Colonel Onésiphore Talbot, Liberal Member of Parliament for Ottawa, whom he had previously met in Germany and taken on a tour of the Technical Institute in the Berlin suburb of Charlottenburg. Talbot connected his friend with A. P. Low, the Director of the Geological Survey of Canada. While Nordegg was initially interested in investments in eastern smelters, a visit to Sudbury with Alfred E. Barlow of the Geological Survey showed that there were few remaining opportunities. Barlow offered to work with him and, together, they looked for opportunities in the newest growth area: western Canada. Nordegg discovered the survey reports of George Dawson (Director of the Geological Survey of Canada, 1895-1901), which mentioned coal strata in the Yellowhead Pass. After hearing parliamentary debates about the new transcontinental railways to cross the Rockies at the Yellowhead Pass, he sensed the opportunity. Barlow introduced him to Dowling, who had surveyed the area. Dowling showed Nordegg that it would be too expensive for the new railways to get their coal from Vancouver Island or Canmore and that new deposits would have to be exploited. Having fixed on this area for investment, Nordegg met Frank Oliver, Minister of the Interior and Member of Parliament from Edmonton, and his course was set. Nordegg was a shrewd political lobbyist and knew how to interpret political trends for profit. Through an introduction by Senator Lougheed, he connected with R. B. Bennett, who helped him defeat an Edmonton group of investors to develop the Rocky Mountain Collieries in the Kananaskis field.

On May 1st, 1907, Nordegg set out by train for the West. Nordegg fell in love with Winnipeg and was impressed that in 40 years a village of 240 had become a thriving city of 100,000. He traveled with Dowling to Calgary and then Morley. They went on to Brazeau River country to find the coal fields he knew were there. On the way, they visited the coal fields at Canmore and Bankhead so that Nordegg could learn about local mining conditions in western Canada. Dowling and Nordegg returned to Ottawa to stake claims and find development capital. Working with the lawyer Andrew Haydon, Nordegg registered a company under a Dominion charter under the name German Development Company Ltd. This company, which included a number of Ottawa businessmen as well as the original German investors, absorbed the Deutsches Kanada Syndikat and had capital of $1 million. This information is based on research done by Anne Belliveau on company materials given to her by her father, who was the Technical Operations Manager of Brazeau Collieries. Nordegg headquartered his company at 19 Elgin Street in Ottawa, as well as having a Toronto office.

In 1908, Dowling suggested Nordegg hire James McEvoy, who previously had worked with the Geological Survey and who knew the territory. In 1908, one more field was staked, and work was done to ready their existing fields. Nordegg then went looking for a railway to buy the coal that had been staked. The CNR was interested. When the German Development Company joined forces with the CNR in 1909 to create Brazeau Collieries Limited, they amalgamated all eight of the (combined) coal claims, which stretched from the Grande Cache area to Kananaskis. The Nordegg Field was not discovered until 1911 and was not a part of this original set of claims; by that time, Brazeau had already begun mining at the South Brazeau/Blackstone field and a railway was being cut.
With the addition of the Nordegg field, Brazeau Collieries had almost 60 square miles of coal holdings, and the Nordegg field was 30 miles closer to the main Calgary/Edmonton rail line. Development of all Brazeau Collieries' coal fields was to begin with the Nordegg field, and this would eliminate the toughest and most expensive miles on which to build a railway. Nordegg's close connection with the town of Nordegg ended with the outbreak of World War I. As an enemy alien, he was forced to sell his interests in the mining enterprises that he had established. Perhaps it was his great love of the West that, in April 1909, prompted him to change his name to Nordegg; his exact reasons are unknown. A commentator has said that "Nord" and "egg" mean, in some German dialects, "north corner." He established a model town of that name and the Brazeau Collieries to exploit the area's coal and bring it to market.

But it was not just Nordegg who was interested in the coal in the area of Jasper. Entrepreneurs from the US, Britain, France/Belgium and Eastern Canada were also interested in exploiting this important resource. Toni Ross writes in Oh, The Coal Branch: A delegation of 80 businessmen from Edmonton, headed by Hon. G. H. Bulyea, Lieutenant Governor, visit the marl deposits. They lunch at the Capital Hotel in Bickerdike and have dinner at the Boston Hotel in Edson. Edson greets them with an arch across Main Street with a banner which reads "The gateway to Grande Prairie and Peace River."

An important Coal Branch development was at Mountain Park, which was on the eastern border of Jasper National Park. This was the first community on the western line of the Coal Branch and was established in 1911. Prior to this, in 1904, the railway had acquired Prairie Creek, later to be renamed Hinton. The community had been established by the American prospector John Gregg (also known as John James Greig), who had married Mary Cardinal, the daughter of Stoney Chief Michael Cardinal. She guided him to coal deposits, known by First Nations, in the Nikanassin Range. With railway surveyor Robert Wesley Jones, Gregg registered his stake in 1909. A backer of the development was Christopher John Leyland, a British industrialist. The Mountain Park Coal Company Limited was incorporated in 1911, and Gregg sold his shares to Leyland and his partners, who set up the company. The company initially tried to build its own rail link to Coalspur, but this proved challenging and expensive, and in 1912 it undertook an agreement with the Grand Trunk Pacific Railway to do the work. Eventually, another British industrialist, Lieutenant-Colonel Alexander Mitchell, became involved in the mine. The Coal Branch communities and mining camps that developed included:
- Mountain Park
- McLeod River
- Coal Valley
http://wayback.archive-it.org/2217/20101215220301/http:/www.albertasource.ca/abitalian/background/rockies_cb_overview.html
13
23
Byzantine Greeks or Byzantines (Greek: Βυζαντινοί) is a conventional term used by modern historians to refer to the medieval Greek or Hellenised citizens of the Byzantine Empire, centered mainly in Constantinople, the southern Balkans, the Greek islands, Asia Minor (modern Turkey), Cyprus and the large urban centres of the Near East and northern Egypt. The identity of the Byzantine Greeks has taken many forms in name, with such variants as Romaioi or Romioi (meaning "Romans"), Graikoi (meaning "Greeks"), "Byzantines", and "Byzantine Greeks".

The social structure of the Byzantine Greeks was primarily supported by a rural, agrarian base that consisted of the peasantry and a small fraction of the poor. These peasants lived within three kinds of settlements: the chorion or village, the agridion or hamlet, and the proasteion or estate. Many civil disturbances that occurred during the time of the Byzantine Empire were attributed to political factions within the Empire rather than to this large popular base. Soldiers among the Byzantine Greeks were at first conscripted from among the rural peasants and trained on an annual basis. As the Byzantine Empire entered the 11th century, more of the soldiers within the army were either professional men-at-arms or mercenaries. Education within the Byzantine Greek population was, until the twelfth century, more advanced than in the West, particularly at the primary school level, which increased literacy rates. Success came easily to Byzantine Greek merchants, who enjoyed a very strong position in international trade. Despite the challenges they faced from rival Italian merchants, they managed to hold their own throughout the latter half of the Byzantine Empire's existence. The clergy also held a special place, not only having more freedom than their Western counterparts but also maintaining a patriarch in Constantinople who was considered the equal of the pope. This position of strength had built up over time, for at the beginning of the Byzantine Empire, under Emperor Constantine the Great (r. 306–337), only a small part, about 10%, of the population was Christian.

The language of the Byzantine Greeks since the age of Constantine had been Greek, although Latin was the language of the administration. From the reign of Emperor Heraclius (r. 610–641), Greek was not only the predominant language amongst the populace but also replaced Latin in the administration. The makeup of the Byzantine Empire had at first a multi-ethnic character that, following the loss of the non-Greek-speaking provinces, came to be dominated by the Byzantine Greeks. Over time, the relationship between them and the West, particularly with Germanic Roman and Frankish Europe, deteriorated. Relations were further damaged by a schism between the Roman West and Orthodox East that led to the Byzantine Greeks being labeled as heretics. Throughout the later centuries of the Byzantine Empire, and particularly following the coronation of Charlemagne (r. 768–814) in Rome in 800, the Byzantine Greeks were not considered by Western Europeans as heirs of the Roman Empire, but rather as part of an Eastern kingdom made up of Greek peoples. In actuality, the Byzantine Empire was the Roman Empire, continuing the unbroken line of succession of the Roman emperors. During most of the Middle Ages, the Byzantine Greeks identified themselves as Romaioi (Greek: Ρωμαίοι, "Romans", meaning citizens of the Roman Empire), a term which in the Greek language had become synonymous with "Christian Greek".
They also identified themselves as Graikoi (Greek: Γραικοί, "Greeks"), even though the ethnonym was never used in official Byzantine political correspondence prior to 1204 AD. The ancient name Hellene was, in popular use, synonymous with "pagan", and was revived as an ethnonym in the Middle Byzantine period (11th century). While in the West the term "Roman" acquired a new meaning in connection with the Catholic Church and the Bishop of Rome, the Greek form "Romaioi" remained attached to the Greeks of the Eastern Roman Empire. These people called themselves Romaioi (Romans) in their language, and the term "Byzantines" or "Byzantine Greeks" is an exonym applied by later historians like Hieronymus Wolf. However, the use of the term "Byzantine Greeks" for the Romaioi is not entirely uncontroversial.

Most historians agree that the defining features of their civilization were: 1) Greek language, culture, literature, and science, 2) Roman law and tradition, 3) Christian faith. The term "Byzantine" has been adopted by Western scholarship on the assumption that anything Roman is essentially "western", and by modern Greek scholarship for nationalistic reasons of identification with ancient Greece. In modern times, the Greek people still use the ethnonym "Romaioi", or rather "Romioi", to refer to themselves. In addition, the Eastern Roman Empire was in language and civilization a Greek society. Byzantinist August Heisenberg (1869–1930) defined the Byzantine Empire as "the Christianised Roman empire of the Greek nation". Byzantium was primarily known as the Empire of the Greeks by foreigners due to the predominance of Greek linguistic, cultural, and demographic elements.

Byzantine Greeks, forming the majority of the Byzantine Empire proper at the height of its power, gradually came under the dominance of foreign powers with the decline of the Empire during the Middle Ages. Mostly coming under Arab Muslim rule, Byzantine Greeks either fled their former lands or submitted to the new Muslim rulers, receiving the status of dhimmi. Over the centuries, the surviving Christian societies of former Byzantine Greeks evolved into Antiochian Greeks and Melchites or merged into the societies of Arab Christians, and they exist to this day.

While social mobility was not unknown in Byzantium, the order of society was thought of as more enduring, with the average man regarding the court of Heaven as the archetype of the imperial court in Constantinople. This society included various classes of people that were neither exclusive nor immutable. The most characteristic were the poor, the peasants, the soldiers, the teachers, entrepreneurs, and clergy. According to a text dated to 533 AD, a man was termed "poor" if he did not have 50 gold coins (aurei), which was a modest though not negligible sum. The Byzantines were heirs to the Greek concepts of charity for the sake of the polis; nevertheless, it was the Christian concepts attested in the Bible that animated their giving habits, and specifically the examples of Basil of Caesarea (the Greek equivalent of Santa Claus), Gregory of Nyssa, and John Chrysostom. The number of the poor fluctuated over the many centuries of Byzantium's existence, but they provided a constant supply of muscle power for building projects and rural work. Their numbers do, however, appear to have risen towards the end of late antiquity, in the late fourth and early fifth centuries, as barbarian raids and a desire to avoid taxation pushed rural populations into cities.
Since Homeric times, there had been several categories of poverty, with the ptochos (Greek: πτωχός, "passive poor") being lower than the penes (Greek: πένης, "active poor"). They formed the majority of the infamous Constantinopolitan mob, whose function was similar to the mob of the First Rome. However, while there are instances of riots attributed to the poor, the majority of civil disturbances were specifically attributable to the various factions of the Hippodrome, like the Greens and Blues. Apart from the fact that they constituted a non-negligible percentage of the population, there is a point to focusing on the poor because their existence influenced the Christian society of Byzantium to create a large network of hospitals (Greek: ιατρεία, iatreia) and alms houses, and a religious and social model largely justified by the existence of the poor and born out of the Christian transformation of Classical society.

There are no reliable figures as to the numbers of the peasantry, yet it is widely assumed that the vast majority of Byzantines lived in rural and agrarian areas. In the Taktika of Emperor Leo VI the Wise (r. 886–912), the two professions defined as the backbone of the state are the peasantry (Greek: γεωργική, geōrgikē) and the soldiers (Greek: στρατιωτική, stratiōtikē). The reason for this was that, besides producing most of the Empire's food, the peasants also produced most of its taxes. Peasants lived mostly in villages, whose name changed slowly from the classical kome (Greek: κώμη) to the modern chorio (Greek: χωριό). While agriculture and herding were the dominant occupations of villagers, they were not the only ones. There are records for the small town of Lampsakos, situated on the eastern shore of the Hellespont, which out of 173 households classify 113 as peasant and 60 as urban, indicating other kinds of ancillary activities. The Treatise on Taxation, preserved in the Biblioteca Marciana in Venice, distinguishes between three types of rural settlements: the chorion (Greek: χωρίον) or village, the agridion (Greek: αγρίδιον) or hamlet, and the proasteion (Greek: προάστειον) or estate. According to a fourteenth-century survey of the village of Aphetos, donated to the monastery of Chilandar, the average size of a landholding was only 3.5 modioi (0.08 ha).

Taxes placed on rural populations included the kapnikon (Greek: καπνικόν) or hearth tax, the synone (Greek: συνονή) or cash payment frequently affiliated with the kapnikon, the ennomion (Greek: εννόμιον) or pasture tax, and the aerikon (Greek: αέρικον, meaning "of the air"), which depended on the village's population and ranged between 4 and 20 gold coins annually. The peasants' diet consisted mainly of grains and beans, and in fishing communities fish was usually substituted for meat. Bread, wine, and olives were important staples of the Byzantine diet, with soldiers on campaign eating double-baked and dried bread called paximadion (Greek: παξιμάδιον). As in antiquity and modern times, the most common cultivations in the choraphia (Greek: χωράφια) were olive groves and vineyards. While Liutprand of Cremona, a visitor from Italy, found Greek wine irritating, as it was often flavoured with resin (retsina), most other Westerners admired Greek wines, Cretan in particular being famous. While both hunting and fishing were common, the peasants mostly hunted to protect their herds and crops. Apiculture, the keeping of bees, was as highly developed in Byzantium as it had been in Ancient Greece.
Aside from agriculture, the peasants also laboured in the crafts, with fiscal inventories mentioning smiths (Greek: χαλκεύς, chalkeus), tailors (Greek: ράπτης, rhaptes), and cobblers (Greek: τζαγγάριος, tzangarios).

During the Byzantine millennium, hardly a year passed without a military campaign. Soldiers were a normal part of everyday life, much more so than in modern Western societies. While it is difficult to draw a distinction between Roman and Byzantine soldiers from an organizational aspect, it is easier to do so in terms of their social profile. The military handbooks known as the Taktika continued a Hellenistic and Roman tradition and contain a wealth of information about the appearance, customs, habits, and life of the soldiers. As with the peasantry, there were, apart from the main core of soldiers, many who performed ancillary activities, such as medics and technicians. Selection for military duty was annual, with yearly call-ups, and great stock was placed on the military exercises held during the winter months, which formed a large part of a soldier's life. Until the eleventh century, the majority of the conscripts were from rural areas, while the conscription of craftsmen and merchants is still an open question. From that point on, professional recruiting replaced conscription, and the rising use of mercenaries within the army placed a ruinous burden on the treasury. From the tenth century onwards, stipulations exist for the connection between land-ownership and military service. While the state never allotted land for obligatory service, soldiers could and did use their pay to buy landed estates, and taxes would be decreased or waived in some cases. What the state did allocate to soldiers, from the twelfth century onwards, were the tax revenues from some estates, called pronoiai (Greek: πρόνοιαι). As in antiquity, the basic food of the soldier remained the dried biscuit bread, though its name had changed from boukelaton (Greek: βουκελάτον) to paximadion.

Byzantine education was the product of an ancient educational tradition that stretched back to the fifth century BC. It comprised a tripartite system of education that, taking shape during the Hellenistic era, was maintained, with inevitable changes, up until the fall of Constantinople. The stages of education were the elementary school, where pupils ranged from six to ten years, the secondary school, where pupils ranged from ten to sixteen, and higher education. Elementary education was widely available throughout most of the Byzantine Empire's existence, in the countryside as well as in towns. This, in turn, ensured that literacy was much more widespread than in Western Europe, at least until the twelfth century. Secondary education was confined to the larger cities, while higher education was the exclusive province of Constantinople. The elementary school teacher occupied a low social position and taught mainly from simple fairy-tale books (Aesop's Fables were often used). However, the grammarian and the rhetorician, the teachers responsible for the following two phases of education, were more respected. These used classical Greek texts like Homer's Iliad or Odyssey, and much of their time was taken up with detailed word-for-word explication. Books were rare and very expensive, and likely only possessed by teachers, who dictated passages to students.

Although they constituted about half of the population, women have tended to be overlooked in Byzantine studies. Byzantine society was patriarchal and left few records about them.
In addition, women were generally viewed with suspicion and considered periodically unclean, and as a result they were subject to discrimination. Women were disadvantaged in some aspects of their legal status and in their access to education, and they were limited in their freedom of movement. The life of a Byzantine Greek woman could be divided into three phases: girlhood, motherhood, and widowhood. Childhood was brief and perilous, even more so for girls than for boys. Parents would celebrate the birth of a boy twice as much, and there is some evidence of female infanticide, though it was contrary to both civil and canon law. Educational opportunities for girls were few: they did not attend regular schools but were taught in groups at home by tutors. With few exceptions, education was limited to literacy and the Bible; there were no forays into classical literature for most girls. A famous exception is the Princess Anna Comnena, whose Alexiad displays an uncanny depth of erudition. The majority of a young girl's daily life would be spent in household and agrarian chores, preparing herself for marriage. For most girls, childhood came to an abrupt end with the onset of puberty, which was followed shortly after by betrothal and marriage. This was because mortality rates were high for women (and indeed for men); the average age of those who survived infancy was about thirty-five.

Although marriage arrangements made by the family were the norm, romantic love was by no means unknown. Most women produced a large number of children in order to ensure the survival of at least a few, and grief for the loss of a loved one was an inalienable part of life. The main form of birth control was abstinence, and while there is evidence of contraception, it seems to have been mainly used by prostitutes. Due to prevailing norms of modesty, women would wear clothing that covered the whole of their body except their hands. While women among the poor could get away with wearing sleeveless tunics, most women were obliged to cover even their heads with the long maphorion (Greek: μαφόριον) veil. Women of means, however, spared no expense in adorning their clothes with exquisite jewelry and fine silk fabrics. Divorces were hard to obtain, even though there were laws permitting them. Husbands would often beat their wives, though the reverse situation was not unknown, as in Theodore Prodromos's description of a battered husband in the Ptochoprodromos poems. Although female life expectancy in Byzantium was lower than that of men, due to wars and the fact that men married younger, female widowhood was still fairly common. Still, women were often able to circumvent societal strictures and work as traders, craftswomen, abbesses and entertainers, not to mention empresses and scholars.

The traditional image of Byzantine Greek merchants as unenterprising beneficiaries of state aid is beginning to change for that of mobile, pro-active agents. The merchant class, particularly that of Constantinople, became a force of its own that could, at times, even threaten the Emperor, as it did in the eleventh and twelfth centuries. This was achieved through efficient use of credit and other monetary innovations. Merchants invested surplus funds in financial products called chreokoinonia (Greek: χρεοκοινωνία), the equivalent and perhaps ancestor of the later Italian commenda. Eventually, the purchasing power of Byzantine merchants became such that it could influence prices in markets as far afield as Cairo and Alexandria.
In reflection of their success, emperors gave merchants the right to become members of the Senate, that is, to integrate themselves with the ruling elite. This came to an end by the close of the eleventh century, when political machinations allowed the landed aristocracy to secure the throne for a century and more. Following that phase, however, the enterprising merchants bounced back and wielded real clout during the time of the Third Crusade. The reason Byzantine Greek merchants have often been neglected in historiography is not that they were any less able than their ancient or modern Greek colleagues in matters of trade; rather, it lies in the way history was written in Byzantium, often under the patronage of their competitors, the court and the landed aristocracy. The fact that they were eventually surpassed by their Italian rivals is attributable to the privileges sought and acquired by the Crusader States within the Levant and to the dominant maritime violence of the Italians.

Unlike in Western Europe, where priests were clearly demarcated from laymen, the clergy of the Eastern Roman Empire remained in close contact with the rest of society. Readers and subdeacons were drawn from the laity and were expected to be at least twenty years of age, while priests and bishops had to be at least thirty. Unlike the Latin church, the Byzantine church allowed married priests and deacons, as long as they were married before ordination. Bishops, however, were required to be unmarried. While the religious hierarchy mirrored the Empire's administrative divisions, the clergy were more ubiquitous than the emperor's servants. The issue of caesaropapism, while usually associated with the Byzantine Empire, is now understood to be an oversimplification of actual conditions in the Empire. By the fifth century, the Patriarch of Constantinople was recognized as first among equals of the four eastern Patriarchs and as of equal status with the Pope in Rome. The ecclesiastical provinces were called eparchies and were headed by archbishops or metropolitans, who supervised their subordinate bishops or episkopoi. For most people, however, it was their parish priest, or papas (from the Greek word for "father"), who was the most recognizable face of the clergy.

Linguistically, Byzantine or medieval Greek is situated between the Hellenistic (Koine) and modern phases of the language. Since as early as the Hellenistic era, Greek had been the lingua franca of the educated elites of the Eastern Mediterranean, spoken natively in the southern Balkans, the Greek islands, Asia Minor, and the ancient and Hellenistic Greek colonies of Southern Italy, the Black Sea, western Asia and north Africa. At the beginning of the Byzantine millennium, the koine (Greek: κοινή) remained the basis for spoken Greek and Christian writings, while Attic Greek was the language of the philosophers and orators. As Christianity became the dominant religion, Attic began to be used also in Christian writings, in addition to, and often interspersed with, the koine. Nonetheless, from the sixth century on, and at least until the twelfth, Attic remained entrenched in the educational system, while further changes to the spoken language can be postulated for the early and middle Byzantine periods. The Byzantine Empire, at least in its early stages, also included many people whose mother tongue was not Greek.
These included speakers of Latin, Aramaic, Coptic, and Caucasian languages, and Cyril Mango cites evidence for bilingualism in the south and southeast as well. These influences, in addition to the influx of people of Arabic, Celtic, Germanic, Turkic, and Slavic backgrounds, supplied medieval Greek with many loanwords that have survived in the modern Greek language. From the eleventh and twelfth centuries onward, there was also a steady rise in the literary use of the vernacular. Following the Fourth Crusade and increased contact with the West, Italian became the lingua franca of commerce, and in the areas of the Crusader kingdoms a classical education (Greek: παιδεία, paideia) ceased to be the sine qua non of social status, which encouraged the rise of the vernacular. It is from this era that many fine works in the vernacular, often written by people deeply steeped in classical education, are attested. A famous example is the set of four Ptochoprodromic poems attributed to Theodoros Prodromos. From the thirteenth to the fifteenth centuries, the last centuries of the Empire, several works such as laments, fables, romances, and chronicles arose outside Constantinople, which until then had been the seat of most literature, in an idiom termed by scholars "Byzantine Koine". Nonetheless, this did not in the end obviate the diglossia of the Greek-speaking world (which had already started in ancient Greece); it continued under Ottoman rule and persisted in the modern Greek state until 1976, while Atticist Greek remains the official language of the Greek Orthodox Church. As shown in the poems of Ptochoprodromos, an early stage of modern Greek had already been shaped by the twelfth century and possibly earlier. Vernacular Greek continued to be known as "Romaic" up until the twentieth century.

At the time of Constantine the Great (r. 306–337), barely 10% of the Roman Empire's population were Christians, most of them urban and generally found in the eastern part of the Roman Empire. The majority of people still honoured the old gods in the public Roman way of religio. As Christianity became a complete philosophical system, whose theory and apologetics were heavily indebted to the Classical world, this changed. In addition, Constantine, as Pontifex Maximus, was responsible for the correct cultus or veneratio of the deity, which was in accordance with former Roman practice. The move from the old religion to the new entailed some elements of continuity as well as a break with the past, though the artistic heritage of paganism was literally broken by Christian zeal. Christianity led to the development of a few phenomena characteristic of Byzantium: the intimate connection between Church and State, a legacy of the Roman cultus; the creation of a Christian philosophy that guided Byzantine Greeks in their everyday lives; and the dichotomy between the Christian ideals of the Bible and classical Greek paideia, which could not be left out since so much of Christian scholarship and philosophy depended on it. These shaped Byzantine Greek character and the perceptions of themselves and others. Christians at the time of Constantine's conversion made up only 10% of the population. This would rise to 50% by the end of the fourth century and 90% by the end of the fifth century. Emperor Justinian I (r. 527–565) then brutally mopped up the rest of the pagans, highly literate academics on one end of the scale and illiterate peasants on the other.
A conversion so rapid seems to have been the result of expediency rather than of conviction. The survival of the Empire in the East assured an active role for the emperor in the affairs of the Church. The Byzantine state inherited from pagan times the administrative and financial routine of organising religious affairs, and this routine was applied to the Christian Church. Following the pattern set by Eusebius of Caesarea, the Byzantines viewed the emperor as a representative or messenger of Christ, responsible particularly for the propagation of Christianity among pagans and for the "externals" of the religion, such as administration and finances. The imperial role in the affairs of the Church never developed into a fixed, legally defined system, however. With the decline of Rome and internal dissension in the other Eastern patriarchates, the church of Constantinople became, between the 6th and 11th centuries, the richest and most influential centre of Christendom. Even when the Byzantine Empire was reduced to only a shadow of its former self, the Church, as an institution, exercised more influence both inside and outside the imperial frontiers than ever before. As George Ostrogorsky points out: "The Patriarchate of Constantinople remained the center of the Orthodox world, with subordinate metropolitan sees and archbishoprics in the territory of Asia Minor and the Balkans, now lost to Byzantium, as well as in Caucasus, Russia and Lithuania. The Church remained the most stable element in the Byzantine Empire."

Within the Byzantine Empire, a Greek or Hellenised citizen was generally called a Rhōmaîos (Greek: Ῥωμαῖος), which was first of all defined in opposition to a foreigner, ethnikós (Greek: ἐθνικός). The Byzantine Greeks were, and perceived themselves as, the descendants of their classical Greek forebears, the political heirs of imperial Rome, and followers of the Apostles. Thus, their sense of "Romanity" was different from that of their contemporaries in the West. "Romaic" was the name of the vulgar Greek language, as opposed to "Hellenic", which was its literary or doctrinal form. "Greek" (Greek: Γραικός) had become synonymous with "Roman" (Greek: Ρωμαίος/Ρωμιός) and "Christian" (Greek: Χριστιανός), meaning a Christian Greek citizen of the [Eastern] Roman Empire. There was always an element of indifference toward, or neglect of, everything non-Greek, which was therefore "barbarian". In official discourse, "all inhabitants of the empire were subjects of the emperor, and therefore Romans." Thus the primary definition of Rhōmaios was "political or statist." In order to succeed in being a full-blown and unquestioned "Roman", it was best to be a Greek Orthodox Christian and a Greek-speaker, at least in one's public persona. Yet the cultural uniformity which the Byzantine church and state pursued through Orthodoxy and the Greek language was not sufficient to erase distinct identities, nor did it aim to. The highest compliment that could be paid to a foreigner was to call him andreîos Rhōmaióphrōn (Greek: ἀνδρεῖος Ῥωμαιόφρων, roughly "a Roman-minded fellow"). Often one's local (geographic) identity could outweigh one's identity as a Rhōmaios. The terms xénos (Greek: ξένος) and exōtikós (Greek: ἐξωτικός) denoted "people foreign to the local population," regardless of whether they were from abroad or from elsewhere within the Byzantine Empire. "When a person was away from home he was a stranger and was often treated with suspicion.
A monk from western Asia Minor who joined a monastery in Pontus was 'disparaged and mistreated by everyone as a stranger'. The corollary to regional solidarity was regional hostility."

From an evolutionary standpoint, Byzantium was a multi-ethnic empire that emerged as a Christian empire, soon came to comprise the Hellenised empire of the East, and ended its thousand-year history, in 1453, as a Greek Orthodox state: an empire that became a nation, almost in the modern meaning of the word. The presence of a distinctive and historically rich literary culture was also very important in the division between "Greek" East and "Latin" West, and thus in the formation of both. It was a multi-ethnic empire where the Hellenic element was predominant, especially in the later period. Spoken language and state, the markers of identity that were to become a fundamental tenet of nineteenth-century nationalism throughout Europe, became, by accident, a reality during a formative period of medieval Greek history.

Beginning in the twelfth century, certain Byzantine Greek intellectuals began to use the ancient Greek ethnonym Héllēn (Greek: Ἕλλην) in order to describe Byzantine civilisation. During the later period of the Byzantine Empire, Emperor Theodore I Laskaris (r. 1205–1222) tried to revive Hellenic tradition by fostering the study of philosophy, for in his opinion there was a danger that philosophy "might abandon the Greeks and seek refuge among the Latins". In a letter to Pope Gregory IX, the Byzantine emperor John Vatatzes (r. 1221–1254) claimed to have received the gift of royalty from Constantine the Great and put emphasis on his "Hellenic" descent, exalting the wisdom of the Greek people. He was presenting Hellenic culture as an integral part of the Byzantine polity in defiance of Latin claims. Byzantine Greeks had always felt superior for being the inheritors of a more ancient civilisation, but such ethnic identifications had not been politically popular up until then. Hence, in the context of increasing Venetian and Genoese power in the eastern Mediterranean, association with Hellenism took deeper root among the Byzantine elite, on account of a desire to distinguish themselves from the Latin West and to lay legitimate claims to Greek-speaking lands.

Claims of association with Hellenism continued and increased throughout the Palaiologan dynasty. The scholar, teacher, and translator John Argyropoulos addressed Emperor John VIII Palaiologos (r. 1425–1448) as "Sun King of Hellas" and urged the last Byzantine emperor, Constantine XI Palaiologos (r. 1449–1453), to proclaim himself "King of the Hellenes". During the same period, the neo-platonic philosopher George Gemistos Plethon boasted "We are Hellenes by race and culture," and proposed a reborn Byzantine Empire following a utopian Hellenic system of government centered in Mystras. According to the historian George Sphrantzes, on the eve of the Fall of Constantinople, the last Byzantine emperor urged his soldiers to remember that they were the descendants of Greeks and Romans.

In the eyes of the West, after the coronation of Charlemagne, the Byzantines were not acknowledged as the inheritors of the Roman Empire. Byzantium was rather perceived to be a corrupted continuation of ancient Greece, and was often derided as the "Empire of the Greeks" or "Kingdom of Greece".
Such denials of Byzantium's Roman heritage and ecumenical rights would instigate the first resentments between Greeks and "Latins" (for the Latin liturgical rite) or "Franks" (for Charlemagne's ethnicity), as they were called by the Greeks. Popular Western opinion is reflected in the Translatio militiae, whose anonymous Latin author states that the Greeks had lost their courage and their learning, and therefore did not join in the war against the infidels. In another passage, the ancient Greeks are praised for their military skill and their learning, by which means the author draws a contrast with contemporary Byzantine Greeks, who were generally viewed as a non-warlike and schismatic people. While this reputation seems strange to modern eyes, given the unceasing military operations of the Byzantines and their eight-century struggle against Islam and Islamic states, it reflects the realpolitik sophistication of the Byzantines, who employed diplomacy and trade as well as armed force in foreign policy, and the high level of their culture in contrast to the zeal of the Crusaders and the ignorance and superstition of the medieval West.

A turning point in how both sides viewed each other was probably the massacre of Latins in Constantinople in 1182. The massacre followed the deposition of Maria of Antioch, a Norman-Frankish (therefore "Latin") princess who was ruling as regent to her infant son, Emperor Alexios II Komnenos. Maria was deeply unpopular due to the heavy-handed favoritism that had been shown to the Italian merchants during the regency, and popular celebrations of her downfall by the citizenry of Constantinople quickly turned to rioting and massacre. The event and the horrific reports of survivors inflamed religious tensions in the West, leading to the retaliatory sacking of Thessalonica, the empire's second largest city, by William II of Sicily. An example of Western opinion at the time is the writings of William of Tyre, who described the "Greek nation" as "a brood of vipers, like a serpent in the bosom or a mouse in the wardrobe evilly requite their guests".
http://www.qesign.com/sale.php?x=Byzantine_Greeks
13
20
The period of European history known as the Renaissance and Reformation was an age of profound and even revolutionary change. It is true of all revolutions that they cannot be adequately understood without some awareness of the conditions that preceded them and that they eventually destroyed or modified. To appreciate the importance of the Renaissance and the Reformation, the student needs some knowledge of the era that came before them, which we call the Middle Ages. The expression "Middle Ages," from which is derived the adjective "medieval," originated in fifteenth-century Italy. By that time the notion was becoming familiar among certain scholars that the glorious days of classical antiquity, which had ended with the fall of Rome, had been followed by a long interval of darkness. In their own times these scholars discerned signs of a new dawn, a revival, even a "rebirth." They were unwilling or unable to recognize their profound indebtedness to the centuries in between, which they therefore regarded with contempt. Not all scholars felt that way at the time, and no responsible scholars hold that view today. Indeed, the Middle Ages were an extraordinarily creative period and the basis not only for the Renaissance and Reformation but also for modern European civilization.

ECONOMIC AND SOCIAL LIFE

The society of western Europe in the Middle Ages was agrarian: the largest segment of the population consisted of the tillers of the soil; the chief basis of wealth and of political power was the land. Industry and commerce were less important relative to agrarian pursuits than they had been in Roman times or were to be in the modern era. The basic unit of agrarian society was the manor. A manor was an agricultural estate belonging to a lord, a member of the noble class. Most of the inhabitants of the manor were peasants, whose basic job was to cultivate the soil for the lord's benefit. The arable land, that is, the land on which crops were raised (arable comes from the Latin word for plough), was often cultivated according to the three-field system, a primitive kind of crop rotation. In this system, one part of the land was planted in any given year with a winter crop, and one part with a spring crop; the third part was allowed to lie fallow, because no better method was known for preventing soil depletion. In the following year the three fields would change roles (see the short sketch after this paragraph). Thus approximately one-third of the land was always kept out of production. The peasant on a manor which used this cycle did not cultivate consolidated blocks of land. Instead, each of the three fields was divided into strips, and the individual peasant had strips in each field. The lord's own holdings, called the demesne or domain lands, were also, at least in part, scattered among the fields, though the lord also normally had a solid block of land nearer his residence. The lands were cultivated by the peasants acting together; the animals and the instruments needed for cultivation were too costly for the individual peasant, but belonged to the whole community and were used jointly by its members. The first duty of the peasant was to help cultivate the demesne, from which the lord got the entire product. Having fulfilled this obligation, the peasant could turn his attention to his own strips. These were not his property, however; he held them by grant of the lord, who was consequently also entitled to an agreed share of their produce, with the remainder going to the cultivator.
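The three-field cycle described above can be made concrete with a short illustrative sketch; this is purely an editorial aid, not part of the source text, and the field names and the Python modelling are invented. It simply rotates the three roles (winter crop, spring crop, fallow) through three fields year by year.

```python
# A toy model of the three-field rotation; the field names are hypothetical.
ROLES = ["winter crop", "spring crop", "fallow"]

def field_roles(year, fields=("Field A", "Field B", "Field C")):
    """Return each field's role in a given year under three-field rotation."""
    return {field: ROLES[(i + year) % 3] for i, field in enumerate(fields)}

for year in range(3):
    print(year, field_roles(year))
# In every year exactly one field lies fallow, so roughly one-third of the
# arable land is out of production; over a three-year cycle each field is
# cultivated twice and rests once.
```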
There were other obligations to the lord: to grind grain into flour, to bake bread from the flour, and to press wine from grapes. The lord's mill, oven, and winepress had to be used, and a fee of so much flour, so many loaves, so much wine had to be paid. When a peasant died, his family paid a death tax; when a member of a peasant's family married, the lord's consent was needed. The peasant possessed, it was said, nihil praeter ventrem nothing but his belly. Peasants were, in law, divided into two categories -- unfree and free. The unfree peasant known on the Continent as a serf, in England as a villein was supposedly entirely at his lord's disposal. The free peasant, on the other hand, had certain rights. He could not, for example, be held to more than a specified number of days of labor on the demesne every week. In practice, the status of the free peasant tended to approach that of the unfree, rather than the other way around. The peasant also had to pay for the services of religion. A tithe of his produce went for the maintenance of the priest. Strictly speaking, tithe means a tenth, but tithes tended to become fixed payments of each kind of produce without much necessary relationship to a tenth. It is to be assumed that the peasant whose life was a constant round of backbreaking drudgery from which he himself derived little profit, was in great need of the consolations of the church. Certainly the church was a pervasive presence in his life, encompassing him in its ministrations throughout all the important events and turning points of his career and indelibly coloring his outlook on the everyday incidents of his existence. The manor also had its judicial aspect. The lord had rights of justice over his peasants, and the lord's court held jurisdiction over many aspects of manorial life. The law that was enforced was the customary law, and custom varied from one manor to another. It was probably custom that provided the chief protection to the peasants against excessive demands by their lords custom and the natural desire of any prudent employer to keep the labor force healthy for his own benefit. The manor was to a considerable degree self-sufficient. Clothing, household utensils, and other necessities were produced right there by peasant craftsmen. Some goods were purchased that were not products of the manor itself, but contact with the outside was comparatively slight. At no time, however, did the manor and manorial system constitute all there was of medieval society. In some areas, agriculture was organized on a non- manorial basis. Moreover, the towns and the cities of the ancient Roman Empire often survived, though much diminished. From the eleventh century, they began to grow again. This growth occurred earliest in Italy and the Netherlands, and spread from there to other parts of western Europe, producing numerous important urban centers. London and Paris, Lübeck and Naples, Bruges and Bergen, and numerous other cities, made their distinctive contributions to medieval life. The underlying force in medieval urban growth was economic, a revival of trade. Such towns and cities as had survived the fall of Rome had owed their urban status to the presence of military garrisons or episcopal sees. They were not very large, and were distinguished from the surrounding countryside largely by the possession of walls. With the revival of trade, this was changed. 
The towns grew at a rate unknown for centuries; so many persons came to settle outside the old walls that new ones had to be built enclosing a much greater area. Not only did the size of the towns increase, but a whole new way of life came to be established within them. Many of the inhabitants were merchants conducting their trade, often international in scope, from their city headquarters. Along with trade came banking and manufacturing. A class of big businessmen arose, and in connection with it an urban working class, or proletariat. For this new urban society, new types of legal institutions and property tenure had to be devised. A mercantile law, or law merchant, grew up to settle cases arising from trade disputes. Property holding was set free from the complex network of relationships and obligations that had burdened it, and it became possible for city dwellers to hold property outright. One of the most distinctive characteristics of urban life was freedom. Many peasants, perhaps most, were unfree; all town dwellers were free. "The air of the city makes free" was a proverbial saying; a serf who escaped from the manor and lived in a city for a year and a day without being apprehended by his lord became legally a free man. Freedom is contagious; these islands of freedom in a largely unfree society infected neighboring areas, and freedom spread to the countryside. To meet the needs of the expanding urban populations, new lands had to be opened for cultivation. Workers had to be induced to move to this land from their former homes, and one attraction that was held out to them was freedom. Thus from the eleventh century, a series of interrelated developments can be traced which changed the face of Europe. Towns grew and flourished; trade, banking, and manufacturing became established on a new scale; more and more persons achieved the legal status of free men. Along with all this, and possibly more basic than any of it, vast tracts of land, which had been uninhabited or uninhabitable forest or swamp, were cleared, drained, and subjected to cultivation. The great German "Drive to the East" pushed several hundred miles eastward the boundaries of western and central European settlement, encroaching on the lands of such Eastern Europeans as the Slavs, and producing fateful and enduring consequences. Among the dwellers in the towns was the class of small shopkeepers and craftsmen organized into guilds. Each craft guild regulated a particular branch of economic life in the town. It was a protective organization. On the one hand, it protected its members from outside competition by strictly regulating the conditions under which goods or their makers could come in from other towns. It also protected the members from one another by regulating their hours of work and the number of employees they could hire. Finally, it protected the public by enforcing standards of workmanship on its members. When a boy was accepted by a guild to learn the trade, he was called an apprentice. When his training was completed after a specified period of time -- often seven years -- he became a journeyman, that is, a man who worked by the day. (Journée in French means "day.") He could then become a paid worker for a master of his guild. In some areas, as in Germany, it was customary for a journeyman to have a period of travel (the so-called Wanderjahre) before settling down. The ultimate goal of the apprentice and journeyman was to become a master. 
This meant that he could open his own shop, hire journeymen, and train apprentices. Only the masters were actually members of the guilds and regulated their affairs. To become a master, the aspirant had to satisfy the already existing masters as to his possession of sufficient capital and sufficient competence. One way to fulfill the latter requirement was to produce a piece of work which was worthy of a master, that is, a masterpiece. In addition to the craft guilds, there also existed merchant guilds, organizations of the greater businessmen whose enterprises transcended the boundaries of their town and sometimes their nation. The ordinary workers employed by these great businessmen were not allowed to organize into guilds. They were permanently disfranchised, with no economic or political power. They were most vulnerable to all the dangers of life in a medieval town. Economic conditions in these towns were more unstable than in the countryside, and when business was bad it was the workers who suffered most. Living in crowded, unsanitary conditions, they were hit hardest by epidemics and plagues. In bad crop years, when food was scarce and prices high, they were the ones who went hungry. In good times, they might be quiescent; when things were bad, they formed a permanent source of potential discontent and even of revolutionary violence. It was the guild masters who tended to dominate the towns politically. Sometimes this required a struggle against existing authority: a bishop, a nobleman, or an older ruling class. Invariably, however, it was the guilds who won out. In doing so, they managed to establish governments that were more representative than could be found elsewhere in medieval Europe. The degree of independence enjoyed by the towns varied according to the presence or absence of a strong central authority able to subject them to its rule. In Italy, Switzerland, and Germany, where no such government succeeded in establishing itself permanently, towns achieved virtual independence. In France and England strong monarchies kept the towns under their control. In the Netherlands an intermediate status prevailed. While the towns enjoyed a large measure of autonomy, they had overlords, lay and ecclesiastical, who retained considerable power over them.

POLITICAL LIFE: FEUDALISM

Feudalism is a word of fairly recent origin which was coined to describe the type of government that prevailed in medieval Europe. There has been much debate about the origins of feudalism. Did it spring from Roman or Germanic roots? The answer seems to be that it was primarily Germanic in origin, but that some practices and arrangements that grew up in ancient Rome also made their contribution. Medieval feudalism arose in, and was adapted to, a state of society in which land was the source of wealth and military force the basis of power: an agrarian society under siege. In the early Middle Ages, after the breakdown of Rome and its institutions, and with western Europe subject to attack by Moslems, Norsemen, and Hungarians, feudalism took shape. Until about the end of the thirteenth century, it succeeded fairly well in maintaining order. With innumerable variations in practice, it prevailed throughout the West, though in some areas, notably Italy, it never took deep roots. Feudalism derives its name from the fief (in Latin, feudum). The fief was generally, though not always, a grant of land from one nobleman to another. The one granting the fief was the lord; the recipient was his vassal. 
A lord could have several vassals, and a vassal could have several lords. The same man could be both a lord and a vassal. The relationship between lord and vassal was a personal one; in theory, it could not be inherited or transferred. It was established when the vassal swore an oath to be the man (homo in Latin; homme in French) of the lord. This was the oath of homage. The effect of this oath was to set up a series of reciprocal duties and obligations. The lord had to protect his vassal, the vassal to serve his lord. Service to the lord was strictly defined, and limits could not be exceeded except with the vassal's consent. It involved, first of all, military service. The noble class was a class of fighting men, and its members were knights, men who fought on horseback. (Both the Latin and French words for knight, eques and chevalier respectively, have as their roots words for horseman.) This military service was limited to a certain number of days each year; forty was a common number. The vassal also had to make payments to his lord on specified occasions, such as the marriage of the lord's oldest daughter or the knighting of his oldest son. He had to contribute to his lord's ransom when the latter was captured in battle. He was required to extend hospitality to his lord for a given number of days each year; that is, he had to put up at his own castle not only the lord, but the latter's retinue of persons and animals, which might be a considerable one. The feudal vassal also had the general obligation to give advice and counsel to his lord when called upon to do so. This might involve a summons to the lord's court, where the lord met with all his vassals to adjudicate cases arising out of the feudal relationship. Some breach of obligation or some conflict between lord and vassal, or between vassals, might need to be settled. Here the lord and his vassals were acting in what might be called a judicial capacity. Thus the nobles were involved in two different sorts of courts: the manorial court where the lord or his representative judged cases involving the peasants on the manor, and the feudal court for cases involving nobles. Of course, the lord might require other sorts of advice and counsel. Where the feudal lord was also a king, as in England after the Norman Conquest of 1066, his meetings with his vassals form the embryo of national government. When a king summoned his vassals to give him advice and assistance, this meeting was called the curia regis. The fact that the Latin word curia, as used here, can be translated as either court or council, indicates that the functions of government were not classified and specialized then as they are today. This council, or court, of the king and his advisers took actions that were administrative, judicial, fiscal, military and diplomatic, and even legislative. The word legislative must, however, be used here with caution. In the states that arose during the Middle Ages out of the Germanic kingdoms, the idea of legislation making law did not really exist. Law was identified with the established custom of the tribe or of a specific area, and, therefore, it already existed. Thus the law applicable to a particular situation did not need to be made, but to be found, perhaps by asking the oldest inhabitants of a given area. One of the earliest uses of the jury, which means a sworn body of men, was to provide such information. Of course, new laws were made, but for a long time the process was disguised. 
In England, to take a convenient example, the meetings of the king and his council contained the germ of all the great offices and institutions of royal government. From these meetings, at first so informal and unspecialized, there arose the Chancery, the Treasury, the Exchequer, and the Courts of Common Law. As their activities expanded and their procedures became more complex and elaborate, they tended to acquire an increasing number of functionaries and a body of permanent records. With these developments came necessarily a fixed headquarters; members of the government no longer followed the king but did their business in a permanent location, in Westminster to be exact, though royal judges continued to travel regularly through the country, hearing cases in the king's name. Even Parliament developed out of the primitive curia regis. From time to time, starting, as far as the records show, in the thirteenth century, representatives of the local districts were summoned to appear before the king and his council. These local representatives might come from the shires (counties) or from the boroughs (towns) or both. The kings might summon them to get their consent to new taxation or to some proposed royal policy, or for a variety of other reasons. It was entirely up to the king whether to call them or not; he did not legally require their consent, but apparently found that obtaining it reduced resistance to his policies and thus made the country easier to govern. These representatives, knights of the shires and burgesses from the towns, were not invited to come as part of the royal council; they were summoned to appear before the king and council. Although knights and burgesses might have met separately, they developed a habit of meeting together as one body. One of their functions was to present petitions to the king and council. In addition to local petitions, presented perhaps by representatives from one particular district, there arose the device of common petitions, presented by all the members on behalf of all the communities of the realm. The king and council would examine these petitions, grant some, and deny others. The members or Commons, as they came to be called were not slow to realize that, in asking them for money, the king was putting a powerful weapon in their hands. They could make the granting of funds conditional upon the approval of at least some of their petitions. The kings got the message without difficulty, and thus the power of the purse directly influenced royal decisions. Thus we see the origin of the House of Commons and, in the common petitions granted by the crown, the beginnings of parliamentary legislation. We also see representative government, since the knights and the burgesses were elected. They were not chosen on the basis of what we would today regard as a democratic franchise, but because they were property owners and people of influence. Nevertheless, it is quite likely that they did essentially represent the wishes of the politically conscious groups in the population, and that medieval English government was to a large degree government by consent. Representative government was widespread in the Middle Ages. It was embodied in assemblies of estates, in this connotation meaning distinct social groups or classes. These estates were generally the clergy, the nobility, and the townspeople; in Sweden the peasants came to form a separate estate, but this was unusual. 
In France the Estates-General consisted of the First Estate, or clergy; the Second Estate, or nobility; the Third Estate, which in theory represented the rest of the population, but in practice was made up of men from the towns. The bulk of the French population, the peasantry, was really unrepresented. In France, besides the Estates-General of the realm as a whole, there were local estates in some of the provinces, particularly those which had been most recently added to the territory of France. These local estates served as valuable buffers between the people of the provinces and the demands of the royal government. Although feudal institutions were established throughout western Europe, the political evolution of feudal states did not always move in the same direction. In France and England, strong monarchies developed; in both countries there was a happy combination of favorable circumstances with a remarkably large number of kings of exceptional strength and ability. The case of Germany, however, shows that under less fortunate conditions feudalism might contribute to a breakdown of effective government. Germany in the Middle Ages was more or less identical with the Holy Roman Empire; though the emperors often laid claim to non-German territories, such as Italy, these claims proved more and more difficult to translate into fact. Throughout much of the tenth and eleventh centuries, Germany was one of the most powerful of the European states, but from the later eleventh century this development was arrested. Conflict between emperors and popes was one factor; this conflict gave the German nobles and princes an opportunity to assert their independence of the emperors, and this independence they never lost. Power in the empire came to rest with the princes, some of whom were lay and some ecclesiastical. The German ecclesiastical princes, the archbishops and bishops, were territorial lords as well as spiritual leaders. As towns grew in Germany, they too acquired a share of political power, particularly the imperial free cities, which owed allegiance only to the emperor and thus enjoyed virtual independence. The growing weakness of the emperors made possible such an assertion of power by lay nobles, prelates, and towns. It was both a cause and an effect of this weakness that the position of emperor never became hereditary. Emperors were elected, and although several successive emperors might be chosen from the same family, no family ever succeeded in making the crown hereditary in its own line. Elective monarchies were normally weaker than hereditary ones, partly because of the concessions that candidates for the throne had to make to those who chose them. After 1356, the Holy Roman emperor was always chosen by seven electors. Three of these were princes of the church: the archbishops of Mainz, Trier, and Cologne. Three were the rulers of important territorial states within Germany: the duke of Saxony, the margrave of Brandenburg, and the count Palatine of the Rhine. The seventh elector was the king of Bohemia. These men formed the electoral college and were also the upper house in the assembly of estates in Germany, the Imperial Diet or Reichstag. When meeting as part of the diet, the electoral college consisted of only the six German electors; the king of Bohemia was present only for imperial elections. 
The second house of the diet represented the other German princes, some of whom were powerful rulers in their own right, and the third house (beginning in the sixteenth century) contained delegates from the imperial free cities. The same pattern of a ruler and a body of estates prevailed in the individual German states as well as in the empire as a whole. But an important distinction must be made. While in the separate states the ruler and the estates engaged in struggles for effective power, in the empire as a whole emperor and diet both became steadily weaker. The empire became little more than a very loose collection of states of various sizes, types of government, and degrees of political importance, largely independent of both the emperor and the diet. Some emperors made valiant efforts to reverse this trend, but in the long run these efforts proved ineffective. Nevertheless, in spite of its weaknesses, the Holy Roman Empire was still accorded a type of formal and ceremonial precedence in the courts of Europe. Some emperors who held other positions as well as that of emperor could still play an important role in European affairs.

RELIGION: THE CHURCH AND THE PAPACY

The inhabitant of western Europe in the Middle Ages was a Christian, a part of the Roman church, which claimed for itself the name of Catholic or Universal. The Church of Rome was monarchical in structure, under the leadership of the bishop of Rome, called the pope (from papa or father, a word originally applied to priests in general). Throughout the Middle Ages, the power of the pope within the church grew steadily, though with occasional setbacks. His power with respect to the secular world probably declined from about the start of the fourteenth century or even earlier. Popes were elected, after 1059, by a body of men known as cardinals, who in their turn were chosen by the pope. It came to be the normal practice for the cardinals to elect one of their own number to the papacy. The College of Cardinals not only chose the pope but also served as his chief advisers. They formed part of the papal headquarters at Rome, known as the Roman Curia, which, like the administrative organs of the European states, became more elaborate as the powers and activities of the papacy grew. The analogy between the Curia and a secular government can be carried further. It acquired many of the organs of such governments, including a body of law with a judicial system to apply it, and a highly developed fiscal system. Indeed, in many ways it led the European states in these areas. The law of the church was called canon law. Based on decisions of popes and of general councils, it constituted one of the professional subjects studied in the universities. A student could become a doctor of canon law, and many men ambitious for advancement in the church acquired this degree as a stepping-stone to such advancement. As the powers of the pope and church increased, the scope of canon law broadened, encompassing an ever larger variety of cases and situations. The pope and his Curia presided over a great hierarchy of priests, the clergy. Within this priesthood, the real ruling class of the church was the episcopate, that is, the archbishops and bishops. Each bishop presided over a diocese and took his title from the city which was the site of his cathedral; each archbishop, or metropolitan, was the head of a province, consisting of several dioceses. 
In each diocese, the bishop or his representatives, among their numerous functions, presided over a court in which cases were tried under canon law. Appeals from these local courts could be carried to Rome. Another function of bishops was to ordain priests. Each diocese contained a number of parishes, and each parish required a priest to look after the spiritual needs of the people, the laity. For the spiritual welfare of their flocks, the priests delivered sermons and administered sacraments. The seven sacraments provided for the Christian soul from baptism, normally received shortly after birth, to extreme unction, administered to the dying. Confirmation marked the entry of young persons into the church, and the sacrament of ordination, or Holy Orders, made a man a priest; both these sacraments had to be performed by a bishop. Marriage was a sacrament, and divorce was not allowed, though in certain cases a marriage might be annulled. There were two sacraments which were received frequently: Penance and the Lord's Supper. The sacrament of penance centered on the act of confession to a priest. The individual penitent revealed his sins to the priest, or confessor, who then absolved him, sometimes prescribing acts of penance or satisfaction. (In England the word shrive might be used instead of absolve; the confessor shrove the penitent.) In the Fourth Lateran Council of 1215, it was made mandatory for Christians to confess and take Communion at least once a year, and Easter was the time frequently chosen for this purpose. Communion, or the Lord's Supper or Eucharist, was in many ways the central act of the Christian life. Based on Jesus' last supper with his disciples, this sacrament involved the use of bread and wine, consecrated by the priest and then consumed. Originally both the bread and the wine were offered to the laity, but eventually the wine was withheld from them and drunk by the priest alone, while the consecrated wafer, or host, was given to the laymen. The reservation of the wine to the priest exclusively was concurrent with the rise of the doctrine of transubstantiation. In giving bread and wine to his disciples at the supper, Jesus had referred to them as His body and blood, respectively. These words can be interpreted in numerous ways, and the interpretation that was adopted officially by the church was a literal one. It declared that, although the appearance and physical qualities that is, the accidents of the bread and wine do not change, the essence or substance does. When they are consecrated by the priest, therefore, the substance of the bread and wine actually becomes the substance of Christ's body and blood hence the unwieldy but admittedly precise word, transubstantiation. The effect of the sacraments was to confer divine grace, which was necessary for salvation. The efficacy of the sacrament did not depend on the moral character of the priest; an unworthy priest could administer a sacrament effectively. Only those who had been ordained to the priesthood were qualified to administer the sacraments. The one exception came in the case of an unbaptized infant in imminent danger of dying. To save him from the consequences of dying unbaptized, a layman could perform the rite if no priest were available. Since the sacraments were the keys to salvation, and the priests held these keys, the position of the priest was an exalted one. In this manner, the church interpreted the power of the keys to the kingdom of Heaven promised by Jesus to Peter (see Matthew 16). 
Peter was regarded as the chief of the Apostles and the first bishop of Rome, and the popes as his successors. Respect for the sacraments and doctrines of the church was deeply ingrained, but was not always accompanied by reverence for the clergy. Perhaps it was the exalted spiritual status of the priests that made the ordinary Christian so sharply aware of deviations from the standard required. The archbishops, bishops, and priests, whose offices brought them into constant touch with the lay world, were called the secular clergy. Alongside them there existed the regular clergy, those who lived by a rule (regula in Latin). For the observance of their rules, these regulars were organized in religious orders. Among these were orders of monks, men who lived in disciplined communities, under the vows of poverty, chastity, and obedience. These vows, if faithfully obeyed, meant no belongings of one's own; no contact with women and no impure thoughts; no will of one's own but rather complete submission to one's superiors. The framework of the monastic day was the regular series of religious services from matins to vespers, and within this framework was a highly regulated life of prayer, study, and work. Although the monk's primary aim was the salvation of his own soul, many monasteries were centers of education and scholarship. The monastery was an ideal home for the studious and the contemplative, and much of what was preserved from classical antiquity was preserved by the monks. Some monks were writers themselves, producing works of devotion and of scholarship, including historical, philosophical, theological, and scientific writings. Women too could enter the life of the cloister; orders of nuns abounded, sometimes connected with orders of monks. Among the monastic rules, the most influential was probably the Benedictine Rule of St. Benedict of Nursia (c.480 c.550), which formed the basis of the Benedictine order. There were numerous other rules and orders, among whom the Cistercians and the Carthusians are two of the most famous. Some rules were stricter than others, but all required self-denial and abandonment of the world. Since monks and nuns were bound to an even more austere code than priests, their shortcomings were judged even more harshly, and there grew up an abundant literature of satire and invective directed against lazy, gluttonous, avaricious, and lecherous monks. The nuns were not spared either. Though there was much exaggeration in this criticism, there was no doubt some basis of truth. Not all inmates of monasteries and convents had entered freely as the result of an inner call. Some had been placed there by their parents when they were still too young to choose, and as they grew up and felt the impulses of youth stirring in them, many no doubt bitterly regretted their confinement. Some nunneries became refuges for undowried girls. Some houses, both of men and women, accepted only candidates from noble families and became centers of an aristocratic and elegant way of life, far removed from the ascetic ideal to which they were nominally devoted. As a reaction to these divergences and abuses, reform movements periodically swept through the religious orders and raised their standards to something approaching their original purpose. One of these reform movements produced the orders of friars, of which the two most famous and influential were the Franciscans and Dominicans, founded respectively by the Italian St. Francis of Assisi (1182 1226) and the Spaniard St. 
Dominic (c.1170 1221). These new orders differed from the older monastic organizations by coming into direct contact with the world, by doing works of social service or by preaching. The official name of the Dominicans, for example, is the Order of Friars Preachers. The friars were originally mendicants, or beggars, living on alms given them voluntarily by the laity. They were dedicated to the ideal of Christian poverty, a constant theme among reformers of the clergy. However, these mendicant orders found it impossible to remain poor; so popular and successful did they become that they were showered with gifts and legacies and became rich and powerful. Consequently reformers arose periodically within these orders and strove to recall them to their pristine simplicity, humility, and poverty. Both the Franciscans and Dominicans distinguished themselves in the fields of education and learning. Many of the most distinguished philosophers, scientists, and theologians of the Middle Ages were members of these orders. The Dominicans became especially prominent also in the work of the Inquisition, an ecclesiastical tribunal set up with the primary mission of seeking out and prosecuting cases of heresy. For this work they were called, in a Latin pun, Domini canes or the "Hounds of the Lord." The two orders became rivals; in the fifteenth century, when the question of the Immaculate Conception of the Virgin came to be a subject of debate within the church, the Franciscans espoused this doctrine, while the Dominicans opposed it. The impact of the friars was profound. They came into existence at a time when religious ferment and questioning seemed to endanger the church, and they succeeded in guiding these currents into approved channels. In the great church of St. Francis at Assisi there is a fresco showing one of the popes, in a dream, seeing a vision of Francis upholding the tottering church. In time, the friars themselves became the objects of much criticism; Dante, for example, in the Divine Comedy, has some harsh things to say about them. In spite of all the criticism of the religious, the ordinary layman undoubtedly considered himself a faithful Christian. It has long been a historical cliché to refer to the Middle Ages as the Age of Faith, as though religious feeling dominated the minds of the masses. Such things, however, cannot be measured. Religious feeling may indeed have been stronger and more widespread than now, but there must also have been innumerable conventional Christians, who accepted without much reflection what the church taught them but who behaved in much the same way as they would have done if they had not been Christians. What we can surely say is that the influence of the organized church was greater than it is today. But the organized church and religious feeling are two separate things, sometimes, but not always, related to each other. In any event, there was universal acceptance of the Christian interpretation of the nature of the universe and man's place in it. God had created man and woman perfect and placed them in an earthly paradise, endowed with freedom of the will to choose whether or not to obey Him. At the onset of temptation, they had disobeyed. This was the Fall, and it resulted in their expulsion from the garden and the loss of their free will. Henceforth, all the human race was tainted with the result of the first transgression, that is, with original sin. It was this sin that corrupted the will, so that man was no longer free to choose good and reject evil. 
In this state he could not hope for salvation. To redeem man from this desperate condition, God in His mercy sent His Son, both God and man, to redeem sinners and open the way of salvation. Thus Jesus had lived among men, suffered on the cross, risen from the grave, and ascended to Heaven. He left behind him the church, the body of Christ, to make available to mediate redemption to men, through the sacraments. Man's will was not completely in bondage to sin; he could cooperate with divine grace in the work of his salvation. Salvation was offered to all; each was free to accept or to reject it. Those who rejected it were damned; they faced an eternity of torment in Hell. Those whose lives were outstanding for merit and holiness might be received directly in Heaven; that is, they might be saints. Most Christians, dying in the bosom of the church and duly repentant of their sins, went to Purgatory, where, as the word indicates, they were to be cleansed or purged of their sins. The growing importance of the idea of Purgatory had numerous effects on religious practices. The sacrament of penance came to be regarded largely as a means of reducing the amount of purgatorial punishment required for the penitent. Those who were still alive on earth could help in various ways to shorten the period of purgatorial punishment for those who had died. One way was by praying for them; the souls whom Dante encounters in Purgatory frequently ask him to convey to their surviving friends and relatives their desire for prayers. Endowments were established for priests to say Masses for the souls of the dead; in England these endowments were called chantries. One of the chief purposes of the foundation of monasteries was to have regular prayers for the souls of the founder and members of his family. One prayed to the saints in Heaven, but one prayed for the souls in Purgatory. It was the idea of Purgatory that was responsible for the popularity and importance of indulgences. These came into use as a means of attracting men to participate in the Crusades, the series of expeditions which, beginning shortly before 1100, tried to wrest the Holy Land from the Moslem infidels and return it to Christian hands. To induce men to go on these expeditions to take the Cross, the popes offered full remission of sins to those who fell in battle. This was a plenary indulgence and amounted to a promise of immediate admittance to Heaven for those who gave their lives for the faith. From this simple beginning, indulgences enjoyed a rather luxuriant development. Those who donated money for a Crusade might receive the rewards promised to those who went. Visitors to Rome during Jubilee years, the first of which was 1300, could receive indulgences. Indulgences might be granted for specific purposes, to specific territories, and might not be full indulgences but carry only the promise of the remission of a certain amount of purgatorial punishment. They might be hedged about with very careful restrictions and qualifications. However, the practice was undoubtedly corrupted by the increasing rise of the indulgence as a moneymaking device. Salesmen of indulgences were sent out, who in their zeal to attract buyers made extravagant claims which were eagerly swallowed by the simple. The buyers thought they were acquiring immediate tickets to Heaven without the need for repentance or change of heart. These indulgences could also be purchased on behalf of the souls of the departed. 
The theory of a "Treasury of Merits," which developed from the thirteenth century, made this possible. According to this theory, the saints, while on earth, had performed good works beyond what was required for their own salvation. It was these good works that constituted the treasury, which was inexhaustible and at the disposal of the pope. It is not strange that the abuses of indulgences aroused objections among earnest Christians before Luther's protest against them; they appeared to make salvation mechanical and relieve the individual of responsibility for the state of his soul. There were numerous beliefs and practices which, although approved by the church, were susceptible to abuse through ignorance or exaggeration -- for instance, the veneration of relics. A relic was an object which had been associated with a saint: an article of clothing perhaps, or the instrument of a martyr's suffering, or even a part of the body. While the church sanctioned the veneration of such relics as a way of fostering the desire to emulate the saint's virtues and holiness, it did not approve of the actual worship of such objects. Many persons, however, appear to have worshipped them, often in the hope of obtaining supernatural help thereby. Relics also were subject to abuse for monetary reasons. A church which had an impressive relic or collection of relics attracted pilgrims, who often left money. Numerous relics were spurious, some no doubt through ignorance but others as a result of deliberate deception. Erasmus commented wryly that there were enough pieces of wood from the cross on which Jesus had been crucified to build a ship. But there was a deeper issue. Even if the relics were what they purported to be, and had indeed belonged to a saint, what benefit could accrue to the believer merely from viewing such objects? And was it desirable for people to leave their homes, families, and work to go on pilgrimages for long distances to see such relics? Always present was the danger of externalization of religion, and with the growth of the church into a powerful and complex machine, this danger increased. Hence the repeated call for a religion of the heart, an inward religion which would express itself in a cleansing of the soul and in works of love and service toward one's neighbor. Popular religion contained a good many elements of superstition, some of which came down from antiquity. There was general belief in witches and their malign powers; the church itself upheld this belief and in fact did a good deal to perpetuate it. The air was filled with good and evil spirits, able to help or harm. The line between the natural and the supernatural was not clear, and men and women lived constantly in the presence of occult forces, against which their religion provided the best protection. Thus religion itself was often used as a sort of magic charm.

INTELLECTUAL AND ARTISTIC CURRENTS

In the field of scholarship and learning, the Middle Ages invented the university. Universities began to come into existence in the twelfth century, arising out of existing institutions such as cathedral schools. A university was a guild (universitas), either of teachers, like the University of Paris, or of students, like the University of Bologna. It was divided into faculties, each of which was responsible for a body of subject matter. The boy who entered a university (girls were not admitted) might be quite young, fifteen years old or even younger. 
He matriculated in the faculty of Liberal Arts, also called the philosophical faculty, where he studied first the trivium of grammar, logic, and rhetoric. Successful completion of this program entitled him to the degree of Bachelor of Arts. From this he went to the quadrivium of arithmetic, geometry, astronomy, and music. Study of these subjects entitled him to the degree of Master of Arts, which was essentially a license to teach. He was then a member of the guild of teachers and could rent a room and advertise for students, who were expected to pay him a fee. Many Masters of Arts, in addition to their teaching, went on to study in one of the "higher" faculties: medicine, law, or theology. There were two branches of legal study, canon law and civil or Roman law, based on the Code of Justinian. A student might have a degree in either law, or he might be a Doctor of Both Laws (J.U.D. or Juris Utriusque Doctor). The degrees of master and doctor were not distinguished then as they are today, but were more or less synonymous. Certain universities specialized in one subject or another: Bologna was noted for legal study, Paris for theology. Theology had the greatest prestige of any subject, and the theological faculty at Paris was the outstanding one in Europe: Its pronouncements on doctrinal matters had an almost official standing. The development of universities was stimulated by a great increase in knowledge based on the Greek and Roman classics. Many classical works neglected in Europe for centuries began to come back into circulation in the twelfth century, reaching Europe from Arab sources. The Arabs had preserved the tradition of Greek learning during the centuries in which it had been largely lost to Europeans. Of particular importance were the writings of Aristotle, covering a large number of what we would now call scientific and philosophical fields. His prestige in the medieval schools was immense; when Thomas Aquinas referred simply to "The Philosopher," and when Dante mentioned "The master of those who know," it was not necessary to give his name. Scholarship in the universities employed what is known as the scholastic method. This involved the use of a rigorous technique of logical reasoning starting with premises supplied by some standard authority. The authorities were relatively few in number; for theology, there were chiefly the Bible, the writings of the church fathers and the Sentences of the twelfth-century writer Peter Lombard. For secular fields of knowledge the classical texts were widely used. When authorities appeared to disagree, the scholastic writers did not choose one and reject the other, but instead developed a technique for reconciling the apparent differences and explaining away the disagreement. The scholastic method produced great works of synthesis, like the Summa theologiae of Thomas Aquinas (c.1225 74). It encouraged close analysis of texts and fostered depth if not breadth of understanding. It ran the risk of degenerating into subtle and abstruse reasoning about things of minor importance, and of losing contact with the realities of the world what Francis Bacon was to call "the commerce of the mind with things." It would be a mistake to think, however, that there was no interest in nature in the Middle Ages. Important investigation and speculation on scientific subjects were carried on, especially by some Franciscans at Oxford and by a group of Paris scholars in the fourteenth century. 
Such matters as gravity and motion were studied, and the way was prepared for the scientific revolution which started in the sixteenth century. In art and literature the Middle Ages produced impressive monuments. It is interesting to recall that the word Gothic, applied to works of art, originally meant barbarous and was a term of disparagement. Today we recognize the so-called Gothic cathedrals as outstanding achievements in art, engineering, and religion. Even before the rise of Gothic, there were fine medieval structures in the style known as Romanesque, the English equivalent of which is called Norman. Romanesque or Norman architecture was capable of producing effects of solidity and grandeur, with its massive piers and round arches. It ran the risk of excessive heaviness, and its churches were likely to be dark and gloomy inside, because it had not solved the problem of introducing sufficient light. Since the walls supported the structure, they had to be very heavy, allowing little space for windows. To solve these problems the Gothic style was worked out. One of its most important accomplishments was to transfer the weight of the building from the walls to the flying buttresses. Since the walls no longer had to bear such a heavy burden, windows could be greatly enlarged. These windows were often made of stained glass, with magnificent effects of color and light. The rounded arch of the Romanesque gave way to a pointed arch, and churches became higher, with their vaults soaring far over the heads of worshippers. The piers were less massive than in the Romanesque. As a result of all these changes, the Gothic cathedrals produced an effect of soaring, of aspiration toward the heavens. The solution of structural problems had made possible a greater expression of religious feeling. Sculpture was also represented in these cathedrals. Some of the statues of saints and of characters from Bible history are vivid and lifelike, refuting any belief that medieval people were not interested in accurate observation of nature. This is further borne out by the decorative carvings of animals, fruits, and flowers, which also show a keen observation and love of nature. Thus the cathedral provided a kind of synthesis of the arts. In the wide range of subject matter encompassed by the sculpture and especially by the stained-glass windows, these great buildings might be considered a synthesis of all medieval life and thought. There was a strong tendency in the Middle Ages toward such building of syntheses. The cathedrals are not the only evidence of this; Aquinas created a great synthesis of theology, and Dante, in his Divine Comedy, accomplished a feat analogous to these.

Dante Alighieri (1265-1321) is not easy to classify; possibly none of the greatest men are. More than any other individual he embodies the philosophical, theological, and literary currents of the Middle Ages, but in some ways he looks forward to a later period. He was a Florentine of noble birth who took an active part in the political life of his native city until, in 1302, the faction to which he belonged was defeated, and he became an exile, with a price on his head. He never returned to Florence, although in later years he could have done so in safety, because he would first have been required to undergo a public ceremony of expiation. 
He would never admit that he had done anything that required forgiveness, and so he preferred to endure the humiliation and loneliness of exile rather than return to the city which alone he thought of as his home. This proud, austere man was familiar with all the intellectual activity of his time, and was further endowed with extraordinary depth of thought and feeling, together with poetic genius. He called his great poem the Comedy; a later generation called it divine. The subject of the Divine Comedy could not be broader in scope or more sublime; it is the journey of the individual soul through Hell, Purgatory, and Heaven. This at least is its story, but there are numerous other levels of meaning, conveyed by allegory and symbol, so that Dante is able to deal with all of his deepest concerns and indeed with all the most significant issues of his day. The poem tells of Dante himself, lost in a dark forest and threatened by wild beasts, rescued by the Roman poet Virgil, who was greatly revered in the Middle Ages because of a mistaken notion that he had foretold the coming of Christianity. Virgil guides Dante first through Hell (the Inferno). Here are all the sinners who are condemned to eternal torment, with the type of suffering always appropriate to the transgression. In addition to figures from the Bible or from Greek and Roman antiquity, there are many from more recent times, including some known personally to Dante. A number of popes are in Hell. Hell is located beneath the earth, in a series of circles through which Dante and his guide descend to the bottom, at the center of the earth. In the prevailing geocentric view of the cosmos, this is the center of the universe. Here, frozen in ice, Satan holds in each of his three mouths one of the blackest sinners of all mankind: Judas, who betrayed the Lord, and Brutus and Cassius, who betrayed Caesar. In this way Dante shows his reverence for the divine mission of the Roman Empire, of which Caesar is the symbol. Near the beginning of their journey, Virgil leads Dante to Limbo, where he meets the great figures of antiquity. Since they lived before Christ, they were never able to receive Christian baptism. They must therefore remain forever cut off from the presence of God, though they are not subject to actual physical torment. Here Dante shows a greater severity than some of his contemporaries. In Limbo, Dante is introduced to the great poets; in addition to Virgil, there are Homer, Horace, Lucan, and Ovid. Among them, says Dante without false modesty, "I made a sixth." From the terrors of Hell, Virgil leads Dante to Purgatory, where the souls of those who died repentant are expiating their sins. Purgatory is a place of punishment and hope; those who have been sent there will reach Heaven in due time. For the final journey to Heaven, Virgil must turn Dante over to another guide. The great Roman poet, as we have seen, is shut out from God's presence. In his allegorical meanings, Virgil represents human reason or philosophy, which cannot by itself lead to God. For this, grace is required. The embodiment of grace, and of theology, is another real person, Beatrice. From childhood, Dante loved from afar Beatrice Portinari, who appears in his work in a glorified form as the bearer of his highest ideals. Now she leads him through Paradise, where he meets many of the great saints. When these saints discuss those still on earth who should be their followers, they become filled with indignation. Thus Peter castigates the popes, and St. 
Francis and St. Dominic express their disappointment over their own unworthy followers. The climax of the poem comes when Dante, under the guidance now of St. Bernard, receives a beatific vision of the Trinity itself, and the poem concludes with an invocation of the Virgin Mary. Each of the three main sections of the Divine Comedy -- "Inferno," "Purgatory," "Paradise" -- ends with the word stars. In the course of his poem, Dante deals with a vast array of subjects. He treats the urgent theological and philosophical problems of his age. He reveals the nature and arrangement of the universe as it appeared to educated men. He discusses developments in poetry, and pays tribute to his predecessors in the field. He draws on classical history and literature, showing how much was known about antiquity. And he concerns himself with the history of Italy and Florence in recent times, showing how strongly he feels about developments there. Throughout, he manages to tell us a good deal about himself what sort of man he was. He emerges as a highly developed, self-conscious individual, aware of his own greatness, very much his own man, somewhat isolated, and, although highly critical of church and papacy, deeply religious. Even as Dante wrote, forces were at work that were to cause the dissolution of medieval civilization. These forces will be the subject of our next chapter.
http://vlib.iue.it/carrie/texts/carrie_books/gilbert/01.html
Prostitution may be the oldest profession, but tax collection was surely not far behind. In its early days, taxation did not always involve handing over money. The ancient Chinese paid with pressed tea, and Jivaro tribesmen in Brazil stumped up shrunken heads. As the price of their citizenship, ancient Greeks and Romans could be called on to serve as soldiers and had to supply their own weapons. The origins of modern taxation can be traced to wealthy subjects paying money to their king in lieu of military service. The other early source of tax revenue was trade, with tolls and customs duties being collected from travelling merchants. The big advantage of these taxes was that they fell mostly on visitors rather than residents. Income tax, the biggest source of government funds today in most countries, is a comparatively recent invention, probably because the notion of annual income is itself a modern concept. Governments preferred to tax things that were easy to measure and on which it was thus easy to calculate the liability. This is why early taxes concentrated on tangible items such as land and property, physical goods, commodities and ships, as well as things such as the number of windows or fireplaces in a building.

In the 20th century, particularly the second half, governments around the world took a growing share of their country's national income in tax, mainly to pay for increasingly expensive defence efforts and for a modern welfare state. Indirect taxation on consumption, such as value-added tax, has become increasingly important as direct taxation on income and wealth has become increasingly unpopular. But big differences among countries remain. One is the overall level of tax. For example, in the United States tax revenue amounts to around one-third of its GDP, whereas in Sweden it is closer to half. Others are the preferred methods of collecting it (direct versus indirect), the rates at which it is levied and the definition of the tax base to which these rates are applied. Countries have different attitudes to progressive and regressive taxation. There are also big differences in the way responsibility for taxation is divided among different levels of government.

Arguably, any tax is a bad tax. But public goods and other government activities have to be paid for somehow, and economists often have strong views on which methods of taxation are more or less efficient. Most economists agree that the best tax is one that has as little impact as possible on people's decisions about whether to undertake a productive economic activity. High rates of tax on labour may discourage people from working, and so result in lower tax revenue than there would be if the tax rate were lower, an idea captured in the Laffer curve. Certainly, the marginal rate of tax may have a bigger effect on incentives than the overall tax burden. Land tax is regarded as the most efficient by some economists and tax on expenditure by others, as it does all the taking after the wealth creation is done. Some economists favour a neutral tax system that does not influence the sorts of economic activities that take place. Others favour using tax, and tax breaks, to guide economic activity in ways they favour, such as to minimise pollution and to increase the attractiveness of employing people rather than capital. 
Some economists argue that the tax system should be characterised by both horizontal equity and vertical equity, because this is fair, and because when the tax system is fair people may find it harder to justify tax avoidance and tax evasion. However, who ultimately pays (the tax incidence) may be different from who is initially charged, if that person can pass it on, say by adding the tax to the price he charges for his output. Taxes on companies, for example, are always paid in the end by humans, be they workers, customers or shareholders.
http://www.economist.com/economics-a-to-z/t
The African-American Civil Rights Movement was a group of social movements in the United States aimed at outlawing racial discrimination against black Americans and restoring voting rights to them. This article covers the phase of the movement between 1955 and 1968, particularly in the South. The wave of inner-city riots from 1964 through 1970 undercut support from the white community. The emergence of the Black Power Movement, which lasted from about 1966 to 1975, challenged the established black leadership for its cooperative attitude and its nonviolence, and instead demanded political and economic self-sufficiency.

The movement was characterized by major campaigns of civil resistance. Between 1955 and 1968, acts of nonviolent protest and civil disobedience produced crisis situations between activists and government authorities. Federal, state, and local governments, businesses, and communities often had to respond immediately to these situations, which highlighted the inequities faced by African Americans. Forms of protest and civil disobedience included boycotts, such as the successful Montgomery Bus Boycott (1955–56) in Alabama; "sit-ins," such as the influential Greensboro sit-ins (1960) in North Carolina; marches, such as the Selma to Montgomery marches (1965) in Alabama; and a wide range of other nonviolent activities.

Noted legislative achievements during this phase of the Civil Rights Movement were passage of the Civil Rights Act of 1964, which banned discrimination based on "race, color, religion, or national origin" in employment practices and public accommodations; the Voting Rights Act of 1965, which restored and protected voting rights; the Immigration and Nationality Act of 1965, which dramatically opened entry to the U.S. to immigrants other than traditional European groups; and the Fair Housing Act of 1968, which banned discrimination in the sale or rental of housing. African Americans re-entered politics in the South, and across the country young people were inspired to action.

Following the American Civil War, three constitutional amendments were passed: the 13th Amendment, which ended slavery; the 14th Amendment, which gave African Americans citizenship; and the 15th Amendment, which gave African American males the right to vote. From 1865 to 1877 the United States underwent a turbulent Reconstruction Era, during which states in the South resisted the enforcement of these constitutional amendments as former Confederate states were brought back into the Union. In 1871, President Ulysses S. Grant, the U.S. Army, and U.S. Attorney General Amos T. Akerman initiated a campaign to destroy the Ku Klux Klan under the Enforcement Acts. However, some states were reluctant to enforce the federal measures of the acts, and other white supremacist groups arose that violently opposed African American equality and suffrage. After the disputed election of 1876 resulted in the end of Reconstruction, whites in the South regained political control of the region through mounting intimidation and violence in the elections. Systematic disfranchisement of African Americans took place in Southern states from 1890 to 1908 and lasted until national civil rights legislation was passed in the mid-1960s. For more than 60 years, for example, blacks in the South were not able to elect anyone to represent their interests in Congress or local government.
During this period, the white-dominated Democratic Party regained political control over the South. The Republican Party—the "party of Lincoln"—which had been the party that most blacks belonged to, shrank to insignificance as black voter registration was suppressed. Until 1965, the "solid South" was a one-party system. Outside a few areas (usually in remote Appalachia), the Democratic Party nomination was tantamount to election for state and local office. Most of the Republican Party organizations in the South were controlled by African Americans, and they were represented in the national conventions that nominated Republican presidential candidates. Booker T. Washington was a highly visible advisor to Republican presidents Theodore Roosevelt and William Howard Taft, especially on the matter of federal patronage jobs.

During the same time as African Americans were being disfranchised, white Democrats imposed racial segregation by law. Violence against blacks increased. The system of de jure state-sanctioned racial discrimination and oppression that emerged out of the post-Reconstruction South became known as the "Jim Crow" system. It remained virtually intact into the mid-1950s, when the push to integrate public schools began. Thus, the early 20th century is a period often referred to as the "nadir of American race relations". While problems and civil rights violations were most intense in the South, social tensions affected African Americans in other regions as well.

African Americans and other racial minorities rejected this regime. They resisted it in numerous ways and sought better opportunities through lawsuits, new organizations, political redress, and labor organizing (see the African-American Civil Rights Movement (1896–1954)). The National Association for the Advancement of Colored People (NAACP) was founded in 1909. It fought to end race discrimination through litigation, education, and lobbying efforts. Its crowning achievement was its legal victory in the Supreme Court decision Brown v. Board of Education (1954), which rejected separate white and colored school systems and by implication overturned the "separate but equal" doctrine established in Plessy v. Ferguson. The situation for blacks outside the South was somewhat better (in most states they could vote and have their children educated, though they still faced discrimination in housing and jobs). From 1910 to 1970, African Americans sought better lives by migrating north and west. A total of nearly seven million blacks left the South in what was known as the Great Migration.

Invigorated by the victory of Brown and frustrated by its lack of immediate practical effect, private citizens increasingly rejected gradualist, legalistic approaches as the primary tool to bring about desegregation. They were faced with "massive resistance" in the South by proponents of racial segregation and voter suppression. In defiance, African Americans adopted a combined strategy of direct action with nonviolent resistance known as civil disobedience, giving rise to the African-American Civil Rights Movement of 1955–68. The strategy of public education, legislative lobbying, and litigation that had typified the Civil Rights Movement during the first half of the 20th century broadened after Brown to a strategy that emphasized "direct action"—primarily boycotts, sit-ins, Freedom Rides, marches, and similar tactics that relied on mass mobilization, nonviolent resistance, and civil disobedience.
This mass action approach typified the movement from 1960 to 1968. Churches, local grassroots organizations, fraternal societies, and black-owned businesses mobilized volunteers to participate in broad-based actions. This was a more direct and potentially more rapid means of creating change than the traditional approach of mounting court challenges.

In 1952, the Regional Council of Negro Leadership (RCNL), led by T. R. M. Howard, a black surgeon, entrepreneur, and planter, organized a successful boycott of gas stations in Mississippi that refused to provide restrooms for blacks. Through the RCNL, Howard led campaigns to expose brutality by the Mississippi state highway patrol and to encourage blacks to make deposits in the black-owned Tri-State Bank of Memphis, which, in turn, gave loans to civil rights activists who were victims of a "credit squeeze" by the White Citizens' Councils.

The Montgomery Improvement Association, created to lead the Montgomery Bus Boycott, managed to keep the boycott going for over a year until a federal court order required Montgomery to desegregate its buses. The success in Montgomery made its leader, Dr. Martin Luther King, Jr., a nationally known figure. It also inspired other bus boycotts, such as the highly successful Tallahassee, Florida, boycott of 1956–57. In 1957, Dr. King and Rev. Ralph Abernathy, the leaders of the Montgomery Improvement Association, joined with other church leaders who had led similar boycott efforts, such as Rev. C. K. Steele of Tallahassee and Rev. T. J. Jemison of Baton Rouge, and with other activists such as Rev. Fred Shuttlesworth, Ella Baker, A. Philip Randolph, Bayard Rustin, and Stanley Levison, to form the Southern Christian Leadership Conference. The SCLC, with its headquarters in Atlanta, Georgia, did not attempt to create a network of chapters as the NAACP did. It offered training and leadership assistance for local efforts to fight segregation. The headquarters organization raised funds, mostly from Northern sources, to support such campaigns. It made non-violence both its central tenet and its primary method of confronting racism.

In 1959, Septima Clark, Bernice Robinson, and Esau Jenkins, with the help of the Highlander Folk School in Tennessee, began the first Citizenship Schools in South Carolina's Sea Islands. They taught literacy to enable blacks to pass voting tests. The program was an enormous success and tripled the number of black voters on Johns Island. SCLC took over the program and duplicated its results elsewhere.

In the spring of 1951, great turmoil was felt among Black students over the conditions of Virginia's segregated educational system. At Moton High School in Prince Edward County, students decided to take matters into their own hands and protest against two things: the overcrowded school premises and the unsuitable conditions of the building. Such defiance by Black people in the South was unexpected and, in the eyes of many whites, unacceptable, since whites expected Blacks to act in a subordinate manner. Some local leaders of the NAACP had tried to persuade the students to back down from their protest against the Jim Crow laws of school segregation. When the students refused to back down, the NAACP joined their battle against school segregation. This became one of the five cases that made up what is known today as Brown v. Board of Education. On May 17, 1954, the U.S.
Supreme Court handed down its decision in the case called Brown v. Board of Education of Topeka, Kansas, in which the plaintiffs charged that the education of black children in separate public schools from their white counterparts was unconstitutional. The opinion of the Court stated that the "segregation of white and colored children in public schools has a detrimental effect upon the colored children. The impact is greater when it has the sanction of the law; for the policy of separating the races is usually interpreted as denoting the inferiority of the Negro group."

The lawyers from the NAACP had to gather plausible evidence in order to win Brown v. Board of Education. Their way of addressing the issue of school segregation was to enumerate several arguments. One pertained to exposure to interracial contact in a school environment: they argued that such contact would help children cope with the pressures that society exerts in regard to race and so give them a better chance of living in a democracy. Another emphasized how "'education' comprehends the entire process of developing and training the mental, physical and moral powers and capabilities of human beings." In Goluboff's account, the NAACP's goal was to bring to the Court's awareness the fact that African American children were the victims of legalized school segregation and were not guaranteed a bright future; without the opportunity to be exposed to other cultures, Black children would be impaired later on as adults trying to live normal lives.

The Court ruled against both Plessy v. Ferguson (1896), which had established the segregationist "separate but equal" standard in general, and Cumming v. Richmond County Board of Education (1899), which had applied that standard to schools. The following year, in the decision known as Brown II, the Court ordered segregation to be phased out over time, "with all deliberate speed". Brown v. Board of Education of Topeka, Kansas (1954) did not formally overturn Plessy v. Ferguson (1896): Plessy concerned segregation in transportation, while Brown dealt with segregation in education. Brown did, however, set in motion the future overturning of "separate but equal".

On May 18, 1954, Greensboro became the first city in the South to publicly announce that it would abide by the U.S. Supreme Court's Brown v. Board of Education ruling, which declared racial segregation in the nation's public schools unconstitutional. "It is unthinkable," remarked School Board Superintendent Benjamin Smith, "that we will try to [override] the laws of the United States." In agreement with Smith's position, the school board voted six to one to support the court's ruling. This positive reception for Brown, together with the appointment of African American Dr. David Jones to the school board in 1953, convinced numerous white and black citizens that Greensboro was heading in a forward direction and would likely emerge as a leader in school integration. Integration in Greensboro occurred rather peacefully compared to the process in Southern states such as Alabama, Arkansas, and Virginia, where "massive resistance" took hold.
On December 1, 1955, nine months after a 15-year-old high school student, Claudette Colvin, refused to give up her seat on a public bus to make room for a white passenger, Rosa Parks (the "mother of the Civil Rights Movement") did the same thing. Parks was secretary of the Montgomery NAACP chapter and had recently returned from a meeting at the Highlander Center in Tennessee where nonviolent civil disobedience as a strategy had been discussed. Parks was arrested, tried, and convicted for disorderly conduct and for violating a local ordinance. After word of this incident reached the black community, 50 African-American leaders gathered and organized the Montgomery Bus Boycott to demand a more humane bus transportation system. However, after many reforms were rejected, the NAACP, led by E.D. Nixon, pushed for full desegregation of public buses. With the support of most of Montgomery's 50,000 African Americans, the boycott lasted for 381 days, until the local ordinance segregating African Americans and whites on public buses was lifted. Ninety percent of African Americans in Montgomery took part in the boycott, which reduced bus revenue by 80% until a federal court ordered Montgomery's buses desegregated in November 1956, and the boycott ended. A young Baptist minister named Martin Luther King, Jr. was president of the Montgomery Improvement Association, the organization that directed the boycott. The protest made King a national figure. His eloquent appeals to Christian brotherhood and American idealism created a positive impression on people both inside and outside the South.

Little Rock, the state capital, was in a relatively progressive Southern state. A crisis erupted, however, when Governor of Arkansas Orval Faubus called out the National Guard on September 4 to prevent entry to the nine African-American students who had sued for the right to attend an integrated school, Little Rock Central High School. The nine students had been chosen to attend Central High because of their excellent grades. On the first day of school, only one of the nine students showed up, because she had not received the phone call warning of the danger of going to school. She was harassed by white protesters outside the school, and the police had to take her away in a patrol car to protect her. Afterward, the nine students had to carpool to school and be escorted by military personnel in jeeps.

Faubus was not a proclaimed segregationist. The Arkansas Democratic Party, which then controlled politics in the state, put significant pressure on Faubus after he had indicated he would investigate bringing Arkansas into compliance with the Brown decision. Faubus then took his stand against integration and against the federal court order that required it. Faubus' order received the attention of President Dwight D. Eisenhower, who was determined to enforce the orders of the federal courts, although critics had charged he was lukewarm, at best, on the goal of desegregating public schools. Eisenhower federalized the National Guard and ordered its members to return to their barracks. Eisenhower then deployed elements of the 101st Airborne Division to Little Rock to protect the students. The students were able to attend high school. They had to pass through a gauntlet of spitting, jeering whites to arrive at school on their first day, and to put up with harassment from fellow students for the rest of the year.
Although federal troops escorted the students between classes, the students were still teased and even attacked by white students when the soldiers were not around. One of the Little Rock Nine, Minnijean Brown, was suspended for spilling a bowl of chili on the head of a white student who was harassing her in the school lunch line. Later, she was expelled for verbally abusing a white female student. Only one of the Little Rock Nine, Ernest Green, got the chance to graduate; after the 1957–58 school year was over, the Little Rock school system decided to shut its public schools completely rather than continue to integrate. Other school systems across the South followed suit.

In 1958, the NAACP Youth Council sponsored sit-ins at a Dockum Drug Store in downtown Wichita, Kansas. After three weeks, the movement successfully got the store to change its policy, and soon afterward all Dockum stores in Kansas were desegregated. This movement was quickly followed in the same year by a student sit-in at a Katz Drug Store in Oklahoma City led by Clara Luper, which also was successful.

The Civil Rights Movement received an infusion of energy with a student sit-in at a Woolworth's store in Greensboro, North Carolina. On February 1, 1960, four students from North Carolina Agricultural & Technical College, an all-black college (Ezell A. Blair, Jr., now known as Jibreel Khazan; David Richmond; Joseph McNeil; and Franklin McCain), sat down at the segregated lunch counter to protest Woolworth's policy of excluding African Americans. The four students purchased small items in other parts of the store and kept their receipts, then sat down at the lunch counter and asked to be served. After being denied service, they produced their receipts and asked why their money was good everywhere else at the store, but not at the lunch counter. These protesters were encouraged to dress professionally, to sit quietly, and to occupy every other stool so that potential white sympathizers could join in. The Greensboro sit-in was quickly followed by other sit-ins in Richmond, Virginia; Nashville, Tennessee; and Atlanta, Georgia. As students across the South began to "sit in" at the lunch counters of a few of their local stores, local authority figures sometimes used brute force to physically escort the demonstrators from the lunch facilities.

The "sit-in" technique was not new; as far back as 1939, African-American attorney Samuel Wilbert Tucker had organized a sit-in at the then-segregated Alexandria, Virginia, library. In 1960 the technique succeeded in bringing national attention to the movement. The success of the Greensboro sit-in was followed by a rash of student campaigns throughout the South. Probably the best organized, most highly disciplined, and most immediately effective of these was in Nashville, Tennessee. On March 9, 1960, an Atlanta University Center group of students released An Appeal for Human Rights as a full-page advertisement in newspapers, including the Atlanta Constitution, Atlanta Journal, and Atlanta Daily World. This student group, known as the Committee on the Appeal for Human Rights (COAHR), initiated the Atlanta Student Movement and began to lead in Atlanta with sit-ins starting on March 15, 1960. Demonstrators focused not only on lunch counters but also on parks, beaches, libraries, theaters, museums, and other public places.
Upon being arrested, student demonstrators made "jail-no-bail" pledges to call attention to their cause and to reverse the cost of protest, thereby saddling their jailers with the financial burden of prison space and food. In April 1960, activists who had led these sit-ins were invited by SCLC activist Ella Baker to hold a conference at Shaw University in Raleigh, North Carolina. This conference led to the formation of the Student Nonviolent Coordinating Committee (SNCC). SNCC took these tactics of nonviolent confrontation further, to the freedom rides.

Freedom Rides were journeys by civil rights activists on interstate buses into the segregated southern United States to test the United States Supreme Court decision Boynton v. Virginia, 364 U.S. (1960), which ended segregation for passengers engaged in interstate travel. Organized by CORE, the first Freedom Ride of the 1960s left Washington, D.C. on May 4, 1961, and was scheduled to arrive in New Orleans on May 17. During the first and subsequent Freedom Rides, activists traveled through the Deep South to integrate seating patterns and desegregate bus terminals, including restrooms and water fountains. That proved to be a dangerous mission. In Anniston, Alabama, one bus was firebombed, forcing its passengers to flee for their lives. In Birmingham, Alabama, an FBI informant reported that Public Safety Commissioner Eugene "Bull" Connor gave Ku Klux Klan members fifteen minutes to attack an incoming group of freedom riders before having police "protect" them. The riders were severely beaten "until it looked like a bulldog had got a hold of them." James Peck, a white activist, was beaten so badly he required fifty stitches to his head.

Mob violence in Anniston and Birmingham temporarily halted the rides, but SNCC activists from Nashville brought in new riders to continue the journey from Birmingham. In Montgomery, Alabama, at the Greyhound Bus Station, a mob charged another busload of riders, knocking John Lewis unconscious with a crate and smashing Life photographer Don Urbrock in the face with his own camera. A dozen men surrounded Jim Zwerg, a white student from Fisk University, and beat him in the face with a suitcase, knocking out his teeth. On May 24, 1961, the freedom riders continued their rides into Jackson, Mississippi, where they were arrested for "breaching the peace" by using "white only" facilities. New freedom rides were organized by many different organizations. As riders arrived in Jackson, they were arrested. By the end of summer, more than 300 had been jailed in Mississippi.

The jailed freedom riders were treated harshly, crammed into tiny, filthy cells and sporadically beaten. In Jackson, some male prisoners were forced to do hard labor in 100-degree heat. Others were transferred to the Mississippi State Penitentiary at Parchman, where their food was deliberately oversalted and their mattresses were removed. Sometimes the men were suspended by "wrist breakers" from the walls. Typically, the windows of their cells were shut tight on hot days, making it hard for them to breathe. Public sympathy and support for the freedom riders led the Kennedy administration to order the Interstate Commerce Commission (ICC) to issue a new desegregation order.
When the new ICC rule took effect on November 1, passengers were permitted to sit wherever they chose on the bus; "white" and "colored" signs came down in the terminals; separate drinking fountains, toilets, and waiting rooms were consolidated; and lunch counters began serving people regardless of skin color.

The student movement involved such celebrated figures as John Lewis, a single-minded activist; James Lawson, the revered "guru" of nonviolent theory and tactics; Diane Nash, an articulate and intrepid public champion of justice; Bob Moses, pioneer of voting registration in Mississippi; and James Bevel, a fiery preacher and charismatic organizer and facilitator. Other prominent student activists included Charles McDew, Bernard Lafayette, Charles Jones, Lonnie King, Julian Bond, Hosea Williams, and Stokely Carmichael.

After the Freedom Rides, local black leaders in Mississippi such as Amzie Moore, Aaron Henry, and Medgar Evers asked SNCC to help register black voters and to build community organizations that could win a share of political power in the state. Mississippi's constitution, ratified in 1890 with provisions such as poll taxes, residency requirements, and literacy tests, had made registration more complicated and stripped blacks from the voter rolls. After so many years, the intent to stop blacks from voting had become part of the culture of white supremacy. In the fall of 1961, SNCC organizer Robert Moses began the first such project in McComb and the surrounding counties in the southwest corner of the state. Their efforts were met with violent repression from state and local lawmen, the White Citizens' Council, and the Ku Klux Klan, resulting in beatings, hundreds of arrests, and the murder of voting activist Herbert Lee.

White opposition to black voter registration was so intense in Mississippi that Freedom Movement activists concluded that all of the state's civil rights organizations had to unite in a coordinated effort to have any chance of success. In February 1962, representatives of SNCC, CORE, and the NAACP formed the Council of Federated Organizations (COFO). At a subsequent meeting in August, SCLC became part of COFO. In the spring of 1962, with funds from the Voter Education Project, SNCC/COFO began voter registration organizing in the Mississippi Delta area around Greenwood and in the areas surrounding Hattiesburg, Laurel, and Holly Springs. As in McComb, their efforts were met with fierce opposition: arrests, beatings, shootings, arson, and murder. Registrars used the literacy test to keep blacks off the voting rolls by creating standards that even highly educated people could not meet. In addition, employers fired blacks who tried to register, and landlords evicted them from their homes. Over the following years, the black voter registration campaign spread across the state. Similar voter registration campaigns, with similar responses, were begun by SNCC, CORE, and SCLC in Louisiana, Alabama, southwest Georgia, and South Carolina. By 1963, voter registration campaigns in the South were as integral to the Freedom Movement as desegregation efforts. After passage of the Civil Rights Act of 1964, protecting and facilitating voter registration despite state barriers became the main effort of the movement. It resulted in passage of the Voting Rights Act of 1965.
Beginning in 1956, Clyde Kennard, a black Korean War veteran, tried to enroll at Mississippi Southern College (now the University of Southern Mississippi) at Hattiesburg under the GI Bill. Dr. William David McCain, the college president, tried to prevent his enrollment by appealing to local black leaders and the segregationist state political establishment. He used the Mississippi State Sovereignty Commission, of which he was a member, a state-funded organization that tried to counter the civil rights movement by portraying segregationist policies positively. More significantly, it collected data on activists, harassed them legally, and used economic boycotts against them, threatening their jobs (or causing them to lose their jobs) to try to suppress their work.

Kennard was twice arrested on trumped-up charges, and eventually convicted and sentenced to seven years in the state prison. After three years at hard labor, Kennard was paroled by Mississippi Governor Ross Barnett. Journalists had investigated his case and publicized the state's mistreatment of his colon cancer. McCain's role in Kennard's arrests and convictions is unknown. While trying to prevent Kennard's enrollment, McCain made a speech in Chicago, with his travel sponsored by the Mississippi State Sovereignty Commission. He described blacks seeking to desegregate Southern schools as "imports" from the North (Kennard was a native and resident of Hattiesburg). "We insist that educationally and socially, we maintain a segregated society. ... In all fairness, I admit that we are not encouraging Negro voting," he said. "The Negroes prefer that control of the government remain in the white man's hands." Note: Mississippi had passed a new constitution in 1890 that effectively disfranchised most blacks by changing electoral and voter registration requirements; although it deprived them of constitutional rights authorized under post-Civil War amendments, it survived US Supreme Court challenges at the time. It was not until after passage of the 1965 Voting Rights Act that most blacks in Mississippi and other southern states gained federal protection to enforce their right to vote.

In September 1962, James Meredith won a lawsuit to secure admission to the previously segregated University of Mississippi. He attempted to enter campus on September 20, on September 25, and again on September 26. He was blocked by Mississippi Governor Ross Barnett, who said, "[N]o school will be integrated in Mississippi while I am your Governor." The Fifth U.S. Circuit Court of Appeals held Barnett and Lieutenant Governor Paul B. Johnson, Jr. in contempt, with fines of more than $10,000 for each day they refused to allow Meredith to enroll. Attorney General Robert Kennedy sent in a force of U.S. Marshals. On September 30, 1962, Meredith entered the campus under their escort. Students and other whites began rioting that evening, throwing rocks and then firing on the U.S. Marshals guarding Meredith at Lyceum Hall. Two people, including a French journalist, were killed; 28 marshals suffered gunshot wounds; and 160 others were injured. After the Mississippi Highway Patrol withdrew from the campus, President John F. Kennedy sent regular US Army forces to the campus to quell the riot. Meredith began classes the day after the troops arrived. Kennard and other activists continued to work on public university desegregation.
In 1965, Raylawni Branch and Gwendolyn Elaine Armstrong became the first African-American students to attend the University of Southern Mississippi. By that time, McCain helped ensure they had a peaceful entry. In 2006, Judge Robert Helfrich ruled that Kennard was factually innocent of all charges for which he had been convicted in the 1950s.

The SCLC, which had been criticized by some student activists for its failure to participate more fully in the freedom rides, committed much of its prestige and resources to a desegregation campaign in Albany, Georgia, in November 1961. King, who had been criticized personally by some SNCC activists for his distance from the dangers that local organizers faced (and given the derisive nickname "De Lawd" as a result), intervened personally to assist the campaign led by both SNCC organizers and local leaders. The campaign was a failure because of the canny tactics of Laurie Pritchett, the local police chief, and divisions within the black community. The goals may not have been specific enough. Pritchett contained the marchers without the kind of violent attacks on demonstrators that inflamed national opinion. He also arranged for arrested demonstrators to be taken to jails in surrounding communities, allowing plenty of room to remain in his own jail. Pritchett also saw King's presence as a danger and forced his release to avoid King rallying the black community. King left in 1962 without having achieved any dramatic victories. The local movement, however, continued the struggle and obtained significant gains in the next few years.

Albany nevertheless proved an important education for the SCLC when it undertook the Birmingham campaign in 1963. Executive Director Wyatt Tee Walker carefully planned the strategy and tactics for the campaign. It focused on one goal, the desegregation of Birmingham's downtown merchants, rather than total desegregation as in Albany. The movement's efforts were helped by the brutal response of local authorities, in particular Eugene "Bull" Connor, the Commissioner of Public Safety. He had long held much political power but had lost a recent election for mayor to a less rabidly segregationist candidate. Refusing to accept the new mayor's authority, Connor intended to stay in office.

The campaign used a variety of nonviolent methods of confrontation, including sit-ins, kneel-ins at local churches, and a march to the county building to mark the beginning of a drive to register voters. The city, however, obtained an injunction barring all such protests. Convinced that the order was unconstitutional, the campaign defied it and prepared for mass arrests of its supporters. King elected to be among those arrested on April 12, 1963. While in jail, King wrote his famous "Letter from Birmingham Jail" on the margins of a newspaper, since he had not been allowed any writing paper while held in solitary confinement. Supporters appealed to the Kennedy administration, which intervened to obtain King's release. King was allowed to call his wife, who was recuperating at home after the birth of their fourth child, and was released early on April 19. The campaign, however, faltered as it ran out of demonstrators willing to risk arrest. James Bevel, SCLC's Director of Direct Action and Director of Nonviolent Education, then came up with a bold and controversial alternative: to train high school students to take part in the demonstrations.
As a result, in what would be called the Children's Crusade, more than one thousand students skipped school on May 2 to meet at the 16th Street Baptist Church and join the demonstrations. More than six hundred marched out of the church fifty at a time in an attempt to walk to City Hall to speak to Birmingham's mayor about segregation. They were arrested and put into jail. In this first encounter the police acted with restraint. On the next day, however, another one thousand students gathered at the church. When Bevel started them marching fifty at a time, Bull Connor finally unleashed police dogs on them and then turned the city's fire hoses on the children. National television networks broadcast the scenes of the dogs attacking demonstrators and the water from the fire hoses knocking down the schoolchildren.

Widespread public outrage led the Kennedy administration to intervene more forcefully in negotiations between the white business community and the SCLC. On May 10, the parties announced an agreement to desegregate the lunch counters and other public accommodations downtown, to create a committee to eliminate discriminatory hiring practices, to arrange for the release of jailed protesters, and to establish regular means of communication between black and white leaders. Not everyone in the black community approved of the agreement; the Rev. Fred Shuttlesworth was particularly critical, since his experience in dealing with Birmingham's power structure had left him skeptical of its good faith. Parts of the white community reacted violently. They bombed the Gaston Motel, which housed the SCLC's unofficial headquarters, and the home of King's brother, the Reverend A. D. King. Kennedy prepared to federalize the Alabama National Guard if the need arose. Four months later, on September 15, a conspiracy of Ku Klux Klan members bombed the Sixteenth Street Baptist Church in Birmingham, killing four young girls.

Other events followed in the summer of 1963. On June 11, 1963, George Wallace, Governor of Alabama, tried to block the integration of the University of Alabama. President John F. Kennedy sent a force to make Governor Wallace step aside, allowing the enrollment of two black students. That evening, President Kennedy addressed the nation on TV and radio with his historic civil rights speech. The next day, Medgar Evers was murdered in Mississippi. The next week, as promised, on June 19, 1963, President Kennedy submitted his civil rights bill to Congress.

A. Philip Randolph had planned a march on Washington, D.C., in 1941 to support demands for the elimination of employment discrimination in defense industries; he called off the march when the Roosevelt administration met the demand by issuing Executive Order 8802, barring racial discrimination and creating an agency to oversee compliance with the order. Randolph and Bayard Rustin were the chief planners of the second march, which they proposed in 1962. In 1963, the Kennedy administration initially opposed the march out of concern it would negatively impact the drive for passage of civil rights legislation. However, Randolph and King were firm that the march would proceed. With the march going forward, the Kennedys decided it was important to work to ensure its success. Concerned about the turnout, President Kennedy enlisted the aid of additional church leaders and the UAW union to help mobilize demonstrators for the cause. The march was held on August 28, 1963.
Unlike the planned 1941 march, for which Randolph included only black-led organizations in the planning, the 1963 march was a collaborative effort of all of the major civil rights organizations, the more progressive wing of the labor movement, and other liberal organizations. The march had six official goals; of these, its major focus was passage of the civil rights law that the Kennedy administration had proposed after the upheavals in Birmingham.

National media attention also greatly contributed to the march's national exposure and probable impact. In his section "The March on Washington and Television News," William Thomas notes: "Over five hundred cameramen, technicians, and correspondents from the major networks were set to cover the event. More cameras would be set up than had filmed the last presidential inauguration. One camera was positioned high in the Washington Monument, to give dramatic vistas of the marchers". By carrying the organizers' speeches and offering their own commentary, television stations framed the way their local audiences saw and understood the event.

The march was a success, although not without controversy. An estimated 200,000 to 300,000 demonstrators gathered in front of the Lincoln Memorial, where King delivered his famous "I Have a Dream" speech. While many speakers applauded the Kennedy administration for the efforts it had made toward obtaining new, more effective civil rights legislation protecting the right to vote and outlawing segregation, John Lewis of SNCC took the administration to task for not doing more to protect southern blacks and civil rights workers under attack in the Deep South. After the march, King and other civil rights leaders met with President Kennedy at the White House. While the Kennedy administration appeared sincerely committed to passing the bill, it was not clear that it had the votes in Congress to do so. However, when President Kennedy was assassinated on November 22, 1963, the new president, Lyndon Johnson, decided to use his influence in Congress to bring about much of Kennedy's legislative agenda.

St. Augustine, on the northeast coast of Florida, was famous as the "Nation's Oldest City," founded by the Spanish in 1565. It became the stage for a great drama leading up to the passage of the landmark Civil Rights Act of 1964. A local movement, led by Dr. Robert B. Hayling, a black dentist and Air Force veteran, had been picketing segregated local institutions since 1963; as a result, Dr. Hayling and three companions, James Jackson, Clyde Jenkins, and James Hauser, were brutally beaten at a Ku Klux Klan rally in the fall of that year. Nightriders shot into black homes, and teenagers Audrey Nell Edwards, JoeAnn Anderson, Samuel White, and Willie Carl Singleton (who came to be known as "The St. Augustine Four") spent six months in jail and reform school after sitting in at the local Woolworth's lunch counter. It took a special action of the governor and cabinet of Florida to release them after national protests by the Pittsburgh Courier, Jackie Robinson, and others.

In 1964, Dr. Hayling and other activists urged the Southern Christian Leadership Conference to come to St. Augustine. The first action came during spring break, when Hayling appealed to northern college students to come to the Ancient City, not to go to the beach, but to take part in demonstrations. Four prominent Massachusetts women, Mrs. Mary Parkman Peabody, Mrs. Esther Burgess, Mrs.
Hester Campbell (all of whose husbands were Episcopal bishops), and Mrs. Florence Rowe (whose husband was vice president of the John Hancock Insurance Company), came to lend their support. The arrest of Mrs. Peabody, the 72-year-old mother of the governor of Massachusetts, for attempting to eat at the segregated Ponce de Leon Motor Lodge in an integrated group made front-page news across the country and brought the civil rights movement in St. Augustine to the attention of the world.

Widely publicized activities continued in the ensuing months, as Congress saw the longest filibuster against a civil rights bill in its history. Dr. Martin Luther King, Jr. was arrested at the Monson Motel in St. Augustine on June 11, 1964, the only place in Florida where he was arrested. He sent a "Letter from the St. Augustine Jail" to a northern supporter, Rabbi Israel Dresner of New Jersey, urging him to recruit others to participate in the movement. This resulted, a week later, in the largest mass arrest of rabbis in American history, made while they were conducting a pray-in at the Monson. A famous photograph taken in St. Augustine shows the manager of the Monson Motel pouring acid into the swimming pool while blacks and whites were swimming in it. The horrifying photograph ran on the front page of a Washington newspaper the day the Senate went to vote on passing the Civil Rights Act of 1964.

In the summer of 1964, COFO brought nearly 1,000 activists to Mississippi—most of them white college students—to join with local black activists to register voters, teach in "Freedom Schools," and organize the Mississippi Freedom Democratic Party (MFDP). Many of Mississippi's white residents deeply resented the outsiders and the attempts to change their society. State and local governments, police, the White Citizens' Council, and the Ku Klux Klan used arrests, beatings, arson, murder, spying, firing, evictions, and other forms of intimidation and harassment to oppose the project and prevent blacks from registering to vote or achieving social equality.

On June 21, 1964, three civil rights workers disappeared: James Chaney, a young black Mississippian and plasterer's apprentice, and two Jewish activists, Andrew Goodman, a Queens College anthropology student, and Michael Schwerner, a CORE organizer from Manhattan's Lower East Side. They were found weeks later, murdered by conspirators who turned out to be local members of the Klan, some of them members of the Neshoba County sheriff's department. This outraged the public, leading the U.S. Justice Department along with the FBI (which had previously avoided dealing with the issue of segregation and the persecution of blacks) to take action. The outrage over these murders helped lead to the passage of the Civil Rights Act. (See Mississippi civil rights workers murders for details.)

From June to August, Freedom Summer activists worked in 38 local projects scattered across the state, with the largest number concentrated in the Mississippi Delta region. At least 30 Freedom Schools, with close to 3,500 students, were established, and 28 community centers were set up. Over the course of the Summer Project, some 17,000 Mississippi blacks attempted to become registered voters in defiance of the red tape and the forces of white supremacy arrayed against them; only 1,600 (less than 10%) succeeded. But more than 80,000 joined the Mississippi Freedom Democratic Party (MFDP), founded as an alternative political organization, showing their desire to vote and participate in politics.
Though Freedom Summer failed to register many voters, it had a significant effect on the course of the Civil Rights Movement. It helped break down the decades of isolation and repression that were the foundation of the Jim Crow system. Before Freedom Summer, the national news media had paid little attention to the persecution of black voters in the Deep South and the dangers endured by black civil rights workers. The progression of events throughout the South increased media attention to Mississippi. The deaths of affluent northern white students and threats to other northerners attracted the full attention of the media spotlight to the state. Many black activists became embittered, believing the media valued the lives of whites and blacks differently. Perhaps the most significant effect of Freedom Summer was on the volunteers, almost all of whom—black and white—still consider it to have been one of the defining periods of their lives.

Although President Kennedy had proposed civil rights legislation and it had support from Northern congressmen, Southern senators blocked the bill by threatening filibusters. After considerable parliamentary maneuvering and 54 days of filibuster on the floor of the United States Senate, President Johnson got a bill through Congress. On July 2, 1964, President Johnson signed the Civil Rights Act of 1964, which banned discrimination based on "race, color, religion, sex or national origin" in employment practices and public accommodations. The bill authorized the Attorney General to file lawsuits to enforce the new law. The law also nullified state and local laws that required such discrimination.

Blacks in Mississippi had been disfranchised by statutory and constitutional changes since the late 19th century. In 1963, COFO held a Freedom Vote in Mississippi to demonstrate the desire of black Mississippians to vote. More than 80,000 people registered and voted in the mock election, which pitted an integrated slate of candidates from the "Freedom Party" against the official state Democratic Party candidates. In 1964, organizers launched the Mississippi Freedom Democratic Party (MFDP) to challenge the all-white official party. When Mississippi voting registrars refused to recognize their candidates, they held their own primary. They selected Fannie Lou Hamer, Annie Devine, and Victoria Gray to run for Congress, and a slate of delegates to represent Mississippi at the 1964 Democratic National Convention.

The presence of the Mississippi Freedom Democratic Party in Atlantic City, New Jersey, was inconvenient, however, for the convention organizers. They had planned a triumphant celebration of the Johnson administration's achievements in civil rights, rather than a fight over racism within the Democratic Party. All-white delegations from other Southern states threatened to walk out if the official slate from Mississippi was not seated. Johnson was worried about the inroads that Republican Barry Goldwater's campaign was making in what previously had been the white Democratic stronghold of the "Solid South," as well as about the support that George Wallace had received in the North during the Democratic primaries. Johnson could not, however, prevent the MFDP from taking its case to the Credentials Committee. There Fannie Lou Hamer testified eloquently about the beatings that she and others had endured and the threats they faced for trying to register to vote. Turning to the television cameras, Hamer asked, "Is this America?"
Johnson offered the MFDP a "compromise" under which it would receive two non-voting, at-large seats, while the white delegation sent by the official Democratic Party would retain its seats. The MFDP angrily rejected the "compromise." The MFDP kept up its agitation at the convention after it was denied official recognition. When all but three of the "regular" Mississippi delegates left because they refused to pledge allegiance to the party, the MFDP delegates borrowed passes from sympathetic delegates and took the seats vacated by the official Mississippi delegates. National party organizers removed them. When they returned the next day, they found that convention organizers had removed the empty seats that had been there the day before. They stayed and sang "freedom songs". The 1964 Democratic Party convention disillusioned many within the MFDP and the Civil Rights Movement, but it did not destroy the MFDP. The MFDP became more radical after Atlantic City. It invited Malcolm X, then a spokesman for the Nation of Islam, to speak at one of its conventions, and it opposed the war in Vietnam.

After the 1964 American Football League season, the AFL All-Star Game had been scheduled for early 1965 in New Orleans' Tulane Stadium. After numerous black players were refused service by a number of New Orleans hotels and businesses, and white cabdrivers refused to carry black passengers, black and white players alike lobbied for a boycott of New Orleans. Under the leadership of Buffalo Bills players, including Cookie Gilchrist, the players put up a unified front, and the game was moved to Jeppesen Stadium in Houston. The discriminatory practices that prompted the boycott were illegal under the Civil Rights Act of 1964, which had been signed in July 1964, and the new law likely encouraged the AFL players in their cause. It was the first boycott of an entire city by a professional sports event.

SNCC had undertaken an ambitious voter registration program in Selma, Alabama, in 1963, but by 1965 it had made little headway in the face of opposition from Selma's sheriff, Jim Clark. After local residents asked the SCLC for assistance, King came to Selma to lead several marches, at which he was arrested along with 250 other demonstrators. The marchers continued to meet violent resistance from police. Jimmie Lee Jackson, a resident of nearby Marion, was killed by police at a later march on February 17, 1965. Jackson's death prompted James Bevel, director of the Selma Movement, to initiate a plan to march from Selma to Montgomery, the state capital.

On March 7, 1965, acting on Bevel's plan, Hosea Williams of the SCLC and John Lewis of SNCC led a march of 600 people to walk the 54 miles (87 km) from Selma to the state capital in Montgomery. Only six blocks into the march, at the Edmund Pettus Bridge, state troopers and local law enforcement, some mounted on horseback, attacked the peaceful demonstrators with billy clubs, tear gas, rubber tubes wrapped in barbed wire, and bull whips. They drove the marchers back into Selma. John Lewis was knocked unconscious and dragged to safety. At least 16 other marchers were hospitalized. Among those gassed and beaten was Amelia Boynton Robinson, who was at the center of civil rights activity at the time. The national broadcast of news footage of lawmen attacking unresisting marchers seeking the right to vote provoked a national response, as the scenes from Birmingham had two years earlier.
The marchers were able to obtain a court order permitting them to make the march without incident two weeks later. After a second march on March 9 to the site of Bloody Sunday, local whites attacked another voting rights supporter, Rev. James Reeb, who died in a Birmingham hospital on March 11. On March 25, four Klansmen shot and killed Detroit homemaker Viola Liuzzo as she drove marchers back to Selma at night after the successfully completed march to Montgomery.

Eight days after the first march, President Johnson delivered a televised address to support the voting rights bill he had sent to Congress. In it he stated: "But even if we pass this bill, the battle will not be over. What happened in Selma is part of a far larger movement which reaches into every section and state of America. It is the effort of American Negroes to secure for themselves the full blessings of American life. Their cause must be our cause too. Because it is not just Negroes, but really it is all of us, who must overcome the crippling legacy of bigotry and injustice. And we shall overcome."

Johnson signed the Voting Rights Act of 1965 on August 6. The 1965 act suspended poll taxes, literacy tests, and other subjective voter tests. It authorized federal supervision of voter registration in states and individual voting districts where such tests were being used. African Americans who had been barred from registering to vote finally had an alternative to taking suits to local or state courts. If voting discrimination occurred, the 1965 act authorized the Attorney General of the United States to send federal examiners to replace local registrars. Johnson reportedly told associates of his concern that signing the bill had lost the white South as voters for the Democratic Party for the foreseeable future.

The act had an immediate and positive impact for African Americans. Within months of its passage, 250,000 new black voters had been registered, one third of them by federal examiners. Within four years, voter registration in the South had more than doubled. In 1965, Mississippi had the highest black voter turnout at 74% and led the nation in the number of black public officials elected. In 1969, Tennessee had a 92.1% turnout; Arkansas, 77.9%; and Texas, 73.1%. Several whites who had opposed the Voting Rights Act paid a quick price. In 1966, Sheriff Jim Clark of Alabama, infamous for using cattle prods against civil rights marchers, was up for reelection. Although he took the notorious "Never" pin off his uniform, he was defeated as blacks voted to get him out of office. Clark later served a prison term for drug dealing.

Blacks' regaining the power to vote changed the political landscape of the South. When Congress passed the Voting Rights Act, only about 100 African Americans held elective office, all in northern states. By 1989, there were more than 7,200 African Americans in office, including more than 4,800 in the South. Nearly every Black Belt county (where populations were majority black) in Alabama had a black sheriff. Southern blacks held top positions in city, county, and state governments. Atlanta elected a black mayor, Andrew Young, as did Jackson, Mississippi, with Harvey Johnson, Jr., and New Orleans, with Ernest Morial. Black politicians on the national level included Barbara Jordan, who represented Texas in Congress, and Andrew Young, who was appointed United States Ambassador to the United Nations during the Carter administration.
Julian Bond was elected to the Georgia State Legislature in 1965, although political reaction to his public opposition to U.S. involvement in the Vietnam War prevented him from taking his seat until 1967. John Lewis represents Georgia's 5th congressional district in the United States House of Representatives, where he has served since 1987.

Rev. James Lawson invited King to Memphis, Tennessee, in March 1968 to support a sanitation workers' strike. These workers had launched a campaign for union representation after two workers were accidentally killed on the job, and King considered their struggle to be a vital part of the Poor People's Campaign he was planning. A day after delivering his famous "I've Been to the Mountaintop" sermon, King was assassinated on April 4, 1968. Riots broke out in more than 110 cities across the United States in the days that followed, notably in Chicago, Baltimore, and Washington, D.C. The damage done in many cities destroyed black businesses.

The day before King's funeral, April 8, Coretta Scott King and three of the King children led 20,000 marchers through the streets of Memphis, holding signs that read, "Honor King: End Racism" and "Union Justice Now". National Guardsmen lined the streets, perched on M-48 tanks, bayonets mounted, with helicopters circling overhead. On April 9, Mrs. King led another 150,000 in a funeral procession through the streets of Atlanta. Her dignity revived courage and hope in many of the Movement's members, cementing her place as the new leader in the struggle for racial equality. Coretta King famously remarked: "[Martin Luther King, Jr.] gave his life for the poor of the world, the garbage workers of Memphis and the peasants of Vietnam. The day that Negro people and others in bondage are truly free, on the day want is abolished, on the day wars are no more, on that day I know my husband will rest in a long-deserved peace."

Rev. Ralph Abernathy succeeded King as the head of the SCLC and attempted to carry forth King's plan for a Poor People's March. It was to unite blacks and whites to campaign for fundamental changes in American society and economic structure. The march went forward under Abernathy's plainspoken leadership but did not achieve its goals.

On December 17, 1951, the Communist Party-affiliated Civil Rights Congress delivered the petition We Charge Genocide: "The Crime of Government Against the Negro People", often shortened to We Charge Genocide, to the United Nations, arguing that the U.S. federal government, by its failure to act against lynching in the United States, was guilty of genocide under Article II of the UN Genocide Convention. The petition was presented to the United Nations at two separate venues: Paul Robeson, concert singer and activist, presented it to a UN official in New York City, while William L. Patterson, executive director of the CRC, delivered copies of the drafted petition to a UN delegation in Paris. Patterson, the editor of the petition, was a leader in the Communist Party USA and head of the International Labor Defense, a group that offered legal representation to communists, trade unionists, and African-Americans in cases involving issues of political or racial persecution. As earlier civil rights figures like Robeson, Du Bois, and Patterson became more politically radical (and therefore targets of Cold War anti-Communism by the U.S. government), they lost favor with both mainstream Black America and the NAACP.
In order to secure a place in the mainstream and gain the broadest base, it was a matter of survival for the new generation of civil rights activists to openly distance themselves from anything and anyone Communist associated. Even with this distinction however, many civil rights leaders and organizations were still investigated by the FBI under J Edgar Hoover and labeled "Communist" or "subversive." In the early 1960s, the practice of distancing the Civil Rights Movement from "Reds" was challenged by the Student Nonviolent Coordinating Committee who adopted a policy of accepting assistance and participation by anyone, regardless of political affiliation, who supported the SNCC program and was willing to "put their body on the line." At times this political openness put SNCC at odds with the NAACP. During the years preceding his election to the presidency, John F. Kennedy's record of voting on issues of racial discrimination had been scant. Kennedy openly confessed to his closest advisors that during the first months of his presidency, his knowledge of the civil rights movement was "lacking". For the first two years of the Kennedy administration, attitudes to both the president and attorney general, Robert F. Kennedy, were mixed. Many viewed the administration with suspicion. A well of historical cynicism toward white liberal politics had left a sense of uneasy disdain by African-Americans toward any white politician who claimed to share their concerns for freedom. Still, many had a strong sense that in the Kennedys there was a new age of political dialogue beginning. Although observers frequently assert the phrase "The Kennedy administration" or even, "President Kennedy" when discussing the legislative and executive support of the Civil Rights movement, between 1960 and 1963, many of the initiatives were the result of Robert Kennedy's passion. Through his rapid education in the realities of racism, Robert Kennedy underwent a thorough conversion of purpose as Attorney-General. Asked in an interview in May 1962, "What do you see as the big problem ahead for you, is it Crime or Internal Security?" Robert Kennedy replied, "Civil Rights." The President came to share his brother's sense of urgency on the matters to such an extent that it was at the Attorney-General's insistence that he made his famous address to the nation. When a white mob attacked and burned the First Baptist Church in Montgomery, Alabama, where King held out with protesters, the Attorney-General telephoned King to ask him not to leave the building until the U.S. Marshals and National Guard could secure the area. King proceeded to berate Kennedy for "allowing the situation to continue". King later publicly thanked Robert Kennedy's commanding the force to break up an attack, which might otherwise have ended King's life. The relationship between the two men underwent change from mutual suspicion to one of shared aspirations. For Dr King, Robert Kennedy initially represented the 'softly softly' approach that in former years had disabled the movement of blacks against oppression in the U.S. For Robert Kennedy, King initially represented what he then considered an unrealistic militancy. Some white liberals regarded the militancy itself as the cause of so little governmental progress. King initially regarded much of the efforts of the Kennedys as an attempt to control the movement and siphon off its energies. Yet he came to find the efforts of the brothers to be crucial. 
It was at Robert Kennedy's constant insistence, through conversations with King and others, that King came to recognize the fundamental nature of electoral reform and suffrage—the need for black Americans to actively engage not only protest but political dialogue at the highest levels. In time the president gained King's respect and trust, via the frank dialogue and efforts of the Attorney-General. Robert Kennedy became very much his brother's key advisor on matters of racial equality. The president regarded the issue of civil rights to be a function of the Attorney-General's office. With a very small majority in Congress, the president's ability to press ahead with legislation relied considerably on a balancing game with the Senators and Congressmen of the South. Indeed, without the support of Vice-President Lyndon Johnson, who had years of experience in Congress and longstanding relations there, many of the Attorney-General's programs would not have progressed. By late 1962, frustration at the slow pace of political change was balanced by the movement's strong support for legislative initiatives: housing rights, administrative representation across all US Government departments, safe conditions at the ballot box, pressure on the courts to prosecute racist criminals. King remarked by the end of the year, "This administration has reached out more creatively than its predecessors to blaze new trails, [notably in voting rights and government appointments]. Its vigorous young men [had launched] imaginative and bold forays [and displayed] a certain élan in the attention they give to civil-rights issues." From squaring off against Governor George Wallace, to "tearing into" Vice-President Johnson (for failing to desegregate areas of the administration), to threatening corrupt white Southern judges with disbarment, to desegregating interstate transport, Robert Kennedy came to be consumed by the Civil Rights movement and later carried it forward into his own bid for the presidency in 1968. On the night of Governor Wallace's capitulation, President Kennedy gave an address to the nation, which marked the changing tide, an address that was to become a landmark for the ensuing change in political policy. In it President Kennedy spoke of the need to act decisively and to act now: "We preach freedom around the world, and we mean it, and we cherish our freedom here at home, but are we to say to the world, and much more importantly, to each other that this is the land of the free except for the Negroes; that we have no second-class citizens except Negroes; that we have no class or caste system, no ghettoes, no master race except with respect to Negroes? Now the time has come for this Nation to fulfill its promise. The events in Birmingham and elsewhere have so increased the cries for equality that no city or State or legislative body can prudently choose to ignore them."—President Kennedy, Assassination cut short the life and careers of both the Kennedy brothers and Dr. Martin Luther King, Jr. The essential groundwork of the Civil Rights Act 1964 had been initiated before John F. Kennedy was assassinated. The dire need for political and administrative reform had been driven home on Capitol Hill by the combined efforts of the Kennedy brothers, Dr. King (and other leaders) and President Lyndon Johnson. In 1966, Robert Kennedy undertook a tour of South Africa in which he championed the cause of the anti-apartheid movement. 
His tour gained international praise at a time when few politicians dared to entangle themselves in the politics of South Africa. Kennedy spoke out against the oppression of the native population. He was welcomed by the black population as though a visiting head of state. In an interview with LOOK Magazine he said: At the University of Natal in Durban, I was told the church to which most of the white population belongs teaches apartheid as a moral necessity. A questioner declared that few churches allow black Africans to pray with the white because the Bible says that is the way it should be, because God created Negroes to serve. "But suppose God is black", I replied. "What if we go to Heaven and we, all our lives, have treated the Negro as an inferior, and God is there, and we look up and He is not white? What then is our response?" There was no answer. Only silence.—Robert Kennedy , LOOK Magazine Many in the Jewish community supported the Civil Rights Movement. In fact, statistically Jews were one of the most actively involved non-black groups in the Movement. Many Jewish students worked in concert with African Americans for CORE, SCLC, and SNCC as full-time organizers and summer volunteers during the Civil Rights era. Jews made up roughly half of the white northern volunteers involved in the 1964 Mississippi Freedom Summer project and approximately half of the civil rights attorneys active in the South during the 1960s. Jewish leaders were arrested while heeding a call from Rev. Dr. Martin Luther King, Jr. in St. Augustine, Florida, in June 1964, where the largest mass arrest of rabbis in American history took place at the Monson Motor Lodge—a nationally important civil rights landmark that was demolished in 2003 so that a Hilton Hotel could be built on the site. Abraham Joshua Heschel, a writer, rabbi and professor of theology at the Jewish Theological Seminary of America in New York was outspoken on the subject of civil rights. He marched arm-in-arm with Dr. King in the 1965 March on Selma. In the Mississippi Burning murders of 1964, the two white activists killed, Andrew Goodman and Michael Schwerner, were both Jewish. Brandeis University, the only nonsectarian Jewish-sponsored college university in the world, created the Transitional Year Program (TYP)in 1968, in part response to Rev. Dr. Martin Luther King's assassination. The faculty created it to renew the University's commitment to social justice. Recognizing Brandeis as a university with a commitment to academic excellence, these faculty members created a chance to disadvantaged students to participate in an empowering educational experience. The program began by admitting 20 black males. As it developed, two groups have been given chances. The first group consists of students whose secondary schooling experiences and/or home communities may have lacked the resources to foster adequate preparation for success at elite colleges like Brandeis. For example, their high schools do not offer AP or honors courses nor high quality laboratory experiences. Students selected had to have excelled in the curricula offered by their schools. The second group of students includes those whose life circumstances have created formidable challenges that required focus, energy, and skills that otherwise would have been devoted to academic pursuits. Some have served as heads of their households, others have worked full-time while attending high school full-time, and others have shown leadership in other ways. 
While Jews were very active in the civil rights movement in the South, in the North, many had experienced a more strained relationship with African Americans. In communities experiencing white flight, racial rioting, and urban decay, Jewish Americans were more often the last remaining whites in the communities most affected. With Black militancy and the Black Power movements on the rise, Black Anti-Semitism increased leading to strained relations between Blacks and Jews in Northern communities. In New York City, most notably, there was a major socio-economic class difference in the perception of African Americans by Jews. Jews from better educated Upper Middle Class backgrounds were often very supportive of African American civil rights activities while the Jews in poorer urban communities that became increasingly minority were often less supportive largely in part due to more negative and violent interactions between the two groups. King reached the height of popular acclaim during his life in 1964, when he was awarded the Nobel Peace Prize. His career after that point was filled with frustrating challenges. The liberal coalition that had gained passage of the Civil Rights Act of 1964 and the Voting Rights Act of 1965 began to fray. King was becoming more estranged from the Johnson administration. In 1965 he broke with it by calling for peace negotiations and a halt to the bombing of Vietnam. He moved further left in the following years, speaking of the need for economic justice and thoroughgoing changes in American society. He believed change was needed beyond the civil rights gained by the movement. King's attempts to broaden the scope of the Civil Rights Movement were halting and largely unsuccessful, however. King made several efforts in 1965 to take the Movement north to address issues of employment and housing discrimination. SCLC's campaign in Chicago publicly failed, as Chicago Mayor Richard J. Daley marginalized SCLC's campaign by promising to "study" the city's problems. In 1966, white demonstrators holding "white power" signs in notoriously racist Cicero, a suburb of Chicago, threw stones at marchers demonstrating against housing segregation. By the end of World War II, more than half of the country's black population lived in Northern and Western industrial cities rather than Southern rural areas. Migrating to those cities for better job opportunities, education and to escape legal segregation, African Americans often found segregation that existed in fact rather than in law. While after the 1920s, the Ku Klux Klan was not prevalent, by the 1960s other problems prevailed in northern cities. Beginning in the 1950s, deindustrialization and restructuring of major industries: railroads and meatpacking, steel industry and car industry, markedly reduced working-class jobs, which had earlier provided middle-class incomes. As the last population to enter the industrial job market, blacks were disadvantaged by its collapse. At the same time, investment in highways and private development of suburbs in the postwar years had drawn many ethnic whites out of the cities to newer housing in expanding suburbs. Urban blacks who did not follow the middle class out of the cities became concentrated in the older housing of inner-city neighborhoods, among the poorest in most major cities. Because jobs in new service areas and parts of the economy were being created in suburbs, unemployment was much higher in many black than in white neighborhoods, and crime was frequent. 
African Americans rarely owned the stores or businesses where they lived. Many were limited to menial or blue-collar jobs, although union organizing in the 1930s and 1940s had opened up good working environments for some. African Americans often made only enough money to live in dilapidated tenements that were privately owned, or poorly maintained public housing. They also attended schools that were often the worst academically in the city and that had fewer white students than in the decades before WWII. The racial makeup of most major city police departments, largely ethnic white (especially Irish), was a major factor in adding to racial tensions. Even a black neighborhood such as Harlem had a ratio of one black officer for every six white officers. The majority-black city of Newark, New Jersey had only 145 blacks among its 1322 police officers. Police forces in Northern cities were largely composed of white ethnics, descendants of 19th-century immigrants: mainly Irish, Italian, and Eastern European officers. They had established their own power bases in the police departments and in territories in cities. Some would routinely harass blacks with or without provocation. One of the first major race riots took place in Harlem, New York, in the summer of 1964. A white Irish-American police officer, Thomas Gilligan, shot 15-year-old James Powell, who was black, for allegedly charging him armed with a knife. It was found that Powell was unarmed. A group of black citizens demanded Gilligan's suspension. Hundreds of young demonstrators marched peacefully to the 67th Street police station on July 17, 1964, the day after Powell's death. The police department did not suspend Gilligan. Although the precinct had promoted the NYPD's first black station commander, neighborhood residents were frustrated with racial inequalities. They looted and burned anything that was not black-owned in the neighborhood. Bedford-Stuyvesant, a major black neighborhood in Brooklyn erupted next. That summer, rioting also broke out in Philadelphia, for similar reasons. In the aftermath of the riots of July 1964, the federal government funded a pilot program called Project Uplift. Thousands of young people in Harlem were given jobs during the summer of 1965. The project was inspired by a report generated by HARYOU called Youth in the Ghetto. HARYOU was given a major role in organizing the project, together with the National Urban League and nearly 100 smaller community organizations. Permanent jobs at living wages were still out of reach of many young black men. In 1965, President Lyndon B. Johnson signed the Voting Rights Act, but the new law had no immediate effect on living conditions for blacks. A few days after the act became law, a riot broke out in the South Central Los Angeles neighborhood of Watts. Like Harlem, Watts was an impoverished neighborhood with very high unemployment. Its residents were supervised by a largely white police department that had a history of abuse against blacks. While arresting a young man for drunk driving, police officers argued with the suspect's mother before onlookers. The conflict triggered a massive destruction of property through six days of rioting. Thirty-four people were killed and property valued at about $30 million was destroyed, making the Watts Riots among the most expensive in American history. With black militancy on the rise, ghetto residents directed acts of anger at the police. Black residents growing tired of police brutality continued to riot. 
Some young people joined groups such as the Black Panthers, whose popularity was based in part on their reputation for confronting police officers. Riots among blacks occurred in 1966 and 1967 in cities such as Atlanta, San Francisco, Oakland, Baltimore, Seattle, Cleveland, Cincinnati, Columbus, Newark, Chicago, New York City (specifically in Brooklyn, Harlem and the Bronx), and worst of all in Detroit. In Detroit, a comfortable black middle class had begun to develop among families of blacks who worked at good-paying jobs in the automotive industry. Blacks who had not moved upward were living in much worse conditions, subject to the same problems as blacks in Watts and Harlem. When white police officers shut down an illegal bar on a liquor raid and arrested a large group of patrons during the hot summer, furious residents rioted. One significant effect of the Detroit riot was the acceleration of "white flight", an ethnic succession by which ethnic white residents, who had become better established economically, moved out of inner-city neighborhoods to newer housing in the suburbs, which were first settled by European Americans, or whites. Poorer migrants and immigrants had the older housing in the city. Demonstrating the economic basis of the suburban migration, Detroit lost some of its black middle class as well, as did cities such as Washington, DC and Chicago during the next decades. As a result of suburbanization, the riots, and migration of jobs to the suburbs, formerly prosperous industrial cities, such as Detroit, Newark, and Baltimore, now have less than 40% white population. Newark is close enough to New York to attract new immigrants from Asia and the Middle East as well. Changes in industry caused continued job losses, depopulation of middle classes, and concentrated poverty in such cities in the late 20th century. President Johnson created the National Advisory Commission on Civil Disorders in 1967. The commission's final report called for major reforms in employment and public assistance for black communities. It warned that the United States was moving toward separate white and black societies. In April 1968 after the assassination of Dr. Martin Luther King, Jr. in Memphis, Tennessee, rioting broke out in cities across the country from frustration and despair. These included Cleveland, Baltimore, Washington, D.C., Chicago, New York City and Louisville, Kentucky. As in previous riots, most of the damage was done in black neighborhoods. In some cities, it has taken more than a quarter of a century for these areas to recover from the damage of the riots; in others, little recovery has been achieved. Programs in affirmative action resulted in the hiring of more black police officers in every major city. Today blacks make up a proportional majority of the police departments in cities such as Baltimore, Washington, New Orleans, Atlanta, Newark, and Detroit. Civil rights laws have reduced employment discrimination. The conditions that led to frequent rioting in the late 1960s have receded, but not all the problems have been solved. With industrial and economic restructuring, hundreds of thousands of industrial jobs disappeared since the later 1950s from the old industrial cities. Some moved South, as has much population following new jobs, and others out of the U.S. altogether. Civil unrest broke out in Miami in 1980, in Los Angeles in 1992, and in Cincinnati in 2001. 
At the same time King was finding himself at odds with factions of the Democratic Party, he was facing challenges from within the Civil Rights Movement to the two key tenets upon which the movement had been based: integration and non-violence. Stokely Carmichael, who became the leader of SNCC in 1966, was one of the earliest and most articulate spokespersons for what became known as the "Black Power" movement after he used that slogan, coined by activist and organizer Willie Ricks, in Greenwood, Mississippi on June 17, 1966. In 1966 Carmichael began urging African American communities to confront the Ku Klux Klan armed and ready for battle. He felt it was the only way to ever rid the communities of the terror caused by the Klan. Many people engaged in the Black Power movement also began to gain a stronger sense of black pride and identity. As part of asserting a cultural identity, many blacks demanded that whites no longer refer to them as "Negroes" but as "Afro-Americans." Up until the mid-1960s, blacks had dressed similarly to whites and straightened their hair. As part of gaining a unique identity, blacks started to wear loosely fitting dashikis and to grow their hair out into natural afros. The afro, sometimes nicknamed the "'fro," remained a popular black hairstyle until the late 1970s.
Black Power was made most public, however, by the Black Panther Party, which was founded by Huey Newton and Bobby Seale in Oakland, California, in 1966. This group followed the ideology of Malcolm X, a former member of the Nation of Islam, using a "by-any-means-necessary" approach to stopping inequality. They sought to rid African American neighborhoods of police brutality and, among other things, created a ten-point plan. Their dress code consisted of black leather jackets, berets, slacks, and light blue shirts. They wore an afro hairstyle. They are best remembered for setting up free breakfast programs, referring to police officers as "pigs", displaying shotguns and a raised fist, and often using the statement "Power to the people".
Black Power was taken to another level inside prison walls. In 1966, George Jackson formed the Black Guerrilla Family in California's San Quentin State Prison. The goal of this group was to overthrow the white-run government in America and the prison system. In 1970, the group displayed its dedication after a white prison guard was found not guilty of shooting and killing three black prisoners from the prison tower. They retaliated by killing a white prison guard.
Released in August 1968, the number one Rhythm & Blues single for the Billboard Year-End list was James Brown's "Say It Loud – I'm Black and I'm Proud". In October 1968, Tommie Smith and John Carlos, while being awarded the gold and bronze medals, respectively, at the 1968 Summer Olympics, donned human rights badges and each raised a black-gloved Black Power salute during their podium ceremony. It was the suggestion of the white silver medalist, Peter Norman of Australia, that Smith and Carlos each wear one black glove. Smith and Carlos were immediately ejected from the games by the United States Olympic Committee, and later the International Olympic Committee issued a permanent lifetime ban for the two. However, the Black Power movement had been given a stage on live, international television. 
King was not comfortable with the "Black Power" slogan, which sounded too much like black nationalism to him. SNCC activists, in the meantime, began embracing the "right to self-defense" in response to attacks from white authorities, and booed King for continuing to advocate non-violence. When King was murdered in 1968, Stokely Carmichael stated that whites murdered the one person who would prevent rampant rioting and that blacks would burn every major city to the ground. In every major city from Boston to San Francisco, racial riots broke out in the black community following King's death and as a result, "White Flight" occurred from several cities leaving Blacks in a dilapidated and nearly unrepairable city. Conditions at the Mississippi State Penitentiary at Parchman, then known as Parchman Farm, became part of the public discussion of civil rights after activists were imprisoned there. In the spring of 1961, Freedom Riders came to the South to test the desegregation of public facilities. By the end of June 1963, Freedom Riders had been convicted in Jackson, Mississippi. Many were jailed in Mississippi State Penitentiary at Parchman. Mississippi employed the trusty system, a hierarchical order of inmates that used some inmates to control and enforce punishment of other inmates. In 1970 the civil rights lawyer Roy Haber began taking statements from inmates. He collected 50 pages of details of murders, rapes, beatings and other abuses suffered by the inmates from 1969 to 1971 at Mississippi State Penitentiary. In a landmark case known as Gates v. Collier (1972), four inmates represented by Haber sued the superintendent of Parchman Farm for violating their rights under the United States Constitution. Federal Judge William C. Keady found in favor of the inmates, writing that Parchman Farm violated the civil rights of the inmates by inflicting cruel and unusual punishment. He ordered an immediate end to all unconstitutional conditions and practices. Racial segregation of inmates was abolished. And the trustee system, which allow certain inmates to have power and control over others, was also abolished. The prison was renovated in 1972 after the scathing ruling by Judge Keady; he wrote that the prison was an affront to "modern standards of decency." Among other reforms, the accommodations were made fit for human habitation. The system of "trusties" was abolished. (The prison had armed lifers with rifles and given them authority to oversee and guard other inmates, which led to many abuses and murders.) In integrated correctional facilities in northern and western states, blacks represented a disproportionate number of the prisoners, in excess of their proportion of the general population. They were often treated as second-class citizens by white correctional officers. Blacks also represented a disproportionately high number of death row inmates. Eldridge Cleaver's book Soul on Ice was written from his experiences in the California correctional system; it contributed to black militancy. There was an international context for the actions of the U.S. Federal government during these years. It had stature to maintain in Europe and a need to appeal to the people in the Third World. In Cold War Civil Rights: Race and the Image of American Democracy, the historian Mary L. 
Dudziak argued that Communists critical of the United States attacked the nation for its hypocrisy in portraying itself as the "leader of the free world" while so many of its citizens were subjected to severe racial discrimination and violence. She argued that this was a major factor in the government moving to support civil rights legislation.
An Introduction to Molecular Biology/DNA the unit of life Genes are made from a long molecule called DNA, which is copied and inherited across generations. DNA is made of simple units that line up in a particular order within this large molecule. The order of these units carries genetic information, similar to how the order of letters on a page carry information. The language used by DNA is called the genetic code, which lets organisms read the information in the genes. This information is the instructions for constructing and operating a living organism. Deoxyribonucleic acid(DNA): Deoxyribonucleic acid (/diˌɒksiˌraɪbɵ.njuːˌkleɪ.ɨk ˈæsɪd/ , or DNA, is a nucleic acid that contains the genetic instructions used in the development and functioning of all known living organisms (with the exception of RNA viruses). The main role of DNA molecules is the long-term storage of information. DNA is often compared to a set of blueprints, like a recipe or a code, since it contains the instructions needed to construct other components of cells, such as proteins and RNA molecules. The DNA segments that carry this genetic information are called genes, but other DNA sequences have structural purposes, or are involved in regulating the use of this genetic information. DNA consists of two long polymers of simple units called nucleotides, with backbones made of sugars and phosphate groups joined by ester bonds. These two strands run in opposite directions to each other and are therefore anti-parallel. Attached to each sugar is one of four types of molecules called bases. It is the sequence of these four bases along the backbone that encodes information. This information is read using the genetic code, which specifies the sequence of the amino acids within proteins. The code is read by copying stretches of DNA into the related nucleic acid RNA, in a process called transcription. The structure of DNA was first discovered by James D. Watson and Francis Crick. It is the same for all species, comprising two helical chains each coiled round the same axis, each with a pitch of 34 Ångströms (3.4 nanometres) and a radius of 10 Ångströms (1.0 nanometres). Within cells, DNA is organized into long structures called chromosomes. These chromosomes are duplicated before cells divide, in a process called DNA replication. Eukaryotic organisms (animals, plants, fungi, and protists) store most of their DNA inside the cell nucleus and some of their DNA in organelles, such as mitochondria or chloroplasts.In contrast, prokaryotes (bacteria and archaea) store their DNA only in the cytoplasm. Within the chromosomes, chromatin proteins such as histones compact and organize DNA. These compact structures guide the interactions between DNA and other proteins, helping control which parts of the DNA are transcribed. The DNA double helix is stabilized by hydrogen bonds between the bases attached to the two strands. The four bases found in DNA are adenine (abbreviated A), cytosine (C), guanine (G) and thymine (T). These four bases are attached to the sugar/phosphate to form the complete nucleotide, as shown for adenosine monophosphate. DNA is a genetic material Griffith's experiment was conducted in 1928 by Frederick Griffith, one of the first experiments suggesting that bacteria are capable of transferring genetic information through a process known as transformation. Griffith used two strains of Streptococcus pneumoniae bacteria which infect mice – a type III-S (smooth) and type II-R (rough) strain. 
The III-S strain covers itself with a polysaccharide capsule that protects it from the host's immune system, resulting in the death of the host, while the II-R strain does not have that protective capsule and is defeated by the host's immune system. A German bacteriologist, Fred Neufeld, had discovered the three pneumococcal types (Types I, II, and III) and the Quellung reaction to identify them in vitro. Until Griffith's experiment, bacteriologists believed that the types were fixed and unchangeable from one generation to another. In this experiment, bacteria from the III-S strain were killed by heat, and their remains were added to II-R strain bacteria. While neither alone harmed the mice, the combination was able to kill its host. Griffith was also able to isolate both live II-R and live III-S strains of pneumococcus from the blood of these dead mice. Griffith concluded that the type II-R had been "transformed" into the lethal III-S strain by a "transforming principle" that was somehow part of the dead III-S strain bacteria. Today, we know that the "transforming principle" Griffith observed was the DNA of the III-S strain bacteria. While the bacteria had been killed, the DNA had survived the heating process and was taken up by the II-R strain bacteria. The III-S strain DNA contains the genes that form the protective polysaccharide capsule. Equipped with these genes, the former II-R strain bacteria were now protected from the host's immune system and could kill the host. The exact nature of the transforming principle (DNA) was verified in the experiments done by Avery, MacLeod, and McCarty and by Hershey and Chase.
Alfred Hershey and Martha Chase conducted a series of experiments in 1952 confirming that DNA was the genetic material, which had first been demonstrated in the 1944 Avery–MacLeod–McCarty experiment. These experiments are known as the Hershey–Chase experiments. The existence of DNA had been known to biologists since 1869, but most of them assumed at the time that proteins carried the information for inheritance. Hershey and Chase conducted their experiments on the T2 phage. The phage consists of a protein shell containing its genetic material. The phage infects a bacterium by attaching to its outer membrane and injecting its genetic material, leaving its empty shell attached to the bacterium. In their first set of experiments, Hershey and Chase labeled the DNA of phages with radioactive phosphorus-32 (32P); the element phosphorus is present in DNA but not in any of the 20 amino acids that are components of proteins. They allowed the phages to infect E. coli, and through several elegant experiments were able to observe the transfer of 32P-labeled phage DNA into the cytoplasm of the bacterium. In their second set of experiments, they labeled the phages with radioactive sulfur-35 (sulfur is present in the amino acids cysteine and methionine, but not in DNA). Following infection of E. coli, they sheared the viral protein shells off the infected cells using a high-speed blender and separated the cells from the viral coats with a centrifuge. After separation, the radioactive 35S tracer was observed in the protein shells, but not in the infected bacteria, supporting the hypothesis that the genetic material that infects the bacteria was DNA and not protein. Hershey shared the 1969 Nobel Prize in Physiology or Medicine for his "discoveries concerning the genetic structure of viruses." 
Structure of DNA
Two helical strands form the DNA backbone. Another double helix may be found by tracing the spaces, or grooves, between the strands. These voids are adjacent to the base pairs and may provide a binding site. As the strands are not directly opposite each other, the grooves are unequally sized. One groove, the major groove, is 22 Å wide and the other, the minor groove, is 12 Å wide. The narrowness of the minor groove means that the edges of the bases are more accessible in the major groove. As a result, proteins like transcription factors that can bind to specific sequences in double-stranded DNA usually make contacts to the sides of the bases exposed in the major groove. This situation varies in unusual conformations of DNA within the cell, but the major and minor grooves are always named to reflect the differences in size that would be seen if the DNA were twisted back into the ordinary B form.
Base pairing of DNA
Chargaff's rules, discovered by the Austrian chemist Erwin Chargaff, state that DNA from any cell of any organism should have a 1:1 ratio of pyrimidine and purine bases and, more specifically, that the amount of guanine is equal to that of cytosine and the amount of adenine is equal to that of thymine. This pattern is found in both strands of the DNA. In molecular biology, two nucleotides on opposite complementary DNA strands that are connected via hydrogen bonds are called a base pair (often abbreviated bp). In the canonical Watson-Crick DNA base pairing, adenine (A) forms a base pair with thymine (T) and guanine (G) forms a base pair with cytosine (C). In RNA, thymine is replaced by uracil (U). Alternate hydrogen bonding patterns, such as the wobble base pair and Hoogsteen base pair, also occur—particularly in RNA—giving rise to complex and functional tertiary structures.
Purine bases
The German chemist Emil Fischer gave the name 'purine' (purum uricum) in 1884. He synthesized it for the first time in 1899 from uric acid, which had been isolated from kidney stones by Scheele in 1776. Aside from DNA and RNA, purines are also components of a number of other important biomolecules, such as ATP, GTP, cyclic AMP, NADH, and coenzyme A. Purine itself has not been found in nature, but it can be produced by organic synthesis. A purine is a heterocyclic aromatic organic compound consisting of a pyrimidine ring fused to an imidazole ring. Adenine is one of the two purine nucleobases (the other being guanine) used in forming nucleotides of the nucleic acids (DNA or RNA). In DNA, adenine binds to thymine via two hydrogen bonds to assist in stabilizing the nucleic acid structures. Adenine forms adenosine, a nucleoside, when attached to ribose, and deoxyadenosine when attached to deoxyribose. It forms adenosine triphosphate (ATP), a nucleotide, when three phosphate groups are added to adenosine. Guanine, along with adenine and cytosine, is present in both DNA and RNA, whereas thymine is usually seen only in DNA, and uracil only in RNA. In DNA, guanine is paired with cytosine. With the formula C5H5N5O, guanine is a derivative of purine, consisting of a fused pyrimidine-imidazole ring system with conjugated double bonds. Guanine has two tautomeric forms, the major keto form and the rare enol form. It binds to cytosine through three hydrogen bonds. In cytosine, the amino group acts as the hydrogen donor and the C-2 carbonyl and the N-3 amine as the hydrogen-bond acceptors. 
Guanine has a group at C-6 that acts as the hydrogen acceptor, while the group at N-1 and the amino group at C-2 act as the hydrogen donors. Pyrimidine base Pyrimidine is a heterocyclic aromatic organic compound similar to benzene and pyridine, containing two nitrogen atoms at positions 1 and 3 of the six-member ring. It is isomeric with two other forms of diazine.Three nucleobases found in nucleic acids, cytosine (C), thymine (T), and uracil (U), are pyrimidine derivatives. A pyrimidine has many properties in common with pyridine, as the number of nitrogen atoms in the ring increases the ring pi electrons become less energetic and electrophilic aromatic substitution gets more difficult while nucleophilic aromatic substitution gets easier. An example of the last reaction type is the displacement of the amino group in 2-aminopyrimidine by chlorine and its reverse. Reduction in resonance stabilization of pyrimidines may lead to addition and ring cleavage reactions rather than substitutions. One such manifestation is observed in the Dimroth rearrangement. Compared to pyridine, N-alkylation and N-oxidation is more difficult, and pyrimidines are also less basic: The pKa value for protonated pyrimidine is 1.23 compared to 5.30 for pyridine. Pyrimidine also is found in meteorites, although scientists still do not know its origin. Pyrimidine also photolytically decomposes into Uracil under UV light. Cytosine can be found as part of DNA, as part of RNA, or as a part of a nucleotide. As cytidine triphosphate (CTP), it can act as a co-factor to enzymes, and can transfer a phosphate to convert adenosine diphosphate (ADP) to adenosine triphosphate (ATP).The nucleoside of cytosine is cytidine. In DNA and RNA, cytosine is paired with guanine. However, it is inherently unstable, and can change into uracil (spontaneous deamination). This can lead to a point mutation if not repaired by the DNA repair enzymes such as uracil glycosylase, which cleaves a uracil in DNA. Cytosine can also be methylated into 5-methylcytosine by an enzyme called DNA methyltransferase or be methylated and hydroxylated to make 5-hydroxymethylcytosine. Active enzymatic deamination of cytosine or 5-methylcytosine by the APOBEC family of cytosine deaminases could have both beneficial and detrimental implications on various cellular processes as well as on organismal evolution. The implications of deamination on 5-hydroxymethylcytosine, on the other hand, remains less understood. Thymine (T, Thy) is one of the four nucleobases in the nucleic acid of DNA that are represented by the letters G–C–A–T. The others are adenine, guanine, and cytosine. Thymine is also known as 5-methyluracil, a pyrimidine nucleobase. As the name suggests, thymine may be derived by methylation of uracil at the 5th carbon. In RNA, thymine is replaced with uracil in most cases. In DNA, thymine(T) binds to adenine (A) via two hydrogen bonds, thus stabilizing the nucleic acid structures. Uracil found in RNA, it base-pairs with adenine and replaces thymine during DNA transcription. Methylation of uracil produces thymine. It turns into thymine to protect the DNA and to improve the efficiency of DNA replication. Uracil can base-pair with any of the bases, depending on how the molecule arranges itself on the helix, but readily pairs with adenine because the methyl group is repelled into a fixed position. Uracil pairs with adenine through hydrogen bonding. Uracil is the hydrogen bond acceptor and can form two hydrogen bonds. 
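The pairing rules above (A with T, G with C, with U standing in for T in RNA) are simple enough to express in a few lines of code. The following Python sketch is an illustration added here, not part of the original text; the sequence and function names are made up. It builds the reverse complement of a strand, converts a coding strand to its RNA equivalent by swapping T for U, and shows that a duplex built from a strand and its complement automatically satisfies Chargaff's 1:1 ratios.

# Illustrative sketch (not from the source): Watson-Crick pairing, a simple
# DNA -> RNA conversion, and a Chargaff-style count check.

DNA_COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(strand):
    """Return the antiparallel complementary strand, read 5' to 3'."""
    return "".join(DNA_COMPLEMENT[base] for base in reversed(strand.upper()))

def transcribe(coding_strand):
    """RNA has the same sequence as the coding strand, with U in place of T."""
    return coding_strand.upper().replace("T", "U")

def chargaff_counts(strand):
    """Base counts for the duplex formed by a strand and its complement;
    by construction the A and T counts match, as do the G and C counts."""
    duplex = strand.upper() + reverse_complement(strand)
    return {base: duplex.count(base) for base in "ATGC"}

seq = "ATGCGTTACA"                      # made-up example sequence
print(reverse_complement(seq))          # TGTAACGCAT
print(transcribe(seq))                  # AUGCGUUACA
print(chargaff_counts(seq))             # {'A': 6, 'T': 6, 'G': 4, 'C': 4}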
Uracil can also bind with a ribose sugar to form the ribonucleoside uridine. When a phosphate attaches to uridine, uridine 5'-monophosphate is produced. Nucleosides are glycosylamines consisting of a nucleobase (often referred to as simply base) bound to a ribose or deoxyribose sugar via a beta-glycosidic linkage. Examples of nucleosides include cytidine, uridine, adenosine, guanosine, thymidine and inosine. Nucleosides can be phosphorylated by specific kinases in the cell on the sugar's primary alcohol group (-CH2-OH), producing nucleotides, which are the molecular building-blocks of DNA and RNA. Nucleosides can be produced by de novo synthesis pathways, in particular in the liver, but they are more abundantly supplied via ingestion and digestion of nucleic acids in the diet, whereby nucleotidases break down nucleotides (such as the thymine nucleotide) into nucleosides (such as thymidine) and phosphate. 1. Adenosine is a nucleoside composed of a molecule of adenine attached to a ribose sugar molecule (ribofuranose) moiety via a β-N9-glycosidic bond. 2.Cytidine is a nucleoside molecule that is formed when cytosine is attached to a ribose ring (also known as a ribofuranose) via a β-N1-glycosidic bond. Cytidine is a component of RNA. 3.Guanosine is a purine nucleoside comprising guanine attached to a ribose (ribofuranose) ring via a β-N9-glycosidic bond. Guanosine can be phosphorylated to become guanosine monophosphate (GMP), cyclic guanosine monophosphate (cGMP), guanosine diphosphate (GDP), and guanosine triphosphate (GTP). 4.Thymidine (more precisely called deoxythymidine; can also be labelled deoxyribosylthymine, and thymine deoxyriboside) is a chemical compound, more precisely a pyrimidine deoxynucleoside. Deoxythymidine is the DNA nucleoside T, which pairs with deoxyadenosine (A) in double-stranded DNA. If cytosine is attached to a deoxyribose ring, it is known as a deoxycytidine A nucleotide is composed of a nucleobase (nitrogenous base), a five-carbon sugar (either ribose or 2'-deoxyribose), and one to three phosphate groups. Together, the nucleobase and sugar comprise a nucleoside. The phosphate groups form bonds with either the 2, 3, or 5-carbon of the sugar, with the 5-carbon site most common. Cyclic nucleotides form when the phosphate group is bound to two of the sugar's hydroxyl groups. Ribonucleotides are nucleotides where the sugar is ribose, and deoxyribonucleotides contain the sugar deoxyribose. Nucleotides can contain either a purine or a pyrimidine base. Nucleic acids are polymeric macromolecules made from nucleotide monomers. In DNA, the purine bases are adenine and guanine, while the pyrimidines are thymine and cytosine. RNA uses uracil in place of thymine. Adenine always pairs with thymine by 2 hydrogen bonds, while guanine pairs with cytosine through 3 hydrogen bonds, each due to their unique structures. A deoxyribonucleotide is the monomer, or single unit, of DNA, or deoxyribonucleic acid. Each deoxyribonucleotide comprises three parts: a nitrogenous base, a deoxyribose sugar, and one or more phosphate groups. The nitrogenous base is always bonded to the 1' carbon of the deoxyribose, which is distinguished from ribose by the presence of a proton on the 2' carbon rather than an -OH group. The phosphate groups bind to the 5' carbon of the sugar. When deoxyribonucleotides polymerize to form DNA, the phosphate group from one nucleotide will bond to the 3' carbon on another nucleotide, forming a phosphodiester bond via dehydration synthesis. 
New nucleotides are always added to the 3' carbon of the last nucleotide, so synthesis always proceeds from 5' to 3'. A phosphodiester bond is a group of strong covalent bonds between a phosphate group and two 5-carbon ring carbohydrates (pentoses) over two ester bonds. Phosphodiester bonds are central to most life on Earth, as they make up the backbone of the strands of DNA. In DNA and RNA, the phosphodiester bond is the linkage between the 3' carbon atom of one sugar molecule and the 5' carbon of another, deoxyribose in DNA and ribose in RNA. The phosphate groups in the phosphodiester bond are negatively-charged. Because the phosphate groups have a pKa near 0, they are negatively-charged at pH 7. This repulsion forces the phosphates to take opposite sides of the DNA strands and is neutralized by proteins (histones), metal ions such as magnesium, and polyamines. In order for the phosphodiester bond to be formed and the nucleotides to be joined, the tri-phosphate or di-phosphate forms of the nucleotide building blocks are broken apart to give off energy required to drive the enzyme-catalyzed reaction. When a single phosphate or two phosphates known as pyrophosphates break away and catalyze the reaction, the phosphodiester bond is formed. Hydrolysis of phosphodiester bonds can be catalyzed by the action of phosphodiesterases which play an important role in repairing DNA sequences. In biological systems, the phosphodiester bond between two ribonucleotides can be broken by alkaline hydrolysis because of the free 2' hydroxyl group. Forms of DNA A-DNA: A-DNA is one of the many possible double helical structures of DNA. A-DNA is thought to be one of three biologically active double helical structures along with B- and Z-DNA. It is a right-handed double helix fairly similar to the more common and well-known B-DNA form, but with a shorter more compact helical structure. It appears likely that it occurs only in dehydrated samples of DNA, such as those used in crystallographic experiments, and possibly is also assumed by DNA-RNA hybrid helices and by regions of double-stranded RNA. B-DNAThe most common form of DNA is B DNA. The DNA double helix is a spiral polymer of nucleic acids, held together by nucleotides which base pair together. In B-DNA, the most common double helical structure, the double helix is right-handed with about 10–10.5 nucleotides per turn. The double helix structure of DNA contains a major groove and minor groove, the major groove being wider than the minor groove. Given the difference in widths of the major groove and minor groove, many proteins which bind to DNA do so through the wider major groove. Z-DNA: Z-DNA is one of the many possible double helical structures of DNA. It is a left-handed double helical structure in which the double helix winds to the left in a zig-zag pattern (instead of to the right, like the more common B-DNA form). Z-DNA is thought to be one of three biologically active double helical structures along with A- and B-DNA. Z-DNA is quite different from the right-handed forms. In fact, Z-DNA is often compared against B-DNA in order to illustrate the major differences. The Z-DNA helix is left-handed and has a structure that repeats every 2 base pairs. The major and minor grooves, unlike A- and B-DNA, show little difference in width. 
Formation of this structure is generally unfavourable, although certain conditions can promote it, such as an alternating purine-pyrimidine sequence (especially poly(dGC)2), negative DNA supercoiling, or high salt and some cations (all at physiological temperature, 37 °C, and pH 7.3-7.4). Z-DNA can form a junction with B-DNA (called a "B-to-Z junction box") in a structure which involves the extrusion of a base pair. The Z-DNA conformation has been difficult to study because it does not exist as a stable feature of the double helix. Instead, it is a transient structure that is occasionally induced by biological activity and then quickly disappears.
Comparison of the three helical forms (bp = base pair, nm = nanometre):
Geometry attribute: A-DNA | B-DNA | Z-DNA
Diameter: 23 Å (2.3 nm) | 20 Å (2.0 nm) | 18 Å (1.8 nm)
Repeating unit: 1 bp | 1 bp | 2 bp
Inclination of bp to axis: +19° | −1.2° | −9°
Rise/bp along axis: 2.3 Å (0.23 nm) | 3.32 Å (0.332 nm) | 3.8 Å (0.38 nm)
Pitch/turn of helix: 28.2 Å (2.82 nm) | 33.2 Å (3.32 nm) | 45.6 Å (4.56 nm)
Mean propeller twist: +18° | +16° | 0°
Glycosyl angle: anti | anti | C: anti, G: syn
Sugar pucker: C3'-endo | C2'-endo | C: C2'-endo
Noncoding genomic DNA
In molecular biology, noncoding DNA describes components of an organism's DNA sequences that do not encode protein sequences.
Pseudogenes
Pseudogenes are DNA sequences, related to known genes, that have lost their protein-coding ability or are otherwise no longer expressed in the cell. Pseudogenes arise from retrotransposition or genomic duplication of functional genes, and become "genomic fossils" that are nonfunctional due to mutations that prevent the transcription of the gene, such as within the gene promoter region, or that fatally alter the translation of the gene, such as premature stop codons or frameshifts. Pseudogenes resulting from the retrotransposition of an RNA intermediate are known as processed pseudogenes; pseudogenes that arise from the genomic remains of duplicated genes or residues of inactivated genes are nonprocessed pseudogenes. While Dollo's Law suggests that the loss of function in pseudogenes is likely permanent, silenced genes may actually retain function for several million years and can be "reactivated" into protein-coding sequences, and a substantial number of pseudogenes are actively transcribed. Because pseudogenes are presumed to evolve without evolutionary constraint, they can serve as a useful model of the type and frequencies of various spontaneous genetic mutations.
Coiling of DNA
DNA supercoiling is important for DNA packaging within all cells. Because the length of DNA can be thousands of times that of a cell, packaging this genetic material into the cell or nucleus (in eukaryotes) is a difficult feat. Supercoiling of DNA reduces the space required and allows far more DNA to be packaged. In prokaryotes, plectonemic supercoils are predominant, because of the circular chromosome and relatively small amount of genetic material. In eukaryotes, DNA supercoiling exists on many levels of both plectonemic and solenoidal supercoils, with the solenoidal supercoiling proving most effective in compacting the DNA. Solenoidal supercoiling is achieved with histones to form a 10 nm fiber. This fiber is further coiled into a 30 nm fiber, and further coiled upon itself numerous times more. DNA packaging is greatly increased during nuclear division events such as mitosis or meiosis, where DNA must be compacted and segregated to daughter cells. 
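As a rough worked example (added here for illustration, not taken from the source), the rise-per-base-pair and pitch values in the comparison table above can be used to estimate how long a given stretch of helix is and how many turns it makes. The Python sketch below and its 1000 bp example are hypothetical; the numbers come straight from that table.

# Illustrative sketch (not from the source): extended length and number of
# helical turns for a DNA segment, using rise/bp and pitch from the table above.

HELIX_PARAMS = {            # (rise per base pair in nm, pitch per turn in nm)
    "A-DNA": (0.23, 2.82),
    "B-DNA": (0.332, 3.32),
    "Z-DNA": (0.38, 4.56),
}

def helix_dimensions(n_bp, form="B-DNA"):
    """Return (contour length in nm, number of helical turns) for n_bp base pairs."""
    rise, pitch = HELIX_PARAMS[form]
    length_nm = n_bp * rise
    return length_nm, length_nm / pitch

for form in HELIX_PARAMS:
    length, turns = helix_dimensions(1000, form)    # hypothetical 1000 bp segment
    print(f"1000 bp of {form}: about {length:.0f} nm long, about {turns:.0f} turns")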
Condensins and cohesins are Structural Maintenance of Chromosomes (SMC) proteins that aid in the condensation of sister chromatids and the linkage of the centromere in sister chromatids. These SMC proteins induce positive supercoils. Supercoiling is also required for DNA/RNA synthesis. Because DNA must be unwound for DNA/RNA polymerase action, supercoils will result. The region ahead of the polymerase complex will be unwound; this stress is compensated with positive supercoils ahead of the complex. Behind the complex, DNA is rewound and there will be compensatory negative supercoils. It is important to note that topoisomerases such as DNA gyrase (a type II topoisomerase) play a role in relieving some of the stress during DNA/RNA synthesis.
DNA supercoiling can be described numerically by changes in the 'linking number' Lk. The linking number is the most descriptive property of supercoiled DNA. Lk0, the number of turns in the relaxed (B-form) DNA plasmid/molecule, is determined by dividing the total base pairs of the molecule by the relaxed bp/turn, which, depending on the reference, is 10.4-10.5. Lk is merely the number of crosses a single strand makes across the other in a planar projection. The topology of the DNA is described by the equation below, in which the linking number is equivalent to the sum of Tw, the number of twists or turns of the double helix, and Wr, the number of coils or 'writhes':
Lk = Tw + Wr
In a closed DNA molecule, the sum of Tw and Wr, or the linking number, does not change. However, there may be complementary changes in Tw and Wr without changing their sum. The change in the linking number, ΔLk = Lk − Lk0, is the actual number of turns in the plasmid/molecule, Lk, minus the number of turns in the relaxed plasmid/molecule, Lk0. If the DNA is negatively supercoiled, ΔLk < 0. Negative supercoiling implies that the DNA is underwound. A standard expression independent of the molecule size is the "specific linking difference" or "superhelical density", denoted σ = ΔLk / Lk0; it represents the number of turns added or removed relative to the total number of turns in the relaxed molecule/plasmid, indicating the level of supercoiling.
The linking number is a numerical invariant that describes the linking of two closed curves in three-dimensional space. Intuitively, the linking number represents the number of times that each curve winds around the other. The linking number is always an integer, but may be positive or negative depending on the orientation of the two curves. Since the linking number L of supercoiled DNA is the number of times the two strands are intertwined (and both strands remain covalently intact), L cannot change. The reference state (or parameter) L0 of a circular DNA duplex is its relaxed state. In this state, its writhe W = 0. Since L = T + W, in a relaxed state T = L. Thus, if we have a 400 bp relaxed circular DNA duplex, L ~ 40 (assuming ~10 bp per turn in B-DNA). Then T ~ 40.
Positive supercoiling:
- T = 0, W = 0, then L = 0
- T = +3, W = 0, then L = +3
- T = +2, W = +1, then L = +3
Negative supercoiling:
- T = 0, W = 0, then L = 0
- T = −3, W = 0, then L = −3
- T = −2, W = −1, then L = −3
Negative supercoils favor local unwinding of the DNA, allowing processes such as transcription, DNA replication, and recombination. Negative supercoiling is also thought to favour the transition between B-DNA and Z-DNA, and to moderate the interactions of DNA binding proteins involved in gene regulation. 
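The bookkeeping above (Lk = Tw + Wr, ΔLk = Lk − Lk0, σ = ΔLk/Lk0) can be captured in a short calculation. The Python sketch below is an added illustration, not part of the source; the 4,200 bp plasmid and the 24 removed turns are made-up numbers chosen so that σ comes out near the commonly quoted value of about −0.06 for natural plasmids.

# Illustrative sketch (not from the source): linking-number bookkeeping for a
# closed circular DNA, using Lk = Tw + Wr and sigma = (Lk - Lk0) / Lk0.

RELAXED_BP_PER_TURN = 10.5          # relaxed B-DNA, roughly 10.4-10.5 bp/turn

def relaxed_linking_number(n_bp):
    """Lk0: number of helical turns in the relaxed circular molecule."""
    return n_bp / RELAXED_BP_PER_TURN

def superhelical_density(n_bp, lk):
    """sigma = delta-Lk / Lk0; negative values mean the DNA is underwound."""
    lk0 = relaxed_linking_number(n_bp)
    return (lk - lk0) / lk0

n_bp = 4200                              # hypothetical plasmid size
lk0 = relaxed_linking_number(n_bp)       # 400 turns
lk = lk0 - 24                            # suppose 24 turns have been removed
print(f"Lk0 = {lk0:.0f}, delta-Lk = {lk - lk0:+.0f}, sigma = {superhelical_density(n_bp, lk):.3f}")
# Prints: Lk0 = 400, delta-Lk = -24, sigma = -0.060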
Histones: The DNA binding protein Histones were discovered in 1884 by Albrecht Kossel. The word "histone" dates from the late 19th century and is from the German "Histon", of uncertain origin: perhaps from Greek histanai or from histos. Until the early 1990s, histones were dismissed by most as inert packing material for eukaryotic nuclear DNA, based in part on the "ball and stick" models of Mark Ptashne and others who believed transcription was activated by protein-DNA and protein-protein interactions on largely naked DNA templates, as is the case in bacteria. During the 1980s, work by Michael Grunstein demonstrated that eukaryotic histones repress gene transcription, and that the function of transcriptional activators is to overcome this repression. We now know that histones play both positive and negative roles in gene expression, forming the basis of the histone code. The discovery of the H5 histone appears to date back to 1970's, and in classification it has been grouped with The nucleosome core is formed of two H2A-H2B dimers and a H3-H4 tetramer, forming two nearly symmetrical halves by tertiary structure (C2 symmetry; one macromolecule is the mirror image of the other).The H2A-H2B dimers and H3-H4 tetramer also show pseudodyad symmetry. The 4 'core' histones (H2A, H2B, H3 and H4) are relatively similar in structure and are highly conserved through evolution, all featuring a 'helix turn helix turn helix' motif (which allows the easy dimerisation). They also share the feature of long 'tails' on one end of the amino acid structure - this being the location of post-translational modification (see below). It has been proposed that histone proteins are evolutionarily related to the helical part of the extended AAA+ ATPase domain, the C-domain, and to the N-terminal substrate recognition domain of Clp/Hsp100 proteins. Despite the differences in their topology, these three folds share a homologous helix-strand-helix (HSH) motif. Using an electron paramagnetic resonance spin-labeling technique, British researchers measured the distances between the spools around which eukaryotic cells wind their DNA. They determined the spacings range from 59 to 70 Å.In all, histones make five types of interactions with DNA: Helix-dipoles from alpha-helices in H2B, H3, and H4 cause a net positive charge to accumulate at the point of interaction with negatively charged phosphate groups on DNA Hydrogen bonds between the DNA backbone and the amide group on the main chain of histone proteins Nonpolar interactions between the histone and deoxyribose sugars on DNA Salt bridges and hydrogen bonds between side chains of basic amino acids (especially lysine and arginine) and phosphate oxygens on DNA Non-specific minor groove insertions of the H3 and H2B N-terminal tails into two minor grooves each on the DNA molecule The highly basic nature of histones, aside from facilitating DNA-histone interactions, contributes to the water solubility of histones. Histones are subject to post translational modification by enzymes primarily on their N-terminal tails, but also in their globular domains. Such modifications include methylation, citrullination, acetylation, phosphorylation, SUMOylation, ubiquitination, and ADP-ribosylation. This affects their function of gene regulation. In general, genes that are active have less bound histone, while inactive genes are highly associated with histones during interphase. 
It also appears that the structure of histones has been evolutionarily conserved, as any deleterious mutations would be severely maladaptive.

Histone-DNA interaction

The core histone proteins contain a characteristic structural motif termed the "histone fold", which consists of three alpha-helices (α1-3) separated by two loops (L1-2). In solution the histones form H2A-H2B heterodimers and H3-H4 heterotetramers. Histones dimerise about their long α2 helices in an anti-parallel orientation, and in the case of H3 and H4, two such dimers form a 4-helix bundle stabilised by extensive H3-H3' interaction. The H2A/H2B dimer binds onto the H3/H4 tetramer due to interactions between H4 and H2B, which include the formation of a hydrophobic cluster. The histone octamer is formed by a central H3/H4 tetramer sandwiched between two H2A/H2B dimers. Due to the highly basic charge of all four core histones, the histone octamer is only stable in the presence of DNA or very high salt concentrations.

Nucleosomes form the fundamental repeating units of eukaryotic chromatin, which is used to pack the large eukaryotic genomes into the nucleus while still ensuring appropriate access to it (in mammalian cells approximately 2 m of linear DNA have to be packed into a nucleus of roughly 10 µm diameter). Nucleosomes are folded through a series of successively higher-order structures to eventually form a chromosome; this both compacts DNA and creates an added layer of regulatory control which ensures correct gene expression. Nucleosomes are thought to carry epigenetically inherited information in the form of covalent modifications of their core histones. The nucleosome hypothesis was proposed by Don and Ada Olins in 1974 and by Roger Kornberg. The nucleosome core particle consists of about 146 bp of DNA wrapped in 1.67 left-handed superhelical turns around the histone octamer, consisting of 2 copies each of the core histones H2A, H2B, H3, and H4. Adjacent nucleosomes are joined by a stretch of free DNA termed "linker DNA" (which varies from 10-80 bp in length depending on species and tissue type).

DNA-binding domains

One or more DNA-binding domains are often part of a larger protein consisting of additional domains with differing function. The additional domains often regulate the activity of the DNA-binding domain. The function of DNA binding is either structural or involving transcription regulation, with the two roles sometimes overlapping. DNA-binding domains with functions involving DNA structure have biological roles in the replication, repair, storage, and modification of DNA, such as methylation. Many proteins involved in the regulation of gene expression contain DNA-binding domains. For example, proteins that regulate transcription by binding DNA are called transcription factors. The final output of most cellular signaling cascades is gene regulation. The DBD interacts with the nucleotides of DNA in a DNA sequence-specific or non-sequence-specific manner, but even non-sequence-specific recognition involves some sort of molecular complementarity between protein and DNA. DNA recognition by the DBD can occur at the major or minor groove of DNA, or at the sugar-phosphate DNA backbone (see the structure of DNA). Each specific type of DNA recognition is tailored to the protein's function. For example, the DNA-cutting enzyme DNase I cuts DNA almost randomly and so must bind to DNA in a non-sequence-specific manner.
But, even so, DNase I recognizes a certain 3-D DNA structure, yielding a somewhat specific DNA cleavage pattern that can be useful for studying DNA recognition by a technique called DNA footprinting. Many DNA-binding domains must recognize specific DNA sequences, such as the DBDs of transcription factors that activate specific genes, or those of enzymes that modify DNA at specific sites, like restriction enzymes and telomerase. The hydrogen bonding pattern in the DNA major groove is less degenerate than that of the DNA minor groove, providing a more attractive site for sequence-specific DNA recognition. The specificity of DNA-binding proteins can be studied using many biochemical and biophysical techniques, such as gel electrophoresis, analytical ultracentrifugation, calorimetry, DNA mutation, protein structure mutation or modification, nuclear magnetic resonance, X-ray crystallography, surface plasmon resonance, electron paramagnetic resonance, cross-linking and microscale thermophoresis (MST).

Types of DNA-binding domains

Originally discovered in bacteria, the helix-turn-helix motif is commonly found in repressor proteins and is about 20 amino acids long. In eukaryotes, the homeodomain comprises 2 helices, one of which recognizes the DNA (aka the recognition helix). They are common in proteins that regulate developmental processes (PROSITE HTH).

Figure: Crystallographic structure (PDB 1R4O) of a dimer of the zinc-finger-containing DBD of the glucocorticoid receptor (top) bound to DNA (bottom). Zinc atoms are represented by grey spheres and the coordinating cysteine sidechains are depicted as sticks.

The zinc finger

This domain is generally between 23 and 28 amino acids long and is stabilized by zinc ions coordinated by regularly spaced zinc-coordinating residues (either histidines or cysteines). The most common class of zinc finger (Cys2His2) coordinates a single zinc ion and consists of a recognition helix and a 2-strand beta-sheet. In transcription factors these domains are often found in arrays (usually separated by short linker sequences), and adjacent fingers are spaced at 3-basepair intervals when bound to DNA.

The basic leucine zipper (bZIP) domain contains an alpha helix with a leucine at every 7th amino acid. If two such helices find one another, the leucines can interact as the teeth in a zipper, allowing dimerization of two proteins. When binding to the DNA, basic amino acid residues bind to the sugar-phosphate backbone while the helices sit in the major grooves. It regulates gene expression. The bZIP family of transcription factors consists of a basic region that interacts with the major groove of a DNA molecule through hydrogen bonding, and a hydrophobic leucine zipper region that is responsible for dimerization.

Consisting of about 110 amino acids, the winged helix (WH) domain has four helices and a two-strand beta-sheet.

Winged helix turn helix

The winged helix-turn-helix domain (wHTH; SCOP 46785) is typically 85-90 amino acids long. It is formed by a 3-helical bundle and a 4-strand beta-sheet (wing).

The helix-loop-helix domain is found in some transcription factors and is characterized by two α helices connected by a loop. One helix is typically smaller and, due to the flexibility of the loop, allows dimerization by folding and packing against another helix. The larger helix typically contains the DNA-binding regions.

HMG-box domains are found in high mobility group proteins, which are involved in a variety of DNA-dependent processes like replication and transcription.
The domain consists of three alpha helices separated by loops.

DNA sequencing

RNA sequencing was one of the earliest forms of nucleotide sequencing. The major landmark of RNA sequencing is the sequence of the first complete gene and the complete genome of bacteriophage MS2, identified and published by Walter Fiers and his coworkers at the University of Ghent (Ghent, Belgium) between 1972 and 1976. Prior to the development of rapid DNA sequencing methods in the early 1970s by Frederick Sanger at the University of Cambridge, in England, and Walter Gilbert and Allan Maxam at Harvard, a number of laborious methods were used. For instance, in 1973, Gilbert and Maxam reported the sequence of 24 basepairs using a method known as wandering-spot analysis. The chain-termination method developed by Sanger and coworkers in 1975 soon became the method of choice, owing to its relative ease and reliability.

Maxam and Gilbert method

In 1976-1977, Allan Maxam and Walter Gilbert developed a DNA sequencing method based on chemical modification of DNA and subsequent cleavage at specific bases. Although Maxam and Gilbert published their chemical sequencing method two years after the ground-breaking paper of Sanger and Coulson on plus-minus sequencing, Maxam-Gilbert sequencing rapidly became more popular, since purified DNA could be used directly, while the initial Sanger method required that each read start be cloned for production of single-stranded DNA. However, with the improvement of the chain-termination method (see below), Maxam-Gilbert sequencing has fallen out of favour due to its technical complexity prohibiting its use in standard molecular biology kits, extensive use of hazardous chemicals, and difficulties with scale-up.

The method requires radioactive labeling at one 5' end of the DNA (typically by a kinase reaction using gamma-32P ATP) and purification of the DNA fragment to be sequenced. Chemical treatment generates breaks at a small proportion of one or two of the four nucleotide bases in each of four reactions (G, A+G, C, C+T). For example, the purines (A+G) are depurinated using formic acid, the guanines (and to some extent the adenines) are methylated by dimethyl sulfate, and the pyrimidines (C+T) are hydrolysed using hydrazine. The addition of salt (sodium chloride) to the hydrazine reaction inhibits the reaction of thymine, for the C-only reaction. The modified DNAs are then cleaved by hot piperidine at the position of the modified base. The concentration of the modifying chemicals is controlled to introduce on average one modification per DNA molecule. Thus a series of labeled fragments is generated, from the radiolabeled end to the first "cut" site in each molecule. The fragments in the four reactions are electrophoresed side by side in denaturing acrylamide gels for size separation. To visualize the fragments, the gel is exposed to X-ray film for autoradiography, yielding a series of dark bands, each corresponding to a radiolabeled DNA fragment, from which the sequence may be inferred. Also sometimes known as "chemical sequencing", this method led to the methylation interference assay used to map DNA-binding sites for DNA-binding proteins.

Dideoxynucleotide chain-termination methods

Because the chain-terminator method (or Sanger method, after its developer Frederick Sanger) is more efficient and uses fewer toxic chemicals and lower amounts of radioactivity than the method of Maxam and Gilbert, it rapidly became the method of choice.
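Both the Maxam-Gilbert lanes described above and the chain-termination reactions described next rest on the same readout logic: each reaction yields end-labelled fragments whose lengths mark the positions of a particular base (or base class), and ordering all fragment lengths across the lanes reconstructs the sequence. The toy Python sketch below illustrates that logic for the simplified case of one base per lane, using an invented 12-base sequence; it is a conceptual illustration only and does not model the actual chemistry.

```python
# Toy illustration of the "four lanes" readout logic shared by the
# chemical-cleavage and chain-termination methods. Conceptual only:
# the example sequence is invented and no chemistry is modelled.

def fragment_lengths_by_base(labelled_strand: str) -> dict[str, list[int]]:
    """For each base, list the lengths of end-labelled fragments that end
    at each occurrence of that base (length = position of the base)."""
    lanes: dict[str, list[int]] = {"A": [], "C": [], "G": [], "T": []}
    for position, base in enumerate(labelled_strand, start=1):
        lanes[base].append(position)
    return lanes


def read_sequence_from_lanes(lanes: dict[str, list[int]]) -> str:
    """Read the 'gel' from shortest fragment to longest: the lane in which
    each successive length appears gives the next base of the sequence."""
    length_to_base = {length: base
                      for base, lengths in lanes.items()
                      for length in lengths}
    return "".join(length_to_base[length] for length in sorted(length_to_base))


if __name__ == "__main__":
    labelled_strand = "ATGCGTACCTGA"   # invented example sequence
    lanes = fragment_lengths_by_base(labelled_strand)
    print(lanes)   # {'A': [1, 7, 12], 'C': [4, 8, 9], 'G': [3, 5, 11], 'T': [2, 6, 10]}
    assert read_sequence_from_lanes(lanes) == labelled_strand
```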
The key principle of the Sanger method was the use of dideoxynucleotide triphosphates (ddNTPs) as DNA chain terminators. The classical chain-termination method requires a single-stranded DNA template, a DNA primer, a DNA polymerase, normal deoxynucleotide triphosphates (dNTPs), and modified nucleotides (dideoxyNTPs) that terminate DNA strand elongation. These ddNTPs will also be radioactively or fluorescently labelled for detection in automated sequencing machines. The DNA sample is divided into four separate sequencing reactions, containing all four of the standard deoxynucleotides (dATP, dGTP, dCTP and dTTP) and the DNA polymerase. To each reaction is added only one of the four dideoxynucleotides (ddATP, ddGTP, ddCTP, or ddTTP), which are the chain-terminating nucleotides, lacking the 3'-hydroxyl (OH) group required for the formation of a phosphodiester bond between two nucleotides, thus terminating DNA strand extension and resulting in DNA fragments of varying length.

The newly synthesized and labelled DNA fragments are heat-denatured and separated by size (with a resolution of just one nucleotide) by gel electrophoresis on a denaturing polyacrylamide-urea gel, with each of the four reactions run in one of four individual lanes (lanes A, T, G, C); the DNA bands are then visualized by autoradiography or UV light, and the DNA sequence can be read directly off the X-ray film or gel image. In the image on the right, X-ray film was exposed to the gel, and the dark bands correspond to DNA fragments of different lengths. A dark band in a lane indicates a DNA fragment that is the result of chain termination after incorporation of a dideoxynucleotide (ddATP, ddGTP, ddCTP, or ddTTP). The relative positions of the different bands among the four lanes are then used to read (from bottom to top) the DNA sequence.

Technical variations of chain-termination sequencing include tagging with nucleotides containing radioactive phosphorus for radiolabelling, or using a primer labeled at the 5' end with a fluorescent dye. Dye-primer sequencing facilitates reading in an optical system for faster and more economical analysis and automation. The later development by Leroy Hood and coworkers of fluorescently labeled ddNTPs and primers set the stage for automated, high-throughput DNA sequencing. Chain-termination methods have greatly simplified DNA sequencing. For example, chain-termination-based kits are commercially available that contain the reagents needed for sequencing, pre-aliquoted and ready to use. Limitations include non-specific binding of the primer to the DNA, affecting accurate read-out of the DNA sequence, and DNA secondary structures affecting the fidelity of the sequence.

Dye-terminator sequencing

Dye-terminator sequencing utilizes labelling of the chain-terminator ddNTPs, which permits sequencing in a single reaction, rather than four reactions as in the labelled-primer method. In dye-terminator sequencing, each of the four dideoxynucleotide chain terminators is labelled with a fluorescent dye, each of which emits light at a different wavelength. Owing to its greater expediency and speed, dye-terminator sequencing is now the mainstay in automated sequencing. Its limitations include dye effects due to differences in the incorporation of the dye-labelled chain terminators into the DNA fragment, resulting in unequal peak heights and shapes in the electronic DNA sequence trace chromatogram after capillary electrophoresis (see figure to the left).
This problem has been addressed with the use of modified DNA polymerase enzyme systems and dyes that minimize incorporation variability, as well as methods for eliminating "dye blobs". The dye-terminator sequencing method, along with automated high-throughput DNA sequence analyzers, is now being used for the vast majority of sequencing projects. Common challenges of DNA sequencing include poor quality in the first 15-40 bases of the sequence and deteriorating quality of sequencing traces after 700-900 bases. Base-calling software typically gives an estimate of quality to aid in quality trimming. In cases where DNA fragments are cloned before sequencing, the resulting sequence may contain parts of the cloning vector. In contrast, PCR-based cloning and emerging sequencing technologies based on pyrosequencing often avoid using cloning vectors. Recently, one-step Sanger sequencing (combined amplification and sequencing) methods such as Ampliseq and SeqSharp have been developed that allow rapid sequencing of target genes without cloning or prior amplification. Current methods can directly sequence only relatively short (300-1000 nucleotides long) DNA fragments in a single reaction. The main obstacle to sequencing DNA fragments above this size limit is insufficient power of separation for resolving large DNA fragments that differ in length by only one nucleotide. In all cases the use of a primer with a free 3' end is essential.

Automation and sample preparation

Automated DNA-sequencing instruments (DNA sequencers) can sequence up to 384 DNA samples in a single batch (run) in up to 24 runs a day. DNA sequencers carry out capillary electrophoresis for size separation, detection and recording of dye fluorescence, and data output as fluorescent peak trace chromatograms. Sequencing reactions by thermocycling, cleanup and re-suspension in a buffer solution before loading onto the sequencer are performed separately. A number of commercial and non-commercial software packages can trim low-quality DNA traces automatically. These programs score the quality of each peak and remove low-quality base peaks (generally located at the ends of the sequence). The accuracy of such algorithms is inferior to visual examination by a human operator, but sufficient for automated processing of large sequence data sets.

Polymerase chain reaction

PCR is used to amplify a specific region of a DNA strand (the DNA target). Most PCR methods typically amplify DNA fragments of up to ~10 kilo base pairs (kb), although some techniques allow for amplification of fragments up to 40 kb in size. A basic PCR setup requires several components and reagents. These components include:
- DNA template that contains the DNA region (target) to be amplified.
- Two primers that are complementary to the 3' (three prime) ends of each of the sense and anti-sense strands of the DNA target.
- Taq polymerase or another DNA polymerase with a temperature optimum at around 70 °C.
- Deoxynucleotide triphosphates (dNTPs), the building blocks from which the DNA polymerase synthesizes a new DNA strand.
- Buffer solution, providing a suitable chemical environment for optimum activity and stability of the DNA polymerase.
- Divalent cations, magnesium or manganese ions; generally Mg2+ is used, but Mn2+ can be utilized for PCR-mediated DNA mutagenesis, as a higher Mn2+ concentration increases the error rate during DNA synthesis.
- Monovalent cations, typically potassium ions.
The PCR is commonly carried out in a reaction volume of 10-200 μl in small reaction tubes (0.2-0.5 ml volumes) in a thermal cycler. The thermal cycler heats and cools the reaction tubes to achieve the temperatures required at each step of the reaction (see below). Many modern thermal cyclers make use of the Peltier effect, which permits both heating and cooling of the block holding the PCR tubes simply by reversing the electric current. Thin-walled reaction tubes permit favorable thermal conductivity to allow for rapid thermal equilibration. Most thermal cyclers have heated lids to prevent condensation at the top of the reaction tube. Older thermocyclers lacking a heated lid require a layer of oil on top of the reaction mixture or a ball of wax inside the tube.

Figure 1: Schematic drawing of the PCR cycle. (1) Denaturing at 94-96 °C. (2) Annealing at ~65 °C. (3) Elongation at 72 °C. Four cycles are shown here. The blue lines represent the DNA template to which primers (red arrows) anneal and are extended by the DNA polymerase (light green circles) to give shorter DNA products (green lines), which themselves are used as templates as PCR progresses.

Typically, PCR consists of a series of 20-40 repeated temperature changes, called cycles, with each cycle commonly consisting of two or three discrete temperature steps (usually three). The cycling is often preceded by a single temperature step (called hold) at a high temperature (>90 °C), and followed by one hold at the end for final product extension or brief storage. The temperatures used and the length of time they are applied in each cycle depend on a variety of parameters. These include the enzyme used for DNA synthesis, the concentration of divalent ions and dNTPs in the reaction, and the melting temperature (Tm) of the primers.

Initialization step: This step consists of heating the reaction to a temperature of 94-96 °C (or 98 °C if extremely thermostable polymerases are used), which is held for 1-9 minutes. It is only required for DNA polymerases that require heat activation by hot-start PCR.

Denaturation step: This step is the first regular cycling event and consists of heating the reaction to 94-98 °C for 20-30 seconds. It causes melting of the DNA template by disrupting the hydrogen bonds between complementary bases, yielding single-stranded DNA molecules.

Annealing step: The reaction temperature is lowered to 50-65 °C for 20-40 seconds, allowing annealing of the primers to the single-stranded DNA template. Typically the annealing temperature is about 3-5 degrees Celsius below the Tm of the primers used. Stable DNA-DNA hydrogen bonds are only formed when the primer sequence very closely matches the template sequence. The polymerase binds to the primer-template hybrid and begins DNA synthesis.

Extension/elongation step: The temperature at this step depends on the DNA polymerase used; Taq polymerase has its optimum activity temperature at 75-80 °C, and commonly a temperature of 72 °C is used with this enzyme. At this step the DNA polymerase synthesizes a new DNA strand complementary to the DNA template strand by adding dNTPs that are complementary to the template in the 5' to 3' direction, condensing the 5'-phosphate group of the dNTPs with the 3'-hydroxyl group at the end of the nascent (extending) DNA strand. The extension time depends both on the DNA polymerase used and on the length of the DNA fragment to be amplified. As a rule of thumb, at its optimum temperature, the DNA polymerase will polymerize a thousand bases per minute.
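As a rough illustration of the cycle parameters just described (annealing a few degrees below the primer Tm, and extension time from the roughly 1,000 bases-per-minute rule of thumb), here is a minimal Python sketch; the final-elongation and hold steps follow in the next paragraph. The Wallace rule used to estimate primer Tm (2 °C per A or T plus 4 °C per G or C) is an assumption of this sketch rather than something specified above, and the primer sequences and amplicon length are invented.

```python
# Back-of-the-envelope sketch of PCR cycle parameters.
# Assumptions: Wallace rule for primer Tm (2*(A+T) + 4*(G+C) degrees C),
# annealing set ~5 degrees C below the lower primer Tm, and extension time
# from the ~1,000 bases/minute rule of thumb. Primers and amplicon length
# are invented examples.

def wallace_tm(primer: str) -> int:
    """Rough melting temperature (in degrees C) for a short primer."""
    seq = primer.upper()
    at = seq.count("A") + seq.count("T")
    gc = seq.count("G") + seq.count("C")
    return 2 * at + 4 * gc


def suggested_annealing_temp(fwd_primer: str, rev_primer: str, offset: int = 5) -> int:
    """Anneal a few degrees below the lower of the two primer Tm values."""
    return min(wallace_tm(fwd_primer), wallace_tm(rev_primer)) - offset


def extension_time_seconds(amplicon_bp: int, bases_per_minute: int = 1000) -> float:
    """Extension time from the ~1 kb per minute rule of thumb."""
    return 60.0 * amplicon_bp / bases_per_minute


if __name__ == "__main__":
    fwd = "AGCGTTAGCCTGATCCAGGC"   # invented 20-mer
    rev = "TTGCACCTAGGATCGCAGGC"   # invented 20-mer
    print("suggested annealing temperature:", suggested_annealing_temp(fwd, rev), "C")  # ~59
    print("extension time for a 2.5 kb target:", extension_time_seconds(2500), "s")     # 150.0
    # With near-perfect doubling each cycle (see the following paragraph),
    # n cycles give roughly 2**n copies of the target per starting template:
    print("copies per template after 30 cycles:", 2 ** 30)
```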
Under optimum conditions, i.e., if there are no limitations due to limiting substrates or reagents, at each extension step the amount of DNA target is doubled, leading to exponential (geometric) amplification of the specific DNA fragment.

Final elongation: This single step is occasionally performed at a temperature of 70-74 °C for 5-15 minutes after the last PCR cycle to ensure that any remaining single-stranded DNA is fully extended.

Final hold: This step at 4-15 °C for an indefinite time may be employed for short-term storage of the reaction.

To check whether the PCR generated the anticipated DNA fragment (also sometimes referred to as the amplimer or amplicon), agarose gel electrophoresis is employed for size separation of the PCR products. The size(s) of the PCR products is determined by comparison with a DNA ladder (a molecular weight marker), which contains DNA fragments of known size, run on the gel alongside the PCR products.

Facts to be remembered

- DNA polymerases are enzymes that synthesize polynucleotide chains from nucleoside triphosphates and make the DNA.
- In 1865 Gregor Mendel publishes his paper, Experiments on Plant Hybridization.
- In 1869, DNA was first isolated by the Swiss physician Friedrich Miescher, who discovered a microscopic substance in the pus of discarded surgical bandages.
- From 1880 to 1890 Walther Flemming, Eduard Strasburger, and Edouard van Beneden elucidate chromosome distribution during cell division.
- In 1889 Hugo de Vries postulates that "inheritance of specific traits in organisms comes in particles", naming such particles "(pan)genes".
- In 1903 Walter Sutton hypothesizes that chromosomes, which segregate in a Mendelian fashion, are hereditary units.
- In 1905 William Bateson coins the term "genetics" in a letter to Adam Sedgwick and at a meeting in 1906.
- In 1908 the Hardy-Weinberg law is derived.
- In 1910 Thomas Hunt Morgan shows that genes reside on chromosomes.
- In 1913 Alfred Sturtevant makes the first genetic map of a chromosome.
- In 1913 gene maps show chromosomes containing linearly arranged genes.
- In 1918 Ronald Fisher publishes "The Correlation Between Relatives on the Supposition of Mendelian Inheritance"; the modern synthesis of genetics and evolutionary biology starts. See population genetics.
- In 1928 Frederick Griffith discovers that hereditary material from dead bacteria can be incorporated into live bacteria (see Griffith's experiment).
- In 1931 crossing over is identified as the cause of recombination.
- In 1933 Jean Brachet is able to show that DNA is found in chromosomes and that RNA is present in the cytoplasm of all cells.
- In 1937 William Astbury produced the first X-ray diffraction patterns showing that DNA had a regular structure.
- In 1928, Frederick Griffith discovered that traits of the "smooth" form of the Pneumococcus could be transferred to the "rough" form of the same bacteria by mixing killed "smooth" bacteria with the live "rough" form.
- In 1952, Alfred Hershey and Martha Chase in the Hershey-Chase experiment showed that DNA is the genetic material of the T2 phage.
- In 1953, James D. Watson and Francis Crick suggested the double-helix model of DNA structure.
Purines are found in high concentration in meat and meat products, especially internal organs such as liver and kidney. Examples of high-purine sources include: sweetbreads, anchovies, sardines, liver, beef kidneys, brains, meat extracts (e.g., Oxo, Bovril), herring, mackerel, scallops, game meats, beer (from the yeast) and gravy.
bp = base pair(s). One bp corresponds to circa 3.4 Å of length along the strand.
kb (= kbp) = kilo base pairs = 1,000 bp
Mb = mega base pairs = 1,000,000 bp
Gb = giga base pairs = 1,000,000,000 bp

Analysis of DNA topology uses three values:
- L = linking number - the number of times one DNA strand wraps around the other. It is an integer for a closed loop and constant for a closed topological domain.
- T = twist - total number of turns in the double-stranded DNA helix. This will normally tend to approach the number of turns that a topologically open double-stranded DNA helix makes free in solution: number of bases/10.5, assuming there are no intercalating agents (e.g., chloroquine) or other elements modifying the stiffness of the DNA.
- W = writhe - number of turns of the double-stranded DNA helix around the superhelical axis.
L = T + W and ΔL = ΔT + ΔW. Any change of T in a closed topological domain must be balanced by a change in W, and vice versa. This results in higher-order structure of DNA. A circular DNA molecule with a writhe of 0 will be circular. If the twist of this molecule is subsequently increased or decreased by supercoiling, then the writhe will be appropriately altered, making the molecule undergo plectonemic or toroidal superhelical coiling.

When the ends of a piece of double-stranded helical DNA are joined so that it forms a circle, the strands are topologically knotted. This means the single strands cannot be separated by any process that does not involve breaking a strand (such as heating). The task of un-knotting topologically linked strands of DNA falls to enzymes known as topoisomerases. These enzymes are dedicated to un-knotting circular DNA by cleaving one or both strands so that another double- or single-stranded segment can pass through. This un-knotting is required for the replication of circular DNA and for various types of recombination in linear DNA, which have similar topological constraints.

- 1972 Development of recombinant DNA technology, which permits isolation of defined fragments of DNA; prior to this, the only accessible samples for sequencing were from bacteriophage or virus DNA.
- 1977 The first complete DNA genome to be sequenced is that of bacteriophage φX174.
- 1977 Allan Maxam and Walter Gilbert publish "DNA sequencing by chemical degradation". Frederick Sanger, independently, publishes "DNA sequencing with chain-terminating inhibitors".
- 1984 Medical Research Council scientists decipher the complete DNA sequence of the Epstein-Barr virus, 170 kb.
- 1986 Leroy E. Hood's laboratory at the California Institute of Technology and Smith announce the first semi-automated DNA sequencing machine.
- 1987 Applied Biosystems markets the first automated sequencing machine, the model ABI 370.
- 1990 The U.S. National Institutes of Health (NIH) begins large-scale sequencing trials on Mycoplasma capricolum, Escherichia coli, Caenorhabditis elegans, and Saccharomyces cerevisiae (at US$0.75/base).
- 1991 Sequencing of human expressed sequence tags begins in Craig Venter's lab, an attempt to capture the coding fraction of the human genome.
- 1995 Craig Venter, Hamilton Smith, and colleagues at The Institute for Genomic Research (TIGR) publish the first complete genome of a free-living organism, the bacterium Haemophilus influenzae. The circular chromosome contains 1,830,137 bases and its publication in the journal Science marks the first use of whole-genome shotgun sequencing, eliminating the need for initial mapping efforts.
- 1996 Pål Nyrén and his student Mostafa Ronaghi at the Royal Institute of Technology in Stockholm publish their method of pyrosequencing.
- 1998 Phil Green and Brent Ewing of the University of Washington publish "phred" for sequencer data analysis.
- 2001 A draft sequence of the human genome is published.
- 2004 454 Life Sciences markets a parallelized version of pyrosequencing. The first version of their machine reduced sequencing costs 6-fold compared to automated Sanger sequencing, and was the second of a new generation of sequencing technologies, after MPSS.

List of bases found in DNA and RNA:
- Adenine (A): purine; found in DNA and RNA
- Guanine (G): purine; found in DNA and RNA
- Cytosine (C): pyrimidine; found in DNA and RNA
- Thymine (T): pyrimidine; found in DNA
- Uracil (U): pyrimidine; found in RNA

- Griffith experiment
- Hershey–Chase experiment
- Hershey, A.D. and Chase, M. (1952). "Independent functions of viral protein and nucleic acid in growth of bacteriophage". J Gen Physiol 36: 39–56.
- Avery–MacLeod–McCarty experiment
- Base pair
- Phosphodiester bond
- Noncoding DNA
- DNA supercoil
- Vologodskii AV, Lukashin AV, Anshelevich VV, et al. (1979). "Fluctuations in superhelical DNA". Nucleic Acids Res 6: 967–682. doi:10.1093/nar/6.3.967.
- H. S. Chawla (2002). Introduction to Plant Biotechnology. Science Publishers. ISBN 1578082285.
- Kayne PS, Kim UJ, Han M, Mullen JR, Yoshizaki F, Grunstein M (1988). "Extremely conserved histone H4 N terminus is dispensable for growth but essential for repressing the silent mating loci in yeast". Cell 55 (1): 27–39. PMID 3048701.
- Crane-Robinson C, Dancy SE, Bradbury EM, Garel A, Kovacs AM, Champagne M, Daune M (August 1976). "Structural studies of chicken erythrocyte histone H5". Eur. J. Biochem. 67 (2): 379–88. doi:10.1111/j.1432-1033.1976.tb10702.x. PMID 964248.
- Aviles FJ, Chapman GE, Kneale GG, Crane-Robinson C, Bradbury EM (August 1978). "The conformation of histone H5. Isolation and characterisation of the globular segment". Eur. J. Biochem. 88 (2): 363–71. doi:10.1111/j.1432-1033.1978.tb12457.x. PMID 689022.
- DNA sequencing
- Smith LM, Sanders JZ, Kaiser RJ, et al. (1986). "Fluorescence detection in automated DNA sequence analysis". Nature 321 (6071): 674–9. doi:10.1038/321674a0. PMID 3713851. "We have developed a method for the partial automation of DNA sequence analysis. Fluorescence detection of the DNA fragments is accomplished by means of a fluorophore covalently attached to the oligonucleotide primer used in enzymatic DNA sequence analysis. A different coloured fluorophore is used for each of the reactions specific for the bases A, C, G and T. The reaction mixtures are combined and co-electrophoresed down a single polyacrylamide gel tube, the separated fluorescent bands of DNA are detected near the bottom of the tube, and the sequence information is acquired directly by computer."
- Smith LM, Fung S, Hunkapiller MW, Hunkapiller TJ, Hood LE (April 1985). "The synthesis of oligonucleotides containing an aliphatic amino group at the 5' terminus: synthesis of fluorescent DNA primers for use in DNA sequence analysis". Nucleic Acids Res. 13 (7): 2399–412. doi:10.1093/nar/13.7.2399. PMID 4000959. PMC 341163. http://nar.oxfordjournals.org/cgi/pmidlookup?view=long&pmid=4000959.
- "Phred - Quality Base Calling". http://www.phrap.com/phred/. Retrieved 2011-02-24.
- "Base-calling for next-generation sequencing platforms — Brief Bioinform". http://bib.oxfordjournals.org/content/early/2011/01/18/bib.bbq077.full. Retrieved 2011-02-24.
- Murphy, K.; Berg, K.; Eshleman, J. (2005). "Sequencing of genomic DNA by combined amplification and cycle sequencing reaction".
Clinical Chemistry 51 (1): 35–39.
- Sengupta, D.; Cookson, B. (2010). "SeqSharp: A general approach for improving cycle-sequencing that facilitates a robust one-step combined amplification and sequencing method". The Journal of Molecular Diagnostics: JMD 12 (3): 272–277.
- Polymerase chain reaction
http://en.wikibooks.org/wiki/An_Introduction_to_Molecular_Biology/DNA_the_unit_of_life
Empowering Learners and Teachers

- "With great power comes great responsibility." (Stan Lee: Uncle Ben, talking to Peter Parker in the Spider-Man movie.)

People setting educational goals find it easy to include "Empowering students" or "Enabling students." However, they find it hard to agree on what these lofty phrases mean or how to accomplish them. (They may also discover that they want these goals to apply only when it's not inconvenient for those in charge.) Many of the examples in this document are oriented toward math education. However, the general ideas cut across all disciplines.

Empowering and enabling, though closely related, have somewhat different meanings. Encarta® World English Dictionary © 1999 Microsoft Corporation states:

Empower em·pow·er vt
1. to give somebody power or authority (often passive)
2. to give somebody a sense of confidence or self-esteem
Synonyms: authorize, allow, sanction

Enable en·a·ble vt
1. to provide somebody with the resources, authority, or opportunity to do something
2. to make something possible or feasible
Synonyms: allow, facilitate, permit, make possible

This document may serve as a platform for people to share ideas on how our informal and formal education systems can and should empower and enable students and their teachers. Two key student-oriented questions immediately come to mind:
1. What various powers and opportunities do we want to make available to students?
2. How should responsibilities change as students realize and accept their growing powers and opportunities?
This document also discusses empowering and enabling teachers, and the same two questions apply. Many K-12 teachers feel that the increasing emphasis on assessment and high-stakes testing is driving curriculum and instruction in a manner that disempowers teachers and decreases their opportunities to make curriculum content, instruction, and assessment decisions best suited for their particular students.

Roles of Empowering and Enabling in Developing Expertise

In terms of how I (David Moursund) think about informal and formal education, "empowering and enabling" people means giving
- permission or encouragement,
- assistance such as instruction, and
- appropriate other resources such as tools
so that the person can gain in expertise and can use the increased expertise. Such increasing personal expertise normally leads to increased, valid self-esteem.

Who decides on what areas of expertise are to be developed? In a rigid top-down system, these decisions are made at levels above the teacher. In some sense, many schools are factory-like environments in which teachers are expected to teach a prescribed curriculum and students are expected to learn the prescribed curriculum. The teachers are the machinery and the students are the products. Permission to gain and use expertise are key issues. As a lifelong student, I find it useful to think in terms of ideas such as:
- I give (or fail to give) myself permission to develop more expertise in an area. I may withhold this self-permission if I think that doing so will be in agreement with what my parents or some other adults want. Alternatively, I may give myself permission and be strongly intrinsically motivated to gain expertise in areas that go against the wishes of my parents or other people.
- My parents and/or other caregivers give (or fail to give) me permission to develop more expertise in an area. Their decisions on giving or withholding permission may not be in tune with my interests and natural abilities.
- The cultural environment that surrounds me gives (or fails to give) me permission to develop more expertise in an area.
- Our legal system and government give (or fail to give) me permission to develop more expertise in an area.

In the above list, expertise is a concept applicable to every area of learning and skill building. An area of expertise can be very narrow, perhaps even a small island of expertise, and is certainly not restricted to standard academic subjects. Suppose that I have a younger sibling I like to tease. With appropriate practice, feedback from my sibling, and perhaps feedback from others, I'll probably become much "better" at such teasing. That is, my level of "teasing younger sibling" expertise increases. Somewhat similarly, many children tend to be self-centered, paying little attention to others' needs and wants. (All children go through a phase of being egocentric.) Perhaps you have experienced their disruptive behavior. Through practice, such children can gain in expertise in their self-centered disruptive behavior. Good parenting skills that include denial of permission and instruction in more appropriate behavior can channel the child's expertise-building abilities in other directions. Parents can begin to apply these skills as soon as a baby is able to associate cause with effect (consequences).

Permission or its denial is often subtle. Libraries in the elementary and secondary schools seldom contain sexually explicit magazines. Indeed, strong efforts usually are made to ensure that the available library materials are "appropriate" according to "standards" generally agreeable to the community and, often, to any vocal faction thereof. The Web broadens this "library problem" since it makes a huge library available to anyone who can connect to the Internet. For me, what comes out of such examples is that the permission aspect of empowerment needs to be well reasoned, and given with wisdom and foresight. One of the responsibilities of parenthood is to give or withhold permission based on having a greater breadth and depth of experience, knowledge, wisdom, and foresight than does a child. The same responsibilities apply to schoolteachers and our overall formal educational system.

A Great Video Showing Empowerment Through Education

I highly recommend you spend 18 minutes with the video http://www.ted.com/talks/view/id/156, a moving story of a person trying to improve the educational system in Ghana. It talks about leadership and empowerment in a way that is powerful, moving, and thought provoking. It presents a picture of education to substantially improve a country and a continent. The Website has this description:
- Patrick Awuah left a comfortable life in Seattle to return to Ghana and co-found, against the odds, a liberal arts college. Why? Because he believes that Ghana's failures in leadership—and he gives several mind-boggling examples—stem from a university system that fails to train real leaders. In a talk that brought the TEDGlobal audience enthusiastically to their feet, he explains how a true liberal arts education—steeped in critical thinking, idealism, and public service—can produce the quick-thinking, ethical leaders needed to move his country forward.

Empower and Enable for Now and in the Future

Think about the idea of empowerment and enablement for immediate use and for use in the future. This relates to immediate gratification versus delayed gratification. It relates to the often-stated goal that education should prepare students for gainful employment.
It relates to requiring or strongly encouraging students to take courses that prepare them for certain courses they might take in the future, or that prepare them to go to college or vocational institutions. As very young children gain in cognitive maturity, they begin to understand that there is a tomorrow, and they begin to understand that their actions in the past and today will affect them tomorrow. This also relates to the idea that actions have consequences and to developing a habit of thinking about possible consequences of contemplated actions. Almost all children have some difficulty in learning about such causality and about taking responsibility for their own actions (unless the consequences are immediate); some children have considerable difficulty. Research in cognitive neuroscience is increasing our understanding of impulse control and why some children are naturally much better at it than others.

We try to educate students to consider the possible effects of an action they are considering. We know that often, for the immature, to think is to do (that is, immediate action occurs as soon as one has a thought about doing that action). Couple this with some of the effects of one's actions not being seen until far in the subjective future, and you see the challenge.
- As an aside, consider large issues such as global warming, extinction of various species, and poverty. Dealing maturely with such problems requires very large numbers of people to learn to take responsibility for their collective actions related to the issue. We need leaders who will facilitate this group effort. This type of analysis relates to considering the use and misuse of the commons.

For me, this line of thinking leads to a need for a strong informal and formal educational track that helps students think in terms of cause and effect, along with short-term and longer-term consequences of currently planned or taken actions. Parents struggle with this with their children, because it is really hard for a young child to think about possible consequences of actions and to make decisions that will lead to desirable longer-term consequences—and this assumes the parents have developed these skills.
- As an aside, it is clear to me that many adults have considerable difficulty in thinking about the longer-term effects of their actions. A good example is making purchases using high interest rate credit cards and building up large credit card debts in the process. Indeed, our national leaders display a similar lack of restraint as they often pass spending bills that lead to increases in the National Debt.

Schools and students in school struggle with this because much of formal education focuses on consequences that, from a student's point of view, are "far over the horizon." The consequences are so far into the future that they have little or no meaning to the student. This is true for much of math beyond arithmetic, for history, for much of the sciences, and for almost any subject that a student doesn't happen to be inherently and immediately interested in. This struggle is built into the nature of schools. In large part, schools exist to teach people that which they need, or will need, but which they will not learn during the routines of their lives outside of school. That is, school education tends to be future oriented. Note, however, that schools have a hard time adjusting to a rapidly changing future.
We especially see this in areas such as the computer field and other rapidly changing technologies.

Immediate and Delayed Feedback

Learning requires feedback. The feedback can come from oneself. I am hungry and as I wander through the woods, doing my "hunter-gatherer" thing, I see some berries that are visually appealing. I cautiously taste and eat one. My taste system immediately rejects the berry. I gag, and I feel ill. In this one-trial learning event I learn to never eat this type of berry again. Suppose, however, the berry tastes good and my stomach does not reject it. I eat quite a few, and then continue with my hunting and gathering. I eat a variety of other roots, fruits, and so on. Later in the day I grow ill, throw up, and nearly pass out. The cause or causes may be quite complex. For example, two of the things that I ate may have chemically reacted with each other and produced a poison.

A person's brain and body are well equipped to deal with immediate feedback situations. They are not so well equipped to deal with delayed feedback situations. The cause and effect is often not clear. Moreover, one may well get immediate gratification (that is, positive reinforcement), and only much later be faced with long-term consequences. This happens, for example, when one buys using a credit card, has the immediate gratification of owning and using the goods, and only much later faces the consequences of needing to pay for them.

Our informal and formal educational system faces the challenge of students encountering more and more immediate gratification situations. There has been a substantial increase in immediate gratification through computer games, television, cell telephones, and so on. The same situation exists for adults. A great many adults have trouble resisting the immediate gratification of the various forms of entertainment, food, and buying goods. Thus, our informal and formal educational systems are faced with the challenge of helping students of all ages gain the maturity, knowledge, and skills to effectively deal with the issues of immediate gratification and long-term consequences. Gaining impulse control is a critical component of learning to be a responsible adult.

Applications of Ideas Given Above

This section looks at several educationally oriented examples based on immediate and delayed gratification, and empowerment of students and their teachers.

Example: Reading and Math Education

In this example, let's assume that a child is growing up in a setting where the responsible adult caregivers are reasonably proficient in reading, writing, and arithmetic. Thus, the child "sees" the adults making routine use of such knowledge and skills. The adults read to the child, and this lap sitting, being read to, and interaction with the adult and the book are pleasurable. Notice the child's immediate gratification, and that the child has no insight into how this repeated experience contributes to future learning of reading and writing. The adult reader makes a conscious decision that may well include giving up some current gratification (the adult could spend the time doing other things or could be reading the book for the fourteenth time) in order to increase long-term bonding with the child, provide gratification to the child, and contribute to the child's current and future education. We know that this reading is an important part of a young child's education and that it helps build a foundation for future learning.
It may well help the child become intrinsically motivated to want to learn to read.
- This is an aside. This example muddled my thoughts on intrinsic versus extrinsic motivation. The extrinsic encouragement of the parents may well lead to intrinsic motivation of the child. Reflecting on this, I concluded that almost all intrinsic motivation is latent; that is, one cannot develop intrinsic motivation for something until one has experienced that something. Green Eggs and Ham furnishes an example.

Thus, the child may want to learn to do reading-type things even before getting to school, may thoroughly enjoy the opportunity to learn to read, and may become a proficient reader. All of this can occur with the child having little or no insight into how good reading skills will be useful throughout years to come. Learning to read has another feature that helps in the learning process. As children learn to read, they are empowered to read self-selected materials on topics and/or in areas they find intrinsically motivating. This is a huge step forward in informal education and in formal education that provides the learner with electives. Learning a sport or a complex game entails a similar process. As expertise grows, learners get feedback from themselves and from others about their behaviors and progress.

Now, think about self-selection regarding arithmetic (math). Children see and hear the adults and older children telling time and acting on the results, dealing with money, reading a calendar, and so on. Children are taught to count (say the number words) and to establish a one-to-one correspondence between the number words and items in a set of objects. Counting likely is tied in with sitting on an adult's lap, being read to, and receiving direct instruction on counting various objects in pictures. If the adults have good parenting skills, children receive immediate positive feedback for every counting effort. Thus, I see a strong link between the reading and oral aspects of counting and simple arithmetic that occur in the home environment and children's motivation to continue to learn more about reading and math during the first years of schooling.

However, let's look at telling and understanding time, and calendaring. This is another piece of the math example. Time of day, day of week, day of month, month, and year are complex and challenging ideas, objectively and socially. (If you doubt this, consider the problem of writing a program or spreadsheet formula to display how old one is given the date and time of birth and the current date and time.) Teaching occurs both informally and formally at home and at school. We know that the importance of time, time telling, being punctual, and so on varies considerably among cultures. (For one discussion of this, see http://parkinslot.blogspot.com/2004/02/culture-and-punctuality.html.) A culture in which most workers punch time clocks will consider it important to teach students about time measurement, paying attention to the time, and taking responsibility for such actions as being on time (or failing to be on time) for work or school. Thus, we can think of helping a student learn about time as empowering the student.

As teachers, we sense the immediate and long-term benefits to our students from what they do under our supervision. However, young students may not see and understand these benefits—and many do not until they realize that entrance into the world of work is impending.
Thus, we have an example of a conflict between adults deciding what will empower students, and students understanding that they are being empowered and having extrinsic motivation to gain increased expertise in the area. By definition, motivation is seldom a problem if there’s intrinsic motivation; there may be a problem with balancing time spent learning in one area with time needed for other activities. Roughly speaking, children tend to enjoy school math up through the third grade. For many students, there is a significant decline in interest, perceived value, intrinsic motivation, and so on starting at about the fourth grade. It is then when the curriculum moves beyond whole-number and decimal addition and subtraction, identification of fractions, direct measurement, sorting, and recognition of geometric shapes. While adults believe that it is very important to teach this "higher" math (fourth grade and higher) and believe that this empowers students, many students do not agree. They do not see immediate benefits. Indeed, many experience boredom, or failure, or achieve “success” only by following the “recipe” without comprehension. (This lack of foresight is not to be wondered at. In life apart from some technical work, how often does one use math beyond the third-grade level?) One can analyze any school curriculum content and curriculum strand from the point of view of empowering students. This analysis can examine who is making the decision as to whether the student is being empowered. One can examine possible negative consequences of an adult-set goal of empowering students that results in many students being disempowered. Information and Communication Technology (ICT) adds questions to discussions about empowerment and enabling during education. When we provide ICT tools to students and teach their uses, what is being gained and what is being lost? Is the student being appropriately empowered in both the short and long run? The digital watch is a commonplace ICT object. A young person can learn to “read” a digital watch, to recognize and say the numbers and words representing time of day, day of the week, day of the month, and so on. However, this data may have no meaning to the student. Contrast this with a child learning to read an analog watch or clock, and learning to read a calendar. An analog watch is like many other analog measuring instruments in that one can "see" the amount of time remaining before an event (such as lunch) occurs. A physical calendar is an analog-type display device. One can see the days laid out as days of a week and days of a month. One can readily count the number of days before the next Saturday arrives. The analog watch and calendar are more visually in tune with the way most people's minds work, as contrasted with digital equivalents; and they display information in context. The following was contributed 5/5/2008 by Laura Dunkin (EDT630) Reading comprehension can be introduced at a very young age. When parents read to their children, it creates a positive bond between them. This bond can be nourished by continuing the routine. Eventually, parents can ask questions about the stories being read. Or, ask the child to tell the story in his or her own words. These routines are the beginning of the wonderful world of reading. Teachers also have an enormous role to play when it comes to teaching reading comprehension. Comprehension is the ultimate reason for reading. It is an imperative part of the learning process. 
Unless comprehension is fully achieved, a student's experience with a text is not complete. It is a meaning-making process that cannot occur unless a student's individual style of learning is met. The teacher should implement several different comprehension strategies in order to ensure that each student grasps the meaning of the story at hand. The teacher needs to teach and utilize the strategies in such a manner that the students are aware of and constantly monitor their own thinking processes as they read. Teachers need to be aware of the methods they use when instructing students on comprehension. They need to fully understand the strategies that they are presenting and use them in their own reading and learning activities. They need to constantly model, discuss, and participate in the implementation of comprehension strategies. Many instructors are unaware of the importance of finding different strategies to use when teaching comprehension. It is important that a teacher observe and interact with his or her students in order to research and find the most effective comprehension strategies to use. Each student has an individual learning style, and therefore, each student must be equipped with the knowledge to determine what he or she needs as a reader. A reader should be able to discuss and defend characters and plots to gain full meaning of a text. The simple act of recalling a story does not give a proper example of comprehension. When students comprehend text, learning has taken place. Then, after they understand what the author is conveying, students might realize that they enjoy learning about a specific topic. When students become interested in something, teachers need to encourage them to read more books pertaining to that subject. This helps to create a firm foundation for the love of reading.

Empowering Students to Help Make Classroom Rules

This section is specifically directed at classroom teachers. Soon after first contact with a class, you—like most classroom teachers—probably state and explain the rules that students are to follow. You have developed these rules through years of experience or have perhaps secured a list from a more experienced teacher. A different approach is to make use of a set of rules and ways of implementing the rules that have been developed by researchers in the field. Many schools located throughout the country have adopted such effective behavioral support tools and methods. There are other alternatives. An example is provided by the approach used by Kathie Marshall. Quoting from her article in the September 2007 Teacher Magazine:
- [I say to my class:] Welcome to a new school year, students. It is my goal that each of you will be happy in our classroom each and every day. In order to make that happen, though, I have to be happy, too. So let's work together to develop some class rules and routines that work for all of us.
- During nearly three decades as a classroom teacher, I have never had a problem getting students to develop a list of guidelines both they and I could live with. And I never hesitated to throw in rules that mattered to me. I called them my "pet peeves."
Notice how this approach meets the needs of the teacher and at the same time gives some ownership to the students. In addition, it gives the teacher an opportunity to learn from the students. A Website that discusses this approach is http://www.education.ky.gov/KDE/Instructional+Resources/Career+and+Technical+Education/Establishing+Classroom+Rules.htm.
Empowerment in Math Education

Our society considers math to be such an important discipline of study that formal, required schooling in this discipline begins in kindergarten and continues year after year after year. Indeed, students may be required to take three years of math during their four years of high school. They may be required to pass certain math tests in order to graduate from high school. They may face additional math course requirements in college.

Through third grade, students are easily convinced that what they are learning is useful. They can think of immediate uses. As students move on to higher grades, the math they are exposed to is more abstract and more separated from their current, everyday lives. It is increasingly separate from (not related to or integrated into) the rest of their everyday school curriculum. Students are told that "you will need this in the future." (They also can be told that the math they're studying enables them to use spreadsheets to greater effect—that they'll know how to turn a situation that involves quantities into a well-defined problem, to determine what data is needed, and to set up a spreadsheet to provide answers.)

I have a doctorate in math, and I view the world through "math colored" glasses. That is, I look for math problems and I think mathematically as a routine part of my life. Numbers are my friends, and patterns and relationships related to math intrinsically interest me. I think it would be nice if more people had this love for, appreciation of, interest in, and ability routinely to use math. Unfortunately, our current math education system is not good at achieving these results. A large percentage of adults make statements such as "I hated math when I was in school" and "I can't do math." In essence, our math education system has not mathematically empowered these people. Indeed, it seems to have disempowered them.

Many math education leaders are aware of this situation and have given deep thought as to what might change the situation. My own thoughts center on topics such as:
- Thoroughly integrating use of calculators and computers into curriculum content, instructional processes, and assessment. This includes teaching computational thinking as a routine component of the entire math curriculum. Computational thinking is also briefly discussed later in this document.
- A significant increase in use of modern computer-assisted learning and distance learning.
- An emphasis on helping students to gain in their Piagetian math cognitive maturity and in their math maturity. (Read about these ideas in http://iae-pedia.org/Good_Math_Lesson_Plans.)
- An increased emphasis on helping students to learn to learn math, learn to self-assess their math work, and learn to take increased responsibility for their own math learning.
- An increased emphasis on routinely integrating use of math into other curriculum areas. A significant aspect of this would be teaching and using computational thinking as part of every discipline.

Learned Helplessness in Math

Researchers have built up an extensive literature on learned helplessness. Some of this research certainly applies to many of the "I can't do math" people. It appears that many such people are sort of bragging that they can't do math. Here is a brief introduction to the topic of learned helplessness:
Seligman and his colleagues, while studying the relationship between fear and learning, accidentally discovered an unexpected phenomenon while doing experiments on dogs using Pavlovian (classical) conditioning. As you may observe in yourself or in a dog, when you are presented with food, you have a tendency to salivate. Pavlov discovered that if a ringing bell or tone is repeatedly paired with this presentation of food, the dog salivates. Later, all you have to do is ring the bell and the dog salivates. However, in Seligman's experiment, instead of pairing the tone with food, he paired it with a harmless shock, restraining the dog in a hammock during the learning phase. The idea, then, was that after the dog learned this, the dog would feel fear on the presentation of a tone, and would then run away or do some other behavior. Next, they put the conditioned dog into a shuttlebox, which consists of a low fence dividing the box into two compartments. The dog can easily see over the fence, and jump over if it wishes. So they rang the bell. Surprisingly, nothing happened! (They were expecting the dog to jump over the fence.) Learned helplessness in math may well be a fear of failure (a disempowering situation) that comes from the way math is traditionally taught. Quoting from Culture, communication, and mathematics learning: - Many Americans are convinced that they can never learn mathematics. This pervasive attitude is an example of what psychologists call learned helplessness. McLeod & Ortega (1993) define learned helplessness in the mathematics education context as "a pattern of behavior whereby students attribute failure to lack of ability" (p. 28). These authors contrast learned helplessness with mastery orientation. In mastery orientation, students have confidence in their ability to solve challenging problems. Learned helplessness is negatively related to persistence, while mastery orientation is positively connected with persistence. - McLeod & Ortega (1993) found that a student's self-concept could be modified by social context. They describe how classroom conversation, such as a teacher's characterization of a problem as "easy," can profoundly demoralize students. The National Council of Teachers of Mathematics [NCTM] Assessment Standards for School Mathematics (1995) defines mathematical disposition as "interest in, and appreciation for, mathematics; a tendency to think and act in positive ways; includes confidence, curiosity, perseverance, flexibility, inventiveness, and reflectivity in doing mathematics" (p. 88). The critics of the Standards dismiss this notion of disposition as nonsense and advocate a back-to-basics approach. In the words of Jennings (1996), "get a math book, make students practice problems, have them do simple addition, subtraction, and multiplication in their heads, give them standardized tests, and drop the group work." This back-to-basics orientation seems more rooted in nostalgia than actual research. McLeod and Ortega (1993) give us reason to hope that if we address the affective components of mathematics education, as suggested in the NCTM Standards, we can improve students' achievements. Appropriate teaching can overcome or prevent math learned helplessness. Quoting from a Scientific American article titled The Secret to Raising Smart Kids: - People can learn to be helpless, too, but not everyone reacts to setbacks this way.
I wondered: Why do some students give up when they encounter difficulty, whereas others who are no more skilled continue to strive and learn? One answer, I soon discovered, lay in people’s beliefs about why they had failed. - In particular, attributing poor performance to a lack of ability depresses motivation more than does the belief that lack of effort is to blame. In 1972, when I taught a group of elementary and middle school children who displayed helpless behavior in school that a lack of effort (rather than lack of ability) led to their mistakes on math problems, the kids learned to keep trying when the problems got tough. They also solved many of the problems even in the face of difficulty. Another group of helpless children who were simply rewarded for their success on easy problems did not improve their ability to solve hard math problems. These experiments were an early indication that a focus on effort can help resolve helplessness and engender success. - Subsequent studies revealed that the most persistent students do not ruminate about their own failure much at all but instead think of mistakes as problems to be solved. At the University of Illinois in the 1970s I, along with my then graduate student Carol Diener, asked 60 fifth graders to think out loud while they solved very difficult pattern-recognition problems. Some students reacted defensively to mistakes, denigrating their skills with comments such as “I never did have a good memory,” and their problem-solving strategies deteriorated. - Others, meanwhile, focused on fixing errors and honing their skills. One advised himself: “I should slow down and try to figure this out.” Two schoolchildren were particularly inspiring. One, in the wake of difficulty, pulled up his chair, rubbed his hands together, smacked his lips and said, “I love a challenge!” The other, also confronting the hard problems, looked up at the experimenter and approvingly declared, “I was hoping this would be informative!” Predictably, the students with this attitude outperformed their cohorts in these studies. One other factor deserves consideration. The student may not be physiologically ready to comprehend a particular math topic. In that case, the student is really helpless (except for carefully following directions by rote). When this student becomes an adult, that earlier, helpless student is still within. In that case, the teacher of the adult will be well advised to explain that the adult is no longer that child who was put in the unfortunate situation, to backtrack to that point in the student’s math career, to teach at that point, and feel good when the adult student’s face lights up. My personal opinion is that our math education system is doing a poor job of aligning itself with the math cognitive development research findings. The curriculum often teaches at a level that is substantially higher than a student’s math cognitive developmental level. ICT and Empowerment Information and Communication Technology (ICT) has brought us new tools, new areas to learn, new aids to learning, and new aids to assessment. Even an inexpensive 6-function, solar battery-powered calculator serves to highlight some of the challenges. Does teaching third graders how to use it and allowing the student to use it at will constitute "appropriate" empowerment? The calculator-equipped students quickly and accurately carry out 8-digit addition, subtraction, multiplication, division, and square root in decimal notation. 
But three obvious difficulties are that students may not: - Have an adequate understanding of numbers and arithmetic to know when to use a calculator and how to detect errors that come from mistakes in keyboarding and other sources. (A simple estimation-based check of this sort is sketched at the end of this section.) One of the most important aspects of math education is "sense making." Pushing keys to "get answers" does not contribute to student sense-making any more than memorizing and blindly following computational by-hand algorithms. - Develop their mental arithmetic abilities—the ability to do exact computations and estimates—which are very important skills for lifelong success. - Be gaining foundational knowledge and understanding of algorithmic procedures that will adequately serve them in future studies of arithmetic and other math, not to mention learning to think through and give directions in general. Teachers and parents have wildly varying opinions about providing students with calculators. The National Council of Teachers of Mathematics (USA) has been actively supporting calculator use in the curriculum since 1980. Commonly, students may use calculators on state and national tests. In spite of NCTM's position and the acceptance of calculators on tests, many teachers still insist that students learn paper-and-pencil arithmetic algorithms and spend a great deal of time developing speed and accuracy in their use. A point in favor of this is that it is easier to see a student's process. A counter to that is that once students can explain the algorithm in terms of "how" and "why" (and have memorized the appropriate tables), they have achieved algorithmic and mathematical understanding (and perhaps should receive Certificates of Mastery). However, teachers who allow calculators should explain why to parents, and should explain that time is valuable in education and that time saved from busywork should be invested in learning. Here is a different type of example. Historically, our educational system has spent a great deal of time and effort having students develop a "good hand"—that is, nice-looking cursive handwriting. Now that computers are readily available, an alternative is for students to learn hand printing and keyboarding. This trend is now well established in terms of student behavior, and it is beginning to be supported in some schools. Are we empowering or disempowering students by allowing this trend to continue? Attractive handwriting is an artistic accomplishment; legible handwriting or printing is effective and courteous. Finally, consider students learning to use a card catalog and browsing the shelves as they learn to retrieve information from a physical library. Card catalogs have gone away, and physical libraries have been supplemented and supplanted by virtual libraries. Students now learn to use a search engine and a browser. Still, just as it is pleasant to sit (or lie on a living-room rug) with a newspaper, feet up and beverage within easy reach, there is pleasure in handling a book that catches one's eye and in "panning for gold" on library shelves. In my opinion, the information retrieval example gets to the heart of the empowerment issue. Advances in technology provide us with powerful new aids to problem solving. Problems of information collection, information storage, information manipulation and processing, information retrieval, and information use have existed on earth for hundreds of millions of years, and for thousands of years in consciously symbolic forms. ICT provides us with a number of powerful aids to such endeavors.
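Returning to the calculator discussion above, here is a minimal sketch of the kind of estimation-based check a numerically literate student performs mentally: round each operand to one significant figure, redo the arithmetic with the rounded values, and question a calculator result that is far from the estimate. The function names and the factor-of-three tolerance are illustrative assumptions, not part of any curriculum or standard.

# Sketch of an estimation-based "sanity check" on calculator answers.
# The names and the factor-of-3 tolerance below are illustrative assumptions.
from math import floor, log10

def one_sig_fig(x: float) -> float:
    """Round x to one significant figure, e.g., 4382 -> 4000, 0.046 -> 0.05."""
    if x == 0:
        return 0.0
    factor = 10 ** floor(log10(abs(x)))
    return round(x / factor) * factor

def looks_plausible(a: float, b: float, op: str, calculator_result: float) -> bool:
    """True if the result is within a factor of ~3 of a one-significant-figure estimate."""
    ra, rb = one_sig_fig(a), one_sig_fig(b)
    if op == "+":
        estimate = ra + rb
    elif op == "-":
        estimate = ra - rb
    elif op == "*":
        estimate = ra * rb
    else:
        estimate = ra / rb
    if estimate == 0:
        return abs(calculator_result) < 1.0   # crude handling of a zero estimate
    ratio = abs(calculator_result) / abs(estimate)
    return 1 / 3 < ratio < 3                  # rough heuristic, not a proof of correctness

# A student means to key 4382 * 27 but accidentally types an extra digit.
print(looks_plausible(4382, 27, "*", 4382 * 27))    # True: agrees with the estimate
print(looks_plausible(4382, 27, "*", 43820 * 27))   # False: the slip is caught

The point is not the code itself but the habit it encodes: a rough mental estimate is what lets a student notice that a keyed-in answer cannot possibly be right.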
We now live in a world where both the "traditional" aids and the ICT-based aids are commonly used, and where there is now a strong trend to make more use of and become more dependent on ICT-based aids. For example, when you're at a computer and you want to know the meaning of words or find the words you want, do you use ICT or paper dictionaries and thesauruses? Almost always, there is gain and loss when we embrace new technologies and emphasize instruction related to them. Teachers, schools, parents, testing agencies, and governments are struggling to find the balance. I find it interesting to watch the struggles. Students are now allowed to use a word processor in some assessment situations. I wonder how long it will be before "open computer with full connectivity" will be mandated in state and national tests. I think maybe there is a guideline for this general issue: Use the new technology if it offers (or soon will offer) more effectiveness, efficiency, or pleasure. Use an older technology if doing so is necessary or even more effective, efficient, or pleasurable. (Consider love letters, lecture visuals, giving somebody directions to your residence, shopping lists, chess, and Super Smash Bros. Brawl.) The "open computer, open connectivity" includes full use of artificially intelligent aids to information retrieval and problem solving. Computer systems are getting smarter and smarter. If that concerns or even frightens you, consider adding in brain-enhancing drugs, along with genetic engineering to make people physically and mentally more capable. Parents, our educational system, and our whole society face major challenges in learning how to deal with these current and developing situations. I find it interesting to compare this developing situation with the existing problems of drugs used by some athletes to enhance their performance. Incidentally, are we going to have routine testing of students for possible use of cognitive enhancement drugs? (Caffeine is an example of such a drug.) Computational Thinking Empowers ICT provides or helps provide an easily attainable level or type of in-practice expertise in many different disciplines. For example, people can investigate a multitude of scenarios using a spreadsheet while knowing little of the underlying math. It is evident that with appropriate ICT aids, one can relatively quickly obtain a personally useful level of expertise over a wide range of areas. In every academic discipline, computer technology is increasingly used to represent and solve (or help solve) a range of problems. Many of these problems are ones people find useful to be able to solve. For a simple set of examples, think about using digital cameras. The problems of developing one's film and printing pictures from the developed film have gone away—as has the expense of buying film. Editing by use of a computer is much easier than analog editing techniques. Adding sound effects, music, text, and so on is much easier to do in a computer environment. The YouTube Website provides good evidence that many thousands of people have been empowered by digital photography. Thus, we have a situation in which a student can learn to make use of computers in a particular discipline or subdiscipline, and thereby gain the ability to solve a variety of problems within the discipline. The ratio of "power" a student gains compared to the amount of time and effort required might be quite high relative to that of a traditional approach.
In brief summary, a student is empowered by: - Gaining knowledge and skill in learning to make use of ICT hardware and software. This learning serves a person well across disciplines and into the future as new ICT hardware and software are developed, and as the person encounters new areas where ICT is useful in representing and solving problems. - Learning to think about problems to be solved and tasks to be accomplished in terms of the capabilities and limitations of ICT. This computational thinking is useful in all disciplines and serves a student well as ICT continues its rapid pace of improvement. A word of philosophical caution: Intrinsic purposes should be reserved for natural entities, and non-controllable purposes should never be built into artificial entities. As noted in the quote at the beginning of this document, "with great power comes great responsibility." Teachers, parents, and others who help empower students must help the students learn to make responsible use of their increasing power. An example of this is provided by the process of helping a student learn to drive a car and get a driver's license, versus helping a student to become a responsible, considerate driver. A good teacher has an impressive synergistic array of people skills, content knowledge, and pedagogical knowledge. It takes natural ability, a willingness "to gladly learn and gladly teach," a great deal of informal and formal education, and considerable experience to become a good teacher. Now, throw into the teaching milieu the pace of change in the totality of pedagogical and content knowledge, and the development of ICT-based aids to teaching, learning, and assessment. Add in other changes, such as those in our students and in our overall culture and society. It is no wonder that many teachers feel overworked and underappreciated. Historically, teaching has been a type of "cottage industry," with each teacher being able to do her or his thing behind a closed classroom door. In many cases, the teacher had no competition, as there was only one teacher in the school, or one teacher per grade level, or only one science teacher in the high school, and so on. This situation has been greatly changed by consolidation of schools and school districts, increasing population, and a variety of approaches to accountability. It is also being changed by the rapid change in communication systems and in access to information. The teacher and the small library in a schoolroom or school now face strong competition from the Web. Think about how a teacher's level of empowerment is changed as students gain access to information and learn to participate in and make use of "tools" such as social networking via computer. In addition, the field of computer-assisted learning (CAL) continues to make significant progress. Think about whether a teacher feels increased empowerment when a school decides to put in CAL labs and require their use. Teachers are told that research evidence indicates that use of the computers will increase test scores; they may also be told that CAL greatly reduces the drudgery of correcting routine work. Hmmm. Hmmm. (That is, double hmmm.) It is no wonder that so many teachers feel a drop in their levels of empowerment. It may well be that Information and Communication Technology, increased emphasis on accountability, and increased emphasis on state and national testing are disempowering teachers.
A philosophical word: A wise society will test for diagnosis and progress, will relate accountability to situation, and will use ICT (and all tools) to enable its people to be more fully human. Improve Education via Getting Better Teachers TEDS-M (April 2010). International Study on Preparation of Teachers of Mathematics. Retrieved 5/10/2010 from http://hub.mspnet.org/index.cfm/20671. Quoting from the Website: - "The Teacher Education and Development Study in Mathematics (TEDS-M) examined teacher preparation in 16 countries looking at how primary level and middle school level teachers of mathematics were trained. The study examined the course taking and practical experiences provided by teacher preparation programs at colleges, universities and normal schools. The study reveals that middle school mathematics teacher preparation is not up to the task. U.S. future teachers find themselves straddling the divide between the successful and the unsuccessful, leaving the U.S. with a national choice of which way to go. The findings of TEDS-M additionally revealed that the preparation of elementary teachers to teach mathematics was comparatively somewhat better, as the U.S. found itself in the middle of the international distribution. - U.S. future teachers are getting weak training mathematically, and are just not prepared to teach the demanding mathematics curriculum we need especially for middle schools if we hope to compete internationally. It is important for us as a nation to understand that teacher preparation programs are critical, not only for future teachers, but also for the children they will be teaching. It is quite striking that the performance of the future teachers in terms of their mathematics content knowledge at both levels parallels so closely that of the students they teach." I highly recommend the New York Times article in which Sir Michael Barber is interviewed: Dillon (8/15/07). "Imported from Britain: Ideas to Improve Schools." Retrieved 2/17/08: http://www.nytimes.com/2007/08/15/education/15face.html. Quoting from the article: - "What have all the great school systems of the world got in common?" he said, ticking off four systems that he said deserved to be called great, in Finland, Singapore, South Korea, and Alberta, Canada. "Four systems, three continents—what do they have in common? - "They all select their teachers from the top third of their college graduates, whereas the U.S. selects its teachers from the bottom third of graduates. This is one of the big challenges for the U.S. education system: What are you going to do over the next 15 to 20 years to recruit ever better people into teaching?" - South Korea pays its teachers much more than England and America, and has accepted larger class sizes as a trade-off, he said. - Finland, by contrast, draws top-tier college graduates to the profession not with huge paychecks, but by fostering exceptionally high public respect for teachers, he said. Here is a way to think about this situation in terms of empowerment: - Being smarter than average in a particular area or activity and being selected because of this qualification. A different way of looking at this is that if a teacher isn't relatively smart, the teacher is "one down" relative to the demands of the job. - Receiving a good rate of pay. A different way of looking at this is that there can easily be a feeling of disempowerment that comes from a low rate of pay and other poor working conditions. - Being highly respected.
Being highly respected helps one to feel good about herself or himself. That in turn gives a feeling of being empowered. Summary: What Shall Be the Resolution? The issues of empowering students and their teachers are complex. The fast pace of technological change adds to the complexity of the issues. Here are a couple of goals to keep in mind: - All humans should have power within limits and should experience just accountability. - All humans should have the resources they need to fulfill, within time constraints, their non-destructive potentials. We shall never meet these goals. Nevertheless, both can guide us as we live our lives and as we affect the lives of others. This article is strongly slanted toward math education and the roles of computers in math education. However, many of the ideas are applicable in other disciplines that are standard in the school curriculum. For example, you may have heard people say, "I can't do art." or "I can't do music." Such comments are indications of learned helplessness and poor education in these areas. Computer technology now provides powerful aids to doing art and music. This "doing" in a computer environment is often perceived by the doer to be quite successful and becomes intrinsically motivating. BNET (n.d.). Successful intelligence in the classroom. BNET Business Network. Retrieved 5/4/08: http://findarticles.com/p/articles/mi_m0NQM/is_4_43/ai_n8686065. The unifying theme of this article is that students can learn to make use of their intelligence in a manner that helps them to succeed in school and other endeavors. Quoting from the article: - … successful intelligence is the use of an integrated set of abilities needed to attain success in life, however an individual defines it, within his or her sociocultural context. Thus, there is no one definition of intelligence. People are successfully intelligent by virtue of recognizing their strengths and making the most of them at the same time they recognize their weaknesses and find ways to correct or compensate for them. Both are important. On one hand, students need to learn to correct aspects of their performance in which they are underperforming. On the other hand, they have to recognize that they probably will never be superb at all kinds of performance. It helps to find ways around weaknesses, such as seeking help from others and giving it in return. In other words, people find their own unique path to being intelligent. Successfully intelligent people adapt to, shape, and select environments. In adaptation, they change themselves to fit the environment. For example, a teacher may adapt to the expectations of her principal by teaching in a way she believes the principal will endorse. In shaping, people change the environment to fit them. The teacher may try to persuade the principal to support a new way of teaching different from what the principal has been accustomed to in the past. And in selection, they find a new environment. For example, the teacher may decide to seek a placement in another school if she is unable to convince the principal that her way of teaching is valid and will result in benefits for the students. They accomplish these ends by finding a balance in their use of analytical, creative, and practical abilities (Sternberg, 1997a, 1999). Dillon (8/15/07). Imported from Britain: Ideas to improve schools (an interview with Sir Michael Barber). Retrieved 2/17/08: http://www.nytimes.com/2007/08/15/education/15face.html?_r=2&oref=slogin&oref=slogin. Dweck, Carol S. (11/28/07).
The Secret to Raising Smart Kids. Scientific American Mind. Retrieved 2/17/08: http://www.sciam.com/article.cfm?id=the-secret-to-raising-smart-kids&print=true. Fisher, D. and Frey, N. (2008). Better Learning Through Structured Teaching: A Framework for the Gradual Release of Responsibility. ASCD. Part of the book is available free at http://www.ascd.org/portal/site/ascd/menuitem.b71d101a2f7c208cdeb3ffdb62108a0c/template.book?bookMgmtId=1b446048f2a18110VgnVCM1000003d01a8c0RCRD. Quoting from the ASCD Smart Brief: - All teachers want their students to become independent learners, but even motivated students are sometimes reluctant to take responsibility for their own learning. The authors of ASCD's new book, "Better Learning Through Structured Teaching," provide a proven method for gradually enabling students to take on more of the "work" of classroom learning. The book includes a lot of practical strategies that help teachers use this approach, plus tips on how to differentiate instruction, make effective use of class time, and plan backwards from learning objectives. Hureaux, Michael and Femiano, Robert (2/12/08). Teachers key to school reform. seattlepi.com. Retrieved 2/17/08: http://seattlepi.nwsource.com/opinion/351030_schoolreform13.html. This newspaper "opinion" piece presents arguments against school reform efforts that fail to adequately empower teachers. Quoting the first paragraph of the article: - But the feds are not alone in placing the blame on teachers. Educational consultants argue similarly, including the company recently hired by the Gates Foundation for Seattle Public Schools, McKinsey and Co. In their 2006 Report to the Ohio Board of Education (also funded by Gates), the consultants focused their proposals to "address the single-most important factor affecting student achievement: teacher quality." The teachers' union, the Seattle Education Association, recently voted against participating in the audit. Marshall, Kathie (9/18/07). Teaching secrets: How to smile before Christmas. Teacher Magazine. Retrieved 2/17/08: http://www.teachermagazine.org/tm/articles/2007/09/18/04tln_marshall_web.h18.html. Moursund, David (n.d.). Two brains are better than one. Retrieved 5/7/08: http://iae-pedia.org/Two_Brains_Are_Better_Than_One. Moursund, David (2008). Education for increasing expertise. The book in PDF and Microsoft Word formats can be accessed at: http://iae-pedia.org/Education_for_Increasing_Expertise. This is a book for middle school and junior high school students. The underlying message is summarized by: - This document focuses on two major ideas for improving our educational system: - Facilitating students to take steadily increasing responsibility for their own education. - Emphasizing student learning for building expertise—the knowledge and skills to solve problems and accomplish tasks using their own physical and mental capabilities in conjunction with: A) contemporary tools designed to aid physical and mental capabilities; B) the physical and mental capabilities of other people; and C) the accumulated knowledge of the human race. Stansbury, Meris (3/3/08). U.S. educators seek lessons from Scandinavia. eSchoolNews. Retrieved 3/5/08: http://www.eschoolnews.com/news/top-news/?i=52770;_hbguid=31475690-290f-4e70-8ce4-2742f7b52b83&d=top-news.
Quoting from the article: - A delegation led by the Consortium for School Networking (CoSN) recently toured Scandinavia in search of answers for how students in that region of the world were able to score so high on a recent international test of math and science skills. They found that educators in Finland, Sweden, and Denmark all cited autonomy, project-based learning, and nationwide broadband internet access as keys to their success. - What the CoSN delegation didn’t find in those nations were competitive grading, standardized testing, and top-down accountability—all staples of the American education system. - In all three countries, students start formal schooling at age seven after participating in extensive early-childhood and preschool programs focused on self-reflection and social behavior, rather than academic content. By focusing on self-reflection, students learn to become responsible for their own education, delegates said. Note the last paragraph in the quoted material. In essence, the education systems in those three countries believe that students are empowered and will succeed at higher levels if they are given increased responsibility for their own education and taught how to make use of this empowerment. Here is another quote from the article: - Therefore, teachers are extremely autonomous in their work. So are students. For example, internet-content filtering in the three countries is based largely on a philosophy of student responsibility. Internet filters rarely exist on school computers, other than for protection from viruses or spam. As a school librarian in Copenhagen said, “The students understand that the computers are here for learning.” This initial version of this page was developed by David Moursund. His work on this page has been inspired and prompted by a sequence of email exchanges with David Burrowes. David Burrowes is a regular contributor to a page of video recommendations in the IAE-pedia. Dick Ricketts provided a careful edit of both the contents and the writing in this document. This included making substantial contributions to the content.
http://iae-pedia.org/Empowering_Learners_and_Teachers
Graph a Function and Its Derivative It is often difficult to picture the relationship between a function and its derivative. This tutorial displays the graph of the derivative of a function directly beside the graph of the function, for the sake of comparison. The function is drawn on the left side, the derivative on the right. Students often expect the derivative of a function to be positive where the function itself is positive and the derivative of a function to be negative where the function itself is negative. Actually, the derivative is positive where the original function is increasing. To emphasize this point, regions where the derivative is positive are colored in blue on both pictures. Also, where the function is increasing, and where the derivative is positive, the graphs are drawn in red. To help you apply the visualization skills that you gain from this program, you are expected to guess where a local minimum of the function will be located. You must therefore be able to determine the x coordinate of a point that is at the bottom of a hill. After you enter your function or derivative, a dialogue will appear that explains this. When you click on it, the dialogue will disappear and the derivative will be graphed on the right-hand side. However, nothing will be graphed on the left. You are expected to identify a local minimum by clicking on the derivative graph, on the right. The x coordinate of your click will be used to draw two green vertical lines of points with the same x coordinate, one on each graph. Hence it does not matter how high or low on the graph you click, only where your click falls from left to right. After you click, the function on the left will also be drawn. If your click was correct, the green line on the left should pass through the graph at the bottom of the hill. If your answer was incorrect, note the correct x coordinate on the left-hand side. Then find that x coordinate on the right-hand side. See if you can determine what property of the derivative graph characterizes the correct x value. I could tell you, but you will remember it better if you discover it on your own. Once you understand how the program works, you will not want to read the dialogue over and over. There is a button on the dialogue labeled "Dismiss". When you click on that button, the dialogue will disappear and never reappear. If you need to read it again, you will have to restart the applet. It may even be necessary to quit the browser and go back to the page. Bookmarking the page before you quit the browser will make it easier to return to the program. This program is mainly designed so that you can enter a function and see its derivative graphed beside your function. However, you may enter a derivative and see a function drawn, on the left, that has your entry as its derivative. Remember, no matter which you enter, the derivative is always on the right. The derivative, and the function having a particular derivative, are drawn via numerical techniques. Computation is thus very rapid. However, this program is unable to display a formula for either of the companion functions drawn. This tutorial is not as interactive as some of the previous ones, but it addresses a critical issue in the taking of a derivative. I recommend that you graph as many homework problems as can be put into the function input panel. This demonstration can be of considerable assistance. I hope that you find that to be the case.
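The applet itself is not shown here, but the mathematical idea it demonstrates is easy to reproduce. The short Python sketch below is an independent illustration (not the applet's code): it approximates the derivative with central differences and reports the x values where the derivative crosses zero from negative to positive, which is the property of the derivative graph that marks a local minimum of the original function.

# Independent sketch of the idea behind the tutorial (not the applet's code):
# approximate f'(x) with central differences, then report local minima of f,
# i.e., points where the derivative changes sign from negative to positive.

def derivative(f, x, h=1e-5):
    """Central-difference approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

def local_minima(f, a, b, n=2000):
    """Scan [a, b] on a grid of n steps; return x values where f' goes from - to +."""
    step = (b - a) / n
    minima = []
    prev_x, prev_df = a, derivative(f, a)
    for i in range(1, n + 1):
        x = a + i * step
        df = derivative(f, x)
        if prev_df < 0 <= df:                    # decreasing, then increasing
            minima.append((prev_x + x) / 2)      # midpoint of the bracketing interval
        prev_x, prev_df = x, df
    return minima

f = lambda x: x**3 - 3 * x       # has a local minimum at x = 1
print(local_minima(f, -3, 3))    # prints a value close to 1.0

The sign-change test is exactly the property the tutorial invites you to discover for yourself, so you may prefer to look at this sketch only after experimenting with the applet.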
http://kerbaugh.uncfsu.edu/derivative/derivative.html
Changes in Agriculture... In the 1850s, subsistence agriculture dominated the Southern Piedmont. Although a handful of large plantations dotted the region, most farmers worked small pieces of land, raising grains, vegetables, and animals to feed their family, and bartered with neighbors for most of the goods they could not produce themselves. On the eve of the Civil War, however, railroads began to enter this backcountry, making the land accessible for commercial agriculture and industry. The Civil War, of course, abolished slavery, breaking up the large plantations into small plots, still owned by wealthy white families but now worked by African American and white sharecroppers. Crop liens, agreements in which farmers who needed land and credit to buy supplies worked property owned by planters who needed labor, transformed the agricultural economy. Merchants also gained power as a result of the liens, providing supplies and lines of credit to farmers in exchange for a share of their crops. Often, because the merchants charged high interest rates, farmers had no choice but to plant cash crops such as cotton and tobacco, shifting their focus from food to commercial production. As a result, farmers began to buy instead of grow more of the crops their families needed for survival, necessitating even further indebtedness to merchants. Fence laws, which kept farmers from allowing livestock to roam land that they did not own, and higher taxes further decreased the limited resources of small farmers and made them increasingly dependent on the market economy. When crop prices fell in the 1870s, 1880s, and 1890s, farmers plunged into deepening debt. Of course, not all farmers experienced this difficult period in the same way. Farmers who owned large tracts of land could subdivide it, rent pieces of it to others, and profit from the sharecropping system that others experienced more negatively. Families who owned their own small farms could remain more independent and fare better financially than sharecroppers. The changes in agriculture, however, touched all Piedmont farmers to one degree or another, gradually eroding the ability of farm families to remain on the land. Farm families did not take these changes lying down; instead, they tried to work harder, barter with neighbors, and pull together to continue their familiar and independent rural lifestyle for as long as possible. Women and children increased their work in the fields to help their families tend more land for greater income. Communities bound themselves together through ties of kin, friendship, churchgoing, and social activity. Neighbors helped one another with farm work and when illness or hard times befell local families. Rural ties that had bound communities for generations remained strong, even as the power of merchants was pushing rural families to the financial breaking point. From Farm to Factory... During these hard times for farm families, merchants experienced dramatic growth in their economic power. As they accumulated capital, many invested in the construction of textile mills that converted into yarn and cloth the cotton that was grown by their clients. In North Carolina, an average of six new mills were built every year between 1880 and 1900. By 1900, the state was home to 177 mills, the vast majority of which were located in the Piedmont. 
Mill owners often emphasized that their factories would provide work for the growing number of rural poor whites, failing to acknowledge that the same forces that were causing rural people's economic hardships facilitated the accumulation of wealth and the construction of mills by entrepreneurs. Farm families were drawn to factory work by the promise of a steady wage and their sometimes desperate desire to escape a future of poverty and debt. Labor recruiters often visited rural areas and convinced families to move to the mills for a chance at a better life. In other instances, friends or family members already working in the mills encouraged folks back home to take up factory labor. Some families tried to combine factory and farm work, laboring in the mills after the harvest and then returning to the land when it was time to sow again. For most families, however, the move to the mills marked an enduring break with rural life. Over time, they came to think of themselves as a distinct new class of "cotton mill people."
http://www.historians.org/tl/LessonPlans/nc/Leloudis/land.html
Hepatitis, which refers to inflammation of the liver, may be caused by multiple factors; many different viruses, in particular, may cause hepatitis. Letters are used to distinguish the types of viral hepatitis from one another. The most common types in the United States are hepatitis A, B, C, and D. Hepatitis E, which shares features with A, is not endemic to the States. All of the hepatitis viruses can cause an acute inflammation of the liver that lasts several weeks or months and sometimes leads to acute liver failure. Hepatitis B, C, and D viruses can cause chronic, even lifelong hepatitis, resulting in cirrhosis, liver cancer, or liver failure. Hepatitis viruses have several modes of transmission. Hepatitis A is transmitted by ingestion of food or water that is contaminated with feces from a person infected with the hepatitis A virus. It is diagnosed with a specific blood test. Hepatitis E is also caused by contaminated food and water and can be detected in a blood test or a stool sample. Testing for hepatitis E is generally performed only if a traveler appears to have contracted infection in a country where E is common. Hepatitis B is transmitted through infected blood, unclean needles, or unprotected sex with a person who has the disease, or by an infected mother to her infant. It, too, is diagnosed with blood tests. Hepatitis C is most often contracted through exposure to contaminated blood, though it can occur as a result of sharing needles, can be passed from mother to newborn, and—rarely—can be contracted through unprotected sex. Hepatitis C is diagnosed with a blood test for antibodies to the virus; however, these may not be detected for a month to a year after a person has contracted the C virus. Hepatitis D is a coinfection that occurs only in the presence of a hepatitis B infection and is transmitted through blood and sexual secretions. Hepatitis D may show up in a hepatitis B carrier or as a coinfection in an individual with hepatitis B. The Centers for Disease Control and Prevention has estimated that approximately 400,000 to 600,000 people were infected with some type of viral hepatitis during the 1990s. Because fatality from hepatitis is relatively low, mortality figures are a poor indicator of the actual impact of these diseases. Hepatitis is a major public-health issue in the United States and worldwide. In the United States, for example, hepatitis C infection is approximately four times as common as HIV infection. Vaccines are available to prevent hepatitis A and B. There is no vaccine to prevent hepatitis C. Safe handling of blood products or the injured can reduce the risk of hepatitis C infection. These practices are recommended to healthcare and emergency workers, first responders, and soldiers who encounter blood daily and are at high risk for hepatitis C. Vaccination for hepatitis B also prevents D. Each year, some 30 million people travel to countries where hepatitis viruses are widespread or epidemic. Travelers need to take special precautions against ingesting these viruses in tap water, ice, raw and unpeeled fruits and vegetables, and raw or partially cooked shellfish and other foods. With care, proper hygiene, frequent hand-washing, safe sex practices, and widespread use of the available vaccines, the majority of viral hepatitis cases are preventable. Hepatitis B is a viral infection of the liver, and the ninth leading cause of death. 
An estimated 2 billion people have been infected with the hepatitis B virus worldwide, and some 300 million are chronically infected and become carriers of the virus. In the United States, about 1 in 20 people has been infected, and some 1.2 million are chronically infected carriers of the virus. Hepatitis B accounts for roughly 17,000 hospitalizations and 5,500 deaths in the nation yearly. The hepatitis B virus is transmitted when infected blood or bodily fluids pass through the skin or mucous membranes of an uninfected person. Transmission can occur through unprotected sex, intravenous drug use, unintended needle sticks, exposure to contaminated blood in healthcare or correctional settings or during accidents and disasters, or even through tattooing and piercing. Infected women also pass the virus to their newborns during childbirth. The incubation period after exposure to the virus ranges from 60 to 180 days, averaging about 75 days, followed by onset of the illness. Hepatitis B is diagnosed with a panel of simple blood tests; however, it takes four to six weeks after exposure for the virus to be detected in blood. In its acute stage, hepatitis B may cause mild symptoms of fatigue, fever, joint and muscle pain, and loss of appetite and is sometimes mistaken for the flu. Less common but more serious symptoms include severe nausea and vomiting, a swollen stomach, and jaundice, or a yellowing of the skin and eyes. These symptoms require immediate medical attention. For some people, hepatitis B is a "silent infection" and results in no symptoms. Infected individuals without symptoms feel well and do not realize they have hepatitis B. These people may unknowingly pass the infection on to others. Some 90 percent of healthy adults who become infected with hepatitis B recover fully and develop antibodies to protect against future hepatitis B infections. Only a small number, about 5 to 10 percent, will be unable to clear the virus from their bodies and will develop a chronic infection. A blood test six months after diagnosis that still shows the presence of the virus indicates a chronic infection. Adults taking steroids or those with a serious underlying illness, such as kidney disease, are at greatest risk of chronic infection. However, infants and young children are far more adversely affected by hepatitis B than are adults. Nearly all newborns who are infected with hepatitis B will develop chronic infections. Among young children, the chronicity rate is about 70 percent. For this reason, hepatitis B immunization is recommended by the Centers for Disease Control and Prevention for all infants and young children. People suffering from chronic hepatitis B have a high risk of serious complications. Some 15 to 25 percent die prematurely of liver cancer or cirrhosis—scarring that irreversibly damages the liver and impairs function. Hepatitis B is 100 times more infectious than the AIDS virus (HIV), yet a safe and effective vaccine can prevent most cases of this illness. Not only is vaccination now standard for infants and young children in the United States; it is highly recommended for household contacts and sexual partners of anyone suffering from chronic hepatitis B. Vaccination is the only effective way to prevent the spread of the hepatitis B virus. Hepatitis D is another type of viral infection of the liver that exists only in the presence of hepatitis B. People who are infected with hepatitis B can also become infected with hepatitis D at the same time.
Individuals with a B and D coinfection often suffer more severe symptoms of illness and have a higher risk of liver failure than those with hepatitis B alone. Among people with chronic hepatitis B who are later infected with hepatitis D, a "superinfection" develops. Cirrhosis may occur more often in those with a "superinfection." Hepatitis D is spread in the same way as hepatitis B, through exchange of infected blood or bodily fluids. Unprotected sex and intravenous drug use put people at high risk of infection. Settings where blood may be exchanged, such as healthcare institutions or tattoo parlors, provide an environment for hepatitis D transmission as well. The infection can also be passed from infected mothers to their newborns. The only way to prevent hepatitis D is to prevent hepatitis B through vaccination. The liver is an organ essential to life. It weighs about 3 pounds in women and 4 pounds in men. It is located underneath the ribs and extends horizontally from the middle of the body to the right side. Its surface is smooth and convex. It consists of a myriad of microscopic units called lobules. The liver stores vitamins, sugar, and iron. It controls production and removal of cholesterol. It clears the body of wastes and poisons and removes bacteria from the bloodstream to combat infection. It releases bile, a substance necessary for digestion and absorption of key nutrients. In addition, it converts nutrients into clotting factors, to stop excessive bleeding, and immune factors to fight foreign invaders. If the liver fails, a person can live only a day or two. But if even as much as 75 percent to 80 percent of it is removed or destroyed acutely in a healthy individual, the liver will grow new, healthy liver cells and continue to perform its essential functions. Both hepatitis B and D viruses are transmitted by the introduction of infected blood or other body fluids through the skin or mucous membranes into the body of an uninfected person. Transmission can occur during sexual relations; through injection with drugs; by sharing personal care items, such as a toothbrush, razor, or nail clipper; or by direct contact with blood or body fluids from an infected person, as in a hospital. Pregnant women can pass the virus to their babies. Many cases of acute hepatitis B occur sporadically without any known source. Infections may be acquired at birth or during early childhood. Perinatal or early infection has declined as a result of passive immunization with HBV immune globulin in high-risk situations and the initiation of universal HBV vaccination at birth. Infection control practices, changes in blood donation screening, and blood transfusion protocols have also contributed to the decline in the incidence of hepatitis B. In the United States, hepatitis B viral infection occurs primarily in adults and adolescents. In Asian countries, the infection occurs most often during childhood through child-to-child or mother-to-child transmission. Risk factors for hepatitis B infection include a variety of activities or settings where infected blood or bodily fluids can be exchanged. These include: Sex: Multiple sexual partners, unprotected sex, and men who have sex with men are at increased risk of hepatitis B infections. The risk of infection is notably high in promiscuous homosexual men, but it is also transmitted sexually from men to women and women to men. Transmission is probably prevented by correct use of condoms. 
People who are married to or have sexual relationships with heterosexuals or homosexuals who have chronic hepatitis B infections are also at high risk and should be vaccinated. Sexually active teens who may lack knowledge of the virus and fail to use protection during sex are at high risk. Drug use: Injecting drugs, particularly using shared needles, puts a person at very high risk of contracting hepatitis B. Healthcare employment: Doctors, nurses, first responders, emergency technicians, or other health and emergency workers who are exposed to blood are at high risk of infection and should be vaccinated against hepatitis B. Social service settings: Staff and residents in facilities for the developmentally disabled, in group homes, or in correctional institutions are also at risk and should be vaccinated. Kidney disease: Patients with kidney disease and those undergoing dialysis are at increased risk of infection. Household contacts: Living in the household of someone with chronic hepatitis B often results in infection, particularly if there is sharing of nonsterilized personal care items. War and natural disasters: War and natural disasters may expose individuals to contaminated blood and fluids. Soldiers and relief workers often serve in countries where hepatitis B is endemic. Foreign travel: Travelers to regions where hepatitis B is common (Asia, Africa, South America, the Pacific Islands, eastern Europe, and the Middle East) should be vaccinated to prevent infection. Adoption: Families considering adoption, either domestic or international, should be vaccinated. Studies show that asymptomatic adoptees, particularly from countries where hepatitis B is widespread, can infect the family. Tattoos, piercings, beauty treatments: Body piercing, using improperly sterilized equipment during medical or beauty procedures (such as manicures or pedicures), or tattooing with potentially contaminated needles or ink can lead to hepatitis B infection.
http://health.usnews.com/health-conditions/infectious-diseases/hepatitis-b/overview
In this lesson, students learn about credit cards while they investigate and comprehend the concepts of credit and credit ratings, or scores. They watch a segment from the PBS series What’s Up in Finance? to see how Anna, a small-business owner, is reviewed positively by a lending committee based on her strong credit rating. Students then play an online game to learn about the different costs of borrowing money on a credit card. Afterward, they complete a hands-on activity in which they examine a hypothetical credit-history scenario and learn how specific actions impact a person's credit score. As a culminating activity, students compare different credit card offers for young people, and determine which card offers the best deal. They use a chart to make a comparison of the different features of the cards, including annual fees, APRs, and rewards. Students will be able to: - Understand the concept of credit - Identify the components of a credit score - Recognize the importance of having good credit - Learn techniques for building a strong credit history - Understand the concept of interest - Compute interest amounts on a loan - Analyze different interest rates Three 45-minute class periods Green Chic QuickTime Video In this video from What’s Up in Finance? Anna needs a loan for her eco-friendly fashion start-up. It Costs What?! Flash Interactive The goal of this game is to compare the cost that interest and fees can add to a purchase when the purchase is made using a credit card. Before The Lesson Bookmark the Web sites used in the lesson on each computer in your classroom. Using a social bookmarking tool such as del.icio.us or diigo (or an online bookmarking utility such as portaportal) will allow you to organize all the links in a central location. - Preview all of the video segments and Web sites used in the lesson to make certain that they are appropriate for your students, currently available, and accessible from your classroom. - Download the video segments used in this lesson onto your hard drive, or prepare to stream the clips from your classroom. - Print out the "Credit Card Components" Teacher Organizer to copy the terms and definitions on the board. - Print out the Student Organizers: "Credit Score," "Credit History," "Credit Card Offers," and "Credit Card Comparison," and make enough copies so that each student has one copy of each organizer. Introductory Activity: Setting the Stage - Open the discussion by asking if any of the students has a credit card, and if so, how often they use it. Also, ask students if their parents have credit cards and if they understand how credit cards work. - Next, using the Credit Card Components Teacher Organizer, write the following terms relating to credit on the board: credit, credit card, credit risk, interest, APR, and credit limit. Discuss each of the terms with the class. - Explain how credit cards work. Clarify that credit cards allow you to purchase something but put off the payment until it is due, usually sometime within a month of making the purchase. However, if you carry a balance past the due date, you have to pay interest on the balance, the amount still owed on the card. It is important for students to understand that the interest rate can vary widely, depending on the credit card. - Discuss the difference between credit cards, debit cards, and cash.
Both credit cards and debit cards allow you to use a plastic card to pay for a purchase, but debit cards actually take the money directly from your savings or checking account within a couple of days. - Next, ask students if they have any ideas about how banks and credit card companies make decisions about granting credit cards to individuals. - Explain to the students that the most important issue for a bank or credit card company is how likely it is that the money owed on the card will be repaid on time. Tell the students that banks use a numerical score called a "credit score" or a "credit rating" to help them predict a person's future behavior with a credit card. The credit rating takes into account a variety of past behaviors -- it is a way to numerically represent a person's history of using credit cards and other loans. - Explain to the class that they will be watching a short video about a fashion designer who is trying to expand her business and needs to take out a loan in order to do so. Ask students to pay attention to how Anna and her business are perceived by the group of lenders that review her request for a loan. How does her credit rating affect her loan request? Play the Green Chic segment for the class. - Review the Green Chic segment, discussing with students why a strong personal credit rating would help Anna get a loan for her business. (Answer: Because her strong personal credit rating, or score, showed that Anna has a history of using credit wisely -- e.g., repaying loans and credit cards on time.) - Begin a discussion about credit scores, how lenders use them, and what makes up an individual's credit score. Explain that a credit score is a number calculated from several different variables. The resulting score helps lenders determine how likely a borrower is to pay a loan or credit card back on time. In other words, a score is a snapshot of "credit risk" at a given time. - Ask students if they know which organizations calculate credit scores: is it the banks, the government, or private organizations? (Answer: Private organizations calculate credit scores. One well-known organization is the Fair Isaac Corporation, which produces the "FICO score" -- the most widely used credit score. Another credit score is the VantageScore.) - Hand out the Credit Score Student Organizer. Ask students to read about how a strong credit score can help them. Discuss why it is important to have a good credit score. (Answer: A good credit score facilitates the process of borrowing money for much-needed items like homes or cars, and also helps with credit card approvals, apartment approvals, etc. It also allows borrowing with lower interest rates, which saves money.) - Using the Credit Score Student Organizer, write the percentages that make up a credit score on the board. Have students read about each on their Credit Score Student Organizers. - Next, explain to students that they will be analyzing a credit history scenario, looking at the actions that one person took and how they affected her credit score. - Hand out the Credit History Student Organizer. Ask students to complete the grid for Part 1, filling in the "Why Does Her Action Affect Her Score?" column. (They should refer to the Credit Score Student Organizer.) - Review Part 1 as a class, and discuss why each of Angela's behaviors had an impact on her credit score. Compute how much the credit score fell as a result of the behaviors.
Refer to the Credit History Answer Key for answers. - Next, ask students to complete the grid for Part 2, filling in the "Follow-up Action" and "Score" columns. The information for this is listed on the Credit Score Student Organizer. - Discuss Part 2 as a class, looking at the different ways a person can improve a credit score. Compute how much the credit score rose with these behaviors. Refer to the Credit History Answer Key for answers. - Introduce the "It Costs What?!" online game. There are three parts to this game: a credit card "Crash Course," a set of "Case Files," and a section on "Choosing Wisely." Ask students to go through the three sections of the game and to consider how they would make decisions about borrowing money on a credit card. Students then play the It Costs What?! Flash Interactive to apply the concepts that they have learned. - Ask students to report back on the four characters in the "Case Files" section of the game. How did the various credit cards, and the different ways the cards were used by the players, affect how much the characters paid for the digital music players? - Hand out the Credit Card Offers Student Organizer. Ask students to read through each of the five different credit card offers. - Discuss which offers look good and why. Ask students to compare the offers to the terms of the credit cards they learned about while playing "It Costs What?!" (Answer: The credit cards with high interest rates end up costing students much more money.) - Next, hand out the Credit Card Comparison Student Organizer. Ask students to fill out the grid using the information from the Credit Card Offers Student Organizer. Advise them that the offers may be described in ways that make them look attractive, but they should carefully read the details of each offer. - Now, ask students to take a good look at the grid. Which card actually offers the best overall deal? (Answer: For ongoing balances, #3, because the interest rate is the lowest and the rewards of the other two are not worth much. The best option overall is the ATM card, because there is no concern about rates or fees.) - As an extension activity, ask students to collect credit card offers that arrive at home or that they notice displayed in public places. Once offers have been compiled, do a similar activity to the above to analyze and evaluate real-world credit card offers.
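Teachers who want a concrete numerical companion to the "It Costs What?!" activity could use a short sketch like the one below, which estimates what a purchase really costs when it is paid off in fixed monthly installments on a credit card. This is only an illustration: the 18% APR, $40 annual fee, $500 purchase, and $50 monthly payment are made-up example values, not figures from the lesson or the game, and real cards compute interest in more detailed ways (daily balances, grace periods, minimum-payment rules).

```python
# Rough estimate of the total cost of a purchase paid off on a credit card.
# All numbers below are illustrative assumptions, not values from the lesson.

def months_to_pay_off(purchase, apr, monthly_payment, annual_fee=0.0):
    """Simulate month-by-month payoff with simple monthly compounding.

    Returns (months, total_paid). Assumes interest is charged on the
    remaining balance each month at apr / 12 -- a simplification of how
    real cards calculate interest.
    """
    balance = purchase
    total_paid = 0.0
    months = 0
    monthly_rate = apr / 12.0
    while balance > 0:
        months += 1
        balance += balance * monthly_rate        # interest added to the balance
        if months % 12 == 1:                     # charge the annual fee once a year
            balance += annual_fee
        payment = min(monthly_payment, balance)  # don't overpay in the final month
        balance -= payment
        total_paid += payment
        if months > 600:                         # safety stop: payment too small
            raise ValueError("Payment never retires the balance")
    return months, total_paid

if __name__ == "__main__":
    # Hypothetical example: a $500 music player on a card with 18% APR,
    # a $40 annual fee, paying $50 per month.
    months, total = months_to_pay_off(500.00, 0.18, 50.00, annual_fee=40.00)
    print(f"Paid off in {months} months; total paid about ${total:.2f}")
```

Running the sketch with different APRs, fees, and payment amounts mirrors the comparison students make on the Credit Card Comparison Student Organizer: small differences in interest rate produce noticeably different total costs.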
http://www.teachersdomain.org/resource/fin10.socst.personfin.credit.lpcredit/
13
23
The Audiogram is the graphical representation of the results of the air conduction and bone conduction hearing tests. The vertical lines represent the test frequencies, arranged from low pitched on the left to high pitched on the right. The horizontal lines represent loudness, from very soft at the top to very loud at the bottom. The Audiogram shows the minimum volume at which a person can detect a tone played at a particular frequency. "X" is used for the left ear and "O" represents scores for the right ear. The scores are compared to results obtained from persons with normal hearing – the line at 0 dB. Sometimes the audiogram will also show bracket symbols "[" and "]". These represent scores based on bone conduction tests, which, as discussed earlier, bypass the outer ear and middle ear. Interpreting the Audiogram The Audiologist will use the following characteristics of the audiogram to explain the results: Type of hearing loss - Conductive – Normal hearing for bone conduction scores ([ & ]), but showing a hearing loss for air conduction scores (X & O) - Sensorineural – Hearing loss (equally) for both air and bone conduction - Mixed – Hearing loss for bone conduction scores, and an even greater hearing loss for air conduction scores Severity of loss - The lower the scores fall on the Audiogram, the more severe the hearing loss. Slope of loss - Flat loss – A hearing loss where hearing is relatively even across all frequencies, which is more common for conductive hearing losses. - Sloping loss – Increasing degree of hearing loss the higher the frequency. This is the most common hearing loss that will be shown due to the ageing process and noise damage. - Other: Less common shapes include reverse slopes, cookie bites, and corner audiograms. How the ears compare - Monaural loss: Loss is only in one ear - Binaural loss: Loss is in both ears - Symmetrical: Hearing is relatively even in both ears - Asymmetrical: Hearing loss in one ear is significantly worse than in the other ear. "My hearing is pretty good other than for those high frequencies" In interpreting an audiogram, it is common for clients to misinterpret the results – looking at the good news rather than taking in the whole story. Low frequencies of sounds found in speech (125 Hz – 1000 Hz) are largely responsible for a person's interpretation of the volume of speech. High frequencies are responsible for the clarity with which someone interprets speech. Some of the high frequency elements of speech include those made by words containing letters such as "f", "ph", "th", "s" and "t". Because these sounds are difficult for someone with high frequency loss to hear, they may often mistake what someone has said. For this reason, many people with greater losses in the higher frequencies commonly feel that: - "I can hear ok, it is just that people sound like they're mumbling". Here we can see the Audiograms of three people: - Annie (75 years) – Housewife and grandmother of 12 wonderful grandchildren - Bill (55 years) – Carpenter - David (12 years) – Great cricketer Annie has a moderate hearing loss that is known as Presbycusis. This results from degeneration of the hair receptors within the cochlea due to the ageing process. Before she was fitted with hearing aids, Annie always found conversations with her younger grandchildren particularly difficult - especially when in a noisy situation. She also found telephone conversations difficult, and noisy restaurants were the "bane of her existence".
Bill has been on the tools for 40 years as a carpenter and admits to rarely using ear protection for most of that time. His sharply sloping loss in the higher frequencies can largely be put down to noise-induced hearing loss produced by electrical saws and other equipment that he has used in his job. David is currently suffering from a conductive hearing loss due to a nasty illness that has led to fluid gathering in his middle ear. He is not hearing very well at the moment and his ears are hurting and "feel tight on the inside". This infection is causing a problem with the passing of sound through his middle ear, as can be seen from his normal bone conduction scores, represented by the "[" and "]", alongside his impaired air conduction results.
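To make the decision rules above easier to follow, here is a small illustrative sketch, not part of the original article, that applies the type-of-loss and severity logic to a single test frequency. The cut-off values (in dB HL) are common textbook-style figures used here only as assumptions; a real audiologist weighs the whole audiogram, not one frequency, and this is not clinical guidance.

```python
# Illustrative classification of hearing loss at one frequency, following the
# audiogram rules described above. Thresholds are in dB HL; the cut-off
# values below are assumptions for demonstration only.

NORMAL_LIMIT = 20  # thresholds at or better than 20 dB HL treated as normal

def severity(threshold_db):
    """Map a threshold to a rough severity label (assumed cut-offs)."""
    if threshold_db <= NORMAL_LIMIT:
        return "normal"
    if threshold_db <= 40:
        return "mild"
    if threshold_db <= 70:
        return "moderate"
    if threshold_db <= 90:
        return "severe"
    return "profound"

def loss_type(air_db, bone_db):
    """Classify loss type from air (X/O) and bone ([/]) conduction scores."""
    air_loss = air_db > NORMAL_LIMIT
    bone_loss = bone_db > NORMAL_LIMIT
    if not air_loss and not bone_loss:
        return "no significant loss"
    if air_loss and not bone_loss:
        return "conductive"              # like David's middle-ear problem
    if air_loss and bone_loss and (air_db - bone_db) <= 10:
        return "sensorineural"           # air and bone roughly equal
    return "mixed"                        # bone loss plus an even greater air loss

if __name__ == "__main__":
    # Hypothetical single-frequency scores (dB HL)
    air, bone = 55, 15
    print(loss_type(air, bone), "-", severity(air))  # e.g. "conductive - moderate"
```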
http://www.hearingpro.com.au/hearing-tests/interpreting-an-audiogram
13
15
WHAT SCIENTIFIC CONCEPT WOULD IMPROVE EVERYBODY'S COGNITIVE TOOLKIT? STEPHEN M. KOSSLYN Director, Center for Advanced Study in the Behavioral Sciences, Stanford University; Author, Image and Mind The concept of constraint satisfaction is crucial for understanding and improving human reasoning and decision making. A "constraint" is a condition that must be taken into account when solving a problem or making a decision, and "constraint satisfaction" is the process of meeting the relevant constraints. The key idea is that often there are only a few ways to satisfy a full set of constraints simultaneously. For example, when moving into a new house, my wife and I had to decide how to arrange the furniture in the bedroom. We had an old headboard, which was so rickety that it had to be leaned against a wall. This requirement was a constraint on the positioning of the headboard. The other pieces of furniture also had requirements (constraints) on where they could be placed. Specifically, we had two small end tables that had to be next to either side of the headboard; a chair that needed to be somewhere in the room; a reading lamp that needed to be next to the chair; and an old sofa that was missing one of its rear legs, and hence rested on a couple of books — and we wanted to position it so that people couldn't see the books. Here was the remarkable fact about our exercises in interior design: Virtually always, as soon as we selected the wall for the headboard, bang! The entire configuration of the room was determined. There was only one other wall large enough for the sofa, which in turn left only one space for the chair and lamp. In general, the more constraints, the fewer the possible ways of satisfying them simultaneously. And this is especially the case when there are many "strong" constraints. A strong constraint is like the locations of the end tables: there are very few ways to satisfy them. In contrast, a "weak" constraint, such as the location of the headboard, can be satisfied in many ways (many positions along different walls would work). What happens when some constraints are incompatible with others? For instance, say that you live far from a gas station and so you want to buy an electric automobile — but you don't have enough money to buy one. Not all constraints are equal in importance, and as long as the most important ones are satisfied "well enough," you may have reached a satisfactory solution. For example, although an optimal solution to your transportation needs might have been an electric car, a hybrid that gets excellent gas mileage might be good enough. In addition, once you begin the constraint satisfaction process, you can make it more effective by seeking out additional constraints. For example, when you are deciding what car to buy, you might start with the constraints of (a) your budget and (b) your desire to avoid going to a filling station. You then might consider the size of car needed for your purposes, length of the warrantee, and styling. You may be willing to make tradeoffs, for example, by satisfying some constraints very well (such as mileage) but just barely satisfying others (e.g., styling). Even so, the mere fact of including additional constraints at all could be the deciding factor. Constraint satisfaction is pervasive. For example: • This is how detectives — from Sherlock Holmes to the Mentalist — crack their cases, treating each clue as a constraint and looking for a solution that satisfies them all. 
• This is what dating services strive to do — find the clients' constraints, identify which constraints are most important to him or her, and then see which of the available candidates best satisfies the constraints. • This is what you go through when finding a new place to live, weighing the relative importance of constraints such as the size, price, location, and type of neighborhood. • And this is what you do when you get dressed in the morning: you choose clothes that "go with each other" (both in color and style). Constraint satisfaction is pervasive in part because it does not require "perfect" solutions. It's up to you to decide what the most important constraints are, and just how many of the constraints in general must be satisfied (and how well they must be satisfied). Moreover, constraint satisfaction need not be linear: You can appreciate the entire set of constraints at the same time, throwing them into your "mental stewpot" and letting them simmer. And this process need not be conscious. "Mulling it over" seems to consist of engaging in unconscious constraint satisfaction. Finally, much creativity emerges from constraint satisfaction. Many new recipes were created when chefs discovered that only specific ingredients were available — and they thus were either forced to substitute different ingredients or to come up with a new "solution" (dish). Creativity can also emerge when you decide to change, exclude, or add a constraint. For example, Einstein had one of his major breakthroughs when he realized that time need not pass at a constant rate. Perhaps paradoxically, adding constraints can actually enhance creativity — if a task is too open or unstructured, it may be so unconstrained that it is difficult to devise any solution. DANIEL C. DENNETT Philosopher; University Professor, Co-Director, Center for Cognitive Studies, Tufts University; Author, Breaking the Spell Everybody knows about the familiar large-scale cycles of nature: day follows night follows day, summer-fall-winter-spring-summer-fall-winter-spring, the water cycle of evaporation and precipitation that refills our lakes, scours our rivers and restores the water supply of every living thing on the planet. But not everybody appreciates how cycles — at every spatial and temporal scale from the atomic to the astronomic — are quite literally the hidden spinning motors that power all the wonderful phenomena of nature. Nikolaus Otto built and sold the first internal combustion gasoline engine in 1861, and Rudolf Diesel built his engine in 1897, two brilliant inventions that changed the world. Each exploits a cycle, the four-stroke Otto cycle or the two-stroke Diesel cycle, that accomplishes some work and then restores the system to the original position so that it is ready to accomplish some more work. The details of these cycles are ingenious, and they have been discovered and optimized by an R & D cycle of invention that is several centuries old. An even more elegant, micro-miniaturized engine is the Krebs cycle, discovered in 1937 by Hans Krebs, but invented over millions of years of evolution at the dawn of life. It is the eight-stroke chemical reaction that turns fuel into energy in the process of metabolism that is essential to all life, from bacteria to redwood trees.
Biochemical cycles like the Krebs cycle are responsible for all the motion, growth, self-repair, and reproduction in the living world, wheels within wheels within wheels, a clockwork with trillions of moving parts, and each clock has to be rewound, restored to step one so that it can do its duty again. All of these have been optimized by the grand Darwinian cycle of reproduction, generation after generation, picking up fortuitous improvements over the eons. At a completely different scale, our ancestors discovered the efficacy of cycles in one of the great advances of human prehistory: the role of repetition in manufacture. Take a stick and rub it with a stone and almost nothing happens — a few scratches are the only visible sign of change. Rub it a hundred times and there is still nothing much to see. But rub it just so, a few thousand times, and you can turn it into an uncannily straight arrow shaft. By the accumulation of imperceptible increments, the cyclical process creates something altogether new. The foresight and self-control required for such projects was itself a novelty, a vast improvement over the repetitive but largely instinctual and mindless building and shaping processes of other animals. And that novelty was, of course, itself a product of the Darwinian cycle, enhanced eventually by the swifter cycle of cultural evolution, in which the reproduction of the technique wasn't passed on to offspring through the genes but transmitted among non-kin conspecifics who picked up the trick of imitation. The first ancestor who polished a stone into a handsomely symmetrical hand axe must have looked pretty stupid in the process. There he sat, rubbing away for hours on end, to no apparent effect. But hidden in the interstices of all the mindless repetition was a process of gradual refinement that was well nigh invisible to the naked eye, which was designed by evolution to detect changes occurring at a much faster tempo. The same appearance of futility has occasionally misled sophisticated biologists. In his elegant book, Wetware, the molecular and cell biologist Dennis Bray describes cycles in the nervous system: In a typical signaling pathway, proteins are continually being modified and demodified. Kinases and phosphatases work ceaselessly like ants in a nest, adding phosphate groups to proteins and removing them again. It seems a pointless exercise, especially when you consider that each cycle of addition and removal costs the cell one molecule of ATP — one unit of precious energy. Indeed, cyclic reactions of this kind were initially labeled "futile." But the adjective is misleading. The addition of phosphate groups to proteins is the single most common reaction in cells and underpins a large proportion of the computations they perform. Far from being futile, this cyclic reaction provides the cell with an essential resource: a flexible and rapidly tunable device. The word "computations" is aptly chosen, for it turns out that all the "magic" of cognition depends, just as life itself does, on cycles within cycles of recurrent, re-entrant, reflexive information-transformation processes, from the biochemical scale within the neuron to the whole brain sleep cycle, waves of cerebral activity and recovery revealed by EEGs. Computer programmers have been exploring the space of possible computations for less than a century, but their harvest of invention and discovery so far includes millions of loops within loops within loops.
The secret ingredient of improvement is always the same: practice, practice, practice. It is useful to remember that Darwinian evolution is just one kind of accumulative, refining cycle. There are plenty of others. The problem of the origin of life can be made to look insoluble ("irreducibly complex") if one argues, as Intelligent Design advocates have done, that "since evolution by natural selection depends on reproduction," there cannot be a Darwinian solution to the problem of how the first living, reproducing thing came to exist. It was surely breathtakingly complicated, beautifully designed — must have been a miracle. If we lapse into thinking of the pre-biotic, pre-reproductive world as a sort of featureless chaos of chemicals (like the scattered parts of the notorious jetliner assembled by a windstorm), the problem does look daunting and worse, but if we remind ourselves that the key process in evolution is cyclical repetition (of which genetic replication is just one highly refined and optimized instance), we can begin to see our way to turning the mystery into a puzzle: How did all those seasonal cycles, water cycles, geological cycles, and chemical cycles, spinning for millions of years, gradually accumulate the preconditions for giving birth to the biological cycles? Probably the first thousand "tries" were futile, near misses. But as Cole Porter says in his most sensual song, see what happens if you "do it again, and again, and again." A good rule of thumb, then, when confronting the apparent magic of the world of life and mind is: look for the cycles that are doing all the hard work. Postdoctoral Fellow, University of British Columbia When it comes to common resources, a failure to cooperate is a failure to control consumption. In Hardin's classic tragedy, everyone overconsumes and equally contributes to the detriment of the commons. But a relative few can also ruin a resource for the rest of us. Biologists are familiar with the term 'keystone species', coined in 1969 after Bob Paine's intertidal exclusion experiments. Paine found that by removing the few five-limbed carnivores, Pisaster ochraceus, from the seashore, he could cause an overabundance of its prey, mussels, and a sharp decline in diversity. Without seastars, mussels outcompeted sponges. No sponges, no nudibranchs. Anenomes were also starved out because they eat what the seastars dislodge. Pisaster was the keystone that kept the intertidal community together. Without it, there were only mussels, mussels, mussels. The term keystone species, inspired by the purple seastar, refers to a species that has a disproportionate effect relative to its abundance. In human ecology, I imagine diseases and parasites play a similar role to Pisaster in Paine's experiment. Remove disease (and increase food) and Homo sapiens takeover. Humans inevitably restructure their environment. But not all human beings consume equally. While a keystone species refers to a specific species that structures an ecosystem, I consider keystone consumers to be a specific group of humans that structures a market for a particular resource. Intense demand by a few individuals can bring flora and fauna to the brink. There are keystone consumers in the markets for caviar, slipper orchids, tiger penises, plutonium, pet primates, diamonds, antibiotics, Hummers, and seahorses. Niche markets for frog legs in pockets of the U.S., Europe, and Asia are depleting frog populations in Indonesia, Ecuador, and Brazil. 
Seafood lovers in high-end restaurants are causing stocks of long-lived fish species like Orange roughy or toothfish in Antarctica to crash. The desire for shark fin soup by wealthy Chinese consumers has led to the collapse of several shark species. One in every four mammals (1,141 of the 5,487 mammals on Earth) is threatened with extinction. At least 76 mammals have become extinct since the 16th century, many, like the Tasmanian tiger, the great auk, and the Steller sea cow, due to hunting by a relatively small group. It is possible for a small minority of humans to precipitate the disappearance of an entire species. The consumption of non-living resources is also imbalanced. The 15% of the world's population that lives in North America, Western Europe, Japan and Australia consumes 32 times more resources, like fossil fuels and metals, and produces 32 times more pollution than the developing world, where the remaining 85% of humans live. City-dwellers consume more than people living in the countryside. A recent study determined the ecological footprint for an average resident of Vancouver, British Columbia was 13 times higher than his suburban/rural counterpart. Developed nations, urbanites, ivory collectors: the keystone consumer depends on the resource in question. In the case of water, agriculture accounts for 80% of use in the U.S., i.e. large-scale farms are the keystone consumers. So why do many conservation efforts focus on households rather than water efficiency on farms? The keystone consumer concept helps focus conservation efforts where returns on investments are highest. Like keystone species, keystone consumers also have a disproportionate impact relative to their abundance. Biologists identify keystone species as conservation priorities because their disappearance could cause the loss of many other species. In the marketplace, keystone consumers should be priorities because their disappearance could lead to the recovery of the resource. Humans should protect keystone species and curb keystone consumption. The lives of others depend on it. Musician, Computer Scientist; Pioneer of Virtural Reality; Author, You Are Not A Gadget: A Manifesto It is the stuff of children's games. In the game of "telephone," a secret message is whispered from child to child until it is announced out loud by the final recipient. To the delight of all, the message is typically transformed into something new and bizarre, no matter the sincerity and care given to each retelling. Humor seems to be the brain's way of motivating itself — through pleasure — to notice disparities and cleavages in its sense of the world. In the telephone game we find glee in the violation of expectation; what we think should be fixed turns out to be fluid. When brains get something wrong commonly enough that noticing the failure becomes the fulcrum of a simple child's game, then you know there's a hitch in human cognition worth worrying about. Somehow, we expect information to be Platonic and faithful to its origin, no matter what history might have corrupted it. The illusion of Platonic information is confounding because it can easily defeat our natural skeptical impulses. If a child in the sequence sniffs that the message seems too weird to be authentic, she can compare notes most easily with the children closest to her, who received the message just before she did. She might discover some small variation, but mostly the information will appear to be confirmed, and she will find an apparent verification of a falsity. 
Another delightful pastime is over-transforming an information artifact through digital algorithms that are useful if used sparingly, until it turns into something quite strange. For instance, you can use one of the online machine translation services to translate a phrase through a ring of languages back to the original and see what you get. The phrase, "The edge of knowledge motivates intriguing online discussions" transforms into "Online discussions in order to stimulate an attractive national knowledge" in four steps on Google's current translator. (English->German->Hebrew->Simplified Chinese->English) We find this sort of thing funny, just like children playing "telephone," as well we should, because it sparks our recollection that our brains have unrealistic expectations of information transformation. While information technology can reveal truths, it can also create stronger illusions than we are used to. For instance, sensors all over the world, connected through cloud computing, can reveal urgent patterns of change in climate data. But endless chains of online retelling also create an illusion for masses of people that the original data is a hoax. The illusion of Platonic information plagues finance. Financial instruments are becoming multilevel derivatives of the real actions on the ground that finance is ultimately supposed to motivate and optimize. The reason to finance the purchasing of a house ought to be at least in part to get the house purchased. But an empire of specialists and giant growths of cloud computers showed, in the run up to the Great Recession, that it is possible for sufficiently complex financial instruments to become completely disconnected from their ultimate purpose. In the case of complex financial instruments, the role of each child in the telephone game does not correspond to a horizontal series of stations that relay a message, but a vertical series of transformations that are no more reliable. Transactions are stacked on top of each other. Each transaction is based on a formula that transforms the data of the transactions beneath it on the stack. A transaction might be based on the possibility that a prediction of a prediction will have been improperly predicted. The illusion of Platonic information reappears as a belief that a higher-level representation must always be better. Each time a transaction is gauged to an assessment of the risk of another transaction, however, even if it is in a vertical structure, at least a little bit of error and artifact is injected. By the time a few layers have been compounded, the information becomes bizarre. Unfortunately, the feedback loop that determines whether a transaction is a success or not is based only on its interactions with its immediate neighbors in the phantasmagorical abstract playground of finance. So a transaction can make money based on how it interacted with the other transactions it referenced directly, while having no relationship to the real events on the ground that all the transactions are ultimately rooted in. This is just like the child trying to figure out if a message has been corrupted only by talking to her neighbors. In principle, the Internet can make it possible to connect people directly to information sources, to avoid the illusions of the game of telephone. Indeed this happens. Millions of people had a remarkable direct experience of the Mars rovers. The economy of the Internet as it has evolved incentivizes aggregators, however. 
Thus we all take seats in a new game of telephone, in which you tell the blogger who tells the aggregator of blogs, who tells the social network, who tells the advertiser, who tells the political action committee, and so on. Each station along the way finds that it is making sense, because it has the limited scope of the skeptical girl in the circle, and yet the whole system becomes infused with a degree of nonsense. A joke isn't funny anymore if it's repeated too much. It is urgent for the cognitive fallacy of Platonic information to be universally acknowledged, and for information systems to be designed to reduce cumulative error. Physicist, MIT; Recipient, 2004 Nobel Prize in Physics; Author, The Lightness of Being When I first took up the piano, merely hitting each note required my concentrated attention. With practice, however, I began to work in phrases and chords. Eventually I was able to produce much better music with much less conscious effort. Evidently, something powerful had happened in my brain. That sort of experience is very common, of course. Something similar occurs whenever we learn a new language, master a new game, or get comfortable in a new environment. It seems very likely that a common mechanism is involved. I think it's possible to identify, in broad terms, what that mechanism is: We create hidden layers. The scientific concept of a hidden layer arose from the study of neural networks. Here a little picture is worth a thousand words: In this picture, the flow of information runs from top to bottom. Sensory neurons — the eyeballs at the top — take input from the external world and encode it into a convenient form (which is typically electrical pulse trains for biological neurons, and numerical data for the computer "neurons" of artificial neural networks). They distribute this encoded information to other neurons, in the next layer below. Effector neurons — the stars at the bottom — send their signals to output devices (which are typically muscles for biological neurons, and computer terminals for artificial neurons). In between are neurons that neither see nor act upon the outside world directly. These inter-neurons communicate only with other neurons. They are the hidden layers. The earliest artificial neural networks lacked hidden layers. Their output was, therefore, a relatively simple function of their input. Those two-layer, input-output "perceptrons" had crippling limitations. For example, there is no way to design a perceptron that, faced with pictures of a few black circles on a white background, counts the number of circles. It took until the 1980s, decades after the pioneering work, for people to realize that including even one or two hidden layers could vastly enhance the capabilities of their neural networks. Nowadays such multilayer networks are used, for example, to distill patterns from the explosions of particles that emerge from high-energy collisions at the Large Hadron Collider. They do it much faster and more reliably than humans possibly could. David Hubel and Torsten Wiesel were awarded the 1981 Nobel Prize in Physiology or Medicine for figuring out what neurons in the visual cortex are doing. They showed that successive hidden layers first extract features of visual scenes that are likely to be meaningful (for example, sharp changes in brightness or color, indicating the boundaries of objects), and then assemble them into meaningful wholes (the underlying objects).
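As a concrete illustration of why hidden layers matter, here is a minimal sketch, not from the essay, of a tiny hand-wired network evaluated in plain Python. XOR is used as a standard textbook stand-in for the essay's circle-counting example: no two-layer, input-output perceptron can compute it, but a network with one small hidden layer can. The weights are hand-picked rather than learned, since the point here is what the architecture can represent, not how training works.

```python
# A hand-wired network with one hidden layer that computes XOR -- something
# no single-layer perceptron can do. Weights are chosen by hand, not learned.

def step(x):
    """Threshold activation: fire (1) if the weighted input is positive."""
    return 1 if x > 0 else 0

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum of inputs plus bias, then threshold."""
    return step(sum(i * w for i, w in zip(inputs, weights)) + bias)

def xor_net(a, b):
    # Hidden layer: two feature detectors ("a OR b" and "NOT (a AND b)").
    h1 = neuron([a, b], [1, 1], -0.5)    # fires if at least one input is on
    h2 = neuron([a, b], [-1, -1], 1.5)   # fires unless both inputs are on
    # Output layer reads only the hidden layer, not the raw inputs.
    return neuron([h1, h2], [1, 1], -1.5)  # fires only if both hidden units fire

if __name__ == "__main__":
    for a in (0, 1):
        for b in (0, 1):
            print(f"XOR({a}, {b}) = {xor_net(a, b)}")
```

The hidden units act exactly as the essay describes: each one defines a new emergent feature of the input, and the output neuron only ever sees those features, never the raw inputs themselves.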
In every moment of our adult waking life, we translate raw patterns of photons impacting our retinas — photons arriving every which way from a jumble of unsorted sources, and projected onto a two-dimensional surface — into the orderly, three-dimensional visual world we experience. Because it involves no conscious effort, we tend to take that everyday miracle for granted. But when engineers tried to duplicate it, in robotic vision, they got a hard lesson in humility. Robotic vision remains today, by human standards, primitive. Hubel and Wiesel exhibited the architecture of Nature's solution. It is the architecture of hidden layers. Hidden layers embody, in a concrete physical form, the fashionable but rather vague and abstract idea of emergence. Each hidden layer neuron has a template. It becomes activated, and sends signals of its own to the next layer, precisely when the pattern of information it's receiving from the preceding layer matches (within some tolerance) that template. But this is just to say, in precision-enabling jargon, that the neuron defines, and thus creates, a new emergent concept. In thinking about hidden layers, it's important to distinguish between the routine efficiency and power of a good network, once that network has been set up, and the difficult issue of how to set it up in the first place. That difference is reflected in the difference between playing the piano, say, or riding a bicycle, or swimming, once you've learned (easy), and learning to do those things in the first place (hard). Understanding exactly how new hidden layers get laid down in neural circuitry is a great unsolved problem of science. I'm tempted to say it's the greatest. Liberated from its origin in neural networks, the concept of hidden layers becomes a versatile metaphor, with genuine explanatory power. For example, in my own work in physics I've noticed many times the impact of inventing names for things. When Murray Gell-Mann invented "quarks", he was giving a name to a paradoxical pattern of facts. Once that pattern was recognized, physicists faced the challenge of refining it into something mathematically precise and consistent; but identifying the problem was the crucial step toward solving it! Similar, when I invented "anyons" I knew I had put my finger on a coherent set of ideas, but I hardly anticipated how wonderfully those ideas would evolve and be embodied in reality. In cases like this, names create new nodes in hidden layers of thought. I'm convinced that the general concept of hidden layers captures deep aspects of the way minds — whether human, animal, or alien; past, present, or future — do their work. Minds mobilize useful concepts by embodying them in a specific way, namely as features recognized by hidden layers. And isn't it pretty that "hidden layers" is itself a most useful concept, worthy to be included in hidden layers everywhere? Physicist, Harvard University; Author, Warped Passages The word "science" itself might be the best answer to this year's Edge question. The idea that we can systematically understand certain aspects of the world and make predictions based on what we've learned — while appreciating and categorizing the extent and limitations of what we know — plays a big role in how we think. 
Many words that summarize the nature of science such as "cause and effect," "predictions," and " experiments," as well as words that describe probabilistic results such as "mean," "median," "standard deviation," and the notion of "probability" itself help us understand more specifically what this means and how to interpret the world and behavior within it. "Effective theory" is one of the more important notions within and outside of science. The idea is to determine what you can actually measure and decide — given the precision and accuracy of your measuring tools — and to find a theory appropriate to those measurable quantities. The theory that works might not be the ultimate truth—but it's as close an approximation to the truth as you need and is also the limit to what you can test at any given time. People can reasonably disagree on what lies beyond the effective theory, but in a domain where we have tested and confirmed it, we understand the theory to the degree that it's been tested. An example is Newton's Laws, which work as well as we will ever need when they describe what happens to a ball when we throw it. Even though we now know quantum mechanics is ultimately at play, it has no visible consequences on the trajectory of the ball. Newton's Laws are part of an effective theory that is ultimately subsumed into quantum mechanics. Yet Newton's Laws remain practical and true in their domain of validity. It's similar to the logic you apply when you look at a map. You decide the scale appropriate to your journey — are you traveling across the country, going upstate, or looking for the nearest grocery store — and use the map scale appropriate to your question. Terms that refer to specific scientific results can be efficient at times but they can also be misleading when taken out of context and not supported by true scientific investigation. But the scientific methods for seeking, testing, and identifying answers and understanding the limitations of what we have investigated will always be reliable ways of acquiring knowledge. A better understanding of the robustness and limitations of what science establishes, as well as probabilistic results and predictions, could make the world a better place. Media theorist, Author of Life Inc and Program or Be Programmed Technologies Have Biases People like to think of technologies and media as neutral and that only their use or content determines their impact. Guns don't kill people, after all, people kill people. But guns are much more biased toward killing people than, say, pillows — even though many a pillow has been utilized to smother an aging relative or adulterous spouse. Our widespread inability to recognize or even acknowledge the biases of the technologies we use renders us incapable of gaining any real agency through them. We accept our iPads, Facebook accounts and automobiles at face value — as pre-existing conditions — rather than tools with embedded biases. Marshall McLuhan exhorted us to recognize that our media have impacts on us beyond whatever content is being transmitted through them. And while his message was itself garbled by the media through which he expressed it (the medium is the what?) it is true enough to be generalized to all technology. We are free to use any car we like to get to work — gasoline, diesel, electric, or hydrogen — and this sense of choice blinds us to the fundamental bias of the automobile towards distance, commuting, suburbs, and energy consumption. 
Likewise, soft technologies from central currency to psychotherapy are biased in their construction as much as their implementation. No matter how we spend US dollars, we are nonetheless fortifying banking and the centralization of capital. Put a psychotherapist on his own couch and a patient in the chair, and the therapist will begin to exhibit treatable pathologies. It's set up that way, just as Facebook is set up to make us think of ourselves in terms of our "likes" and an iPad is set up to make us start paying for media and stop producing it ourselves. If the concept that technologies have biases were to become common knowledge, we would put ourselves in a position to implement them consciously and purposefully. If we don't bring this concept into general awareness, our technologies and their effects will continue to threaten and confound us. Neurologist & Cognitive Neuroscientist, The New School; Coauthor, Children's Learning and Attention Problems The Expanding In-Group The ever-cumulating dispersion, not only of information, but also of population, across the globe, is the great social phenomenon of this age. Regrettably, cultures are being homogenized, but cultural differences are also being demystified, and intermarriage is escalating, across ethnic groups within states and between ethnicities across the world. The effects are potentially beneficial for the improvement of cognitive skills, from two perspectives. We can call these "the expanding in-group" and the "hybrid vigor" effects. The in-group versus out-group double standard, which had and has such catastrophic consequences, could in theory be eliminated if everyone alive were to be considered to be in everyone else's in-group. This Utopian prospect is remote, but an expansion of the conceptual in-group would expand the range of friendly, supportive and altruistic behavior. This effect may already be in evidence in the increase in charitable activities in support of foreign populations that are confronted by natural disasters. Donors identifying to a greater extent with recipients make this possible. The rise in frequency of international adoptions also indicates that the barriers set up by discriminatory and nationalistic prejudice are becoming porous. The other potential benefit is genetic. The phenomenon of hybrid vigor in offspring, which is also called heterozygote advantage, derives from a cross between dissimilar parents. It is well established experimentally, and the benefits of mingling disparate gene pools are seen not only in improved physical but also in improved mental development. Intermarriage therefore promises cognitive benefits. Indeed, it may already have contributed to the Flynn effect, the well known worldwide rise in average measured intelligence, by as much as three I.Q. points per decade, over successive decades since the early twentieth century. Every major change is liable to unintended consequences. These can be beneficial, detrimental or both. The social and cognitive benefits of the intermingling of people and populations are no exception, and there is no knowing whether the benefits are counterweighed or even outweighed by as yet unknown drawbacks. Nonetheless, unintended though they might be, the social benefits of the overall greater probability of in-group status, and the cognitive benefits of increasing frequency of intermarriage entailed by globalization may already be making themselves felt. 
Psychologist, University of Virginia; Author, The Happiness Hypothesis Humans are the giraffes of altruism. We're freaks of nature, able (at our best) to achieve ant-like levels of service to the group. We readily join together to create superorganisms, but unlike the eusocial insects, we do it with blatant disregard for kinship, and we do it temporarily, and contingent upon special circumstances (particularly intergroup conflict, as is found in war, sports, and business). Ever since the publication of G. C. Williams' 1966 classic Adaptation and Natural Selection, biologists have joined with social scientists to form an altruism debunkery society. Any human or animal act that appears altruistic has been explained away as selfishness in disguise, linked ultimately to kin selection (genes help copies of themselves), or reciprocal altruism (agents help only to the extent that they can expect a positive return, including to their reputations). But in the last few years there's been a growing acceptance of the fact that "Life is a self-replicating hierarchy of levels," and natural selection operates on multiple levels simultaneously, as Bert Hölldobler and E. O. Wilson put it in their recent book, The Superorganism. Whenever the free-rider problem is solved at one level of the hierarchy, such that individual agents can link their fortunes and live or die as a group, a superorganism is formed. Such "major transitions" are rare in the history of life, but when they have happened, the resulting superorganisms have been wildly successful. (Eukaryotic cells, multicelled organisms, and ant colonies are all examples of such transitions). Building on Hölldobler and Wilson's work on insect societies, we can define a "contingent superorganism" as a group of people that form a functional unit in which each is willing to sacrifice for the good of the group in order to surmount a challenge or threat, usually from another contingent superorganism. It is the most noble and the most terrifying human ability. It is the secret of successful hive-like organizations, from the hierarchical corporations of the 1950s to the more fluid dot-coms of today. It is the purpose of basic training in the military. It is the reward that makes people want to join fraternities, fire departments, and rock bands. It is the dream of fascism. Having the term "contingent superorganism" in our cognitive toolkit may help people to overcome 40 years of biological reductionism and gain a more accurate view of human nature, human altruism, and human potential. It can explain our otherwise freakish love of melding ourselves (temporarily, contingently) into something larger than ourselves. Director, MIT Center for Bits and Atoms; Author, FAB Truth is a Model The most common misunderstanding about science is that scientists seek and find truth. They don't — they make and test models. Kepler packing Platonic solids to explain the observed motion of planets made pretty good predictions, which were improved by his laws of planetary motion, which were improved by Newton's laws of motion, which were improved by Einstein's general relativity. Kepler didn't become wrong because of Newton being right, just as Newton didn't then become wrong because of Einstein being right; this succession of models differed in their assumptions, accuracy, and applicability, not their truth. 
This is entirely unlike the polarizing battles that define so many areas of life: either my political party, or religion, or lifestyle is right, or yours is, and I believe in mine. The only thing that's shared is the certainty of infallibility. Building models is very different from proclaiming truths. It's a never-ending process of discovery and refinement, not a war to win or destination to reach. Uncertainty is intrinsic to the process of finding out what you don't know, not a weakness to avoid. Bugs are features — violations of expectations are opportunities to refine them. And decisions are made by evaluating what works better, not by invoking received wisdom. These are familiar aspects of the work of any scientist, or baby: it's not possible to learn to talk or walk without babbling or toddling to experiment with language and balance. Babies who keep babbling turn into scientists who formulate and test theories for a living. But it doesn't require professional training to make mental models — we're born with those skills. What's needed is not displacing them with the certainty of absolute truths that inhibit the exploration of ideas. Making sense of anything means making models that can predict outcomes and accommodate observations. Truth is a model.
http://edge.org/q2011/q11_5.html
13
18
Richmond has not always been the capital of Virginia. When the English colonists arrived in 1607, the paramount chief of the local tribes (Powhatan) ruled his territory (Tsenacommacah) from Werowocomoco, located on what we now call the York River. Powhatan's brother, Parahunt, ruled a subordinate town located at the base of the waterfalls on Powhatan's River (what we now call the James River). In 1607, the English colonists established their official seat of government at Jamestown. That location was about 15 miles south of Powhatan's capital at Werowocomoco - but about 80 miles as a boat travels on the water. The colonists shifted their capital in 1699 to Williamsburg, long after the remnants of the Algonquian-speaking natives had lost control over Tsenacommacah except for small reservations (including two that still remain, on the Pamunkey and Mattaponi rivers). In 1776, the rebellious Virginians declared Williamsburg to be the capital of an independent state (which the new state constitution labelled a "Commonwealth"). Independence changed the status, but not the location, of the capital in Williamsburg. In 1780 the Virginians moved their state capital inland from Williamsburg to Richmond, in hopes that the rebellious Virginia government would be less vulnerable to British attack. The tactic failed, and the British successfully marched into Richmond twice in 1781 - but there were few state government buildings to destroy. Virginia committed to a national government based on the Articles of Confederation, which the General Assembly ratified in 1778. The Articles finally went into effect after Maryland ratified them in 1781, creating the first version of the United States of America - but establishing the confederation had little impact on the status of Richmond. As the capital of the Commonwealth of Virginia, Richmond remained the capital of an independent state that was loosely allied with 12 other independent states. On June 26, 1788, Virginia ratified the new US Constitution. With the creation of the new Federal government based on that document, Richmond became the capital of just one state, a subordinate government in the new national union. Depending upon how you view the Civil War, that status has never changed - or you can claim that the state capital moved again in 1861, 1863, and 1865. If you adopt the Union perspective on the Civil War and the official role of the Restored Government of Virginia, the capital of Virginia shifted to Wheeling in 1861 after Virginia voted to secede from the Union. In 1863, West Virginia joined the Union as a separate state, so the capital of Virginia moved again, to Alexandria. The last move was back to Richmond in 1865, after the defeat of the Confederate armies and the dissolution of the Confederate government. NOTE: The state capitol is the building that houses the Virginia General Assembly. The capital (spelled with an "a" instead of an "o") is the city in which the General Assembly meets. At the Federal level, Washington DC is the capital city and the US Congress meets in the Capitol building. Charlottesville (May/June, 1781), Staunton (June, 1781), and Lynchburg (April, 1865) could claim to have served briefly as the capital city of Virginia, since some form of the General Assembly met there officially. (Map: last five locations of the capitals of Virginia; map source: USGS National Atlas.) The National Capital The District of Columbia was 100 square miles when established in 1800.
The Virginia portion was "revested" in 1846, moving the boundary north to the low water mark of the Potomac River along the shoreline of Alexandria and what is today Arlington County. In 1910, the District was in need of land for a prison. It initially purchased 1,500 acres downstream of Alexandria, but local objections forced the city to transfer the land to another "federal" jurisdiction, the Army. To supply water to Camp AA Humphreys (renamed Fort Belvoir in 1935), the Corps of Engineers built Lake Accotink in 1918, when the area was so rural that the camp could use Accotink Creek as an open pipeline. So the District purchased land at Lorton instead. Now that too is "inappropriate" to Fairfax residents, and the various prisons built by the District at Lorton are edging towards history. The house with the quirky sign out front, "Prison View Estates," will soon be facing a development and parkland known as Laurel Hill. Sic transit gloria. Political power is not concentrated exclusively in capitals. In Powhatan's day, each werowance had authority within his own town, though it was limited by the power of the paramount chief. The English settled at Jamestown, and it was the initial center of government for the colony. John Smith sent some people to Kecoughtan in 1608, to spread out the colonists during a period of intense food shortage. The city now at that site, Hampton, claims to be the longest continuously-settled English-speaking community in North America. [Henricus, the second town to be established by the colonists in 1610, was never re-settled after being destroyed in 1622 during a major attack by the Powhatans.] After Lord de la Warre rescued the starving colonists in 1610, the population of Virginia remained concentrated along the James River. The London Company slightly decentralized the political authority in the colony with the initial creation of "hundreds," self-sufficient settlements that were required to be spaced several miles apart from each other. The major decentralization of authority away from Jamestown started in 1618, when Governor Yeardley authorized local courts in Charles City and other "convenient places." In 1634, sixteen years later, the General Assembly created eight official "shires" (afterwards called "counties"). This established an official, but lower, level of colonial government outside of the capital. County courthouses became key locations for executive, legislative, and judicial procedures. Colonial Virginians relied upon county courts for decisions that exceeded the authority of the landowners on individual plantations, but did not require the attention of the Governor, his Council, or the General Assembly. Colonial officials were all acting as proxies for the King, at least in theory. As power was decentralized throughout the colony, however, Virginia officials naturally responded more to the concerns of their neighbors than to those of a distant leader across the Atlantic. The only officials elected by the local residents before 1776 were the two Burgesses to the General Assembly. The justices on the court were appointed by the Governor. If there was a vacancy, the local justices recommended possible replacements - but until 1851, the Governor decided who would be appointed. Justices with the highest social ranking were considered to be members of the "quorum," and all sessions had to include at least one member with that status. The clerk of the court and the sheriff were appointed by the county court.
They received no salary, but instead set fees for their services. The sheriff, for example, earned a fee for collecting taxes - and he earned nothing if he failed to collect... What happened at the county courthouse? Typically, once a month the community would assemble there for one day to hear the justices of the county court. There was no separation of powers, no separation between executive, legislative, and judicial authorities, at the county court. Though the vestry and the county court were separate organizations, overlapping membership was common - the small percentage of the population that made up the Virginia gentry controlled all official forms of authority in colonial society. There was a geographic separation of powers, however. The General Court (which met every quarter) and the General Assembly (which met roughly once a year) assembled in the capital, while the county courts normally met each month at the county seats. It was convenient to have the courthouse accessible to the general population. New counties were formed when it became too burdensome for a large percentage of the population to travel in one day by horseback to the courthouse, and courthouses were often moved to new locations as population centers changed. In an agricultural society with few "central places," the county court days provided rare opportunities for farmers to assemble, buy or sell items, and break the monotony of rural living. Taverns supplied food, drink, and lodging... merchants opened stables and stores... and towns grew up around most of Virginia's courthouses.
http://www.virginiaplaces.org/vacities/26capitals.html
13
21
(Last Updated on : 07/04/2009) Lord Cornwallis concluded the Permanent Settlement Act of 1793. The Permanent Settlement was a grand contract between the East India Company and the landholders of Bengal (Zamindars and independent Talukdars of all designations). Under this act, the landholders and Zamindars were admitted as the absolute owners of landed property within the colonial state system. Not only that, the Zamindars and landholders were allowed to hold their proprietary right at a revenue rate that never changed. Under this contract of Permanent Settlement, the Government could not enhance the revenue demands on Zamindars. Earlier, the zamindars of Bengal, Bihar and Orissa had functioned as officials who collected revenue on behalf of the Mughal emperor and his representative, or Diwan, in Bengal. The Diwan would in turn supervise them, so that the pressure to collect revenue was neither too lax nor excessive. The East India Company won the Diwani, or the right to rule Bengal, following its victory in the Battle of Buxar in 1764. The Company thus had the responsibility of ruling, but it lacked trained administrators, especially ones who knew local tradition and custom. As a result, the landlords and Zamindars had to deposit their revenue with the corrupt officials of the East India Company. As a consequence, the revenue had no certain amount: there was constant pressure to exceed it, and the revenue was never used for social welfare. The devastating famine of Bengal was caused mainly by the lack of insight of the officials of the East India Company. The Company's officials in Calcutta thus understood the importance of supervising revenue collection, but the question of incentives connected with the tax was ignored. Thereupon the Governor General Warren Hastings introduced a system of five-yearly inspections and revenue collection. The bad side of this system was that the appointed tax farmers absconded with as much money as they could earn within the five-year period. The consequences were disastrous, and Parliament came to know about the corruption of the East India Company. In 1784 British Prime Minister Pitt the Younger tried to alter the Calcutta administration with Pitt's India Act, and in 1786 Lord Charles Cornwallis was sent out to India to supervise the alteration. In 1786 the Court of Directors of the East India Company first proposed the Permanent Settlement Act for Bengal. The act was proposed as a move away from the existing policy of attempting to increase the taxation of Zamindars. Between 1786 and 1790 the Governor General Lord Cornwallis and Sir John Shore (later a Governor General himself) debated whether or not to introduce the Permanent Settlement Act in Bengal. Shore's point of argument was that the native Zamindars would not trust the Permanent Settlement, and that it would take a long time for them to realize the genuineness of the act. But Cornwallis believed that they would immediately accept the Permanent Settlement Act and start investing in improving their land. In 1790 the Court of Directors granted a ten-year (Decennial) Settlement to the zamindars, which was changed to the Permanent Settlement Act in 1793. By the Permanent Settlement Act the security of tenure of the lands was guaranteed to the landlords and the process of paying tax was made clear. In short, the former landholders and revenue intermediaries benefited, as their proprietorship of the lands they held was assured. This also minimized the fortunes made on revenue by Company officials.
Smallholders were no longer permitted to sell their lands, though their new landlords could not dispossess them either. The Permanent Settlement Act encouraged landowners to improve their lands, as they took care of drainage and irrigation, and the construction of roads and bridges, which had been lacking in Bengal, was encouraged. As the land revenue was fixed, zamindars could safely invest their remaining money to increase their income without fear of a tax increase. Cornwallis made the Company's motivation clear by stating that "when the demand of government is fixed, an opportunity is afforded to the landholder of increasing his profits, by the improvement of his lands." The Company's earnings were thus assured, since there were no longer shortfalls in the revenues caused by defaulting zamindars, who had previously fallen into debt because they could not fix their budgets under a fluctuating revenue demand. The Permanent Settlement Act had certain definite objectives in view, which can be summarized as follows: making the revenue earnings certain; ensuring a minimum amount of revenue; requiring less supervision, so that officials could be engaged in other spheres of administration; and forging an alliance between the zamindar class and the British colonial rulers. These goals were achieved largely, though not entirely. The immediate consequences of the Permanent Settlement Act were sudden and dramatic, but there were also results that had apparently not been anticipated. The Government's tax demand was inflexible, and the collectors of the East India Company refused to make any adjustment in times of drought, flood or other natural calamity. This was the great drawback of the Permanent Settlement Act, and it caused many zamindars to fall into arrears. The Company's policy was to auction any land on which the taxes had not been paid, and this created a new market in land. Many Indian officials of the East India Company purchased this land, and thus a new class of bureaucrats arose who bought lands that were under-assessed and profitable. This opened two possibilities: the officials could manipulate the system to bring to sale the specific lands they wanted, or they could be bribed in order to let someone gain possession of a certain piece of land. Either way, this bureaucratic class grew rich by unfair means. The Permanent Settlement Act thus led to a commercialization of land that had not existed in Bengal before, and this in consequence changed the social background of landholding: land passed from those who had been "lineages and local chiefs" to "under civil servants and their descendants, and to merchants and bankers." A new landlord class was created that had no connection with its lands but managed its property through managers. The Permanent Settlement Act had other obvious influences. The Company hoped that the zamindar class would not only be its revenue-generating machine but would also serve as intermediaries for the political aspect of its rule and protect British interests of every kind. In course of time, however, this worked both ways. Zamindars were natural protectors of the British rulers, but when British policy changed in the mid-nineteenth century toward interference in social reform, some zamindars placed themselves in opposition. The agreement of the Permanent Settlement Act covered only the earning of revenue and made no mention of the use of the land. Thus, to earn more money from the land, Company officials and zamindars insisted on the planting of indigo and cotton rather than wheat and rice.
This was a cause of many of the worst famines of Bengal. Another disadvantage was the creation of an absentee zamindar class that paid no attention to the improvement of the land. Thus, by the Permanent Settlement Act of 1793, the zamindar class became more powerful than it had been in the Mughal period.
http://www.indianetzone.com/14/permanent_settlement_act_1793.htm
Rivers, and their sedimentary deposits, provide one of the important non-marine environments for fossilization. A basic understanding of river deposits can not only help one find fossils, but also help determine the paleoenvironment in which the organisms lived. This page provides a background on a "meandering" river, including its structure, dynamics and sedimentary deposits. Meandering streams spread out in low-relief valleys in the interiors of continents or in low-relief coastal plains. A characteristic of water flowing in a definite channel is that it tends to meander, not flow in a straight line. As meanders, or bows, in the river form, their presence encourages still more meandering. The reason for this is that the main directional force of the river ends up colliding with the outer bank when it reaches a meander. The force of the water cuts into this outer bank (called the "cut bank"), causing erosion and extending the meander outward (see Figure 1, below). At the same time, it generates a circular motion in the water which starts at the cut bank and rotates downward toward the bottom of the channel, then toward the inside bank, and then back up to the surface. This causes sediments eroded from the cut bank to be deposited on the inside bank, known as the point bar. The result is that the channel is deeper on the outer cut bank than on the inner point bar. The material deposited on the point bar side forms a slope angling downward (deeper) from the point bar towards the cut bank. This is known as the slip-off slope. The process of meandering may continue until one meander comes in contact with another meander. The river will then take the easier, more direct path, which isolates the previous river channel. This creates a new body of water, called an oxbow lake, which is separate from the river's movement. Over time, a river may meander in many ways, leaving sedimentary deposits and oxbow lakes throughout a valley (see Figure 2, below).
Figure 1 - The effect of a curved channel on water flow. Figure 2 - Movement of a meandering channel over time.
Key terms:
Stream channel: The channel containing the day-to-day flow of water.
Flood plain: Low, flat bottom areas that are covered by water only during a flood.
Point bar: Sand bar developing on the inside bank of the meander bend while the channel is migrating and the outside bank is eroded. The location of the point bar is clearly influenced by the meandering path.
Cut bank: Outer bank of the meander, where the force of the water cuts both outward and downward.
Oxbow lake deposits: A meander loop bypassed by the river, because of a neck cutoff (i.e., when both ends of the meander meet) or a chute cutoff (i.e., reoccupation by the river of a more direct course). Fine-grained sediments generally fill the abandoned channel and form impermeable units (mud plugs).
Overbank deposits: Sediments deposited by a river on a valley floor outside the stream channel. Such waters usually contain much sediment in suspension, resulting in fine layers of silt/sand deposition.
Avulsion: When a channel breaches its levee and takes a new course. If the avulsion fails, the deposited material forms crevasse-splay deposits.
Crevasse splay: When a channel temporarily floods over its bank, depositing a lobe-shaped body of sediments. If the event is a violent flood with mudflow, the deposit may consist of chaotic assemblages of coarse- and fine-grained sediments (gravel).
Alternate bars: Longitudinal sand deposits in the course of the stream; alternate bars migrate down the channel.
Slip-off slope: Sedimentary slope from the point bar of the meander towards the cut bank; consists of layer upon layer of sediments, or lateral accretion units.
In times of flooding, the river breaches its banks. It may temporarily cut through the outer levee and spill large quantities of water and sediments. This is known as a crevasse splay, which is typically in a lobe shape with a mixture of fine- and coarse-grained sediments (see the Point Bar Sequence of Figure 3, and the orange crevasse deposits of Figure 4). In a less violent flood, the level of the water may rise gradually and simply overflow its bank (overbank deposits) without actually breaching the levee. This also results in flooding of the flood plain, but the resulting sedimentary deposits will be fine-grained sheets of silt as opposed to the mixture of sediments in the crevasse splay. The stratigraphic column of an ancient meandering river is shown below in Figure 3. Figure 3 - The Meandering River (Lynn S. Fichter). See how these concepts are put to work in understanding the Castle Rock rainforest in "Making of a Fossil Rainforest". References: 1. "The Meandering River", diagram, Lynn S. Fichter, Organization of Sedimentary Rock Site, Department of Geology/Environmental Science, James Madison University, Harrisonburg, Virginia (used with permission). 2. Borehole archives, Rhine-Meuse delta studies, Department of Physical Geography (Utrecht University, Netherlands). 3. "Meandering River Channels", Colorado Water Resources, Colorado State University. 4. Leopold, Luna B. and Walter B. Langbein. A Primer on Water. U.S. Geological Survey. U.S. Government Printing Office: Washington D.C., 1960. 5. Ritter, Dale F., R. Craig Kochel, and Jerry R. Miller. Process Geomorphology. Wm. C. Brown Publishers: Dubuque, Iowa, 1995. 6. Principles of Sedimentology, Gerald Friedman & John Sanders, 1978, John Wiley & Sons.
http://www.paleocurrents.com/castle_rock/docs/meandering_river.html
Connotation, Character, and Color Imagery in The Great Gatsby
Grades: 9–12. Lesson Plan Type: Unit. Estimated Time: Twelve 50-minute sessions.
Students will: - explore the concepts of connotation and denotation. - research and discuss cultural connotations of colors. - track color imagery in The Great Gatsby. - analyze a character from The Great Gatsby, based on their observations of related color imagery.
BEFORE READING THE BOOK
- Write the word Red at the top of the board or a sheet of white paper. - Ask students to brainstorm other words for the color red and write their responses on the board or chart paper. Possible responses include burgundy, cardinal, carmine, cerise, cherry, cranberry, crimson, garnet, maroon, pink, rose, ruby, scarlet, vermilion, and wine. Students may also include compound words such as brick-red or blood-red. Allow students to explore the range of possible words. If students have difficulty thinking of options, suggest that they think about names for paint colors, crayon colors, or even fingernail polish. - Share paint swatches or crayon names that you gathered before the session. Ask students to look for swatches or lists of names for colors that they would identify as a shade of red. - Compare the names for the paint swatches to the list of words for the color red that students brainstormed. - Ask the students the following questions: - How would readers or listeners react to these color names? - What associations will they make? - What would you expect from a can of paint named after these colors? - Why would a paint company use one of these names for their products? What kind of buyer would they be trying to attract? - Introduce the idea of connotation, defining it as the associations that people make with a word. You can contrast connotation with the denotative value of a word, its more literal meaning, and give an example of a word (such as "chicken") which has particular connotations depending on the listener: to a poultry farmer, it might bring one thing to mind; to a restaurant owner, another thing; to someone who is afraid, still another thing. In the phrase "chicken soup," it can bring still other thoughts to mind. If desired, share online definitions of connotation and denotation. - Ask students to apply this idea to the colors that they have listed as well as the colors on the paint swatches or the crayon names. Encourage them to discuss how the colors are connotative by asking such questions as "Why would you (or wouldn't you) use this color name for a paint color?" and "Are there other products that this color name would be appropriate for?" If students need more suggestions, you might ask them to compare the names for paint or crayon colors to colors used to describe cars, fingernail polish, or clothing (and how clothing colors differ by who might wear the article of clothing). If your students need more information to understand connotation, share the What Does the Word Chicken Mean?
sheet as an overhead or handout to demonstrate the many connotations of the word. You can either explore the various meanings of the word in whole-class discussion or divide your class into small groups that each consider one or more of the images and then share their findings with the class before proceeding. Once students have completed this practice, you might return to your discussion of paint or crayon colors, perhaps asking students to think of a new name for a particular shade and to support their choice by explaining the connotations associated with their selection. - Once you've defined connotation and you're satisfied that students understand the concept, divide students into eight small groups. Each group will be assigned a color to research, so eight groups are needed to cover the range of colors. - Assign a different color to each group, so that you have a group for each of the following: red, blue, green, yellow, purple, orange, white, and black. If you have any students who have difficulties differentiating among certain colors, be sure to assign them to a color that they are able to distinguish. - Explain that each group will research and compile information about the cultural connotations of the particular color they have been assigned during the next class session. After they complete their research, the group will create a presentation for the class that explains the connotations of their color. If desired, you may also ask students to create a handout for the class on their color. - For homework, ask students to log places where they have seen their color in their journals. For instance, someone in the red group might write down "stop sign," and someone in the yellow group might write down "school bus." - Remind students that during this session they will research and compile information about the cultural connotations of the particular color they have been assigned. After they complete their research, the group will create a 3- to 5-minute presentation for the class that explains the connotations of their color. If desired, you may also ask students to create a handout for the class on their color. - Demonstrate the Exploring Cultural Connotations of Color travelogue, which asks students to visit four Websites and gather details on the associations and connotations for their group's color. Be sure to show students how to print out or save their research. - Give students the rest of the session to research and work on their presentations. - As groups finish their online research, ask them to look through their lists of color examples from their homework and think about how the information on connotations relates to the examples that they have gathered. Encourage students to incorporate examples in their presentations. - As students shift from research to creating their presentations, provide chart paper and markers or other supplies that will help them with their work. If computer access is adequate, you might ask groups to create a PowerPoint presentation. - Circulate among students as they work, providing feedback and support. - At the end of the session, remind students that they will present the research on their group's color at the beginning of the next session. - Give students five to ten minutes to make last-minute preparations and to practice their presentations. - Have groups present their color research to the entire class, allowing about five minutes per group.
- Encourage class discussion about the research, especially sharing of examples of color use that now seem meaningful in ways that they didn't previously. For instance, ask students to think about why fast-food restaurants use the colors that they do in their logos and designs. - After you've discussed the general connotations of individual colors, spend a few minutes talking about what happens when colors are combined: Do their meanings complement one another? Do they mean something else? A simple, and likely obvious, example is the combination of the colors red, white, and blue. What happens when those three colors are used together? How do their connotations change from those that each suggested when considered in isolation? - Ask students to predict how the information about colors that they have explored will affect a work of literature. If students have recently read works that featured color imagery, you might refer to the examples as part of students' discussion of the issue. - Ask students to read Robert Frost's short poem "Nothing Gold Can Stay" for homework, and write in their journals about the poet's use of color imagery and how the imagery relates to the color research the class has conducted. Encourage students to use the terms connotation and denotation as part of their entry. - Read Frost's "Nothing Gold Can Stay" to the class, and ask students to share their comments and observations on the poem's use of color. You can have students read their journal entries to the class, or ask students to discuss generally based on their entries. Provide reinforcement for correct use of the terms connotation and denotation as well as for concrete connections between imagery in the poem and the class's color research. - Once you're satisfied that students understand the idea, explain that the class will be tracking color imagery through the novel The Great Gatsby by F. Scott Fitzgerald. - Use the F. Scott Fitzgerald: Career Timeline, from PBS' American Masters, to introduce biographical information on Fitzgerald's life (or ask students to explore the interactive timeline individually at computers). - If desired, share additional resources from the F. Scott Fitzgerald Centenary site, which includes biographical material, photographs, texts, and critical essays. - Explain that Fitzgerald relies on color imagery to reveal details about character, plot, and setting in his novel. - Pass out copies of the Color Imagery Journals and explain that students will use the form to track the novelist's use of color imagery as they read. Alternatively, display an overhead of the Color Imagery Journals and ask students to copy the 4-column format into their journals, and explain that students will track the color imagery by recording it in their journals as they read. - Demonstrate the process of filling out the Color Journal form: either fill out a blank form as a class, or display an overhead of the sample color journal. - Stress that students are not expected to find and list every single reference, especially if looking for colors disrupts their reading. - Answer any questions that students have about the process. - Ask students to begin reading the book and tracking its color imagery for homework.
WHILE READING THE BOOK
- Cover the novel in your class sessions as you would any other reading, completing any comprehension and discussion activities that are appropriate for your students. Discuss color imagery as the issues come up during your conversations about the various sections of the novel.
- For additional, structured activities for the novel, try the following lesson plans: - The "Secret Society" and Fitzgerald's The Great Gatsby, from EDSITEment - Murder and Mayhem—The Great Gatsby: The Facts Behind the Fiction, from the Library of Congress's American Memory Project
AFTER READING THE BOOK
- After you have finished reading the novel, ask students to review their Color Imagery Journal entries. Ask students to choose a particular color to track through the novel, noting how Fitzgerald uses the color and the character(s) that it relates to. You might share an example with students to be sure that they understand the expectations. For instance, Fitzgerald often mentions shades of red when Tom is in a scene. Explain that students' job is to think about why Fitzgerald has made this association between color and character. - Have students freewrite for ten minutes about the character who is most often associated with the chosen color and what they noticed as they reviewed their journals. - Arrange students in random groups of two or three members each—there is no need to group them based on the colors they have written about. In fact, it's desirable for the groups to discuss a range of colors and characters. - In these groups, ask students to share and discuss their observations and freewriting. Encourage students to talk about the color, character, general conclusions, and questions. - If student groups have not brought up the topic on their own, ask the groups to draw direct connections to their research on color connotations from the earlier sessions in the unit. - Bring the class together, and divide the board into five sections, one each for Daisy, Tom, Jordan, Gatsby, and Others (or post a piece of chart paper for each character). - As an entire class, list the colors associated with each of the characters along with the possible symbolic meanings based on students' presentations on the colors. - Once all the characters have been labeled, discuss the results. Students may disagree about what a particular color tells readers about the characters. Encourage students to point to evidence in the novel that supports their interpretations. - For homework, ask students to gather their conclusions about the character and color they wrote about at the beginning of the session. - Invite students to share any comments from their homework or reflections on the color imagery in the novel. - Explain that students will use their Color Imagery Journals and research on color associations to write a final paper that explains their analysis of a specific character from the novel. - Pass out copies of the character analysis assignment and character analysis rubric. Explain the assignment and answer any questions that students have about the activity. - Point out the resources that students can use as they work on their character analysis papers. Specifically talk about how to use notes in the Color Imagery Journals and the presentation information from earlier sessions on color associations. Additionally, remind students that the notes they took during the previous class session and for homework include details that they can use in their drafting process. - Students may be concerned that they missed important references to the colors that they are researching.
If you find this situation in your class, visit the online version of The Great Gatsby, and show students how to use the Find command in their Web browser to locate particular color references in the book. - Allow students to begin work on their drafts during the time remaining in class. Students can share drafts as the session progresses. - At the end of the session, remind students when the final draft of their work will be due. - Continue the lesson by allowing additional class sessions for students to write, share their drafts with small groups, and compare their work to the rubric. - Since students' work will include quotations from the novel, the class may benefit from a minilesson on how to punctuate sentences using quotation marks. During the editing process for drafts of the character analysis, use the ReadWriteThink lesson Inside or Outside? A Minilesson on Quotation Marks and More to discuss the punctuation conventions; then have students apply the minilesson to their drafts. - Monitor student interaction and progress during group work and research sessions to assess social skills and assist any students having problems with the project. - Check students' Color Imagery Journals for completion and detail. If possible, monitor entries informally while students are reading so that you can provide advice and feedback before students finish reading the novel. Since the color journals will be resources for students' character analysis papers, it's ideal to ensure that their notes will be helpful in later sessions. - Use the rubric to assess students' final drafts.
http://www.readwritethink.org/classroom-resources/lesson-plans/connotation-character-color-imagery-831.html?tab=4
Slavery in the colonial United States
Although they knew about Spanish and Portuguese slave trading, the British did not conceive of using slave labor in the Americas until the 17th century. British travelers were fascinated by the dark-skinned people they found in West Africa, and sought to create mythologies that situated these new human beings in their view of the cosmos. The first Africans to arrive in England came voluntarily with John Lok, who intended to teach them English in order to facilitate trading. This model gave way to a slave trade initiated by John Hawkins, who captured 300 Africans and sold them to the Spanish. Blacks in England were subordinate but did not have the legal status of chattel slaves. In 1607, English settlers established Jamestown as the first permanent English colony in the New World. Tobacco became the chief crop of the colony, due to the efforts of John Rolfe in 1611. Once it became clear that tobacco was going to drive the Jamestown colony, more labor was needed. The British aristocracy needed to find a labor force to work on its plantations in the Americas. The major possibilities were indentured servants from Britain, Native Americans, and West Africans. Towards indigenous Americans, the English entertained two lines of thought simultaneously. Because these people were lighter skinned, they were seen as more European and therefore as candidates for civilization. At the same time, because they were occupying the land desired by the colonial powers, they were from the beginning targets of a potential military campaign. At first, indentured servants were used as the needed labor. These servants provided up to seven years of free service and had their trip to Jamestown paid for by someone in Jamestown. Once the seven years were over, the indentured servant was free to live in Jamestown as a regular citizen. However, colonists began to see indentured servants as too costly, and in 1619, Dutch traders brought the first African slaves to Jamestown.
The first enslaved Africans in US territory: San Miguel de Gualdape
The first enslaved Africans arrived in what is now the United States as part of the San Miguel de Gualdape colony (most likely located in the Winyah Bay area of present-day South Carolina), founded by the Spanish explorer Lucas Vásquez de Ayllón in 1526. On October 18, 1526, Ayllón died and the colony was almost immediately disrupted by a fight over leadership, during which the slaves revolted and fled the colony to seek refuge among local Native Americans. Many of the colonists died shortly afterwards of an epidemic, and the colony was abandoned, leaving the escaped enslaved Africans behind in what is now South Carolina. In addition to being the first instance of enslaved Africans in the United States, San Miguel de Gualdape was also the site of the first documented slave rebellion on North American soil. (Destinations of enslaved Africans in the Atlantic slave trade, partial figures: British America minus North America, 18.4%; British North America, 6.45%; Dutch West Indies, 2.0%; Danish West Indies, 0.3%.) In 1565, the colony of Saint Augustine in Florida, founded by Pedro Menéndez de Avilés, became the first permanent European settlement in what is now the continental United States, and it included an unknown number of free and enslaved Africans who were part of this colonial expedition. Until the early 18th century, enslaved Africans were difficult to acquire in the colonies that became the United States, as most were sold in the West Indies.
One of the first major establishments of African slavery in these colonies occurred with the founding of Charles Town and South Carolina in 1670. The colony was founded mainly by planters from the overpopulated sugar island colony of Barbados, who brought relatively large numbers of African slaves from that island. The Carolinians transformed the Indian slave trade during the late 17th and early 18th centuries by treating slaves as a trade commodity to be exported, mainly to the West Indies. Alan Gallay estimates that between 1670 and 1715, between 24,000 and 51,000 Indian slaves were exported from South Carolina—much more than the number of Africans imported to the colonies of the future United States during the same period. The first Africans to be brought to English North America landed in Virginia in 1619. These individuals appear to have been treated as indentured servants, and a significant number of enslaved Africans even won their freedom by fulfilling a work contract or converting to Christianity. Some successful free people of color, such as Anthony Johnson, acquired slaves or indentured servants themselves. To many historians, notably Edmund Morgan, this evidence suggests that racial attitudes were much more flexible in 17th-century Virginia than they would subsequently become. A 1625 census recorded 23 Africans in Virginia. In 1649 there were 300, and in 1690 there were 950. These included a Black landowner named Anthony Johnson. The French introduced legalized slavery into their colonies in the Illinois Country. After the port of New Orleans, to the south, was founded in 1718, more African slaves were imported to the Illinois Country for use as agricultural laborers. By the mid-eighteenth century, slaves accounted for as much as a third of the population there.
The development of slavery in 17th-century America
The Dutch West India Company introduced slavery in 1625 with the importation of eleven enslaved blacks, who worked as farmers, fur traders, and builders, to New Amsterdam (present-day New York City), capital of the nascent province of New Netherland, which later expanded across the North River (Hudson River) to Bergen (in today's New Jersey). Later, slaves were held privately by the settlers of the area. Although enslaved, the Africans had a few basic rights and families were usually kept intact. Admitted to the Dutch Reformed Church and married by its ministers, they could have their children baptized. Slaves could testify in court, sign legal documents, and bring civil actions against whites. Some were permitted to work after hours, earning wages equal to those paid to white workers. When the colony fell, the company freed all its slaves, establishing early on a nucleus of free negroes. The barriers of slavery hardened in the second half of the 17th century, and imported Africans' prospects grew increasingly dim. By 1640, the Virginia courts had sentenced at least one black servant, John Punch, to slavery. In 1656 Elizabeth Key won a suit for freedom based on her father's status as a free Englishman and his having had her baptized as a Christian in the Church of England. In 1662 the Virginia House of Burgesses passed a law incorporating the doctrine of partus, stating that any child born in the colony would follow the status of its mother, bond or free. This overturned a long-held principle of English common law, whereby a child's status followed that of the father.
It enabled slaveholders and other white men to hide the mixed-race children born of their rape of slave women and removed their responsibility to acknowledge, support, or emancipate the children. During the second half of the 17th century, the British economy improved and the supply of British indentured servants declined, as poor Britons had better economic opportunities at home. At the same time, Bacon's Rebellion of 1676 led planters to worry about the prospective dangers of creating a large class of restless, landless, and relatively poor white men (most of them former indentured servants). Wealthy Virginia and Maryland planters began to buy slaves in preference to indentured servants during the 1660s and 1670s, and poorer planters followed suit by c.1700. (Slaves cost more than servants, so initially only the wealthy could invest in slaves.) The first British colonists in Carolina introduced African slavery into the colony in 1670, the year the colony was founded, and slavery spread rapidly throughout the Southern colonies. Northerners also purchased slaves, though on a much smaller scale. Northern slaves typically dwelled in towns and worked as artisans and artisans' assistants, sailors and longshoremen, and domestic servants. The slave trade to the mid-Atlantic colonies increased substantially in the 1680s, and by 1710 the African population in Virginia had increased to 23,100 (42% of total); Maryland contained 8,000 Africans (23% of total). English colonists not only imported Africans but also captured Native Americans, impressing them into slavery. Many Native Americans were shipped as slaves to the Caribbean. Many of these slaves from the British colonies were able to escape by heading south, to the Spanish colony of Florida. There they were given their freedom if they declared their allegiance to the King of Spain and accepted the Catholic Church. In 1739 Fort Mose was established by African American freedmen and became the northern defense post for St. Augustine. In 1740, English forces attacked and destroyed the fort, which was rebuilt in 1752. Because Fort Mose became a haven for escaped slaves from the English colonies to the north, it is considered a precursor site of the Underground Railroad. Curiously, chattel slavery developed in British North America before the legal apparatus that supported slavery did. During the late 17th century and early 18th century, harsh new slave codes limited the rights of African slaves and cut off their avenues to freedom. The first full-scale slave code in British North America was South Carolina's (1696), which was modeled on the Barbados slave code of 1661 and was updated and expanded regularly throughout the 18th century. A 1691 Virginia law prohibited slaveholders from emancipating slaves unless they paid for the freedmen's transportation out of Virginia. Virginia criminalized interracial marriage in 1691, and subsequent laws abolished blacks' rights to vote, hold office, and bear arms. Virginia's House of Burgesses established the basic legal framework for slavery in 1705.
The Atlantic slave trade to North America
Only a fraction of the enslaved Africans brought to the New World ended up in British North America—perhaps 5%. The vast majority of slaves shipped across the Atlantic were sent to the Caribbean sugar colonies, Brazil, or Spanish America. Throughout the Americas, but especially in the Caribbean, tropical disease took a large toll on their population and required large numbers of replacements.
Many Africans had a limited natural immunity to yellow fever and malaria, but malnutrition, poor housing and inadequate clothing allowances, and overwork contributed to a high mortality rate. In British North America the slave population rapidly reproduced itself, whereas in the Caribbean it did not. A lack of proper nourishment, sexual suppression, and poor health are possible reasons. Of the small numbers of babies born to slaves in the Caribbean, only about one-quarter survived the miserable conditions on a sugar plantation. It was not only the major colonial powers in Europe, such as France, Spain, England, the Netherlands or Portugal, that were involved in the transatlantic trade in human beings; small countries, such as Sweden or Denmark, also tried to get into this lucrative business. (For more information, see the Swedish slave trade.)
Indentured servitude
Some historians, notably Edmund Morgan, have suggested that indentured servitude provided a model for slavery in 17th-century Virginia. In theory, indentured servants sold their labor voluntarily for a period of years (typically four to seven), after which they would be freed with "freedom dues" of cash, clothing, tools, and/or land. In practice, indentured servitude could be like slavery and was often a violent system; some Englishmen and Englishwomen (felons and those who were kidnapped) were compelled to become indentured servants, and in the early 17th century, many indentured servants did not live long enough to be freed. The principal significance of indentured servitude, Morgan argues, is that it accustomed 17th-century Virginia planters to using physical violence (including beating and rape) to compel workers to work. This set a precedent for the violence of African chattel slavery, which the British colonies first adopted on a large scale in the 1660s and 1670s.
Enslavement of Native Americans
Pre-contact indigenous peoples in the American southeast had practiced a form of slavery on people captured during warfare. Larger societies structured as chiefdoms kept slaves as unpaid field laborers, while in band societies the ownership of enslaved captives attested to their captor's military prowess. Some war captives were also subjected to ritualized torture and execution. Alan Gallay and other historians emphasize differences between Native American enslavement of war captives and the European slave-trading system, into which numerous native peoples were integrated. In North America, among the indigenous people, slavery was more a 'rite of passage' or a system of assimilating outside individuals into groups than a property or ownership right. Richard White, in The Middle Ground, elucidates the complex social relationships between American Indian groups and the early empires, including 'slave' culture and scalping. Robbie Ethridge states, "Let there be no doubt…that the commercial trade in Indian slaves was not a continuation and adaptation of pre-existing captivity patterns. It was a new kind of slaving, requiring a new kind of occupational specialty…organized militaristic slavers." Puritan New England, Virginia, Spanish Florida, and the Carolina colonies engaged in large-scale enslavement of Native Americans, often through the use of Indian proxies to wage war and acquire the slaves. In New England, slave raiding accompanied the Pequot War and King Philip's War, but declined after the latter war ended in 1676.
Enslaved Indians were in Jamestown from the early years of the settlement, but large-scale cooperation between English slavers and the Westo and Occaneechi peoples, whom they armed with guns, did not begin until the 1640s. These groups conducted enslaving raids in what is now Georgia, Tennessee, North Carolina, South Carolina, Florida, and possibly Alabama. The Carolina slave trade, which included both trading and direct raids by colonists, was the largest among the British colonies in North America, estimated at 24,000 to 51,000 Indians by Gallay. Historian Ulrich Phillips argues that Africans became the preferred answer to the labor shortage in the New World because American Indian slaves were more familiar with the environment and would often successfully escape into the wilderness, in which African slaves had much more difficulty surviving. Also, early colonial America depended heavily on the sugar trade, which brought with it malaria, a disease to which the Africans were far less susceptible than Native American slaves.
The rise of the anti-slavery movement
African and African American slaves expressed their opposition to slavery through armed uprisings such as the Stono Rebellion and the New York Slave Insurrection of 1741, through malingering and tool-breaking, and, most commonly, by running away, either for short periods or permanently. Until the Revolutionary era, almost no white American colonists spoke out against slavery. Even the Quakers generally tolerated slaveholding (and slave-trading) until the mid-18th century, although they emerged as vocal opponents of slavery in the Revolutionary era. In 1688, four German Quakers in Germantown, a town outside Philadelphia, wrote a petition against the use of slaves by the English colonists in the nearby countryside. They presented the petition to their local Quaker Meeting, and the Meeting was sympathetic, but could not decide what the appropriate response should be. The Meeting passed the petition up the chain of authority to Philadelphia Yearly Meeting, where it continued to be ignored and was archived and forgotten for 150 years. In 1844 the petition was rediscovered and became a focus of the burgeoning abolitionist movement. It was the first public American document of its kind to protest slavery, and it was also one of the first public declarations of universal human rights. Thus, although the petition itself was forgotten, the idea that every human has equal rights was discussed in Philadelphia Quaker society over the next century, and slaveholding was officially condemned by Philadelphia Yearly Meeting in 1776. Following the Revolution, the northern states all abolished slavery, with New Jersey acting last in 1804. By 1808 all states (except South Carolina) had banned the international buying or selling of slaves. Acting on the advice of President Thomas Jefferson, who denounced the international trade as "violations of human rights which have been so long continued on the unoffending inhabitants of Africa, and which the morality, the reputation, and the best interests of our country have long been eager to proscribe," Congress banned the international slave trade in 1807. However, the domestic slave trade continued. The French colony of St. Domingue abolished slavery in the massive slave uprising that accompanied the Haitian Revolution; emancipation was officially proclaimed in 1793.
Haiti was the first government in the Americas to abolish slavery, and the Haitian Revolution inspired some copycat movements in North America, notably Gabriel's Rebellion of 1800, which failed. Slavery proved to be a key contributing issue to the American Civil War (1861–1865) and the United States finally abolished slavery by the 13th Amendment to the Constitution in 1865. Critics of slavery as an economic institution argued that the practice was inherently inefficient and unprofitable in the long run. A popular myth suggests that slavery in the South would have died out even without a Civil War due to its inability to maintain lasting economic gains. Southern abolitionists reasoned that slaves did not have the necessary personal incentive needed to propel productivity in the farming sector. Studies have since shown that slavery was indeed a highly efficient mode of production for particular crops like sugar and cotton. A plantation's gang system made use of an effective division of labor wherein slaves worked on tasks that suited their physical capabilities in an organizational setting not unlike that of a factory. See also - History of slavery in Maryland - Slavery in the United States - Slavery at common law - Slavery in Canada - Slavery in the Spanish New World colonies - Polly Berry - Lucy Delaney - New York Times - Oxford Journals - Los Angeles Times - Wood, Origins of American Slavery (1997), p. 21. "Yet those in high places who advocated the overseas expansion of England did not propose that West Africans could, should, or would be enslaved by the English in the Americas. Indeed, West Africans scarcely figured at all in the sixteenth-century English agenda for the New World." - Wood, Origins of American Slavery (1997), p. 23. "More than anything else it was the blackness of West Africans that at once fascinated and repelled English commentators. The negative connotations that the English had long attached to the color black were to deeply prejudice their assessment of West Africans." - Wood, Origins of American Slavery (1997), p. 26. "It seems that these men were the first West Africans to set foot in England, and their arrival marked the beginning of a black British population. The men in question had come to England willingly. Lok's sole motive was to facilitate English trading links with West Africa. He intended that these five men should be taught English, and something about English commercial practices, and then returned home to act as intermediaries between the English and their prospective West African trading partners." - Wood, Origins of American Slavery (1997), p. 27. - Wood, Origins of American Slavery (1997), p. 28. - New York Times - Wood, Origins of American Slavery (1997), p. 18. - Wood, Origins of American Slavery (1997), pp. 34–39. - Frontier Resources - Margaret F. Pickett; Dwayne W. Pickett (15 February 2011). The European Struggle to Settle North America: Colonizing Attempts by England, France and Spain, 1521-1608. McFarland. p. 26. ISBN 978-0-7864-5932-2. Retrieved 29 May 2012. - Stephen D. Behrendt, David Richardson, and David Eltis, W. E. B. Du Bois Institute for African and African-American Research, Harvard University. Based on "records for 27,233 voyages that set out to obtain enslaved Africans for the Americas". Stephen Behrendt (1999). "Transatlantic Slave Trade". Africana: The Encyclopedia of the African and African American Experience. New York: Basic Civitas Books. ISBN 0-465-00071-1. - Wood, Origins of American Slavery (1997), pp. 64–65. 
- Gallay, Alan. (2002) The Indian Slave Trade: The Rise of the English Empire in the American South 1670–1717. Yale University Press: New York. ISBN 0-300-10193-7, pg. 299 - Edmund S. Morgan, American Slavery, American Freedom: The Ordeal of Colonial Virginia (New York: Norton, 1975), pp.154–157. - Morgan, American Slavery, American Freedom pp.327–328. - Wood, Origins of American Slavery (1997), p. 78. - Wood, Origins of American Slavery (1997), pp. 94–95. - Wood, Origins of American Slavery (1997), p. 103. - Ekberg, Carl J. (2000). French Roots in the Illinois Country. University of Illinois Press. pp. 2–3. ISBN 0-252-06924-2. - Hodges, Russel Graham (1999), Root and Branch: African Americans in New York and East Jersey, 1613-1863, Chapel Hill, North Carolina: University of North Carolina Press - Shakir, Nancy. "Slavery in New Jersey". Slaveryinamerica. Retrieved 2008-10-22. - Karnoutsos, Carmela. "Undergound Railroad". Jersey City Past and Present. New Jersey City University. Retrieved 2011-03-27. - Wood, Origins of American Slavery (1997), p. 88. - Aboard the Underground Railroad - Fort Mose Site - Alan Taylor, American Colonies (New York: Viking, 2001), p. 213. - Alan Taylor, American Colonies (New York: Viking, 2001), p. 156. - America Past and Present Online - The Laws of Virginia (1662, 1691, 1705) - Wood, Origins of American Slavery (1997), p. 92. "In 1705, almost exactly a century after the first colonists had set foot in Jamestown, the House of Burgesses codified and systematized Virginia's laws of slavery. These laws would be modified and added to over the next century and a half, but the essential legal framework within which the institution of slavery would subsequently operate had been put in place." - Gallay, Alan. (2002) The Indian Slave Trade: The Rise of the English Empire in the American South 1670–1717. Yale University Press: New York. ISBN 0-300-10193-7, pg. 29 - Gallay, Alan. (2002) The Indian Slave Trade: The Rise of the English Empire in the American South 1670–1717. Yale University Press: New York. ISBN 0-300-10193-7, p. 187–90. - "Europeans did not introduce slavery or the notion of slaves as labourers to the American South but instead were responsible for stimulating a vast trade in humans as commodities." (p. 29) "In Native American societies, ownership of individuals was more a matter of status for the owner and a statement of debasement and "otherness" for the slave than it was a means to obtain economic rewards from unfree labor. … The slave trade was an entirely new enterprise for most people of all three culture groups [Native American, European, and African]." (p. 8) Gallay, Alan. (2002) The Indian Slave Trade: The Rise of the English Empire in the American South 1670–1717. Yale University Press: New York. ISBN 0-300-10193-7, pg. 29 - White, Richard. (1991) The Middle Ground: Indians, Empires, and Republics in the Great Lakes Region. Cambridge University Press. ISBN 0-521-42460-7 - Ethridge, From Chicaza to Chickasaw (2010), p. 93. - Ethridge, From Chicaza to Chickasaw (2010), pp. 97–98. - Ethridge, From Chicaza to Chickasaw (2010), p. 109. - Ethridge, From Chicaza to Chickasaw (2010), p. 65. - Figures cited in Ethridge, From Chicaza to Chickasaw (2010), p. 237. - Phillips, Ulrich. American Negro Slavery (1918) - Dumas Malone, Jefferson and the President: Second Term, 1805-1809 (1974) pp. 543-4 - Ethridge, Robbie Franklyn (2010). From Chicaza to Chickasaw: The European invasion and the transformation of the Mississippian world, 1540-1715. 
Chapel Hill: University of North Carolina Press. ISBN 978-0-8078-3435-0. - Wood, Betty. The Origins of American Slavery. New York: Hill and Wang, 1997. ISBN 978-0-8090-1608-2.
Further reading
- Aptheker, Herbert. American Negro Slave Revolts. New York: International Publishers, 1963. - Berlin, Ira. Many Thousands Gone: The First Two Centuries of Slavery in North America. Cambridge: Belknap Press of Harvard University, 1998. - Genovese, Eugene D. Roll, Jordan, Roll: The World the Slaves Made. New York: Pantheon, 1974. - Gutman, Herbert G. The Black Family in Slavery and Freedom, 1750–1925. New York: Pantheon, 1976. - Huggins, Nathan. Black Odyssey: The African-American Ordeal in Slavery. New York: Pantheon, 1990. - Jewett, Clayton E., and John O. Allen. Slavery in the South: A State-By-State History (Greenwood Press, 2004). - Levine, Lawrence W. Black Culture and Black Consciousness: Afro-American Folk Thought from Slavery to Freedom. New York: Oxford University Press, 1977. - Morgan, Edmund S. American Slavery, American Freedom: The Ordeal of Colonial Virginia. New York: Norton, 1975. - Olwell, Robert. Masters, Slaves, & Subjects: The Culture of Power in the South Carolina Low Country, 1740–1790 (1998). - Schwalm, Leslie A. A Hard Fight for We: Women's Transition from Slavery to Freedom in South Carolina. Urbana: University of Illinois Press, 1997. - White, Deborah Gray. Ar'n't I a Woman? Female Slaves in the Plantation South. New York: Norton, 1985. - Williams, Eric. Capitalism and Slavery. 4th edition, 1975. - Wood, Betty. Slavery in Colonial America, 1619-1776 (2005). - Wood, Betty. Slavery in Colonial Georgia, 1730-1775 (2007). - Wood, Peter H. Black Majority: Negroes in Colonial South Carolina from 1670 through the Stono Rebellion (1974).
http://en.wikipedia.org/wiki/Slavery_in_Colonial_America
Common Core State Standards (CCSS) and Students With Hearing Loss: Important Role for Educational Audiologists Common Core State Standards (CCSS) are a set of grade-specific skills and concepts that all students are expected to acquire in grades K–12 so that they are prepared to succeed in college course work and workforce training programs. The CCSS Initiative was a state-led effort coordinated by the National Governors Association Center for Best Practices (NGA Center) and the Council of Chief State School Officers (CCSSO). Most states have adopted the CCSS. The CCSS are to be adopted verbatim; however, states have the option to individualize and expand the standards by adding an additional 15% of state-developed standards. How Do CCSS Apply to Students With Hearing Loss? CCSS define the knowledge and skills that all students should acquire to be successful after high school graduation. To participate in the general education curriculum, students with hearing loss need individualized supports and services that enable them to achieve the same high standards required of their peers without hearing loss. These supports and services may include instructional and classroom modifications and accommodations (including sophisticated personal and classroom technology) to ensure access to classroom instruction. Students with hearing loss often also require related services in areas of speaking and listening, language, communication, reading, social, and self-advocacy skills. Linking Individualized Education Program (IEP) activities to content standards helps ensure students with hearing loss have opportunities to reinforce the CCSS addressed in their classrooms. Instructional and Classroom Supports and Services for Students With Hearing Loss Determining services, placements, and accommodations for students with hearing loss requires a comprehensive review of students' needs. Examples of the areas the IEP team should consider include: - need for related services and supports (e.g., speech-language, educational audiology, English language learning, occupational therapy, physical therapy, counseling, parent training) - language level - communication mode (e.g., signed English, spoken English, American Sign Language) - personal hearing technology (e.g., hearing aids, cochlear implants) - need for other hearing assistive technology (e.g., FM system, classroom distribution system) - need for interpreter services - classroom environment (e.g., acoustics, size, lighting) - instructional accommodations (e.g., teacher speaking style, language models, use of visual information, classroom technology) Creating Standards-Based IEPs In response to adoption of CCSS, many states now implement standards-based IEPs. Aligning IEP goals and objectives with CCSS ensures students receive individualized services and supports that help them participate and make progress in the general education curriculum. Seven steps to developing standards-based IEPs [PDF] are described in detail on the National Association of State Directors of Special Education (NASDSE) Project Forum website, including: Step 1: Consider the grade-level content standards for the grade in which the student is enrolled or would be enrolled based on age. Step 2: Examine classroom and student data to determine where the student functions in relation to grade-level standards. Step 3: Develop the present level of academic achievement and functional performance. Step 4: Develop measurable annual goals aligned with grade-level academic content standards. 
Step 5: Assess and report the student's progress throughout the year. Step 6: Identify specially designed instruction, including accommodations and/or modifications, the student needs to access and progress in the general education curriculum. Step 7: Determine the most appropriate assessment option.
The Role of the Educational Audiologist
The role of the educational audiologist on the educational team supporting students with hearing loss cannot be overstated. The efforts of the IEP team need to be guided by a complete understanding of the child's hearing loss and overall needs. This knowledge must, in turn, be coordinated with and integrated into ongoing classroom instruction and extracurricular activities. The audiologist is the education team member with comprehensive knowledge about hearing loss and its consequences. Therefore, audiologists provide an excellent resource for comprehensive assessment, direct/indirect services, in-service activities, and public information efforts that can significantly enhance the intervention efforts of the education team. It is vital that all service providers work collaboratively to support the student and address his/her individual needs.
Classroom Acoustics Resources; Common Core State Standards: A Resource for SLPs; From Content Standards to IEP Goals (ASHA Professional Development); Guidelines for Audiology Service Provision in and for Schools; Schools Survey Report: Trends in Educational Audiology 2010-2012 [PDF]; CCSS: Application to Students with Disabilities [PDF]; Common Core State Standards Initiative; Iowa Department of Education: The Expanded Core Curriculum for Students Who Are Deaf or Hard of Hearing; National Association of State Directors of Special Education Project Forum website; IEP Checklist: Recommended Accommodations and Modifications for Students With Hearing Loss [PDF]
http://www.asha.org/aud/Common-Core-State-Standards-and-Students-With-Hearing-Loss/
California Gold Rush, Effect of the
The California gold rush did not have the positive impact on Arkansas envisioned by its promoters, who hoped for Fort Smith (Sebastian County) to become the hub of westward migration. It did force Arkansas out of its frontier status as people went farther west to California. It also shifted population. John L. Ferguson wrote that, following 1850, Arkansans searching out new opportunity were continuing to move westward; by 1860, some 2,000 Arkansans lived in California, while another 11,000 had emigrated to Texas. The Arkansas Gazette of May 14, 1852, noted that "it is calculated that out of every 100 persons who have gone to California, fifty have been ruined, forty no better than they would have been had they stayed at home, five a little better, and four still better, and one has made a fortune." But in 1849, the future beckoned. President James Polk's State of the Union message was headline news in the Arkansas Gazette on December 22, 1848. Polk confirmed rumors that had circulated for months—that there was indeed gold in California, and plenty of it. Hundreds of Arkansans began to make preparations for the 2,000-mile trek to the goldfields. The failure of the State and Real Estate banks had depreciated currency, businesses and citizens were in debt, and the treasury was empty. Some decided to go for the adventure, while others joined up to escape an unhappy marriage and/or debt. Whatever the reason, the decision to risk the journey was further sealed when the Arkansas State Gazette published a letter from Dr. Walter Colton, alcalde of Monterey, California, on December 29, 1848. He wrote about the fabulous profits to be made as miners recovered what was thought to be an inexhaustible supply of gold, then valued at $18 an ounce. Similar reports were published on a daily basis, adding further encouragement to prospective gold seekers. Travelers had a number of choices for their route to California. Some opted for the popular northern Overland Trail, which went by way of Nebraska, Wyoming, and Utah, while others chose one of the branches of the Santa Fe Trail, both of which led out of Independence, Missouri. A number of Washington County emigrants teamed up with a group of Cherokee and blazed the Cherokee Trail, which meandered in a northwest direction out of Fayetteville (Washington County) to join the Santa Fe Trail in the area of McPherson, Kansas, then traveled as far as Pueblo, Colorado, turning north to join the Overland Trail and crossing the Sierra Nevada into California. Arkansans played an important role in setting a course for California emigrants out of Fort Smith by opening up a southern route to California. With some foresight, creative entrepreneurs began meeting to promote the reopening of a trace leading from Fort Smith across Indian Territory to Santa Fe, which Josiah Gregg had blazed along the Canadian River in the spring of 1839 on one of his early trading excursions. The Fort Smith Herald began an advertising campaign in response to the hundreds of letters received from prospective gold seekers all over the eastern seaboard. It responded with a circular bearing the caption, "HO FOR CALIFORNIA," offering information on the proposed route, which they called the Fort Smith-Santa Fe Trail. Included was advice on food supplies, wagons and teams, clothing, and weapons.
As thousands began to gather in Fort Smith and neighboring Van Buren (Crawford County), they learned that they would not be traveling unprotected. The editor of the Fort Smith Herald and several colleagues asked one of Arkansas's U.S. senators, Solon Borland, to petition the secretary of war, William L. Marcy, for a 100-man military escort to protect the "Forty-niners," as they came to be known, and to construct a national road between Fort Smith and Santa Fe. An army appropriation bill was approved on March 3, 1849, for a sum of $50,000, which was to cover the expense of a survey by the topographical engineers and, at the same time, provide escort for the emigrants. Captain Randolph B. Marcy of the Fifth United States Infantry was selected to escort the emigrants, and Lieutenant J. H. Simpson, topographical engineer, was appointed to survey and construct a wagon road from Fort Smith to Santa Fe. The troops started out on April 4, 1849.

Citizens wasted no time in organizing. For example, some forty citizens of Little Rock (Pulaski County) formed a company in early January (the Little Rock and California Association), published its articles of association, and recommended a departure date from Fort Smith of March 25, 1849. The Fort Smith Company had begun forming as early as December 1848. Others followed suit. More than 100 people from Clarksville (Johnson County) organized under Redmond Rogers, and those from Fayetteville organized under Captain Lewis Evans. From Van Buren came a group with wagons and another with pack mules. Cherokee and Arkansans formed a company under J. N. A. Carter at Fort Gibson in Indian Territory. In the beginning, this was mainly an emigration of men, both married and single, although there were a few families in the Clarksville Company. Many men left behind their wives to run plantations and businesses and care for children for at least two years, while they labored in the mines.

As Arkansans began gathering at Fort Smith, hundreds of individuals from neighboring states joined the crowd. By April 28, a correspondent from the Baltimore Sun estimated that some 900 wagons with 2,000 emigrants, along with thousands of mules, horses, and oxen, had left with the troops on April 4. There was no doubt that the emigration helped the Fort Smith and Arkansas economy considerably, and in spite of the fact that thousands of citizens had left for the gold fields, the 1850 population of Arkansas was 209,000, an increase of 112,000 citizens since 1840.

Emigrants were left to choose from a number of travel options out of Santa Fe. Many companies had divided into smaller units, while others reorganized. Some sold their wagons in exchange for pack mules and opted for a course down the Rio Grande to modern Truth or Consequences, New Mexico, where they turned west to travel rugged terrain along the course of the Gila River all the way to the Yuma Crossing on the Colorado River. Most Arkansas emigrants, however, opted to follow a trail opened by Philip St. George Cooke in 1846 as he led the Mormon Battalion, another contingent of the Army of the West, to California. In this case, they left the Rio Grande near modern Garfield, New Mexico, traveled southwest over the Animas Mountains into the New Mexico Bootheel, then through the rugged Guadalupe Pass on the international border with Mexico. At this point, the emigrants found themselves in Sonora, Mexico, near the ruins of San Bernardino Rancho, where they hunted wild bulls.
They followed the Santa Cruz River to the Mission San Xavier del Bac and the presidio of Tucson, Arizona, before trudging across a searing Sonoran desert to the villages of the friendly Pima Indians on the Gila River. Continuing west along the Gila River, these travelers ultimately connected with the Colorado River at Yuma, Arizona, where Yuma Indians helped swim their animals and wagons across. Robert Brownlee of Little Rock was one of many Arkansans who left a journal of his experiences, and there are many whose correspondence of the time survives. Arkansas newspapers frequently published the written works of those traveling to California during the gold rush. Excerpts from the journal of F. J. Thibault were published in the Arkansas State Gazette & Democrat on January 31, May 9, and July 11, 1851; letters by Alden M. Woodruff, son of Arkansas Gazette founder William E. Woodruff, were published in the Arkansas State Democrat on August 7 and December 21, 1849, and January 25, 1850, and in the Arkansas State Gazette & Democrat on April 19 and 26 and October 11, 1850. Following the gold rush, Arkansans’ travel between Fort Smith and Santa Fe began to spark national interest, and in 1853, Lieutenant Amiel Weeks Whipple began his survey at Fort Smith for a railroad from the Mississippi River to the Pacific Ocean, basically following the emigrant road along the Canadian River. That railroad route never materialized. Next came the Butterfield Overland Express in 1858, terminating in Fort Smith. Again, with variations in the route, the Fort Smith-Santa Fe Trail ultimately became Interstate 40 across Oklahoma, New Mexico, and northern Arizona into California. The southern segment of the trail evolved into Interstate 10 between Las Cruces, New Mexico, and Tucson. There is no solid estimate as to the number of Arkansans who returned home after their sojourn in the mines. Some returned to Arkansas to retrieve their families and moved to California for a year or two before returning permanently. A number died and were buried in graves near the trail. Some returned to Arkansas to lead another contingent to the mines. Others never came back. A majority likely became heartily sick of the whole experience and returned home without having achieved much in the way of riches. James Calvin (Cal) Jarnagin was one of the lucky gold seekers. He traveled to California with the Clarksville Company from Johnson County and ended up in Sonora, in the Southern Mining District. Here, he lived in a rough log cabin with several fellow miners. One day in 1850 found him “digging by myself in the claim when suddenly my pick broke away part of a bank of earth and there in front of me a piece of gold lay exposed…shaped like a common corn-doger, it was very nearly pure gold!” As it turns out, the nugget weighed in at 23 pounds, 11 3/4 ounces. Ultimately, Jarnagin sold it for $3,000. Following a number of years’ residence in California, he returned to Arkansas and married Matilda Caroline Pittman. James McVicar traveled with the Little Rock Company and ultimately settled in Quartzburg, in the Southern Mining District, where he became a partner in the enormously rich Washington Quartz Mine in 1850. Sam Ward wrote that the proprietors offered him half of the mine for $14,000. In addition, McVicar opened a “trading establishment,” where he had an “eating house, bakery, and rummery” near Indian Bar on the Tuolumne River. McVicar ultimately returned to Little Rock, where he married Amanda Miller in 1856. 
Some found gold in other ways. Robert Brownlee of Little Rock opened a tent store in the mining town of Agua Fria. He and his partner, John W. Clark, made a good deal of money selling mining supplies, canned goods, liquor, clothing, and even Panama hats to the miners and members of the Mariposa Battalion. Brownlee returned to Little Rock briefly to sell the house he built (now preserved at the Historic Arkansas Museum) and moved permanently to his ranch in Napa County, California.

Whatever the case, the men and women who participated in the greatest mass migration in American history would never be the same. Although some may have been embittered, others were strengthened by the experience. Many returned with a greater understanding of their fellow man and the indigenous peoples and cultures they met along the way. They encountered and overcame enormous physical challenges on the deserts and in the mountains. And they saw an immense, beautiful country, largely uninhabited except for its Native Americans. Their collected reminiscences form a unique perspective of adventure and the origin of southern trails in an era that continues to excite the imagination.

For additional information:
- Akins, Jerry. "Fort Smith and the Gold Rush: The Impact of the 1849 Gold Seekers on the City." Journal of the Fort Smith Historical Society 34 (September 2010): 16–19.
- Alfred D. King Journal, 1849. Special Collections. University of Arkansas Libraries, Fayetteville, Arkansas.
- "Arkansas' Golden Army of '49." Special issue, Arkansas Historical Quarterly 6 (Spring 1947).
- Bier, James A. Western Emigrant Trails, 1830–1870: Major Trails, Cutoffs, and Alternates. 2d ed. Independence, MO: The Oregon-California Trails Association, 1993.
- Brownlee, Robert. An American Odyssey: The Autobiography of a 19th-Century Scotsman, Robert Brownlee, at the Request of His Children. Napa County, California, October 1892. Edited by Patricia A. Etter. Fayetteville: University of Arkansas Press, 1986.
- Collins, Carvell. Sam Ward in the Gold Rush. Palo Alto, CA: Stanford University Press, 1949.
- Conway, Mary. "Little Rock Girl Rides Horseback to California in Gold Rush Days." Pulaski County Historical Review 12 (March 1964): 6–9.
- Emory, William Hemsley. Notes of a Military Reconnaissance from Fort Leavenworth, in Missouri to San Diego, in California, Including Part of the Arkansas, del Norte and Gila Rivers. Thirtieth Congress, First Session, Ex. Doc. No. 41, 1848.
- Etter, Patricia A. To California on the Southern Route, 1849: A History and Bibliography. Spokane, WA: The Arthur H. Clark Company, 1998.
- Ferguson, John L., and J. H. Atkinson. Historic Arkansas. Little Rock: Arkansas History Commission, 1966.
- Foreman, Grant. Marcy and the Gold Seekers: With the Journal of Capt. R. B. Marcy With an Account of the Gold Rush over the Southern Route. Norman: University of Oklahoma Press, 1968.
- Gregg, Josiah. Commerce of the Prairies. New York: H. G. Langley, 1844.
- Griffith, Nancy Snell. "Batesville and the 1849 California Gold Rush." Independence County Chronicle 40 (October 1998–January 1999): 39–57.
- Self, Jean. "Jarnagin's Gold: Big Nuggets Were Found in Old Sonora Camp." The Quarterly of the Tuolumne County Historical Society 7 (October–December 1967): 221–223.
- Stith, Matthew M. "'How! For California!': Fort Smith, Van Buren, and the Rush to the Gold Fields." Ozark Historical Review 35 (2006): 50–61.

Patricia A. Etter, Emeritus College, Arizona State University
Last Updated 9/2/2010
http://www.encyclopediaofarkansas.net/encyclopedia/entry-detail.aspx?entryID=4211
For a short time after the Civil War there was some racial tolerance in the South. W.E.B. DuBois in Black Reconstruction discusses this period. He denies that blacks were simply given their freedom and documented the claim that they earned and deserved liberty because of their own struggles as Union soldiers. It is estimated that 200,000 blacks served in the Union Army. DuBois called Reconstruction a high point in American democracy. According to DuBois, carpetbaggers were depicted as peace corps workers; the Ninth Crusade was made up of northern schoolteachers who went South to instruct freedmen on how to exercise the rights of citizenship, as well as to write, read, and figure. During this time Negroes held elective offices and were quickly learning to govern by governing. DuBois defended the character and ability of these Negro leaders.

However, the election of 1876 and the resulting compromise changed this situation. The Compromise of 1877 had its roots in the growth of industrialism in the New South and the domination of politics by industrial rather than agrarian concerns. This compromise demonstrated the unwillingness on the part of the North and the South to enforce Reconstruction and brought the death of the ideals and lessons of the Civil War. Southern state legislatures passed laws to disenfranchise Negroes who had been voting in large numbers. In 1896 there were over 130,000 Negro voters in Louisiana but by 1900, barely 5,000. Ordinances such as the Poll Tax, Grandfather, Good Character and Understanding Clauses were instruments employed to halt Negroes from exercising their rights. These laws were rigged to disqualify Negroes who might risk trying to mark a ballot. Also, the Ku Klux Klan was used to terrorize Negroes, and Klansmen murdered and tortured Negroes to prevent them from voting. They justified lynching because they believed that the black race was inherently inferior to the Caucasian race. By 1910 the Negro was effectively disenfranchised in eight southern states. Thus, at the turn of the century, the Negro's position in the South reached its lowest point since the days of the Black Codes.

T. Thomas Fortune, a black intellectual of the time, wrote: "Since history showed industrial condition to be regulated directly and indirectly by the political condition of the people, disenfranchisement caused the economic plight of the Negro."

Intolerable racial conditions mounted partly because of the Plessy v. Ferguson Supreme Court decision of 1896, which legalized rigid segregation. Negroes became discontented, and economic conditions in the South made life more difficult for them. From the 1870's to the 20th century the South's economy became unstable. The Depression of 1873 greatly affected the southern farmers. As cotton prices dipped from about 15 cents a pound in the 1870's to 7 cents a pound in the 80's, a wave of migration occurred. Some Negroes opted to migrate to Kansas because of railroads, the press, and politicians who had publicized the possibility of instant wealth. This movement became known as the Kansas Exodus.

After investigation by a Senate Sub-Committee in 1884, it was concluded that the principal impetus for the migration came from the lower classes and was from one agricultural region to another. Charles S. Johnson concluded in his survey of Negro migration between 1865 and 1920, "How Much Is the Migration a Flight from Persecution": Reasons are one thing, motives another... Persecution plays its part—a considerable one.
But when the whole of migration is considered, this part seems to be limited. It is indeed more likely that Negroes, like all others with a spark of ambition and self interest, have been deserting soil which cannot yield returns in proportion to their population increase.

1915 was the onset of the great migration of Negroes to Northern cities. At this time the South was suffering from floods and the boll weevil. Both injured the faltering southern economy. Increased mechanization of farms because of the Industrial Revolution displaced many Negroes. The employment picture was bleak because there were no jobs on farms, and many unemployed Negroes made their way to cities. Poor whites who had also been displaced from the land got the traditional Negro jobs of elevator operators, busboys, domestics, butlers, and sanitation workers. The South offered little or no economic advancement for Negroes, so they sought better conditions elsewhere.

Because of rapid mechanization of factories and the First World War, a huge demand for labor developed in northern urban industrial areas. In order to increase factory output to meet orders, northern industrial bosses dispatched labor agents to the South to recruit Negroes. These agents promised Negroes employment and supplied them with free railroad transportation. Despondent blacks seized the opportunity for a new life and began to leave in large numbers. The white ruling class of the South resented this loss of the cheap labor supply. Ordinances were passed to halt the exodus. An example of such a law was the ordinance in Macon, Georgia, requiring labor recruiters to pay $25,000 for a license. In December 1916, one thousand Negroes gathered at the Macon, Georgia, railroad station expecting to leave; instead they were dispersed by the police. Other southern communities stopped trains and prohibited the sale of railroad tickets to blacks; still, large numbers of people fled.

There were two basic flows of people out of the South. One direction was from the Mississippi River to Chicago, Detroit, Milwaukee, and Gary, Indiana. These migrants were originally from Louisiana, Mississippi, and Arkansas. The eastern flow moved northward along the Atlantic coast and followed railroad lines. These people settled in cities such as New York and Philadelphia and were predominantly from Georgia, Alabama, South Carolina, and Florida. The migrants included some preachers and politicians, but the majority were half-educated or illiterate rural residents too restless and proud to live according to the terms set in the South. Emigration from the eleven states of the old Confederacy skyrocketed from 207,000 in 1900-1910 to 478,000 from 1910-1920. Nearly 800,000 left during the '20's and almost 400,000 during the Depression of the 1930's.

Once lines of contact were established between families and friends in these northern cities, movement became easier. Most of the Negroes who moved North crowded into the twelve largest cities. Migration to the large city has always been a painful experience, both to the newly arrived as well as to the established city dwellers. Once the Negro became visible in northern cities, Jim Crow laws were passed barring Negroes from restaurants, theaters, hotels, and stores; the Y.M.C.A. erected Negro branches. In Washington, D.C., the resolution to the race problem was to deny its existence. Migrants soon discovered that their past rural residence had not prepared them for urban life.
Unlike the European immigrant, the Negro was handicapped; he or she was colored. W.E.B. DuBois in The Philadelphia Negro, published in 1899, discussed this problem. DuBois observed that an increasing proportion of Negroes were city born and raised but too many occupied the same relative position in society as did their parents and grandparents, because of color prejudice. Most Negroes found themselves surrounded by prejudice, discrimination and segregation and were forced to reside in the run-down areas of the city.

Drake and Cayton in Black Metropolis have described the evolution of the Negro community in Chicago as a growing population that gradually developed a business and professional class and its own community institutions. In the 1880's Negroes had ethnic dualism, regarding themselves as part of a larger community and maintaining connection with those in power. After 1900 Negroes lost their sense of interrelatedness between black and white Chicago and put an emphasis on self-reliance and development of power within the Negro community.

W.E.B. DuBois in the "Social Evolution of the Black South" (American Negro Monographs, I (1911)) writes that the city plays a constructive role in race relations, in spite of segregation. He advanced the thesis that, given the American race system, it was in the city that advancement would occur and that it would take place because of collective solidarity. This statement is provocative when applied to Harlem.

Harlem has been the intellectual and cultural center of American Negroes; it has been called a world in itself, a symbol of liberty. The Harlem Renaissance emerged in this area in 1921 with the musical Shuffle Along. At this time blacks became a component in urban living; they held industrial jobs and developed financial resources; some joined the middle class. For these reasons race-conscious artists were encouraged to develop works in art, music and literature. The black middle class became interested in aesthetics and promoted and attended Negro productions. These people also created an interest in African motifs and were responsible for the development of interest in black African backgrounds.

The Depression erased the advancements made during the renaissance. Historically the renaissance ended with the Harlem Riot of 1935, but the death in literature was announced when Richard Wright published Uncle Tom's Children in 1938. One critic has said the Harlem riot of 1935 was a symbolic act marking the death of the myth of a gay Harlem. Of course, Negroes would have developed more economically, culturally and politically had there not been an American race system. The achievements made, however, were rooted in the value of self-reliance and group solidarity.

Harlem is a section of Manhattan, New York, and is a community within another geographic community. By World War I Harlem had become predominantly Negro. The greatest influx of blacks occurred between 1920-1930. By 1930 there were more Negroes in New York than in Birmingham, Alabama, Memphis, Tennessee, and St. Louis, Missouri. The chart below documents the migration to Harlem and refers to the backgrounds of migrants.
- South Carolina: 33,765
- Washington, D.C.: 3,358
- Texas: 1,282

Harlem also became a haven for immigrants from twelve Caribbean islands. Immigration from the Caribbean was easy because no quota system was applied by the Bureau of Immigration. By 1930, 25% of Harlem's population consisted of Caribbean immigrants.
The islanders also resented America's race system, but their presence resulted frequently in intraracial antagonism. Caribbean immigrants unified into groups whose aim was to alleviate racial tensions. Three of the groups were The West Indian Reform Association, The West Indian Committee on America, and the Foreign Born Citizen's Alliance. The aim of these groups was never fully realized because intraracial antagonism was never overcome.

The Harlem area deteriorated during the Depression, which, as we have seen, caused the demise of the renaissance. At this time average earnings for Negroes in Harlem were lower than for whites in New York; inferior wages resulted in an inability to adequately supply life's necessities. An Urban League survey conducted at the onset of the Depression showed that realty values in Harlem were appreciating while depreciating elsewhere. This resulted in Harlem residents' paying more for rent than did residents elsewhere in Manhattan. Thirty-three percent of a Harlemite's earnings went toward rent, while elsewhere whites paid 20%. Housing in Harlem had been erected for people with different cultures and family structures; 75% of the dwellings were built pre-1900. The average apartment had five to seven rooms and was intended for large families. Black migrants were younger and in need of smaller residences. In order to survive during the 30's, one in four blacks commercialized the large apartments. Many times rooms were sublet to strangers who were immoral and undesirable. Sometimes rent parties were held so that occupants of these large dwellings might not be evicted.

Life in Harlem was difficult. Because of segregation some Negroes formed their own businesses to serve black patrons. These businesses emphasized racial solidarity to solve black problems and encouraged a society in which Negroes could live untouched by discrimination, thereby undertaking an elevation process without white assistance or interference. These businesses had a difficult time surviving because of the social and economic conditions in Harlem which occurred during the Depression. Harlem never again recovered the glory of the renaissance and emerged as and remained a slum. A slum is defined as a poor, densely populated area of a city. Yet, even in decadence, the Negro urban resident had become resourceful and proud, had made cultural and political achievements, and had denounced American racism.

The following quotation by a nineteen-year-old male Harlem resident summarizes the purpose of this unit.
I would like to see the day when my people have dignity and pride in themselves as black people. And when this comes about, when they realize that we are capable of all things and can do anything under the sun that a man can do, then all these things will come about—equality, great people, presidents—everything.

Migrants demanded their rights, became aware of their ancestry and identified with Africa; at the same time they regarded blacks as a part of America. Today, 97% of the blacks in America reside in urban areas. However, a new trend has developed in some northern cities; some blacks are returning to the South. Although this is not true of all northeastern states, in Connecticut blacks are moving out faster than they are entering. A total of 10,300 more blacks moved out of Connecticut than into the state between the 1970 census and mid-1975, according to new figures on the racial composition of the population recently released by the Census Bureau.
Connecticut's net out-migration, 5.7% of the state's total black population, was the highest proportion of blacks moving out of any of the fifty states. Connecticut did not have a total loss of black population in 1970-75, however, because there were 17,000 more births than deaths among black residents who remained. The black population of Connecticut grew by 6,700, the difference between 17,000 excess births over deaths and the net outward migration figure of 10,300. In Connecticut's case, blacks left because of the high cost of living and unemployment. Also, some Connecticut industries have relocated to southern areas. One example is the Seam Co., originally a New Haven factory, which moved to Atlanta, Georgia, because of lower production costs in the South.
1. At the end of this lesson students will be able to identify and spell correctly the places involved in migration.
2. Students will be able to illustrate the two routes followed by migrants.
3. Students will draw and color the two migration routes. They will include states and cities involved in the migration.
4. Using maps and atlases students will study the physical location of New York and Chicago. They will use physical, climate, and resource maps and decide why people would select these areas to settle.
Ask students to find the Atlantic Ocean on the map. Then instruct them to write the names of states that touch the ocean. Once this is completed, have them locate the Mississippi River. Have them write the states the Mississippi River touches. After this is accomplished, ask students to use a map of North America and find the twelve Caribbean Islands that participated in migration. Let students also write these islands down. Allow time for the students to identify these places. Call a few to a pull-down map and have them point out various places. Once students can readily identify these places, introduce the two migration routes, Mississippi River and Atlantic coastal railroad lines. Be certain each student is able to identify the routes and states that the routes passed through. Students must also understand that geographic location aided migrants in the selection of a route. Once students are proficient in this task of route identification, pass out drawing paper and colored pencils. Have students draw, not trace, the two routes. Included are the states of origination as well as termination. After this has been completed ask students to use physical, climate, and resource maps of the United States to study the desirability of New York and Chicago. Students are to write down landforms, proximity to water, climate and resource power, and from this data decide why these cities were settled by migrants. Using the same criteria compare New York and Chicago to Santa Fe, New Mexico.
Summary and Evaluation
As homework have students develop an essay comparing Santa Fe and New York. They are to include landforms, rivers, resources, climates, power, and population densities. From this data students are to hypothesize reasons for settlement. All of the information can be found in any current atlas.
1. At the conclusion of this lesson students will be able to define the following terms: economics, demand, supply, depression, migration, Depression of 1873, Kansas Exodus, Industrial Revolution, mechanization, traditional Negro jobs, labor agent and Great Depression.
2. Students will be fully aware of the part national conditions played in affecting the South's economy.
3.
Because of the bleak economic picture in the South students will realize that blacks began to look elsewhere to live. 4. Students will list reasons people would leave the South. Motivation: (teacher states) “Can we imagine what it would be like to have no job, no money, no home, no food and no hope? Blacks in the South, because of economic and social conditions, were faced with these conditions. What would you do?” (Allow time for discussion). “Well let’s find out what southern blacks did.” The following terms should be defined either on a chalk board or ditto sheet. After a thorough discussion of these terms, distribute a ditto sheet summary of economic conditions in the South. Include references to the Depression of 1873, the decrease in prices for cotton from 15¢ per pound in the 1870’s to 7¢ per pound in the 1890’s; wheat from 95¢ per bushel in 1880 to 83¢ in 1890 and 50¢ in 1895. Between 1870-1895 corn prices declined fifty percent and tobacco, hogs, sheep, butter, and cheese were all in the same downward spiral. Also include the quotation of T. Thomas Fortune (it is found in this unit). Discuss the Kansas Exodus and movement from one agricultural region to another. Be certain students understand that until 1915 migration was from one rural area to another. Conditions in 1915 should also be explored. Include consideration of the floods, boll weevil, and mechanization of farms. Cite the bleak economic picture on southern farms and in cities. Instruct students to read the ditto sheet and demonstrate comprehension by listing reasons people might consider leaving the South. Once there is an awareness of reasons, distribute a ditto sheet which summarizes national conditions in 1915. Include mention of the rapid mechanization of factories and World War I. Discuss the need for labor and the use of agents to recruit southern blacks. It is important to include methods employed by white southerners to halt the exodus. Stress that no method employed was able to stop the wave of migration. 1. economics—deals with money, how we make a living; demand and supply. 2. demand—need or want of a particular item; the more we want, the higher the price, and the less we demand, the lower the price. 3. supply—how much of an item is grown or produced; if there is more than is wanted, price goes down or, if there is less than is needed, price goes up. 4. depression—a time when the economy fails, money becomes worthless, banks close, businesses close and there is high unemployment. 5. migration—the movement of people from one place to another within a country. 6. Depression of 1873—also called Panic of 1873; it began on September 8, 1873, when New York Warehouse and Securities Company went into bankruptcy. Ten days later Jay Cooke and Company, a famous banking house, failed. On September 20 the New York Stock Exchange suspended trading for ten days. Railroads halted construction and defaulted on bonds. Mills closed down and threw half the factory population out of work. By 1876 and 1877, 18,000 businesses had failed. In the South farm prices declined. The typical southern farm was small and unmechanized, devoted to producing a cash staple of cotton or tobacco and nothing else. The exceptions were rice plantations on the coast of Louisiana, and Texas, sugar plantations in Louisiana, and truck farms of the southern coastal plain. Cotton prices sunk to new lows. 7. 
Kansas Exodus—the movement of blacks from one rural area to another caused by railroads, politicians, the press and dreams of becoming rich; it involved mostly lower-class people. 8. Industrial Revolution—the use of machines instead of hand labor for production; machines were able to produce things quicker, cheaper, and in greater amounts. Mechanization required some skilled laborers and also displaced farm hands. 9. mechanization—the use of machinery. 10. traditional Negro jobs—these included busboys, elevator operators, domestic workers, porters, butlers, waiters (to name a few). 11. labor agent—a person dispatched to recruit workers. 12. Great Depression—this occurred in 1929 and extended into the 1930’s; at this time banks closed, businesses failed, unemployment rose, money became worthless. Summary and Evaluation Once students have read the ditto sheets ask them to imagine that they are black residents of Macon, Georgia in 1916. There is no work for them on the farm and a labor agent has just promised them a job in New York and a free railroad ticket. Conditions in Macon are hostile: they are hungry, trains are halted, there is violence. As an essay assignment students are to express what they would do. Would they remain in Macon hoping things would get better or would they leave for New York and seek a better life in an unknown place? After correcting the drafts students should submit a final paper which they will read aloud to their classmates. 1. Students will develop an awareness of the handicap suffered by black migrants—color prejudice—through a thorough investigation of The Philadelphia Negro, especially an excerpt from “The Contact of the Races”. 2. Through library research, students will assess W.E.B. DuBois. 3. Students will learn about Harlem, its Renaissance and its decline into a slum in the 1930’s. Motivation: teacher reads aloud “Credo” by W.E.B. DuBois. I believe in the Prince of Peace. I believe that war is murder. I believe that armies and navies are at bottom the tinsel and braggadocio of oppression and wrong, and I believe that the wicked conquest of weaker and darker nations by nations whiter and stronger but foreshadows the death of that strength. I believe in liberty for all men: the space to stretch their arms and their souls, the right to breathe ... the freedom to choose their friends, enjoy the sunshine, and ride on the railroads, uncursed by color: thinking, dreaming, working as they will in a kingdom of beauty and love....Finally, I believe in Patience—patience with the weakness of the weak and the strength of the strong, the prejudice of the Ignorant and the ignorance of the Blind; patience with the tardy triumph of Joy and the mad chastening of Sorrow—patience with God! Ask the students what they think of this. Get several comments and then lead into a discussion of W.E.B. DuBois. Background on DuBois—William Edward Burghardt DuBois was born in Great Barrington, Massachusetts, in 1868 and was educated in its public schools. He entered Fisk University in 1885 and was graduated in 1888. He entered Harvard from which he was graduated, cum laude, in 1890. Five years later he received a Ph.D. from Harvard. His career may be divided into 5 periods: DuBois wrote 19 books and hundreds of editorials, articles, and pamphlets. His published writings spanned some 60 years, and nearly all deal with the racial problem. W.E.B. DuBois died in Accra, Ghana,on August 27, 1963, at age 95. 
For further information consult reference books or DuBois' writings cited in the bibliography.
1. Researcher and university instructor, 1895–1910
2. Editor of the NAACP's The Crisis, 1910–1934
3. Second period as university instructor, 1934–1944
4. Second period with the NAACP, 1944–1948
5. International years, 1948–1963.
A ditto should be made of the following excerpt from The Philadelphia Negro. The work is entitled "The Contact of the Races."

COLOR PREJUDICE—Incidentally throughout this study the prejudice against the Negro has been again and again mentioned. It is time now to reduce this somewhat indefinite term to something tangible. Everybody speaks of the matter, everybody knows that it exists, but in just what form it shows itself or how influential it is few agree. In the Negro's mind, color prejudice in Philadelphia is that widespread feeling of dislike for his blood, which keeps him and his children out of decent employment, from certain public conveniences and amusements, from hiring houses in many sections, and in general, from being recognized as a man. Negroes regard this prejudice as the chief cause of their present unfortunate condition. On the other hand most white people are quite unconscious of any such powerful and vindictive feeling; they regard color prejudice as the easily explicable feeling that intimate social intercourse with a lower race is not only undesirable but impracticable if our present standards of culture are to be maintained; and although they are aware that some people feel the aversion more intensely than others, they cannot see how such a feeling has much influence on the real situation or alters the social condition of the mass of Negroes.

Once this has been distributed and read, discuss what is brought out in the excerpt. Have students list different examples of prejudice and have students write out DuBois' definition of color prejudice. Ask them if this definition is applicable today. Once students thoroughly understand this study, ask them if they agree with the proposition laid down by DuBois in 1899. Is this proposition applicable today?

As a matter of fact, color prejudice in this city is something between these two extreme views: it is not to-day responsible for all, or perhaps the greater part of the Negro problems, or of the disabilities under which the race labors; on the other hand it is a far more powerful social force than most Philadelphians realize. The practical results of the attitude of most of the inhabitants of Philadelphia toward persons of Negro descent are as follows: 1. As to getting work: No matter how well trained a Negro may be, or how fitted for work of any kind, he cannot in the ordinary course of competition hope to be much more than a menial servant. He cannot get clerical or supervisory work to do save in exceptional cases. He cannot teach save in a few of the remaining Negro schools. He cannot become a mechanic except for small transient jobs, and cannot join a trades union. A Negro woman has but three careers open to her in this city: domestic service, sewing, or married life. 2. As to keeping work: The Negro suffers in competition more severely than white men. Change in fashion is causing him to be replaced by whites in the better paid positions of domestic service. Whim and accident will cause him to lose a hard-earned place more quickly than the same things would affect a white man.
Being few in number compared with the whites the crime or carelessness of a few of his race is easily imputed to all, and the reputations of the good, industrious and reliable suffer thereby. Because Negro workmen may not often work side by side with white workmen, the individual black workman is rated not by his own efficiency, but by the efficiency of a whole group of black fellow workmen which may often be low. Because of these difficulties which virtually increase competition in his case, he is forced to take lower wages for the same work than white workmen. 3. As to entering new lines of work: Men are used to seeing Negroes in inferior positions; when, therefore, by any change a Negro gets in a better position, most men immediately conclude that he is not fitted for it, even before he has a chance to show his fitness. If, therefore, he set up a store, men will not patronize him; If he is put into public position men will complain. If he gain a position in the commercial world, men will quietly secure his dismissal or see that a white man succeeds him. 4. As to his expenditure: The comparative smallness of the patronage of the Negro, and the dislike of other customers makes it usual to increase the charges or difficulties in certain directions in which a Negro must spend money. He must pay more house-rent for worse houses than most white people pay. He is sometimes liable to insult or reluctant service in some restaurants, hotels and stores, at public resorts, theaters and places of recreation; and at nearly all barber shops. 5. As to his children: The Negro finds it extremely difficult to rear children in such an atmosphere and not have them either cringing or impudent: if he impresses upon them patience with their lot, they may grow up satisfied with their condition; if he inspires them with ambition to rise, they may grow to despise their own people, hate the whites and become embittered with the world. His children are discriminated against, often in public schools. They are advised when seeking employment to become waiters and maids. They are liable to species of insult and temptation peculiarly trying to children. 6. As to social intercourse: In all walks of life the Negro is liable to meet some objection to his presence or some discourteous treatment; and the ties of friendship or memory seldom are strong enough to hold across the color line. If an invitation is issued to the public for any occasion, the Negro can never know whether he would be welcomed or not; if he goes he is liable to have his feelings hurt and get into unpleasant altercation; if he stays away, he is blamed for indifference. If he meet a lifelong white friend on the street, he is in a dilemma; if he does not greet the friend he is put down as boorish and impolite; if he does greet the friend he is liable to be flatly snubbed. If by chance he is introduced to a white woman or man, he expects to be ignored on the next meeting, and usually is. White friends may call on him, but he is scarcely expected to call on them, save for strictly business matters. If he gain the affections of a white woman and marry her he may invariably expect that slurs will be thrown on her reputation and on his, and that both his and her race will shun their company. When he dies he cannot be buried beside white corpses. 7. 
The result: Any one of these things happening now and then would not be remarkable or call for especial comment; but when one group of people suffer all these little differences of treatment and discriminations and insults continually, the result is either discouragement, or bitterness, or over sensitiveness, or recklessness. And a people feeling thus cannot do their best. Presumably the first impulse of the average Philadelphian would be emphatically to deny any such marked and blighting discrimination as the above against a group of citizens in this metropolis. Every one knows that in the past color prejudice in the city was deep and passionate; living men can remember when a Negro could not sit in a street car or walk many streets in peace. These times have passed, however, and many imagine that active discrimination against the Negro has passed with them. Careful inquiry will convince any such one of his error. To be sure a colored man to-day can walk the street of Philadelphia without personal insult; he can go to theaters, parks and some places of amusement without meeting more than stares and discourtesy; he can be accommodated at most hotels and restaurants, although his treatment in some would not be pleasant. All this is a vast advance and augurs much for the future. And yet all that has been said of the remaining discrimination is but too true....

After this has been completed, a discussion of Harlem should take place. Include DuBois' statements, found in the text, from "Social Evolution of the Black South." A ditto should be made and should include the following facts about Harlem as the intellectual and cultural center of American Negroes: the emergence of the renaissance because of the development of a Negro middle class, the decline of the renaissance because of the Depression, and finally the emergence of Harlem as a slum. Explain the influx of blacks into Harlem and the complications that arose with the arrival of black Caribbean immigrants. Discuss inferior wages, high rents, inappropriate housing, and difficult living conditions. Ask the students to define a slum. Once comments from students have been discussed, discuss the fact that amid the dirt and decadence emerged a new Negro.
1. The Philadelphia Negro is still a model of racial and urban studies.
2. DuBois conducted most of the research, including interviewing 5,000 people.
3. The Philadelphia Negro reported that the Negro problem was one involving the poor and dispossessed and had nothing to do with inherent inferiority.
4. DuBois' writings constitute the most important body of work in the history of the Black movement.
5. The applicability of DuBois to current conditions.
- Callow, Alexander, ed. American Urban History: An Interpretative Reader. New York: Oxford University Press, 1969.
- Cousins, Albert N., and Hans Nagpul, ed. Urban Man and Society. New York: Alfred A. Knopf, 1970.
- Davis, Bertha, Dorothy Arnof, and Charlotte Davis. Background For Tomorrow and American History. New York: Macmillan Co., 1970.
- Eldredge, Wentworth H., ed. Taming Megalopolis: What Is and What Could Be; How to Manage in an Urbanized World. New York: Doubleday: Anchor Books, 1967.
- Locke, Alain, ed. The New Negro. New York, 1925.
- Ploski, Harry A., and Ernest Kaiser, eds. The Negro Almanac. New York: Bellwether Co., 1971.
- Toomer, Jean. Cane. New York: Harper & Row, 1969.
- Wright, Richard. Black Boy. New York: Harper & Row, 1945.
- Wright, Richard. Native Son. New York: Harper & Row, 1940.
- Murphy, Raymond E. The American City: An Urban Geography. 2nd ed. New York: McGraw Hill Book Co., 1974.
- Report of the National Advisory Commission on Civil Rights. "The Migration of Negroes From the South." pp. 239–247.
- Clark, Kenneth B. Dark Ghettos. New York: Harper & Row, 1967.
- Drake and Cayton. Black Metropolis.
- DuBois, William Edward Burghardt. The Philadelphia Negro. New York: Signet Classics, 1967.
- DuBois, W.E.B. "The Crisis." "The Lynching Industry," "The Shubuta Lynchings," "Lynching," "Logic," "Jim Crow," "Migration a Crime in Georgia," "Migration of Negroes," "Brothers Come North." New York: Signet Books, 1970.
- DuBois, W.E.B. Souls of Black Folk. New York: Mentor Books, 1967.
- DuBois, W.E.B. Black Reconstruction. New York: The New American Library, 1970.
- DuBois, W.E.B. "Social Evolution of the Black South." American Negro Monographs No. 4 (1911).
- Glazer, Nathan, and Daniel Moynihan. Beyond the Melting Pot. Massachusetts: M.I.T. Press, 1963.
- Greer, Scott. Governing the Metropolis. New York: John Wiley and Sons, 1962.
- Johnson, Charles S. "How Much is Migration a Flight from Persecution." Opportunity, I (Sept., 1923), pp. 272–74.
- Meier, August. From Plantation to Ghetto. New York: Hill and Wang, 1970.
- Silbermann, Charles. Crisis in Black and White. New York: Random House, 1964.
- Billingsley, Andrew. Black Families in White America. Englewood Cliffs, New Jersey: Prentice Hall, 1968.
- Duncan, Otis Dudley. "Methodological Issues in the Analysis of Social Mobility." Ed. Smelser, Neil J., and Seymour Lipset. In Social Structure and Mobility in Economic Development. Chicago: Aldine Co., 1966.
- Goode, William. "Family Mobility." Ed. Bendix, Reinhard, and Seymour Lipset. In Social Mobility in Industrial Societies. New York: The Free Press, 1966.
- Logan, Rayford. The Betrayal of the Negro. New York: Signet, 1970.
- Osofsky, Gilbert. Harlem: The Making of a Ghetto. New York: Harper & Row, 1968.
- Rose, Harold M. "Social Processes in the City, Race and Urban Residential Choice." Resource Paper No. 6. Committee on College Geography, Association of American Geographers. Washington, D.C.: 1969.
- Scanzoni, John A. The Black Family in Modern Society. Boston: Allyn and Bacon, Inc., 1971.
- Taeuber, Karl, and Alma Taeuber. Negroes in Cities. Chicago: Aldine Pub. Co., 1965.
- Burks, Edward C. "Letter From Washington." "Census Finds Blacks Leaving State." Connecticut Weekly: New York Times: Sunday 16 April, 1978, pg. 4.
- Davis, Bertha, Dorothy Arnot, and Charlotte Davis. Background for Tomorrow and American History. New York: Macmillan Co., 1970.
- Ploski, Harry A., and Ernest Kaiser, eds. The Negro Almanac. New York: Bellwether Co., 1971.
- DuBois, William Edward Burghardt. The Philadelphia Negro. New York: Signet Classics, 1967.
- DuBois, W.E.B. Writings For Children. New York: Signet Classics, 1967.
- Franklin, John Hope. An Illustrated History of Black Americans. New York: Time-Life, 1973.
- Chambers, Bradford. Chronicles of Negro Protest—A Background Book. New York: Parents Magazine Press, 1968.
1. Kolevzon, Edward R., and John A. Heine. Our World and Its Peoples. Boston: Allyn and Bacon Inc., 1977.
2. Schwartz and O'Connor. Exploring The Western World. Boston: A Globe Book, 1973.
3. Schrier, Philip. United States Understanding Through Inquiry. New York: American Book Co., 1971.
- Audio Visual materials (City of New Haven), 16mm movies.
- Frederick Douglass illustrates racial conditions in North.
- Prudence Crandall illustrates race prejudice.
- Wilson, Walter, ed. The Selected Writings of W.E.B. DuBois. New York: Mentor Books, 1970.
- Locke, Alain, ed. The New Negro. New York, 1925.
- Meier, August. Negro Thought in America, 1880–1915. Michigan: University of Michigan Press, Ann Arbor Paperbacks, 1966.
- Goode's World Atlas
- Rand McNally Classroom Atlas
- ditto masters
- colored pencils
- drawing paper
- writing paper
http://www.yale.edu/ynhti/curriculum/units/1978/2/78.02.05.x.html
Oil shale is commonly defined as a fine-grained sedimentary rock containing organic matter that yields substantial amounts of oil and combustible gas upon destructive distillation. Most of the organic matter is insoluble in ordinary organic solvents; therefore, it must be decomposed by heating to release such materials. Underlying most definitions of oil shale is its potential for the economic recovery of energy, including shale oil and combustible gas, as well as a number of byproducts. A deposit of oil shale having economic potential is generally one that is at or near enough to the surface to be developed by open-pit or conventional underground mining or by in-situ methods.

Oil shales range widely in organic content and oil yield. Commercial grades of oil shale, as determined by their yield of shale oil, range from about 100 to 200 liters per metric ton (l/t) of rock. The U.S. Geological Survey has used a lower limit of about 40 l/t for classification of Federal oil-shale lands. Others have suggested a limit as low as 25 l/t.

Deposits of oil shale are found in many parts of the world. These deposits, which range from Cambrian to Tertiary age, may occur as minor accumulations of little or no economic value or as giant deposits that occupy thousands of square kilometers and reach thicknesses of 700 m or more. Oil shales were deposited in a variety of depositional environments, including fresh-water to highly saline lakes, epicontinental marine basins and subtidal shelves, and in limnic and coastal swamps, commonly in association with deposits of coal.

In terms of mineral and elemental content, oil shale differs from coal in several distinct ways. Oil shales typically contain much larger amounts of inert mineral matter (60-90 percent) than coals, which have been defined as containing less than 40 percent mineral matter. The organic matter of oil shale, which is the source of liquid and gaseous hydrocarbons, typically has a higher hydrogen and lower oxygen content than that of lignite and bituminous coal. In general, the precursors of the organic matter in oil shale and coal also differ. Much of the organic matter in oil shale is of algal origin, but it may also include remains of vascular land plants that more commonly compose much of the organic matter in coal. The origin of some of the organic matter in oil shale is obscure because of the lack of recognizable biologic structures that would help identify the precursor organisms. Such materials may be of bacterial origin or the product of bacterial degradation of algae or other organic matter.

The mineral component of some oil shales is composed of carbonates including calcite, dolomite, and siderite, with lesser amounts of aluminosilicates. For other oil shales, the reverse is true: silicates including quartz, feldspar, and clay minerals are dominant and carbonates are a minor component. Many oil-shale deposits contain small, but ubiquitous, amounts of sulfides including pyrite and marcasite, indicating that the sediments probably accumulated in dysaerobic to anoxic waters that prevented the destruction of the organic matter by burrowing organisms and oxidation.

Although shale oil in today's (2004) world market is not competitive with petroleum, natural gas, or coal, it is used in several countries that possess easily exploitable deposits of oil shale but lack other fossil fuel resources.
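Because older reports often state grade in U.S. gallons per short ton or in barrels rather than liters per metric ton, the short Python sketch below converts the figures quoted above using standard unit factors. It is an illustrative conversion added here for convenience, not part of the original report, and the function names are invented for the example.

```python
# Illustrative unit conversions for oil-shale grade (not from the report).
LITERS_PER_US_GALLON = 3.78541       # 1 U.S. gallon = 3.78541 liters
SHORT_TONS_PER_METRIC_TON = 1.10231  # 1 metric ton = 1.10231 short tons
LITERS_PER_BARREL = 158.987          # 1 barrel = 42 U.S. gallons

def l_per_t_to_gal_per_short_ton(l_per_t: float) -> float:
    """Convert liters per metric ton to U.S. gallons per short ton."""
    return (l_per_t / LITERS_PER_US_GALLON) / SHORT_TONS_PER_METRIC_TON

def l_per_t_to_bbl_per_tonne(l_per_t: float) -> float:
    """Convert liters per metric ton to barrels of shale oil per metric ton."""
    return l_per_t / LITERS_PER_BARREL

if __name__ == "__main__":
    # Grades cited in the text: the 25 and 40 l/t cutoffs and the 100-200 l/t commercial range.
    for grade in (25, 40, 100, 200):
        print(f"{grade:>3} l/t = {l_per_t_to_gal_per_short_ton(grade):5.1f} gal/short ton "
              f"= {l_per_t_to_bbl_per_tonne(grade):4.2f} bbl/t")
```

Run this way, the 100 l/t lower end of the commercial range works out to roughly 24 gallons per short ton, which is the order of magnitude usually quoted for mineable Green River-type resources.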
Some oil-shale deposits contain minerals and metals that add byproduct value, such as alum [KAl(SO4)2.12H2O], nahcolite (NaHCO3), dawsonite [NaAl(OH)2CO3], sulfur, ammonium sulfate, vanadium, zinc, copper, and uranium. The gross heating value of oil shales on a dry-weight basis ranges from about 500 to 4,000 kilocalories per kilogram (kcal/kg) of rock. The high-grade kukersite oil shale of Estonia, which fuels several electric power plants, has a heating value of about 2,000 to 2,200 kcal/kg. By comparison, the heating value of lignitic coal ranges from 3,500 to 4,600 kcal/kg on a dry, mineral-free basis (American Society for Testing Materials, 1966).

Tectonic events and volcanism have altered some deposits. Structural deformation may impair the mining of an oil-shale deposit, whereas igneous intrusions may have thermally degraded the organic matter. Thermal alteration of this type may be restricted to a small part of the deposit, or it may be widespread, making most of the deposit unfit for recovery of shale oil.

The purpose of this report is to (1) discuss the geology and summarize the resources of selected deposits of oil shale in varied geologic settings from different parts of the world and (2) present new information on selected deposits developed since 1990 (Russell, 1990).

The commercial development of an oil-shale deposit depends upon many factors. The geologic setting and the physical and chemical characteristics of the resource are of primary importance. Roads, railroads, power lines, water, and available labor are among the factors to be considered in determining the viability of an oil-shale operation. Oil-shale lands that could be mined may be preempted by present land usage such as population centers, parks, and wildlife refuges. Development of new in-situ mining and processing technologies may allow an oil-shale operation in previously restricted areas without causing damage to the surface or posing problems of air and water pollution. The availability and price of petroleum ultimately affect the viability of a large-scale oil-shale industry. Today, few, if any, deposits can be economically mined and processed for shale oil in competition with petroleum. Nevertheless, some countries that have oil-shale resources but lack petroleum reserves find it expedient to operate an oil-shale industry. As supplies of petroleum diminish in future years and costs for petroleum increase, greater use of oil shale for the production of electric power, transportation fuels, petrochemicals, and other industrial products seems likely.

Determining Grade of Oil Shale
The grade of oil shale has been determined by many different methods, with the results expressed in a variety of units. The heating value of the oil shale may be determined using a calorimeter. Values obtained by this method are reported in English or metric units, such as British thermal units (Btu) per pound of oil shale, calories per gram (cal/gm) of rock, kilocalories per kilogram (kcal/kg) of rock, megajoules per kilogram (MJ/kg) of rock, and other units. The heating value is useful for determining the quality of an oil shale that is burned directly in a power plant to produce electricity. Although the heating value of a given oil shale is a useful and fundamental property of the rock, it does not provide information on the amounts of shale oil or combustible gas that would be yielded by retorting (destructive distillation).
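The heating-value units listed above are related by fixed factors (1 kcal/kg equals 1 cal/g, 1 kcal is 4.184 kJ, and 1 Btu/lb corresponds to 2.326 kJ/kg). The sketch below simply applies those factors to the kukersite and lignite figures quoted in the text; it is an illustration of the unit arithmetic added here, not material from the report.

```python
# Illustrative heating-value unit conversions (standard factors, not from the report).
KJ_PER_KCAL = 4.184               # thermochemical calorie
KJ_PER_KG_PER_BTU_PER_LB = 2.326  # 1 Btu/lb = 2.326 kJ/kg

def kcal_per_kg_to_mj_per_kg(hv: float) -> float:
    """Convert kcal/kg (equivalently cal/g) to MJ/kg."""
    return hv * KJ_PER_KCAL / 1000.0

def kcal_per_kg_to_btu_per_lb(hv: float) -> float:
    """Convert kcal/kg to Btu per pound."""
    return hv * KJ_PER_KCAL / KJ_PER_KG_PER_BTU_PER_LB

if __name__ == "__main__":
    # Values cited in the text: Estonian kukersite (~2,000-2,200 kcal/kg) and
    # lignitic coal on a dry, mineral-free basis (~3,500-4,600 kcal/kg).
    samples = [("kukersite, low", 2000), ("kukersite, high", 2200),
               ("lignite, low", 3500), ("lignite, high", 4600)]
    for label, hv in samples:
        print(f"{label:15s}: {hv} kcal/kg = {kcal_per_kg_to_mj_per_kg(hv):.2f} MJ/kg "
              f"= {kcal_per_kg_to_btu_per_lb(hv):,.0f} Btu/lb")
```

By this arithmetic the kukersite figure of about 2,000 kcal/kg is roughly 8.4 MJ/kg, or about a quarter of the heating value of a typical bituminous coal, which underlines why mineral matter content matters so much for direct combustion.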
The grade of oil shale can be determined by measuring the oil yield of a shale sample in a laboratory retort. This is perhaps the most common type of analysis currently used to evaluate an oil-shale resource. The method commonly used in the United States is called the "modified Fischer assay," first developed in Germany, then adapted by the U.S. Bureau of Mines for analyzing oil shale of the Green River Formation in the western United States (Stanfield and Frost, 1949). The technique was subsequently standardized as the American Society for Testing and Materials Method D-3904-80 (1984). Some laboratories have further modified the Fischer assay method to better evaluate different types of oil shale and different methods of oil-shale processing.

The standardized Fischer assay method consists of heating a 100-gram sample, crushed to -8 mesh (2.38-mm) screen, in a small aluminum retort to 500°C at a rate of 12°C per minute and holding it at that temperature for 40 minutes. The distilled vapors of oil, gas, and water are passed through a condenser cooled with ice water into a graduated centrifuge tube. The oil and water are then separated by centrifuging. The quantities reported are the weight percentages of shale oil (and its specific gravity), water, shale residue, and "gas plus loss" by difference.

The Fischer assay method does not determine the total available energy in an oil shale. When oil shale is retorted, the organic matter decomposes into oil, gas, and a residuum of carbon char remaining in the retorted shale. The amounts of individual gases (chiefly hydrocarbons, hydrogen, and carbon dioxide) are not normally determined but are reported collectively as "gas plus loss," which is the difference of 100 weight percent minus the sum of the weights of oil, water, and spent shale. Some oil shales may have a greater energy potential than that reported by the Fischer assay method, depending on the components of the "gas plus loss." A short arithmetic sketch of this bookkeeping appears below.

The Fischer assay method also does not necessarily indicate the maximum amount of oil that can be produced by a given oil shale. Other retorting methods, such as the Tosco II process, are known to yield in excess of 100 percent of the yield reported by Fischer assay. In fact, special methods of retorting, such as the Hytort process, can increase oil yields of some oil shales by as much as three to four times the yield obtained by the Fischer assay method (Schora and others, 1983; Dyni and others, 1990).

At best, the Fischer assay method only approximates the energy potential of an oil-shale deposit. Newer techniques for evaluating oil-shale resources include the Rock-Eval and the "material-balance" Fischer assay methods. Both give more complete information about the grade of oil shale, but they are not widely used. The modified Fischer assay, or close variations thereof, is still the major source of information for most deposits. It would be useful to develop a simple and reliable assay method for determining the energy potential of an oil shale that would include the total heat energy and the amounts of oil, water, combustible gases including hydrogen, and char in sample residue.

Origin of Organic Matter
Organic matter in oil shale includes the remains of algae, spores, pollen, plant cuticle and corky fragments of herbaceous and woody plants, and other cellular remains of lacustrine, marine, and land plants. These materials are composed chiefly of carbon, hydrogen, oxygen, nitrogen, and sulfur.
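Returning to the Fischer assay described above: the "gas plus loss" figure is simple arithmetic (100 weight percent minus oil, water, and spent shale), and a volumetric oil yield can be approximated from the reported oil weight percent and its specific gravity. The sketch below illustrates that bookkeeping with an invented assay result; it is not the ASTM D-3904 procedure itself, and the sample numbers are hypothetical.

```python
# A minimal sketch of Fischer-assay bookkeeping (illustrative, not the ASTM method).
def gas_plus_loss(oil_wt_pct: float, water_wt_pct: float, spent_shale_wt_pct: float) -> float:
    """Weight percent of gas plus handling losses, taken by difference from 100%."""
    return 100.0 - (oil_wt_pct + water_wt_pct + spent_shale_wt_pct)

def oil_yield_l_per_t(oil_wt_pct: float, oil_specific_gravity: float) -> float:
    """Approximate shale-oil yield in liters per metric ton of rock.

    (oil_wt_pct / 100) * 1000 kg of oil per tonne, divided by the oil density
    in kg/l, which is numerically equal to the specific gravity.
    """
    return 10.0 * oil_wt_pct / oil_specific_gravity

if __name__ == "__main__":
    # Hypothetical assay result, for illustration only.
    oil, water, spent = 12.0, 1.5, 83.0   # weight percent
    print(f"gas plus loss: {gas_plus_loss(oil, water, spent):.1f} wt%")
    print(f"oil yield: about {oil_yield_l_per_t(oil, oil_specific_gravity=0.92):.0f} l/t")
```

For the hypothetical sample shown, 12 weight percent oil of specific gravity 0.92 corresponds to roughly 130 l/t, comfortably within the 100-200 l/t commercial range noted earlier, while the residual 3.5 percent "gas plus loss" would carry energy that the assay leaves unquantified.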
Some organic matter retains enough biological structures so that specific types can be identified as to genus and even species. In some oil shales, the organic matter is unstructured and is best described as amorphous (bituminite). The origin of this amorphous material is not well known, but it is likely a mixture of degraded algal or bacterial remains. Small amounts of plant resins and waxes also contribute to the organic matter. Fossil shell and bone fragments composed of phosphatic and carbonate minerals, although of organic origin, are excluded from the definition of organic matter used herein and are considered to be part of the mineral matrix of the oil shale.

Most of the organic matter in oil shales is derived from various types of marine and lacustrine algae. It may also include varied admixtures of biologically higher forms of plant debris, depending on the depositional environment and geographic position. Bacterial remains can be volumetrically important in many oil shales, but they are difficult to identify. Most of the organic matter in oil shale is insoluble in ordinary organic solvents, whereas some is bitumen that is soluble in certain organic solvents. Solid hydrocarbons, including gilsonite, wurtzilite, grahamite, ozokerite, and albertite, are present as veins or pods in some oil shales. These hydrocarbons have somewhat varied chemical and physical characteristics, and several have been mined commercially.

Thermal Maturity of Organic Matter

The thermal maturity of an oil shale refers to the degree to which the organic matter has been altered by geothermal heating. If the oil shale is heated to a high enough temperature, as may be the case if the oil shale were deeply buried, the organic matter may thermally decompose to form oil and gas. Under such circumstances, oil shales can be source rocks for petroleum and natural gas. The Green River oil shale, for example, is presumed to be the source of the oil in the Red Wash field in northeastern Utah. On the other hand, oil-shale deposits that have economic potential for their shale-oil and gas yields are geothermally immature and have not been subjected to excessive heating. Such deposits are generally close enough to the surface to be mined by open-pit or underground methods, or by in-situ methods.

The degree of thermal maturity of an oil shale can be determined in the laboratory by several methods. One technique is to observe the changes in color of the organic matter in samples collected from varied depths in a borehole. Assuming that the organic matter is subjected to geothermal heating as a function of depth, the colors of certain types of organic matter change from lighter to darker. These color differences can be noted by a petrographer and measured using photometric techniques.

Geothermal maturity of organic matter in oil shale is also determined by the reflectance of vitrinite (a common constituent of coal derived from vascular land plants), if present in the rock. Vitrinite reflectance is commonly used by petroleum explorationists to determine the degree of geothermal alteration of petroleum source rocks in a sedimentary basin. A scale of vitrinite reflectances has been developed that indicates when the organic matter in a sedimentary rock has reached temperatures high enough to generate oil and gas. However, this method can pose a problem with respect to oil shale, because the reflectance of vitrinite may be depressed by the presence of lipid-rich organic matter.
Vitrinite may be difficult to recognize in oil shale because it resembles other organic material of algal origin; the algal material does not have the same reflectance response as vitrinite, so measuring the wrong material can lead to erroneous conclusions. For this reason, it may be necessary to measure vitrinite reflectance from laterally equivalent vitrinite-bearing rocks that lack the algal material. In areas where the rocks have been subjected to complex folding and faulting or have been intruded by igneous rocks, the geothermal maturity of the oil shale should be evaluated for proper determination of the economic potential of the deposit.

Classification of Oil Shale

Oil shale has received many different names over the years, such as cannel coal, boghead coal, alum shale, stellarite, albertite, kerosene shale, bituminite, gas coal, algal coal, wollongite, schistes bitumineux, torbanite, and kukersite. Some of these names are still used for certain types of oil shale. Recently, however, attempts have been made to systematically classify the many different types of oil shale on the basis of the depositional environment of the deposit, the petrographic character of the organic matter, and the precursor organisms from which the organic matter was derived.

A useful classification of oil shales was developed by A.C. Hutton (1987, 1988, 1991), who pioneered the use of blue/ultraviolet fluorescent microscopy in the study of oil-shale deposits of Australia. Adapting petrographic terms from coal terminology, Hutton developed a classification of oil shale based primarily on the origin of the organic matter. His classification has proved to be useful for correlating different kinds of organic matter in oil shale with the chemistry of the hydrocarbons derived from oil shale.

Hutton (1991) visualized oil shale as one of three broad groups of organic-rich sedimentary rocks: (1) humic coal and carbonaceous shale, (2) bitumen-impregnated rock, and (3) oil shale. He then divided oil shale into three groups based upon their environments of deposition: terrestrial, lacustrine, and marine. Terrestrial oil shales include those composed of lipid-rich organic matter such as resin spores, waxy cuticles, and corky tissue of roots and stems of vascular terrestrial plants commonly found in coal-forming swamps and bogs. Lacustrine oil shales include lipid-rich organic matter derived from algae that lived in freshwater, brackish, or saline lakes. Marine oil shales are composed of lipid-rich organic matter derived from marine algae, acritarchs (unicellular organisms of questionable origin), and marine dinoflagellates.

Several quantitatively important petrographic components of the organic matter in oil shale (telalginite, lamalginite, and bituminite) are adapted from coal petrography. Telalginite is organic matter derived from large colonial or thick-walled unicellular algae, typified by genera such as Botryococcus. Lamalginite includes thin-walled colonial or unicellular algae that occur as laminae with little or no recognizable biologic structure. Telalginite and lamalginite fluoresce brightly in shades of yellow under blue/ultraviolet light. Bituminite, on the other hand, is largely amorphous, lacks recognizable biologic structures, and fluoresces weakly under blue light. It commonly occurs as an organic groundmass with fine-grained mineral matter. The material has not been fully characterized with respect to its composition or origin, but it is commonly an important component of marine oil shales.
Coaly materials, including vitrinite and inertinite, are rare to abundant components of oil shale; both are derived from humic matter of land plants and have moderate and high reflectance, respectively, under the microscope.

Within his three-fold grouping of oil shales (terrestrial, lacustrine, and marine), Hutton (1991) recognized six specific oil-shale types: cannel coal, lamosite, marinite, torbanite, tasmanite, and kukersite. The most abundant and largest deposits are marinites and lamosites.

Cannel coal is brown to black oil shale composed of resins, spores, waxes, and cutinaceous and corky materials derived from terrestrial vascular plants, together with varied amounts of vitrinite and inertinite. Cannel coals originate in oxygen-deficient ponds or shallow lakes in peat-forming swamps and bogs (Stach and others, 1975, p. 236-237).

Lamosite is pale- and grayish-brown and dark gray to black oil shale in which the chief organic constituent is lamalginite derived from lacustrine planktonic algae. Other minor components in lamosite include vitrinite, inertinite, telalginite, and bitumen. The Green River oil-shale deposits in the western United States and a number of the Tertiary lacustrine deposits in eastern Queensland, Australia, are lamosites.

Marinite is a gray to dark gray to black oil shale of marine origin in which the chief organic components are lamalginite and bituminite derived chiefly from marine phytoplankton. Marinite may also contain small amounts of bitumen, telalginite, and vitrinite. Marinites are deposited typically in epeiric seas, such as on broad shallow marine shelves or in inland seas where wave action is restricted and currents are minimal. The Devonian-Mississippian oil shales of the eastern United States are typical marinites. Such deposits are generally widespread, covering hundreds to thousands of square kilometers, but they are relatively thin, often less than about 100 m.

Torbanite, tasmanite, and kukersite are related to specific kinds of algae from which the organic matter was derived; the names are based on local geographic features. Torbanite, named after Torbane Hill in Scotland, is a black oil shale whose organic matter is composed mainly of telalginite derived largely from lipid-rich Botryococcus and related algal forms found in fresh- to brackish-water lakes. It also contains small amounts of vitrinite and inertinite. The deposits are commonly small, but can be extremely high grade. Tasmanite, named from oil-shale deposits in Tasmania, is a brown to black oil shale. The organic matter consists of telalginite derived chiefly from unicellular tasmanitid algae of marine origin and lesser amounts of vitrinite, lamalginite, and inertinite. Kukersite, which takes its name from Kukruse Manor near the town of Kohtla-Järve, Estonia, is a light brown marine oil shale. Its principal organic component is telalginite derived from the green alga Gloeocapsomorpha prisca. The Estonian oil-shale deposit in northern Estonia along the southern coast of the Gulf of Finland and its eastern extension into Russia, the Leningrad deposit, are kukersites.

Evaluation of Oil-Shale Resources

Relatively little is known about many of the world's deposits of oil shale, and much exploratory drilling and analytical work need to be done. Early attempts to determine the total size of world oil-shale resources were based on few facts, and estimates of the grade and quantity of many of these resources were speculative at best.
The situation today has not greatly improved, although much information has been published in the past decade or so, notably for deposits in Australia, Canada, Estonia, Israel, and the United States. Evaluation of world oil-shale resources is especially difficult because of the wide variety of analytical units that are reported. The grade of a deposit is variously expressed in U.S. or Imperial gallons of shale oil per short ton (gpt) of rock, liters of shale oil per metric ton (l/t) of rock, barrels, short or metric tons of shale oil, kilocalories per kilogram (kcal/kg) of oil shale, or gigajoules (GJ) per unit weight of oil shale. To bring some uniformity into this assessment, oil-shale resources in this report are given in both metric tons of shale oil and in equivalent U.S. barrels of shale oil, and the grade of oil shale, where known, is expressed in liters of shale oil per metric ton (l/t) of rock. If the size of the resource is expressed only in volumetric units (barrels, liters, cubic meters, and so on), the density of the shale oil must be known or estimated to convert these values to metric tons. Most oil shales produce shale oil that ranges in density from about 0.85 to 0.97 by the modified Fischer assay method. In cases where the density of the shale oil is unknown, a value of 0.910 is assumed for estimating resources.

Byproducts may add considerable value to some oil-shale deposits. Uranium, vanadium, zinc, alumina, phosphate, sodium carbonate minerals, ammonium sulfate, and sulfur are some of the potential byproducts. The spent shale after retorting is used to manufacture cement, notably in Germany and China. The heat energy obtained by the combustion of the organic matter in oil shale can be used in the cement-making process. Other products that can be made from oil shale include specialty carbon fibers, adsorbent carbons, carbon black, bricks, construction and decorative blocks, soil additives, fertilizers, rock wool insulating material, and glass. Most of these uses are still small or in experimental stages, but the economic potential is large.

This appraisal of world oil-shale resources is far from complete. Many deposits are not reviewed because data or publications are unavailable. Resource data for deeply buried deposits, such as a large part of the Devonian oil-shale deposits in the eastern United States, are omitted because they are not likely to be developed in the foreseeable future. Thus, the total resource numbers reported herein should be regarded as conservative estimates. This review focuses on the larger deposits of oil shale that are being mined or have the best potential for development because of their size and grade.
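Because resource figures arrive in such a mix of units, a small conversion helper is useful when comparing deposits. The sketch below is a minimal illustration assuming only standard conversion factors (1 U.S. gallon = 3.785 liters, 1 short ton = 0.9072 metric tons, 1 barrel = 158.99 liters) and the default shale-oil density of 0.910 that this report assumes when the actual density is unknown; the sample numbers are illustrative.

# Convert oil-shale grades and resource volumes to the units used in this report.

LITERS_PER_US_GALLON = 3.78541
METRIC_TONS_PER_SHORT_TON = 0.907185
LITERS_PER_BARREL = 158.987

def gpt_to_lpt(gallons_per_short_ton):
    # U.S. gallons of shale oil per short ton -> liters per metric ton.
    return gallons_per_short_ton * LITERS_PER_US_GALLON / METRIC_TONS_PER_SHORT_TON

def barrels_to_metric_tons(barrels, density=0.910):
    # Shale-oil volume in barrels -> metric tons, given density in t per cubic meter.
    cubic_meters = barrels * LITERS_PER_BARREL / 1000.0
    return cubic_meters * density

if __name__ == "__main__":
    print(f"25 gpt = {gpt_to_lpt(25):.0f} l/t")  # roughly 104 l/t
    print(f"1 million barrels = {barrels_to_metric_tons(1e6):,.0f} t of shale oil")

With the assumed density of 0.910, one million barrels of shale oil corresponds to roughly 145,000 metric tons.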
http://geology.com/usgs/oil-shale/
This lesson focuses on the scarce and nonrenewable nature of fossil fuels in order to stimulate student thinking about energy conservation. It emphasizes the fact that saving energy can be good for the wallet as well as the earth's future. Students play a memory game that challenges them to find people-powered substitutes for things that use electricity and gas. Students then use the federally mandated EnergyGuide labels to estimate the cost savings of energy-efficient home appliances. In a final activity, students explore positive and negative economic incentives that motivate people to conserve energy.

Many federal energy-related programs and policies are featured in this lesson. These include, besides the EnergyGuide label, Energy Star certification, the Fuel Economy Guide for motor vehicles, and a diverse collection of taxes, tax breaks, and subsidies. In this lesson, students examine options for reducing their dependence on energy resources, especially by substituting people power for other forms of energy and purchasing energy-efficient home appliances. Students also explore some of the government programs that are influencing consumer choices in the marketplace.
- Identify energy alternatives that use people power.
- Compute the operating cost of a major appliance.
- Choose the best deal on an appliance after considering both the purchase price and lifetime operating costs of an appliance.
- Explain how economic incentives influence people's behavior.

High energy prices are in the news again! Dwindling supply relative to demand has put the squeeze on oil, increasing prices at the gas pump as well as for home heating fuel. Given that natural gas-fired generators are the source of much of our electricity, the higher price of oil has a ripple effect on electricity bills. The August 2003 blackout has brought to the forefront another reason to anticipate higher energy prices. The nation's electrical transmission system (the high-voltage network that connects the power plants to the local distribution companies that deliver electricity to end users) needs to be updated. Someone will have to pay for this investment in the power grid, and ultimately it will be the consumer. Higher market prices are serving as an economic incentive for people to reduce energy consumption. Government programs in the form of taxes and subsidies serve as additional incentives for saving energy.

Energy Conservation Game: Students complete this drag-and-drop activity, which teaches them about energy conservation.
Using Energy Guides Worksheet: This worksheet teaches students how to read energy guides.
Life-Cycle Costing Manual for the Federal Energy Management Program, Handbook 135: This PDF provides information on calculating the life-cycle costs of appliances.
Refrigerator Energy Saver: Students complete this activity to learn how much it costs to run a refrigerator and how much energy it takes.
Government Incentives: Students complete this interactive activity, distinguishing positive and negative incentives, then summarize how these economic incentives influence energy conservation.
A Home Energy Audit: Discusses energy efficiency within homes.
Home Energy Saver: An energy audit tool.
The Energy Star: Discusses energy-efficient products and actions.
The Fuel Economy Guide: Discusses the fuel economy of selected vehicles.
Hybrid Vehicle Tax Deduction: Discusses a tax deduction for hybrid vehicles.
The Weatherization Assistance Program: Discusses how low-income families can make their homes more energy efficient, thus saving money.

NOTE: If you would prefer to have students use the current rate for a kilowatt-hour of electricity in your community, you will need to research the current rate on a local utility bill or have a copy of a bill available so that students can find the rate on their own. All activities in this lesson can be completed by students independently, in teams, or in small groups.

Activity 1: People Power
Play the Energy Conservation Game to learn about conserving energy. Reinforce this activity by having students think of other ways to substitute people power for fossil fuel and electricity. Discuss how these changes would influence their lives in terms of comfort and convenience.

Activity 2: Energy-Efficient Appliances
[NOTE: You may want to print this worksheet in advance and distribute it to the students instead of having them print it during the lesson. If you prefer, you can also direct students to use the current rate for a kilowatt-hour of electricity in your community. Give students the rate or have them find it on a copy of a local utility bill.]

Students may be unfamiliar with the terms top load and front load. Explain that the two washers they are comparing have different designs. Older washers are typically top load in design. Front-loaders (laundry is put in from the front of the machine rather than the top) are relatively new to the U.S. market. Front-loaders work by tumbling the clothes and then spin-drying them in a tub that rotates on a horizontal axis. There are some exceptions. One manufacturer makes a horizontal-axis machine that loads from the top, and another company sells a machine with an axis that is between vertical and horizontal. Front-loader designs are just as effective in cleaning clothes as top-loader designs. Some studies report they actually clean clothes better. Typically, front-loaders use less water: from one-third to one-half the amount that top-loaders require. Because less water is used, less natural gas or electricity is required to heat the water. The machines also spin faster, and clothes are wrung out more completely. Not factored into the label estimates is the fact that the improved spin-drying reduces the cost of running a clothes dryer as well.

Horizontal-axis washers (front-loaders) have one major drawback: the initial purchase price is almost always greater than the price of a vertical-axis machine (most top-loaders). As the worksheet example illustrates, however, the energy savings provided by front-loaders can more than compensate for the higher purchase cost. In some areas of the U.S., utility companies, environmental groups, and government agencies help sweeten the deal by offering incentives to consumers who buy front-loaders.

When students have completed their worksheets, discuss:
1. Was the washer with the lowest sales price the best deal? [No, the cost of energy used was also important.]
2. The purpose of this worksheet is to demonstrate that investments in products or systems designed to save energy provide a return through future savings from lower energy bills. The calculations assume that fuel prices remain stable. What happens if fuel prices increase? [Energy savings will be greater.]
3. If you have not already done so, point out to students that the two washers vary in design. The relatively new front-loading designs use less water. They also spin clothes dry faster and extract more moisture.
How do these factors influence the cost of doing laundry? [Energy use is reduced because less water must be heated and the spin cycle extracts water more quickly. The lower moisture content after spinning results in lower energy use when operating a clothes dryer.]
4. What other factors might you consider to get the best deal on an appliance besides sales price and energy efficiency? [Repair records for other appliances made by the same manufacturer, the cost of repairs and maintenance, and the source of energy (natural gas tends to be less expensive than electricity for operating appliances). A good source of information on appliance dependability and repair records is Consumers Union, which publishes the monthly Consumer Reports magazine.]

[NOTE: There are a number of ways to analyze the life-cycle cost of appliances. The calculation in this activity illustrates one of the simpler methods. A more complex approach is life-cycle costing, which takes into consideration all costs (purchase, installation, operation, maintenance, and repair costs) less salvage value, all expressed as present dollar values. For definitions of these terms and the formula for performing life-cycle cost analysis, see the Life-Cycle Costing Manual for the Federal Energy Management Program, Handbook 135.]

Once your students have finished working on their worksheet, have them finish this activity. Use this sample of an Energy Guide for a refrigerator-freezer to answer the questions. Then have them print out their results to hand in. The answers to the questions are below.
1. 23 cubic feet
2. 800 kilowatt-hours (kWh)
4. e, all of the above
5. 685 kilowatt-hours (kWh)
Discuss how much it would cost to operate the most efficient refrigerator-freezer for a year. Make sure that students round their answers.

Activity 3: Government Incentives
Have your students complete this activity distinguishing between positive and negative incentives, then summarize how these economic incentives influence energy conservation. Discuss the correct answers with your class. In contrast, a true tax credit is a dollar-for-dollar reduction in taxes paid. In the tax credit example provided as part of this activity, the taxpayer would first calculate their taxes using an IRS 1040 form. The taxpayer can then reduce the amount of tax he or she pays by the precise amount (within limits) spent on home improvements for energy conservation.
1. What motivates you to conserve energy? [Answers will vary. As pointed out in the student version of the conclusion, some people conserve just because they think it is the right thing to do. Others are influenced by different economic incentives. Of course, there may also be some students who claim nothing presently motivates them to be an energy saver. If you have some of the latter, point out that there are consequences to this choice. Each dollar used to pay for energy represents a dollar less that can be used to purchase other goods and services they might prefer. Examples from school might be more computers, additional books in the library, painting the cafeteria, and after-school activities. At home, many students would probably prefer more money to be spent on their allowance, a family vacation, a nicer car, etc.]
2. Do you think your motives for conserving energy may change as you get older? [Most students have little awareness of how much is paid for the energy used in their home and school.
Their knowledge of prices at the gas pump is influenced by whether they drive and, more importantly, whether they pay for the gasoline they use. Having to pay for energy can be a significant factor in determining the effectiveness of economic incentives.]

Assessment tools are provided as part of two activities. The worksheet Using Energy Guides in Activity 2 requires students to use federally mandated Energy Guides to calculate the cost of operating appliances. Students must also make and support a choice between two washers based on purchase and operating costs. At the end of Activity 3, students are asked to submit a summary of how positive and negative economic incentives influence consumer behavior with respect to energy conservation.

JUST FOR FUN (MORE INFORMATION)
1. Create a bulletin board featuring power alternatives: coal, oil, natural gas, wood, nuclear power, people power, draft animals, wind, water, and the sun.
2. Have students imagine that beginning tomorrow morning there is no electricity. Have them write a paper on how their lives would change, what "necessities" would no longer function, and what would be used in place of things no longer useful.
3. Have students keep a log for one day of ways they conserved energy.
4. Using publishing software, create brochures or web pages with tips on saving energy.
5. Have students conduct home energy audits. An interactive online audit source is: Home Energy Saver
6. Have students research and do a report on one of the following federal energy conservation programs:
a. The Energy Guide Label, mandated by the federal government, which provides an estimate of a product's energy efficiency for most major home appliances.
b. The Energy Star voluntary labeling and certification program operated by the U.S. Department of Energy (DOE) and the Environmental Protection Agency (EPA). ENERGY STAR®-labeled products include air conditioners, clothes washers, dishwashers, heating equipment, home office equipment, indoor/outdoor lighting, refrigerators, and windows.
c. The Fuel Economy Guide, which helps consumers compare gas mileage on vehicles manufactured from 1985 to the present.
d. The Hybrid Vehicle Tax Deduction, which offers a tax break for purchasers of the new hybrid gas/electric vehicles.
e. The Weatherization Assistance Program for low-income households, which funds energy audits and home improvements that increase energy efficiency.
f. State and local incentives for conserving energy. Sources of incentives include government, utility companies, and some private organizations. Types of incentives vary greatly but may include rate reductions, free energy audits, weatherization assistance, and financial help for people who purchase energy-efficient appliances.
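The appliance comparison in Activity 2 reduces to simple arithmetic: annual energy cost is the EnergyGuide's estimated kWh per year times the electricity rate, and the better deal is the washer with the lower purchase price plus lifetime energy cost. The sketch below illustrates that calculation; the prices, kWh figures, rate, and lifetime are hypothetical examples, not values from the actual worksheet, and the calculation assumes stable energy prices as the lesson does.

# Compare two washers on purchase price plus lifetime electricity cost.
# All figures are illustrative; substitute the EnergyGuide values and your
# local electricity rate.

RATE_PER_KWH = 0.10   # dollars per kWh (hypothetical local rate)
LIFETIME_YEARS = 11   # assumed appliance lifetime

def lifetime_cost(purchase_price, kwh_per_year,
                  rate=RATE_PER_KWH, years=LIFETIME_YEARS):
    # Purchase price plus total electricity cost over the appliance lifetime.
    return purchase_price + kwh_per_year * rate * years

top_loader = lifetime_cost(purchase_price=450, kwh_per_year=900)
front_loader = lifetime_cost(purchase_price=700, kwh_per_year=350)

print(f"Top-loader lifetime cost:   ${top_loader:,.2f}")    # $1,440.00
print(f"Front-loader lifetime cost: ${front_loader:,.2f}")  # $1,085.00
# With these example numbers the front-loader is cheaper over its lifetime
# despite the higher purchase price.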
http://www.econedlink.org/lessons/index.php?lid=526&type=educator
Throughout our history, many individuals have left a legacy, or something for which they will be remembered. For instance, Dr. Martin Luther King, Jr. is known for his leadership in the civil rights movement. John Marshall is remembered for the landmark decisions he made while Chief Justice of the United States, decisions that have shaped the country in important and historic ways. Many of those key decisions are summarized below.

In this activity, you will create a poster or brief PowerPoint presentation in which you use words or images to summarize:
- John Marshall's key ideas about how power should be balanced between states and the national government. (Refer to one or more of the cases below as evidence.)
- Marshall's ideas about how powerful the Supreme Court ought to be. (Refer to one or more of the cases below as evidence.)
- Marshall's view of the power of the Constitution. (Refer to one or more of the cases below as evidence.)
Then, evaluate Marshall's legacy. In what ways, if any, do you think Marshall's decisions have influenced history? Are they relevant today? Explain your opinions. Your poster or presentation should be visually appealing, and the messages should be clear and organized.

Here's the evidence:

Marbury v. Madison (1803)
At the end of his term, President John Adams appointed William Marbury as justice of the peace for the District of Columbia. The Secretary of State, John Marshall (the same person who later became Chief Justice), failed to deliver the commission to Marbury and left that task to the new Secretary of State, James Madison. Upon his inauguration, Adams' political enemy, Thomas Jefferson, told Madison not to deliver the commissions because he did not want supporters of Adams working in his new government. Marbury filed suit and asked the Supreme Court to issue a writ of mandamus, or a court order that would require Madison to deliver the commission to Marbury. Chief Justice Marshall wrote the opinion in the case. He said that while Marbury was entitled to the commission, the Supreme Court did not have the power to force Madison to deliver it. He reasoned that the Judiciary Act of 1789, the act written by Congress that authorized the Supreme Court to issue such writs, conflicted with the Constitution, so the law was unconstitutional. He said that when ordinary laws conflict with the Constitution, they must be struck down or made "null and void." This is called judicial review. In effect, he wrote that the Constitution is the supreme law of the land and the courts, especially the Supreme Court, are the ultimate "deciders" of what is constitutional. Through this decision, Marshall established the judicial branch as an equal partner with the executive and legislative branches of the government.

McCulloch v. Maryland (1819)
In the early years of our country, there was disagreement about whether the national government had the power to create a national bank. The first president, who believed in a strong national government, created a national bank. The third president, who believed states should have more power, closed the bank. The fourth president opened a new national bank in 1816. Many state banks did not like the competition and the conservative practices of the national bank. As a way to restrict the national bank's operations or force the bank to close, the state of Maryland imposed a huge tax on the national bank. After the Bank refused to pay the tax, the case went to court.
Maryland argued that the federal government did not have the authority to establish a bank, because that power was not specifically delegated to it in the Constitution. The Supreme Court reached a unanimous decision that upheld the authority of Congress to establish a national bank. In the opinion, Chief Justice John Marshall conceded that the Constitution does not explicitly grant Congress the right to establish a national bank, but noted that the "necessary and proper" clause of the Constitution gives Congress the authority to do that which is required to exercise its enumerated powers. Thus, the Court affirmed the existence of implied powers. On the issue of the authority of Maryland to tax the national bank, the Court also ruled in the Bank's favor. The Court found that "the power to tax involves the power to destroy ... If the states may tax one instrument [of the Federal Government] they may tax any and every other instrument ... the mail ... the mint ... patent rights ... judicial process? This was not intended by the American people ..." Furthermore, he said, "The Constitution and the laws made in pursuance thereof are supreme; they control the Constitution and laws of the respective states and cannot be controlled by them."

Cohens v. Virginia (1821)
The Cohen brothers sold Washington, D.C., lottery tickets in Virginia, which was a violation of Virginia state law. They argued that it was legal because the (national) U.S. Congress had enacted a statute that allowed the lottery to be established. When the brothers were convicted and fined in a Virginia court, they appealed the decision. In determining the outcome, the Supreme Court of Virginia said that in disputes that involved the national and state governments, the state had the final say. The Cohens appealed to the Supreme Court. The (national) Supreme Court upheld the conviction, saying that the lottery was a local matter and that the Virginia court was correct in allowing the Cohens to be fined. However, the most important part of this decision is what Marshall and the Supreme Court had to say about which court has the final say in disputes between states and the national government. The Supreme Court said it had the right to review state criminal proceedings. In fact, the Court said that it was required to hear cases that involved constitutional questions, including those cases in which a state or a state law is at the center of the case.

Gibbons v. Ogden (1824)
Aaron Ogden held a license to operate a steamboat on the well-traveled route between New York and New Jersey. The State of New York gave him the license as a part of a monopoly granted to Robert Livingston and Robert Fulton. The route was so successful financially that competitors wanted to be able to operate there, too. When competitors could not get a license from New York, they got licenses from the U.S. Congress. Thomas Gibbons held such a license from Congress. At issue in this case was whether New York's monopoly over steamboat passage in the waters between New York and New Jersey conflicted with Congress' constitutional power to regulate interstate commerce. Ogden argued that the New York monopoly was not in conflict with Congress' regulation of commerce because the boats only carried passengers between the states and were not really engaged in commerce. The Supreme Court disagreed. Justice Marshall, who wrote the decision, ruled that the Constitution gives Congress the power to regulate commerce among the several states. He said that commerce was not just about exchanging products.
In his opinion, commerce could include the movement of people and navigation, as well as the exchange of products, ideas, and communication. Since the (national) Congress could regulate all of these types of interstate commerce, the New York monopoly was illegal.
http://www.streetlaw.org/en/Page/403/Chief_Justice_John_Marshalls_Legacy
World War One brought the discovery that photographs of territory behind enemy lines taken from airplanes could be of great value in warfare. Not long after this, observers taking random photographs from the air over rural England noticed that traces of old Roman walls, forts, and roads could be seen on aerial photographs but otherwise went unnoticed under cornfields and pastures when archaeologists wandered about the countryside on foot. Terrain photos from captive balloons had been made even earlier (1860), but it was only in the 1930's and 40's that archaeologists began to take advantage of photos from the air over archaeological sites. Today, of course, stereo-pair color and color infra-red film photographs (or even the newer multi-spectral imaging methods) from the air are the place to begin in mapping and understanding an archaeologically interesting area.

Prior to the Second World War, electronic methods began to be employed in earnest in searching for oil and large mineral deposits beneath the surface of the earth. Because of the big economic payoff, even primitive geophysical methods produced enough successful discoveries that R&D budgets soon became generous. An explosion of knowledge in geology, earth science, geophysics, and remote sensing followed. After World War II, all the sophistication brought by wartime research became available to private industry, producing a new, even bigger boom in geophysical exploration. Historically, the scale of exploration required for oil and mineral work with most of these methods was very large (on the order of kilometers), while in contrast the scale of interest to an archaeologist is only centimeters or meters.

In addition to highly evolved aerial photography and airborne and satellite multi-spectral imaging instruments, good ground-based geophysical instruments, taking advantage of various physical phenomena, began to be commercially available in the 1930's. Some basic geophysical methods include the following: (1) Seismic Reflection & Refraction, (2) Gravity, (3) Magnetics, (4) Electrical, and (5) Radioactivity. Method (1) is commonly used in oil exploration, engineering geology, and regional geology studies. The gravity method (2) is especially useful in oil exploration. Methods (3) and (4) find common application in mineral exploration, oil exploration, and regional geology studies. Finally, radioactive methods are used in exploration for radioactive minerals. Common geophysical instruments and methods are described below.

As mentioned, the application of some of the above geophysical methods to archaeology began in earnest after World War II, but in contrast to the huge budgets available for petroleum and mineral exploration, archaeological budgets have almost always been minuscule. Usually the chief archaeologist at a site is a reputable and experienced professor whose modest salary is paid by his school so that he can teach university classes and do some seasonal field research on the side. The field work in archaeology has always depended mostly on student volunteers and assistants. Small amounts of financing are sometimes available from museums or grant institutions such as the National Geographic Society, the National Science Foundation, or the Smithsonian Institution. Usually digging at an archaeological site must be done by hand, though occasionally massive amounts of overburden must be removed, or trenching done, with the help of a back-hoe or bulldozer.
Cataloging, preserving artifacts (conservation), and publication of scientific papers occupy the off-season, but often funding levels for these important activities are also minimal. Yet even with the limited budgets archaeologists have worked with for decades, geophysical methods can be of great value to an archaeologist for several reasons.

Ground Penetrating Radar (GPR) was invented in the 1970's, originally for military purposes such as locating land mines and underground military tunnels. Soon public utility companies began to be keenly interested in such radars in hopes they would provide a practical method for mapping pipes and utility lines under city streets, and for locating cavities and voids. Most recently, radars of this type have been used from aircraft for mapping the surface of the earth through jungle or forest cover. GPR technologies have proven to be of great usefulness in archaeology, especially in Israel. Radar from the air is seldom of use to the archaeologist these days except for large sites covered by jungle such as are found in the Yucatan or Central America. Foliage-penetrating radars are now used widely for topographic mapping of the land surface beneath jungle canopy and forest cover.

Thermal-infrared imaging methods measure the surface temperature of the earth to an accuracy of a fraction of one degree. The electronic scanning equipment necessary for such measurements was originally available only to the military, and the instruments cost from $100,000 to $1,000,000. In recent years, portable instruments of great sensitivity have become commercially available at greatly reduced prices. These instruments can be used from a tripod on the ground, or from a helicopter or airplane by viewing through a hole in the fuselage.

Borehole Technology
Radar, seismic, resistivity, and other probes are often lowered into holes drilled into an archaeological site to permit geophysical probing at depth. Core-drill soil samples can be a big help in identifying the various historic levels and strata at a layered archaeological site such as a tell. When chambers or voids are encountered while drilling, these can be explored (and videotaped) using a down-hole television camera equipped with lights. Holes drilled into an archaeological site are obviously much less damaging than trenches or tunnels, and they can either be filled or capped after use.

Not all individuals or companies who offer geophysical assistance to the archaeologist are reputable or professionally competent. Fraudulent self-made experts, whose instruments may be little more than electronic water-dowsing rods, commonly offer services that are of little value. Some geophysical instruments on the market may promise amazing results in identifying metals at great depth by type and quantity, but many of these operate by methods unknown to reputable science. Geophysical records, even when made using legitimate instruments, are also of little value unless the data are collected and interpreted correctly. Archaeologists should not expect a geophysicist to work wonders at all sites. In some cases a combination of instruments may be appropriate; in other cases no known method may prove really very useful or cost effective.

The following legitimate geophysical methods and instruments are in use in the service of archaeology today.

A wide variety of "metal detectors" are commercially available today; they have the advantage of being easy to use, and most cost only a few hundred dollars.
The larger the search coil, the deeper the penetration; however, coins and small metal objects can be detected only a few inches deep, and very large metal objects only to depths of a few feet. Non-metal objects are not detected. Some areas are too "noisy" for metal detectors. "Noise" can originate from power lines, or from obscuring signals caused by nearby parked cars, scattered nails, re-bar, or metallic litter at the site. Highly mineralized areas are difficult to work in, and certain rocks, such as iron-rich basalt, can be troublesome for metal detectors.

Metal detectors are "active" instruments. A battery-powered transmitter in the unit radiates a relatively low-frequency alternating-current signal into the ground by means of a transmitting coil. If the signal from the transmitter encounters any type of conducting metal or mineral in the ground, an induced current flows in the subsurface target. This induced current then re-radiates a weak signal back to the surface. The latter signal is out of phase with the transmitted signal and thus is easily detected by a receiving coil. Modern metal detectors have circuitry for carefully balancing out any direct signal leakage between transmitter and receiver coils and for discriminating between large and small, shallow or deep, and ferrous or non-ferrous metals. The simpler instruments of this type are useful for "coin shooting" at old ghost-town sites or archaeological sites (on land or under the sea), and for locating gold or silver deposits within a quartz vein in a lode mine. Small objects such as coins usually must lie within a few inches to a foot of the surface to be detected by metal detectors. The sensitivity of metal detectors is a steep function of the coil diameter; however, with large coils and ample transmitter power, larger metal objects can be located to depths of 10 or 15 feet. Claims for detection at greater depths, as well as identification of metals by type, are suspect.

The resistivity method of subsurface exploration is powerful but often tedious to employ unless an automated instrument is available. The method is simple: current is introduced into the ground through one pair of electrodes. Current flow between these electrodes fans out through the ground in a pattern and intensity that depends on the conductivity of the ground and any stratification or obstacles that lie in the vicinity of the electrodes. A second pair of electrodes is then used to quantitatively measure the voltage pattern on the surface resulting from the current flow pattern of the first set of electrodes. A number of different electrode configurations are used in practice, but in simplest form the operator takes measurements along a straight line ("traverse"), moving his electrodes in pairs. He then repeats the measurements along a parallel line until the area of interest has been covered with a rectangular grid of electrode positions. If multiple electrodes are used and the results recorded automatically at the push of a button, the area to be examined can be searched more efficiently, and also probed at various depths at the same time. (As a rule of thumb, the depth of maximum sensitivity for resistivity sounding is about 1.5 times the electrode spacing in typical arrays.) A crew of two can easily study an area of perhaps 1,000 square meters in a day. Typical electrode spacings might be 0.3 to 1.0 meters for shallow targets.
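As a rough illustration of the survey arithmetic described above, the sketch below computes apparent resistivity for an equally spaced four-electrode (Wenner) array, for which apparent resistivity is 2 x pi x a x (V/I) for electrode spacing a, and applies the rule-of-thumb sensing depth of about 1.5 times the spacing quoted above. The choice of the Wenner array and the station readings are illustrative assumptions, not values from the text.

import math

# Apparent resistivity for a Wenner array (four equally spaced electrodes):
# rho_a = 2 * pi * a * (V / I), with spacing a in meters, V in volts, I in amperes.
def wenner_apparent_resistivity(spacing_m, voltage_v, current_a):
    return 2.0 * math.pi * spacing_m * (voltage_v / current_a)

# Rule of thumb quoted in the text: depth of maximum sensitivity ~ 1.5 x spacing.
def rule_of_thumb_depth(spacing_m):
    return 1.5 * spacing_m

if __name__ == "__main__":
    # Illustrative reading at one grid station (hypothetical numbers).
    a, v, i = 0.5, 0.12, 0.01   # 0.5 m spacing, 120 mV measured, 10 mA injected
    rho = wenner_apparent_resistivity(a, v, i)
    print(f"apparent resistivity ~ {rho:.1f} ohm-m, "
          f"sensing depth ~ {rule_of_thumb_depth(a):.2f} m")

Repeating such a reading at each station of the rectangular grid described above, and at several spacings, is what allows the data to be assembled into the three-dimensional resistivity map discussed next.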
Once the resistivity data has been collected, a simple computer program quickly generates a three-dimensional map of ground electrical resistivity or conductivity. Targets most easily seen on resistivity surveys are cavities or voids, but buried walls and filled trenches can often be mapped. The target depth divided by the diameter of the target should be less than 3 or 4 for best sensitivity, though some experts claim to be able to detect targets with a depth-to-diameter ratio of 9 or more. Boulders, geological stratifications, and water-table depth can also be successfully located by the use of resistivity by selecting an electrode spacing that allows the probing current to enter the ground to the appropriate depth. Resistivity meters employed in oil prospecting are often powered by large generators using very high voltages and electrodes spaced perhaps hundreds of meters or kilometers apart, but instruments suitable for archaeological use are battery powered, easy to use, and usually priced under $1500. Resistivity instruments no different from those used by professional geophysicists, but with fancy labels attached, are often found advertised for five times the price of standard instruments. Let the buyer beware!

Radars designed for probing into the earth typically operate from 30 to 300 MHz, the frequency being determined by the length of the dipole antennas used. It is necessary to use relatively low frequencies because the earth almost always is a good absorber of radar waves. Unfortunately, low frequencies imply long probing wavelengths, and long wavelengths imply low resolution. A very short pulse is used, allowing accurate measurement of depth to the target; however, the antenna beam is very broad (usually 90-120 degrees) and cannot easily be narrowed because the antennas become too big and bulky. Very often GPRs are mounted on a small wheeled cart which is hand-towed across the area of interest, provided the search area is reasonably flat and relatively free of brush and boulders. The echoes are displayed on an oscilloscope as a continuous false-color strip record for ease of interpreting results.

In recent years the state of the art in GPR technology has been greatly improved by computer signal-processing methods, since the performance of these radars is almost always "clutter limited." Clutter signals are unwanted reflections, off-axis echoes, and multiple-scattering echoes. These signals obscure the target of interest under bands of signals, but in many cases digital processing improves radar performance by many orders of magnitude. When a cart-mounted radar can be used, an experienced operator can often traverse large areas of a site's surface in a single day. The radar output can be recorded on a standard home video tape for archiving and detailed study, and also printed out on strip-chart paper for immediate on-site analysis.

GPRs are usually limited not only by clutter but also by attenuation of the radar signal in the soil. This is most severe in clay soils and damp soils where the salt content is high. The depth of penetration at some sites may be less than 1 foot or, under favorable conditions, many tens of feet or even hundreds of feet. Commercial cart GPRs are priced from about $18,000 to $40,000, and operator training and experience are necessary to interpret the records. Very often cart radars cannot be used because of rugged surface terrain. Or perhaps the area to be explored is underground, inside a tunnel or cistern or along a confined area such as a hillside.
Portable individual transmitting and receiving dipoles are useful in such cases. But the data must now be recorded point by point, usually by taking Polaroid photos. Targets of interest can be triangulated and mapped if they can be viewed from various aspect angles. Portable GPRs are well suited for discovering cavities and voids, and when soil attenuation values are low they can detect caves, tombs, or chambers one hundred feet or more in depth. Interpretation of GPR records of all types is unusually difficult, requiring operator skill and experience for satisfactory results.

Sound waves are not easily coupled into soils, except at very low frequencies (a few hertz, or cycles per second), but at higher frequencies sound waves can be used in rock or solid walls as a helpful diagnostic tool. Frequencies used for probing in bedrock or stone are generally 1,000 to 30,000 hertz (cycles per second). A coupling gel, or mud layer, is necessary to couple the seismic signal into and out of the transmitting and receiving transducers, and this makes field measurements somewhat time consuming unless only a few locations are to be surveyed. High-frequency sounding is especially useful for finding tombs and voids in areas of high radar signal absorption. For example, the Valley of the Kings in Egypt has very high radar attenuation, but the same limestone can be probed with high-frequency sound waves for distances well beyond 100 feet. Measuring the thickness of a wall or pillar is readily done with this method. High-frequency seismic sounding instruments are not presently commercially available, but can be custom built for about $10,000.

The earth's magnetic field is slightly disturbed by some kinds of archaeological anomalies such as fired clay pottery. The magnetic signals associated with archaeological features are very small and easily obscured by trash metals, power lines, nearby automobiles, and the like. Magnetometers are most suited for remote, isolated sites away from modern buildings and debris. Magnetometers cost from about $1500 to $10,000 and can be used in pairs (a "differential magnetometer") to subtract out all but the wanted signals. Modern magnetometers are sensitive to field changes of about 1 gamma; the earth's magnetic field intensity is of the order of 50,000 gammas. Magnetometry has been successfully used to locate imported stone at some well-known archaeological sites. Fired mud brick has a reasonably high magnetic anomaly, and of course ferrous materials, such as one might expect at an Iron Age or later site, give rise to very large magnetic anomalies.

Gravity is one of the weakest of all forces found in nature. Yet the earth's gravity field is very slightly altered by such features as subsurface voids or caves. Suitable gravity meters, known as "microgravimeters," cost on the order of $50,000 and require a very experienced, trained operator. Point-by-point measurements must be made, which may be time consuming. The data must be carefully corrected for such things as surface topography and diurnally varying "earth tides." For these reasons gravity surveys have been little used in archaeology to date.

Conventional aerial (stereo-pair) photos of a site are very useful, as has been suggested, since outlines and features not visible from the ground frequently show up in aerial photos.
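The differential magnetometer measurement mentioned two paragraphs above is, at its core, simple arithmetic: a stationary reference sensor tracks the regional field and its drift, and subtracting its reading from the roving sensor's reading leaves only the small local anomaly. The sketch below is a minimal illustration with made-up station readings; the roughly 50,000-gamma background is the figure quoted in the text.

# Differential magnetometer: subtract a stationary reference reading from the
# roving sensor reading so the regional field and diurnal drift cancel,
# leaving the small local anomaly (units: gammas, i.e., nanoteslas).

def anomaly(roving_reading, reference_reading):
    return roving_reading - reference_reading

# Hypothetical station readings along a traverse (background ~50,000 gammas).
roving    = [50012.4, 50013.1, 50047.8, 50011.9]
reference = [50010.0, 50010.2, 50010.1, 50010.3]

for station, (r, ref) in enumerate(zip(roving, reference)):
    print(f"station {station}: anomaly = {anomaly(r, ref):+.1f} gammas")
# Station 2 stands out (about +37.7 gammas), the kind of signature a buried
# fired-brick feature might produce.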
Thermal infra-red (IR) imagery requires a scanner, usually cooled by liquid nitrogen (instrument cost $15,000 to $50,000), but surface temperature differences of a small fraction of one degree can be measured. At night, radiation cooling of the ground is not uniform if there are subsurface features that impede or enhance heat flow. In addition to diurnal heating and cooling, seasonal heat-flow temperature changes can often be detected, providing information on deeper archaeological anomalies. Heat flow through rock and soil is very slow (rock is an excellent heat insulator), so infra-red measurements give information about temperatures near the surface, not about temperatures deep within the earth. In spite of the limitations, false-color images showing temperature contours can provide interesting clues for the archaeologist at some sites, especially if such measurements can be made carefully at periodic intervals through an entire year. The Temple Mount in Jerusalem is an ideal site for on-going thermal infra-red imaging studies, and Tuvia Sagiv, an architect from Tel Aviv, has already obtained some fascinating thermal IR images of the Temple Mount area. These measurements can all be made from a distance or from the air.

If an archaeological site is complex and important, and likely to be excavated for many field seasons, geophysical methods can be most useful since they are non-destructive and rapid. The archaeologist can hope to choose digging priorities based on survey findings. Some sites (monuments or parks) contain areas or buildings that cannot be disturbed at all, so geophysical sensing may provide the only means of studying the site. Advice from a geologist who is familiar with an area can be helpful also. A combination of geophysical methods can be helpful, as each method has its strengths and limitations.

Archaeology is a time-honored, exacting scientific discipline which provides us with some of our best information on human history and the past. It is to be hoped that more opportunities and sources of funding will develop so that modern geophysical methods can assist the archaeologist even more frequently than has been possible in recent years.

I am now retired from geophysical work. Contact International Radar Consultants, Inc. for expert geophysical help.

International Radar Consultants (Roger Vickers)
Sensors and Software, Inc.
GEM Advanced Magnetometers
Georadar Division - IDS Ingegneria dei Sistemi SpA
Advanced Geosciences Inc.
Geophysical Survey Systems (GSS)
Geovision Geophysical Services
Arkadia Links List to Archaeology and Remote Sensing
http://www.ldolphin.org/Geoarch.html
CFA Level 1 Microeconomics - Market Efficiency

The supply curve (see figure 3.3) represents the quantities of a particular good that producers are willing to supply at various price points. For any particular quantity, the height of the supply curve represents the minimum price that suppliers of a good must get in order to supply the additional unit. That minimum supply price must cover the increase in total costs, or marginal cost, of producing the additional unit. The opportunity cost represents the value of other goods that could have been produced with the resources used. Producers must receive a price at least equal to their opportunity cost.

Producer surplus is defined as the difference between what a producer actually receives (which will be the market price) for a product and the producer's minimum supply price (marginal cost) for that product. If a producer is willing to provide a unit of a good for $3.00, and actually gets $4.00, then the producer has $1.00 of producer surplus.

Consumer Surplus, Producer Surplus, and Equilibrium

We expect consumers to keep consuming additional units of a good until the marginal benefit no longer exceeds the price, or there is no longer an increase in consumer surplus. Producers will continue to provide additional units of a good up to the point where the market price no longer exceeds their minimum supply price. The marginal benefit for all people in a society can be described as the marginal social benefit. Similarly, the marginal cost for all producers of a good in a society can be described as the marginal social cost. At market equilibrium, the marginal social benefit of consuming an additional unit of a good is just equal to the marginal social cost of producing the additional unit. In figure 3.5 below, the triangle defined by the points P2PmQm represents consumer surplus, while the triangle defined by the points P1PmQm represents producer surplus.

Figure 3.5: Consumer and Producer Surplus

How Resources Move Toward Their Most Efficient Allocation

In economics, a market is efficient if the maximum amount of goods and services is being produced with a given level of resources, and if no additional output is possible without increasing the amount of inputs. Efficient markets ensure optimal resource utilization by allowing price to motivate independent actors in the economy. If buyers and sellers are free to choose how to allocate resources, prices will direct resources toward those who value them most and can utilize them most effectively. Suppose consumer preferences change so that good A is now more desired than good B. We would expect the price of good A to shift higher and the price of good B to shift lower. This in turn will induce the production of additional units of good A and the devotion of more input resources to good A, while similarly decreasing production of B and its associated input resources. In the real world today we have seen higher oil prices stimulate more drilling for oil and more investment in oil substitutes. The wage rates of mainframe programmers in the United States have decreased over the last several years in comparison to the year 2000, as there is less of a need for their services. The lower wage rates have induced more mainframe programmers to retrain themselves with other computer skills, or to leave the field.

Obstacles to achieving efficiency include:
· Price Ceilings/Floors - Sometimes governments impose price ceilings, which define a maximum price, or price floors, which define a minimum price.
Obstacles to achieving efficiency include:

· Price Ceilings/Floors - Sometimes governments impose price ceilings, which define a maximum price, or price floors, which define a minimum price. Effective price ceilings or floors prevent normal market equilibrium.

· Public Goods are goods available to everyone, even those who don't pay. Examples include police protection and public parks. One reason competitive markets don't produce the optimum amount of a public good is the "free-rider" problem: those who don't pay get a "free ride" with regard to the benefit.

· Externalities reflect costs and benefits not borne by the person or firm making the economic decision, which are instead imposed on or granted to others. Runoff from large cattle feedlots can damage nearby farms, and this potential cost may not be considered by feedlots when they look at their supply curve. A landowner who chooses not to develop her land may provide flood control that benefits several nearby homes, yet that benefit to others may not be taken into account when she decides whether to develop the land.

· Taxes lead to lower quantities produced, higher prices for buyers, and lower effective prices for sellers (see the numerical sketch at the end of this reading).

· Subsidies increase the quantity produced, lower prices for buyers, and increase seller prices.

· Quotas limit the quantity that can be produced.

· High transaction costs reduce the price that customers are willing to pay and increase supplier costs, leading to an equilibrium quantity that is lower than either party would desire absent the higher costs.

· Asymmetric information creates a perceived cost for buyers and sellers if they cannot adequately evaluate a proposed transaction. Drug companies can charge premium prices for pharmaceuticals due in part to the established evidence that the drug works. Auto makers entering new markets often have to offer lower prices and/or better warranties because customers do not have sufficient information about the new brands.

· Discrimination deprives market participants of the ability to conduct business at prices that would otherwise be acceptable to them. Businesses that discriminate against certain types of job-seekers may have to pay more for labor, while customers that discriminate against a business may have to pay more for goods.

· A monopoly means that only one firm can provide a certain good or service. A monopolist will charge a higher price and produce a lower quantity in comparison to a competitive market.

In the absence of the above-mentioned obstacles, a competitive market will use resources efficiently. Goods are produced up to the point where the marginal benefit is equal to the marginal cost, and the sum of consumer and producer surplus is maximized. Although price is the dominant means of allocating resources in a market economy, it is not the only way resources can be allocated. A command economy relies upon a central planning authority to allocate resources. Resources can also be allocated by majority rule (citizens vote on the desired allocation of resources), by lottery, or by force and theft.

The Fairness Principle, Utilitarianism, and the Symmetry Principle

Economists often like to examine the "fairness" of a situation or economic system. Ideas about fairness can be lumped into one of two categories:

· "Results" must be fair.

· "Rules" must be fair.

Utilitarianism, a moral philosophy developed in 18th- and 19th-century Great Britain, posits that an action is correct if it increases overall happiness for the performer of the act and for those affected by the act. Utilitarians argued that income should be transferred from the rich to the poor until complete equality was achieved.
One problem with utilitarianism is the tradeoff between fairness and efficiency. An effort to transfer wealth by heavily taxing rich people will decrease incentives for people to save money or work hard, which can lead to inefficient uses of capital and labor. Another source of inefficiency is the administrative cost of transferring money from the rich to the poor.

The symmetry principle is based on the intuitive idea that people in similar situations should be treated the same; from an economic perspective, we would like to achieve equality of opportunity. The symmetry principle therefore adheres to the viewpoint that "rules" must be fair.
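To make the Taxes bullet in the obstacles list above concrete, the sketch below imposes a hypothetical $2 per-unit tax on the same illustrative linear market used earlier (demand P = 10 - Q, supply P = 2 + Q). The particular split of the tax between buyers and sellers, and the size of the lost surplus, follow from those made-up curves rather than from anything in this reading.

```python
# Same hypothetical market as the earlier sketch: demand P = 10 - Q, supply P = 2 + Q.
# A $2 per-unit tax drives a wedge between what buyers pay and what sellers keep.
tax = 2.0

# Traded quantity solves 10 - Q = (2 + Q) + tax  =>  Q = 3 (down from 4 without the tax).
q_tax = (10 - 2 - tax) / 2
p_buyers = 10 - q_tax          # buyers now pay $7 (was $6)
p_sellers = p_buyers - tax     # sellers now keep $5 (was $6)

consumer_surplus = 0.5 * q_tax * (10 - p_buyers)    # $4.50, down from $8
producer_surplus = 0.5 * q_tax * (p_sellers - 2)    # $4.50, down from $8
tax_revenue = tax * q_tax                           # $6.00 collected by the government
deadweight_loss = 16.0 - (consumer_surplus + producer_surplus + tax_revenue)  # $1.00

print(q_tax, p_buyers, p_sellers, deadweight_loss)
```

The quantity traded falls, buyers pay more, sellers receive less, and part of the original surplus disappears entirely; that lost portion is the sense in which a tax moves the market away from the efficient allocation.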
http://www.investopedia.com/exam-guide/cfa-level-1/microeconomics/market-efficiency.asp
The Permanent Settlement had come into operation in 1793. The East India Company had fixed the revenue that each zamindar had to pay, and the estates of those who failed to pay were to be auctioned to recover the revenue. In those days Raja was a term often used to designate powerful zamindars. In 1797 there was an auction in Burdwan (present-day Bardhaman). It was a big public event. A number of mahals (estates) held by the Raja of Burdwan were being sold; since the raja had accumulated huge arrears, his estates had been put up for auction. In the same way, over 75 per cent of the zamindaris changed hands after the Permanent Settlement.

Failure of the Permanent Settlement - Reasons

In introducing the Permanent Settlement, British officials hoped to resolve the problems they had been facing since the conquest of Bengal. After a prolonged debate amongst Company officials led by Charles Cornwallis, the Permanent Settlement was made with the rajas and taluqdars of Bengal. They were now classified as zamindars, and they had to pay a revenue demand that was fixed in perpetuity. In terms of this definition, the zamindar was not a landowner in the village but a revenue collector of the state. Zamindars had several (sometimes as many as 400) villages under them. In Company calculations, the villages within one zamindari formed one revenue estate, and the Company fixed the total demand over the entire estate whose revenue the zamindar contracted to pay. But the system failed, for the reasons below.

1. The initial demands were very high. This was because it was felt that if the demand was fixed for all time to come, the Company would never be able to claim a share of increased income from land when prices rose and cultivation expanded. To minimise this anticipated loss, the Company pegged the revenue demand high, arguing that the burden on zamindars would gradually decline as agricultural production expanded and prices rose.

2. This high demand was imposed in the 1790s, a time when the prices of agricultural produce were depressed, making it difficult for the ryots to pay their dues to the zamindar.

3. The revenue was invariable, regardless of the harvest, and had to be paid punctually. In fact, according to the Sunset Law, if payment did not come in by sunset of the specified date, the zamindari was liable to be auctioned.

4. Rent collection was a perennial problem. Sometimes bad harvests and low prices made payment of dues difficult for the ryots. At other times ryots deliberately delayed payment. Rich ryots and village headmen – jotedars and mandals – were only too happy to see the zamindar in trouble.

5. Faced with an exorbitantly high revenue demand and possible auction of their estates, zamindars devised ways of surviving the pressures. New contexts produced new strategies. Fictitious sale was one such strategy, and it involved a series of manoeuvres. The Raja of Burdwan, for instance, first transferred some of his zamindari to his mother, since the Company had decreed that the property of women would not be taken over. Then, as a second move, his agents manipulated the auctions. When a part of the estate was auctioned, the zamindar’s men bought the property, outbidding other purchasers. Subsequently they refused to pay up the purchase money, so that the estate had to be resold. This process was repeated endlessly, exhausting the state and the other bidders at the auction. At last the estate was sold at a low price back to the zamindar.
6. When people from outside the zamindari bought an estate at an auction, they could not always take possession. At times their agents would be attacked by lathyals of the former zamindar. Sometimes even the ryots resisted the entry of outsiders: they felt bound to their own zamindar through a sense of loyalty and perceived him as a figure of authority.

By the beginning of the nineteenth century the depression in prices was over. Those who had survived the troubles of the 1790s thus consolidated their power, and rules of revenue payment were also made somewhat flexible. As a result, the zamindar’s power over the villages was strengthened. It was only during the Great Depression of the 1930s that the zamindars finally collapsed and the jotedars consolidated their power in the countryside.

The Fifth Report

The British Parliament passed a series of Acts in the late eighteenth century to regulate and control Company rule in India. It forced the Company to produce regular reports on the administration of India and appointed committees to enquire into the affairs of the Company. The Fifth Report was the fifth in this series of reports on the administration and activities of the East India Company in India. It ran to 1,002 pages, of which over 800 pages were appendices that reproduced petitions of zamindars and ryots, reports of collectors from different districts, statistical tables on revenue returns, and notes on the revenue and judicial administration of Bengal and Madras written by officials.

Charles Cornwallis was the commander of the British forces during the American War of Independence and the Governor General of Bengal when the Permanent Settlement was introduced there in 1793. Ryot was the term used to designate peasants in British records; ryots in Bengal did not always cultivate the land directly, but often leased it out to under-ryots. A lathyal, literally one who wields the lathi or stick, functioned as a strongman of the zamindar.

Based on NCERT Class 12 History, Part 3.
http://www.currentaffairsindia.info/2011/11/permanent-settlement-by-charles.html
English Civil War

The English Civil War (1642-1651) was a series of armed conflicts and political machinations between Parliamentarians and Royalists. The first (1642–1646) and second (1648–1649) civil wars pitted the supporters of King Charles I against the supporters of the Long Parliament, while the third war (1649–1651) saw fighting between supporters of King Charles II and supporters of the Rump Parliament. The Civil War ended with the Parliamentary victory at the Battle of Worcester on 3 September 1651.

The Civil War led to the trial and execution of Charles I, the exile of his son, Charles II, and the replacement of the English monarchy with, first, the Commonwealth of England (1649–1653) and then a Protectorate (1653–1659) under Oliver Cromwell's personal rule. The monopoly of the Church of England on Christian worship in England ended, with the victors consolidating the established Protestant Ascendancy in Ireland. Constitutionally, the wars established the precedent that a British monarch cannot govern without Parliament's consent, although this concept was established only with the Glorious Revolution later in the century.

The term English Civil War appears most commonly in the singular form, although historians often divide the conflict into two or three separate wars. Although the term describes events as impinging on England, from the outset the conflicts involved wars with, and civil wars within, both Scotland and Ireland; see Wars of the Three Kingdoms for an overview. Unlike other civil wars in England, which focused on who ruled, this war also concerned itself with the manner of governing Britain and Ireland. Historians sometimes refer to the English Civil War as the English Revolution, and works such as the 1911 Encyclopædia Britannica call it the Great Rebellion. Marxist historians such as Christopher Hill (1912–2003) have long favoured the term English Revolution.

The King's Rule

War broke out less than forty years after the death of Elizabeth I in 1603 ended her long reign. At the accession of Charles I in 1625, England and Scotland had both experienced relative peace, both internally and in their relations with each other, for as long as anyone could remember. Charles hoped to unite the kingdoms of England, Scotland and Ireland into a new single kingdom, fulfilling the dream of his father, James I of England (James VI of Scotland). Many English Parliamentarians had suspicions regarding such a move, because they feared that setting up a new kingdom might destroy the old English traditions which had bound the English monarchy. As Charles shared his father's position on the power of the crown (James had described kings as "little Gods on Earth", chosen by God to rule in accordance with the doctrine of the "Divine Right of Kings"), the suspicions of the Parliamentarians had some justification. Although pious and with little personal ambition, Charles expected outright loyalty in return for "just rule". He considered any questioning of his orders as, at best, insulting.
This trait, and a series of events, each seemingly minor on their own, led to a serious break between Charles and his English Parliament, and eventually to war.

Parliament in the English constitutional framework

Before the fighting, the Parliament of England did not have a large permanent role in the English system of government, functioning instead as a temporary advisory committee, summoned by the monarch whenever the Crown required additional tax revenue and subject to dissolution by the monarch at any time. Because responsibility for collecting taxes lay in the hands of the gentry, the English kings needed the help of that stratum of society in order to ensure the smooth collection of that revenue. If the gentry refused to collect the King's taxes, the Crown would lack any practical means with which to compel them. Parliaments allowed representatives of the gentry to meet, confer and send policy proposals to the monarch in the form of Bills. These representatives did not, however, have any means of forcing their will upon the king, except by withholding the financial means required to execute his plans.

Parliamentary concerns and the Petition of Right

One of the first events to cause concern about Charles I came with his marriage to a French Roman Catholic princess, Henrietta-Marie de Bourbon. The marriage occurred in 1625, right after Charles came to the throne. Charles' marriage raised the possibility that his children, including the heir to the throne, could grow up as Catholics, a frightening prospect to Protestant England. Charles also wanted to take part in the conflicts underway in Europe, then immersed in the Thirty Years' War (1618–1648). As ever, foreign wars required heavy expenditure, and the Crown could raise the necessary taxes only with Parliamentary consent (as described above). Charles experienced even more financial difficulty when his first Parliament refused to follow the tradition of giving him the right to collect customs duties for his entire reign, deciding instead to grant it for only a year at a time. Charles, meanwhile, pressed ahead with his European wars, deciding to send an expeditionary force to relieve the French Huguenots whom Royal French forces held besieged in La Rochelle. The royal favourite, George Villiers, the Duke of Buckingham, secured the command of the English force. Unfortunately for Charles and Buckingham, the relief expedition proved a fiasco (1627), and Parliament, already hostile to Buckingham for his monopoly on royal patronage, opened impeachment proceedings against him. Charles responded by dissolving Parliament. This move, while saving Buckingham, reinforced the impression that Charles wanted to avoid Parliamentary scrutiny of his ministers.

Having dissolved Parliament, and unable to raise money without it, the king assembled a new one in 1628. (The elected members included Oliver Cromwell.) The new Parliament drew up the Petition of Right, and Charles accepted it as a concession in order to get his subsidy. Amongst other things, the Petition referred to the Magna Carta.

Charles I avoided calling a Parliament for the next decade, a period known as the "Eleven Years' Tyranny" or "Charles's Personal Rule". During this period, Charles's lack of money determined policies. Unable to raise revenue through Parliament, and reluctant to convene it, he resorted to other means.
Thus, not observing often long-outdated conventions became, in some cases, a finable offence (for example, a failure to attend and to receive knighthood at Charles's coronation), with the fine paid to the Crown. He tried to raise revenue through the ship money tax, by exploiting a naval war-scare in 1635 and demanding that the inland English counties pay the tax for the Royal Navy. Established law supported this policy, but authorities had ignored it for centuries, and many regarded it as yet another extra-Parliamentary (and therefore illegal) tax. Some prominent men refused to pay ship money, arguing that the tax was illegal, but they lost in court, and the fines imposed on them for refusing to pay (and for standing against the tax's legality) aroused widespread indignation.

During the "Personal Rule," Charles aroused most antagonism through his religious measures: he believed in High Anglicanism, a sacramental version of the Church of England, theologically based upon Arminianism, a creed shared with his main political advisor, Archbishop William Laud. In 1633, Charles appointed Laud as Archbishop of Canterbury and started making the Church more ceremonial, replacing the wooden communion tables with stone altars. Puritans accused Laud of reintroducing Catholicism; when they complained, he had them arrested. In 1637 John Bastwick, Henry Burton, and William Prynne had their ears cut off for writing pamphlets attacking Laud's views, a rare penalty for gentlemen and one that aroused anger. Moreover, the Church authorities revived statutes passed in the time of Elizabeth I about church attendance, and fined Puritans for not attending Anglican church services.

Rebellion in Scotland

The end of Charles's independent governance came when he attempted to apply the same religious policies in Scotland. The Church of Scotland, reluctantly Episcopal in structure, had independent traditions. Charles, however, wanted one uniform Church throughout Britain, and introduced a new, High Anglican version of the English Book of Common Prayer to Scotland in the summer of 1637. This was violently resisted; a riot broke out in Edinburgh, which may have been started in a church by Jenny Geddes, and in February 1638 the Scots formulated their objections to royal policy in the National Covenant. This document took the form of a "loyal protest", rejecting all innovations not first having been tested by free parliaments and General Assemblies of the Church.

Before long, King Charles withdrew his prayer book and summoned a General Assembly of the Church of Scotland, in Glasgow, in November 1638. The General Assembly, affected by the contemporary radical mood, rejected the Prayer Book, then drastically declared unlawful the office of bishop. Charles demanded the acts of the Assembly be withdrawn; the Scots refused; both sides began raising armies. In the spring of 1639, King Charles I accompanied his forces to the Scottish border to end the rebellion known as the Bishops' War, but, after an inconclusive military campaign, he accepted the offered Scottish truce, the Pacification of Berwick. The truce proved temporary; a second war followed in the summer of 1640. This time, a Scots army defeated Charles's forces in the north, then captured Newcastle. Charles eventually agreed not to interfere with Scotland's religion, and paid the Scots' war expenses.

Recall of the English Parliament

Charles needed to suppress the rebellion in Scotland.
He had insufficient funds, however, and had perforce to seek money from a newly-elected English Parliament in 1640. The majority faction in the new Parliament, led by John Pym, took this appeal for money as an opportunity to discuss grievances against the Crown, and opposed the idea of an English invasion of Scotland. Charles took exception to this lèse-majesté (offence against the ruler) and dissolved the Parliament after only a few weeks; hence the name "the Short Parliament". Without Parliament's support, Charles attacked Scotland again, breaking the truce at Berwick, and suffered a comprehensive defeat. The Scots then seized the opportunity and invaded England, occupying Northumberland and Durham. Meanwhile, another of Charles' chief advisers, Thomas Wentworth, 1st Viscount Wentworth, had risen to the role of Lord Deputy of Ireland in 1632 and brought in much-needed revenue for Charles by persuading the Irish Catholic gentry to pay new taxes in return for promised religious concessions. In 1639 Charles recalled Wentworth to England, and in 1640 made him Earl of Strafford, attempting to have him work his magic again in Scotland. This time he proved less successful, and the English forces fled the field in their second encounter with the Scots in 1640. Almost the entirety of Northern England was occupied, and Charles was forced to pay £850 per day to keep the Scots from advancing. If he did not, they would "take" the money by pillaging and burning the cities and towns of Northern England. All this put Charles in a desperate financial position. As King of Scots, he had to find money to pay the Scottish army in England; as King of England, to find money to pay and equip an English army to defend England. His means of raising English revenue without an English Parliament fell critically short of achieving this. Against this backdrop, and according to advice from the Magnum Concilium (the House of Lords, but without the Commons, so not a Parliament), Charles finally bowed to pressure and summoned another English Parliament in November 1640. The Long Parliament The new Parliament proved even more hostile to Charles than its predecessor. It immediately began to discuss grievances against Charles and his Government, and with Pym and Hampden (of ship money fame) in the lead, took the opportunity presented by the King's troubles to force various reforming measures upon him. The legislators passed a law which stated that a new Parliament should convene at least once every three years — without the King's summons, if necessary. Other laws passed by the Parliament made it illegal for the king to impose taxes without Parliamentary consent, and later, gave Parliament control over the king's ministers. Finally, the Parliament passed a law forbidding the King to dissolve it without its consent, even if the three years were up. Ever since, this Parliament has been known as the "Long Parliament". However, Parliament did attempt to avert conflict by requiring all adults to sign the Protestation, an oath of allegiance to Charles. In early 1641 Parliament had Thomas Wentworth, 1st Earl of Strafford, arrested and sent to the Tower of London on a charge of treason. John Pym claimed that Wentworth's statements of readiness to campaign against "the kingdom" were aimed in fact at England itself. Unable to prove the case in court, the House of Commons, led by Pym and Henry Vane, resorted to a Bill of Attainder. 
Unlike a guilty finding in a court case, attainder did not require a legal burden of proof, but it did require the king's approval. Charles, still incensed over the Commons' handling of Buckingham, refused. Wentworth himself, hoping to head off the war he saw looming, wrote to the king and asked him to reconsider. Charles, fearing for the safety of his family, relented and gave his assent, and Wentworth's execution took place in May 1641. Instead of saving the country from war, Wentworth's sacrifice in fact doomed it to one. Within months, the Irish Catholics, fearing a resurgence of Protestant power, struck first, and all Ireland soon descended into chaos. Rumours circulated that the King supported the Irish, and Puritan members of the Commons soon started murmuring that this exemplified the fate that Charles had in store for them all.

In early January 1642, accompanied by 400 soldiers, Charles attempted to arrest five members of the House of Commons on a charge of treason. This attempt failed. When the troops marched into Parliament, Charles inquired of William Lenthall, the Speaker, as to the whereabouts of the five. Lenthall replied, "May it please your Majesty, I have neither eyes to see nor tongue to speak in this place but as the House is pleased to direct me, whose servant I am here." In other words, the Speaker proclaimed himself a servant of Parliament, rather than of the King.

In the summer of 1642 these national troubles helped to polarise opinion, ending indecision about which side to support or what action to take. Opposition to Charles also arose owing to many local grievances. For example, the imposition of drainage schemes in The Fens negatively affected the livelihood of thousands of people after the King awarded a number of drainage contracts. Many regarded the King as worse than insensitive, and this played a role in bringing a large part of eastern England into Parliament’s camp. This sentiment brought with it people such as the Earl of Manchester and Oliver Cromwell, each a notable wartime adversary of the King. Conversely, one of the leading drainage contractors, the Earl of Lindsey, was to die fighting for the King at the Battle of Edgehill.

The First English Civil War

In early January 1642, a few days after his failure to capture five members of the House of Commons, and fearing for his own personal safety and for that of his family and retinue, Charles left the London area. Further negotiations by frequent correspondence between the King and the Long Parliament through to early summer proved fruitless. As the summer progressed, cities and towns declared their sympathies for one faction or the other: for example, the garrison of Portsmouth under the command of Sir George Goring declared for the King, but when Charles tried to acquire arms for his cause from Kingston upon Hull, the depository for the weapons used in the previous Scottish campaigns, Sir John Hotham, the military governor appointed by Parliament in January, initially refused to let Charles enter Hull, and when Charles returned with more men, drove them off. Charles issued a warrant for Hotham to be arrested as a traitor but was powerless to enforce it. Throughout the summer months tensions rose, and there was brawling in a number of places, with the first death of the conflict taking place in Manchester.

At the outset of the conflict, much of the country remained neutral, though the Royal Navy and most English cities favoured Parliament, while the King found considerable support in rural communities. Historians estimate that between them, both sides had only about 15,000 men.
However, the war quickly spread and eventually involved every level of society. Many areas attempted to remain neutral, some formed bands of Clubmen to protect their localities against the worst excesses of the armies of both sides, but most found it impossible to withstand both the King and Parliament. On one side, the King and his supporters thought that they fought for traditional government in Church and state. On the other, most supporters of the Parliamentary cause initially took up arms to defend what they thought of as the traditional balance of government in Church and state, which the bad advice the King had received from his advisers had undermined before and during the "Eleven Years' Tyranny". The views of the Members of Parliament ranged from unquestioning support of the King — at one point during the First Civil War, more members of the Commons and Lords gathered in the King's Oxford Parliament than at Westminster — through to radicals, who wanted major reforms in favour of religious independence and the redistribution of power at the national level. After the debacle at Hull, Charles moved on to Nottingham, where on 22 August 1642, he raised the royal standard. When he raised his standard, Charles had with him about 2,000 cavalry and a small number of Yorkshire infantry-men, and using the archaic system of a Commission of Array, Charles' supporters started to build a larger army around the standard. Charles moved in a south-westerly direction, first to Stafford, and then on to Shrewsbury, because the support for his cause seemed particularly strong in the Severn valley area and in North Wales. While passing through Wellington, in what became known as the " Wellington Declaration", he declared that he would uphold the "Protestant religion, the laws of England, and the liberty of Parliament". The Parliamentarians who opposed the King had not remained passive during this pre-war period. As in the case of Kingston upon Hull they had taken measures to secure strategic towns and cities, by appointing men sympathetic to their cause, and on 9 June they had voted to raise an army of 10,000 volunteers, appointing Robert Devereux, 3rd Earl of Essex commander three days later. He received orders "to rescue His Majesty's person, and the persons of the Prince [of Wales] and the Duke of York out of the hands of those desperate persons who were about them". The Lords Lieutenant, whom Parliament appointed, used the Militia Ordinance to order the militia to join Essex's army. Two weeks after the King had raised his standard at Nottingham, Essex led his army north towards Northampton, picking up support along the way (including a detachment of Cambridgeshire cavalry raised and commanded by Oliver Cromwell). By the middle of September Essex's forces had grown to 21,000 infantry and 4200 cavalry and dragoons. On 14 September he moved his army to Coventry and then to the north of the Cotswolds, a strategy which placed his army between the Royalists and London. With the size of both armies now in the tens of thousands, and only Worcestershire between them, it was inevitable that cavalry reconnaissance units would sooner or later meet. 
This happened in the first major skirmish of the Civil War, when a cavalry troop of about 1,000 Royalists commanded by Prince Rupert, a German nephew of the King and one of the outstanding cavalry commanders of the war, defeated a Parliamentary cavalry detachment under the command of Colonel John Brown in the Battle of Powick Bridge, at a bridge across the River Teme close to Worcester. Rupert withdrew to Shrewsbury, where, a council-of-war discussed two courses of action: whether to advance towards Essex's new position near Worcester, or to march along the now opened road towards London. The Council decided to take the London route, but not to avoid a battle, for the Royalist generals wanted to fight Essex before he grew too strong, and the temper of both sides made it impossible to postpone the decision. In the Earl of Clarendon's words: "it was considered more counsellable to march towards London, it being morally sure that Essex would put himself in their way". Accordingly, the army left Shrewsbury on 12 October, gaining two days' start on the enemy, and moved south-east. This had the desired effect, as it forced Essex to move to intercept them. The first pitched battle of the war, fought at Edgehill on 23 October 1642, proved inconclusive, and both the Royalists and Parliamentarians claimed it as a victory. The second field action of the war, the stand-off at Turnham Green, saw Charles forced to withdraw to Oxford. This city would serve as his base for the remainder of the war. In 1643 the Royalist forces won at Adwalton Moor, and gained control of most of Yorkshire. In the Midlands, a Parliamentary force under Sir John Gell, 1st Baronet besieged and captured the cathedral city of Lichfield, after the death of the original commander, Lord Brooke. This group subsequently joined forces with Sir John Brereton to fight the inconclusive Battle of Hopton Heath ( 19 March 1643), where the Royalist commander, the Earl of Northampton, was killed. Subsequent battles in the west of England at Lansdowne and at Roundway Down also went to the Royalists. Prince Rupert could then take Bristol. In the same year, Oliver Cromwell formed his troop of " Ironsides", a disciplined unit that demonstrated his military leadership-ability. With their assistance, he won a victory at the Battle of Gainsborough in July. In general, the early part of the war went well for the Royalists. The turning-point came in the late summer and early autumn of 1643, when the Earl of Essex's army forced the king to raise the siege of Gloucester and then brushed the Royalist army aside at the First Battle of Newbury ( 20 September 1643), in order to return triumphantly to London. Other Parliamentarian forces won the Battle of Winceby, giving them control of Lincoln. Political manoeuvering to gain an advantage in numbers led Charles to negotiate a ceasefire in Ireland, freeing up English troops to fight on the Royalist side in England, while Parliament offered concessions to the Scots in return for aid and assistance. With the help of the Scots, Parliament won at Marston Moor ( 2 July 1644), gaining York and the north of England. Cromwell's conduct in this battle proved decisive, and demonstrated his potential as both a political and an important military leader. The defeat at the Battle of Lostwithiel in Cornwall, however, marked a serious reverse for Parliament in the south-west of England. Subsequent fighting around Newbury ( 27 October 1644), though tactically indecisive, strategically gave another check to Parliament. 
In 1645 Parliament reaffirmed its determination to fight the war to a finish. It passed the Self-denying Ordinance, by which all members of either House of Parliament laid down their commands, and re-organized its main forces into the New Model Army ("Army"), under the command of Sir Thomas Fairfax, with Cromwell as his second-in-command and Lieutenant-General of Horse. In two decisive engagements — the Battle of Naseby on 14 June and the Battle of Langport on 10 July — the Parliamentarians effectively destroyed Charles' armies. In the remains of his English realm Charles attempted to recover a stable base of support by consolidating the Midlands. He began to form an axis between Oxford and Newark on Trent in Nottinghamshire. Those towns had become fortresses and showed more reliable loyalty to him than to others. He took Leicester, which lies between them, but found his resources exhausted. Having little opportunity to replenish them, in May 1646 he sought shelter with a Scottish army at Southwell in Nottinghamshire. This marked the end of the First English Civil War. The Second English Civil War Charles I took advantage of the deflection of attention away from himself to negotiate a new agreement with the Scots, again promising church reform, on 28 December 1647. Although Charles himself remained a prisoner, this agreement led inexorably to the Second Civil War. A series of Royalist uprisings throughout England and a Scottish invasion occurred in the summer of 1648. Forces loyal to Parliament put down most of the uprisings in England after little more than skirmishes, but uprisings in Kent, Essex and Cumberland, the rebellion in Wales, and the Scottish invasion involved the fighting of pitched battles and prolonged sieges. In the spring of 1648 unpaid Parliamentarian troops in Wales changed sides. Colonel Thomas Horton defeated the Royalist rebels at the Battle of St Fagans ( 8 May) and the rebel leaders surrendered to Cromwell on 11 July after the protracted two-month siege of Pembroke. Sir Thomas Fairfax defeated a Royalist uprising in Kent at the Battle of Maidstone on 24 June. Fairfax, after his success at Maidstone and the pacification of Kent, turned northward to reduce Essex, where, under their ardent, experienced and popular leader Sir Charles Lucas, the Royalists had taken up arms in great numbers. Fairfax soon drove the enemy into Colchester, but his first attack on the town met with a repulse and he had to settle down to a long siege. In the North of England, Major-General John Lambert fought a very successful campaign against a number of Royalist uprisings — the largest that of Sir Marmaduke Langdale in Cumberland. Thanks to Lambert's successes, the Scottish commander, the Duke of Hamilton, had perforce to take the western route through Carlisle in his pro-Royalist Scottish invasion of England. The Parliamentarians under Cromwell engaged the Scots at the Battle of Preston ( 17 August – 19 August). The battle took place largely at Walton-le-Dale near Preston in Lancashire, and resulted in a victory by the troops of Cromwell over the Royalists and Scots commanded by Hamilton. This Parliamentarian victory marked the end of the Second English Civil War. Nearly all the Royalists who had fought in the First Civil War had given their parole not to bear arms against the Parliament, and many honourable Royalists, like Lord Astley, refused to break their word by taking any part in the second war. 
So the victors in the Second Civil War showed little mercy to those who had brought war into the land again. On the evening of the surrender of Colchester, Parliamentarians had Sir Charles Lucas and Sir George Lisle shot. Parliamentary authorities sentenced the leaders of the Welsh rebels, Major-General Rowland Laugharne, Colonel John Poyer and Colonel Rice Powel to death, but executed Poyer alone ( 25 April 1649), having selected him by lot. Of five prominent Royalist peers who had fallen into the hands of Parliament, three, the Duke of Hamilton, the Earl of Holland, and Lord Capel, one of the Colchester prisoners and a man of high character, were beheaded at Westminster on 9 March. Trial of Charles I for treason The betrayal by Charles caused Parliament to debate whether to return the King to power at all. Those who still supported Charles' place on the throne tried once more to negotiate with him. Furious that Parliament continued to countenance Charles as a ruler, the Army marched on Parliament and conducted " Pride's Purge" (named after the commanding officer of the operation, Thomas Pride) in December 1648. Troops arrested 45 Members of Parliament (MPs) and kept 146 out of the chamber. They allowed only 75 Members in, and then only at the Army's bidding. This Rump Parliament received orders to set up, in the name of the people of England, a High Court of Justice for the trial of Charles I for treason. At the end of the trial the 59 Commissioners (judges) found Charles I guilty of high treason, as a "tyrant, traitor, murderer and public enemy". His beheading took place on a scaffold in front of the Banqueting House of the Palace of Whitehall on 30 January 1649. (After the Restoration in 1660, Charles II executed the surviving regicides not living in exile or sentenced them to life imprisonment.) The Third English Civil War Ireland had known continuous war since the rebellion of 1641, with most of the island controlled by the Irish Confederates. Increasingly threatened by the armies of the English Parliament after Charles I's arrest in 1648, the Confederates signed a treaty of alliance with the English Royalists. The joint Royalist and Confederate forces under the Duke of Ormonde attempted to eliminate the Parliamentary army holding Dublin, but their opponents routed them at the Battle of Rathmines ( 2 August 1649). As the former Member of Parliament Admiral Robert Blake blockaded Prince Rupert's fleet in Kinsale, Oliver Cromwell could land at Dublin on 15 August 1649 with an army to quell the Royalist alliance in Ireland. Cromwell's suppression of the Royalists in Ireland during 1649 still has a strong resonance for many Irish people. After the siege of Drogheda, the massacre of nearly 3,500 people — comprising around 2,700 Royalist soldiers and all the men in the town carrying arms, including civilians, prisoners, and Catholic priests — became one of the historical memories that has driven Irish-English and Catholic-Protestant strife during the last three centuries. However, the massacre has significance mainly as a symbol of the Irish perception of Cromwellian cruelty, as far more people died in the subsequent guerrilla and scorched-earth fighting in the country than at infamous massacres such as Drogheda and Wexford. The Parliamentarian conquest of Ireland ground on for another four years until 1653, when the last Irish Confederate and Royalist troops surrendered. 
Historians have estimated that around 30% of Ireland's population either died or had gone into exile by the end of the wars. The victors confiscated almost all Irish Catholic-owned land in the wake of the conquest and distributed it to the Parliament's creditors, to the Parliamentary soldiers who served in Ireland, and to English people who had settled there before the war.

The execution of Charles I altered the dynamics of the Civil War in Scotland, which had raged between Royalists and Covenanters since 1644. By 1649, the struggle had left the Royalists there in disarray, and their erstwhile leader, the Marquess of Montrose, had gone into exile. At first, Charles II encouraged Montrose to raise a Highland army to fight on the Royalist side. However, when the Scottish Covenanters (who did not agree with the execution of Charles I and who feared for the future of Presbyterianism and Scottish independence under the new Commonwealth) offered him the crown of Scotland, Charles abandoned Montrose to his enemies. However, Montrose, who had raised a mercenary force in Norway, had already landed and could not abandon the fight. He did not succeed in raising many Highland clans, and the Covenanters defeated his army at the Battle of Carbisdale in Ross-shire on 27 April 1650. The victors captured Montrose shortly afterwards and took him to Edinburgh. On 20 May the Scottish Parliament sentenced him to death and had him hanged the next day.

Charles II landed in Scotland at Garmouth in Morayshire on 23 June 1650 and signed the 1638 National Covenant and the 1643 Solemn League and Covenant immediately after coming ashore. With his original Scottish Royalist followers and his new Covenanter allies, King Charles II became the greatest threat facing the new English republic. In response to the threat, Cromwell left some of his lieutenants in Ireland to continue the suppression of the Irish Royalists and returned to England. He arrived in Scotland on 22 July 1650 and proceeded to lay siege to Edinburgh. By the end of August disease and a shortage of supplies had reduced his army, and he had to order a retreat towards his base at Dunbar. A Scottish army, assembled under the command of David Leslie, tried to block the retreat, but Cromwell defeated them at the Battle of Dunbar on 3 September. Cromwell's army then took Edinburgh, and by the end of the year his army had occupied much of southern Scotland.

In July 1651, Cromwell's forces crossed the Firth of Forth into Fife and defeated the Scots at the Battle of Inverkeithing (20 July 1651). The New Model Army advanced towards Perth, which allowed Charles, at the head of the Scottish army, to move south into England. Cromwell followed Charles into England, leaving George Monck to finish the campaign in Scotland. Monck took Stirling on 14 August and Dundee on 1 September. The next year, 1652, saw the mopping up of the remnants of Royalist resistance, and under the terms of the "Tender of Union", the Scots received 30 seats in a united Parliament in London, with General Monck appointed as the military governor of Scotland.

Although Cromwell's New Model Army had defeated a Scottish army at Dunbar, Cromwell could not prevent Charles II from marching from Scotland deep into England at the head of another Royalist army. The Royalists marched to the west of England because English Royalist sympathies were strongest in that area, but although some English Royalists joined the army, they came in far fewer numbers than Charles and his Scottish supporters had hoped.
Cromwell finally engaged the new king at Worcester on 3 September 1651, and defeated him. Charles II escaped, via safe houses and a famous oak tree, to France, ending the civil wars.

During the course of the Wars the Parliamentarians established a number of successive committees to oversee the war effort. The first of these, the Committee of Safety, set up in July 1642, comprised 15 Members of Parliament. Following the Anglo-Scottish alliance against the Royalists, the Committee of Both Kingdoms replaced the Committee of Safety between 1644 and 1648. Parliament dissolved the Committee of Both Kingdoms when the alliance ended, but its English members continued to meet and became known as the Derby House Committee. A second Committee of Safety then replaced that committee.

As usual in wars of this era, disease caused more deaths than combat. There are no accurate figures for these periods, and it is not possible to give a precise overall figure for those killed in battle, for those who died from disease, or even for a natural decline in population. Figures for casualties during this period are unreliable, but some attempt has been made to provide rough estimates. In England, a conservative estimate is that roughly 100,000 people died from war-related disease during the three civil wars. Historical records count 84,830 casualties of the wars themselves. Counting in accidents and the two Bishops' wars, an estimate of 190,000 deaths is reached.

Figures for Scotland are less reliable and should be treated with greater caution. Casualties include the deaths of prisoners of war in conditions that accelerated their deaths, with estimates of 10,000 prisoners not surviving or not returning home (8,000 captured during and immediately after the Battle of Worcester were deported to New England, Bermuda and the West Indies to work for landowners as indentured labourers). There are no figures to calculate how many died from war-related diseases, but if the same ratio of disease to battle deaths from the English figures is applied to the Scottish figures, a not unreasonable estimate of 60,000 people is achieved.

Figures for Ireland are described as "miracles of conjecture". Certainly the devastation inflicted on Ireland was immense, with the best estimate provided by Sir William Petty, the father of English demography. Although Petty's figures are the best available, they are still acknowledged as being tentative. They do not include the estimated 40,000 driven into exile, some of whom served as soldiers in European continental armies, while others were sold as indentured servants to New England and the West Indies. Many of those sold to landowners in New England eventually prospered, but many of those sold to landowners in the West Indies were worked to death. Petty estimates that 112,000 Protestants were killed through plague, war, and famine, and that 504,000 Catholics were killed, giving an estimated total of 616,000. These estimates indicate that England suffered a 3.7% loss of population, Scotland a loss of 6%, while Ireland suffered a loss of 41% of its population. Putting these numbers into the context of other catastrophes helps to convey the devastation to Ireland in particular: the Great Hunger of 1845-1852 resulted in a loss of 16% of the population, while during the Second World War the population of the Soviet Union fell by 16%.

Ordinary people took advantage of the dislocation of civil society during the 1640s to derive advantages for themselves.
The contemporary guild democracy movement won its greatest successes among London's transport workers, notably the Thames watermen. Rural communities seized timber and other resources on the sequestrated estates of royalists and catholics, and on the estates of the royal family and the church hierarchy. Some communities improved their conditions of tenure on such estates. The old status quo began a retrenchment after the end of the main civil war in 1646, and more especially after the restoration of monarchy in 1660. But some gains were long-term. The democratic element introduced in the watermen's company in 1642, for example, survived, with vicissitudes, until 1827. The wars left England, Scotland, and Ireland among the few countries in Europe without a monarch. In the wake of victory, many of the ideals (and many of the idealists) became sidelined. The republican government of the Commonwealth of England ruled England (and later all of Scotland and Ireland) from 1649 to 1653 and from 1659 to 1660. Between the two periods, and due to in-fighting amongst various factions in Parliament, Oliver Cromwell ruled over the Protectorate as Lord Protector (effectively a military dictator) until his death in 1658. Upon his death, Oliver Cromwell's son Richard became Lord Protector, but the Army had little confidence in him. After seven months the Army removed Richard, and in May 1659 it re-installed the Rump. However, since the Rump Parliament acted as though nothing had changed since 1653 and as though it could treat the Army as it liked, military force shortly afterwards dissolved this, as well. After the second dissolution of the Rump, in October 1659, the prospect of a total descent into anarchy loomed as the Army's pretence of unity finally dissolved into factions. Into this atmosphere General George Monck, Governor of Scotland under the Cromwells, marched south with his army from Scotland. On 4 April 1660, in the Declaration of Breda, Charles II made known the conditions of his acceptance of the Crown of England. Monck organised the Convention Parliament, which met for the first time on 25 April 1660. On 8 May 1660, it declared that King Charles II had reigned as the lawful monarch since the execution of Charles I in January 1649. Charles returned from exile on 23 May 1660. On 29 May 1660, the populace in London acclaimed him as king. His coronation took place at Westminster Abbey on 23 April 1661. These events became known as the English Restoration. Although the monarchy was restored, it was still only with the consent of Parliament; therefore, the civil wars effectively set England and Scotland on course to adopt a parliamentary monarchy form of government. This system would result in the outcome that the future Kingdom of Great Britain, formed in 1707 under the Acts of Union, would manage to forestall the kind of often-bloody revolution, typical of European republican movements that followed the Jacobin revolution in 18th century France and the later success of Napoleon, which generally resulted in the total abolition of monarchy. Specifically, future monarchs became wary of pushing Parliament too hard, and Parliament effectively chose the line of royal succession in 1688 with the Glorious Revolution and in the 1701 Act of Settlement. After the Restoration, Parliament's factions became political parties (later becoming the Tories and Whigs) with competing views and varying abilities to influence the decisions of their monarchs. 
Theories relating to the English Civil War

In the early decades of the 20th century the Whig school was the dominant theoretical view. Whig historians explained the Civil War as resulting from a centuries-long struggle between Parliament (especially the House of Commons) and the Monarchy, with Parliament defending the traditional rights of Englishmen, while the Stuart monarchy continually attempted to expand its right to dictate law arbitrarily. The most important Whig historian, S.R. Gardiner, popularized the English Civil War as a 'Puritan Revolution': challenging the repressive Stuart Church, and preparing the way for religious toleration in the Restoration. Thus, Puritanism was the natural ally of a people preserving their traditional rights against arbitrary monarchical power.

The Whig view was challenged and largely superseded by the Marxist school, which became popular in the 1940s, and which interpreted the English Civil War as a bourgeois revolution. According to Marxist historian Christopher Hill: "The Civil War was a class war, wherein on the side of reaction stood the landed aristocracy and its ally, the established Church, and on the other side stood the trading and industrial classes in town and countryside . . . the yeomen and progressive gentry, and . . . wider masses of the population whenever they were able by free discussion to understand what the struggle was really about." In English history, on this view, the Civil War occurred when the wealthy middle class, already socially powerful, had eliminated the outmoded medieval system of English government. Moreover, like the Whigs, the Marxists found a role for religion: as a moral system, Puritanism ideally suited the bourgeois class, so Marxists identified Puritans as inherently bourgeois.

In the 1970s, a new generation of historians, who would become known as Revisionists, challenged both the Whig and the Marxist theories. In 1973, a group of revisionist historians published the anthology The Origins of the English Civil War (Conrad Russell, ed.). These historians disliked both Whig and Marxist explanations of the Civil War as long-term socio-economic trends in English society, producing work focused on the minutiae of the years immediately preceding the civil war, thereby returning to the contingency-based historiography of Clarendon's famous contemporary history, History of the Rebellion and Civil Wars in England, which demonstrated that factional war-allegiance patterns did not fit either Whig or Marxist history. Puritans, for example, did not necessarily ally themselves with Parliamentarians, nor did they identify as bourgeois. On the other hand, many bourgeois fought for the King, whereas many landed aristocrats supported Parliament. Thus, revisionist historians have discredited much of the Whig and Marxist interpretation of the English Civil War.

Jane Ohlmeyer discarded and replaced the historical title "English Civil War" with the titles the "Wars of the Three Kingdoms" and the "British Civil Wars", positing that the civil war in England cannot be understood in isolation from events in other parts of Great Britain and Ireland; King Charles I remains crucial, not just as King of England, but also because of his relationship with the peoples of his other realms. For example, the wars began when King Charles I tried imposing an Anglican Prayer Book upon Scotland, and when this was met with resistance from the Covenanters, he needed an army to impose his will. However, this forced him to call an English Parliament to raise new taxes to pay for the army.
The English Parliaments were not willing to grant Charles the revenue he needed to pay for the Scottish expeditionary army unless he addressed their grievances. By the early 1640s, Charles was left in a state of near permanent crisis management; often he was not willing to concede enough ground to any one faction to neutralise the threat, and in some circumstances to do so would only antagonise another faction. For example, Charles finally agreed upon terms with the Covenanters in August 1641, but although this might have weakened the position of the English Parliament, the Irish Rebellion of 1641 broke out in October 1641, largely negating the political advantage he had obtained by relieving himself of the cost of the Scottish invasion.
http://schools-wikipedia.org/wp/e/English_Civil_War.htm
Caribbean Islands

European settlements in the Caribbean began with Christopher Columbus. Carrying an elaborate feudal commission that made him perpetual governor of all lands discovered and gave him a percentage of all trade conducted, Columbus set sail in September 1492, determined to find a faster, shorter way to China and Japan. He planned to set up a trading-post empire, modeled after the successful Portuguese venture along the West African coast. His aim was to establish direct commercial relations with the producers of spices and other luxuries of the fabled East, thereby cutting out the Arab middlemen who had monopolized trade since the capture of Constantinople in 1453. He also planned to link up with the lost Christians of Abyssinia, who were reputed to have great quantities of gold--a commodity in great demand in Europe. Finally, as a good Christian, Columbus wanted to spread Christianity to new peoples.

Columbus, of course, did not find the East. Nevertheless, he called the peoples he met "Indians," and, because he had sailed west, referred to the region he found as the "West Indies." However, dreams of a trading-post empire collapsed in the face of real Caribbean life. The Indians, although initially hospitable in most cases, simply did not have gold and trade commodities for the European market. In all, Columbus made four voyages of exploration between 1492 and 1502, failing to find great quantities of gold, Christians, or the courts of the fabled khans described by Marco Polo. After 1499, small amounts of placer gold were discovered on Hispaniola, but by that time local challenges to his governorship were mounting, and his demonstrated lack of administrative skills made matters worse. Even more disappointing, he returned to Spain in 1502 to find that his extensive feudal authority in the New World was rapidly being taken away by his monarchs.

Columbus inadvertently started a small settlement on the north coast of Hispaniola when his flagship, the Santa Maria, wrecked off the Môle St-Nicolas on his first voyage. When he returned a year later, no trace of the settlement remained--and the former welcome and hospitality of the Indians had changed to suspicion and fear. The first proper European settlement in the Caribbean began when Nicolás de Ovando, a faithful soldier from western Spain, settled about 2,500 Spanish colonists in eastern Hispaniola in 1502. Unlike Columbus' earlier settlements, this group was an organized cross-section of Spanish society brought with the intention of developing the Indies economically and expanding Spanish political, religious, and administrative influence. In its religious and military motivation, it continued the reconquista (reconquest), which had expelled the Moors from Granada and the rest of southern Spain.

From this base in Santo Domingo, as the new colony was called, the Spanish quickly fanned out throughout the Caribbean and onto the mainland. Jamaica was settled in 1509 and Trinidad the following year. By 1511 Spanish explorers had established themselves as far as Florida. However, in the eastern Caribbean, the Caribs resisted the penetration of Europeans until well into the seventeenth century and succumbed only in the eighteenth century. With the conquest of Mexico in 1519 and the subsequent discovery of gold there, interest in working the gold deposits of the islands decreased.
Moreover, by that time the Indian population of the Caribbean had dwindled considerably, creating a scarcity of workers for the mines and pearl fisheries. In 1518 the first African slaves, called ladinos because they had lived in Spain and spoke the Castilian language, were introduced to the Caribbean to help mitigate the labor shortage. The Spanish administrative structure that prevailed for the 132 years of Spanish monopoly in the Caribbean was simple. At the imperial level were two central agencies, the Casa de Contratación, or House of Trade, which licensed all ships sailing to or returning from the Indies and supervised commerce, and the Consejo de Indias, the royal Advisory Council, which attended to imperial legislation. At the local level in the Caribbean were the governors, appointed by the monarchs of Castile, who supervised local municipal councils. The governors were regulated by audiencias, or appellate courts. A parallel structure regulated the religious organizations. Despite the theoretical hierarchy and clear divisions of authority, in practice each agency reported directly to the monarch. As set out in the original instructions to Ovando in 1502, the Spanish New World was to be orthodox and unified under the Roman Catholic religion and Castilian and Spanish in culture and nationality. Moors, Jews, recent converts to Roman Catholicism, Protestants, and gypsies were legally excluded from sailing to the Indies, although this exclusiveness could not be maintained and was frequently violated. By the early seventeenth century, Spain's European enemies, no longer disunited and internally weak, were beginning to breach the perimeters of Spain's American empire. The French and the English established trading forts along the St. Lawrence and the Hudson Rivers in North America. These were followed by permanent settlements on the mid-Atlantic coast (Jamestown) and in New England (Massachusetts Bay colony). Between 1595 and 1620, the English, French, and Dutch made many unsuccessful attempts to settle along the Guiana coastlands of South America. The Dutch finally prevailed, with one permanent colony along the Essequibo River in 1616, and another, in 1624, along the neighboring Berbice River. As in North America, initial loss of life in the colonies was discouragingly high. In 1624 the English and French gave up in the Guianas and jointly created a colony on St. Kitts in the northern Leeward Islands. At that time, St. Kitts was occupied only by Caribs. With the Spanish deeply involved in the Thirty Years War in Europe, conditions were propitious for colonial exploits in what until then had been reluctantly conceded to be a Spanish domain. In 1621, the Dutch began to move aggressively against Spanish territory in the Americas--including Brazil, temporarily under Spanish control between 1580 and 1640. In the Caribbean, they joined the English in settling St. Croix in 1625 and then seized the minuscule, unoccupied islands of Curaçao, St. Eustatius, St. Martin, and Saba, thereby expanding their former holdings in the Guianas, as well as those at Araya and Cumana on the Venezuelan coast. The English and the French also moved rapidly to take advantage of Spanish weakness in the Americas and overcommitment in Europe. In 1625, the English settled Barbados and tried an unsuccessful settlement on Tobago. They took possession of Nevis in 1628 and Antigua and Montserrat in 1632. They planted a colony on St. Lucia in 1638, but it was destroyed within four years by the Caribs. 
The French, under the auspices of the Compagnie des Iles d'Amerique, chartered by Cardinal Richelieu in 1635, successfully settled Martinique and Guadeloupe, laying the base for later expansion to St. Bartholomé, St. Martin, Grenada, St. Lucia, and western Hispaniola, which was formally ceded by Spain in 1697 at the Treaty of Ryswick (signed between France and the alliance of Spain, the Netherlands, and England, and ending the War of the Grand Alliance). Meanwhile, an expedition sent out by Oliver Cromwell (Protector of the English Commonwealth, 1649-58) under Admiral William Penn (the father of the founder of Pennsylvania) and General Robert Venables in 1655 seized Jamaica, the first territory captured from the Spanish. (Trinidad, the only other British colony taken from the Spanish, fell in 1797 and was ceded in 1802.) At that time Jamaica had a population of about 3,000, equally divided between Spaniards and their slaves--the Indian population having been eliminated. Although Jamaica was a disappointing consolation for the failure to capture either of the major colonies of Hispaniola or Cuba, the island was retained at the Treaty of Madrid in 1670, thereby more than doubling the land area for potential British colonization in the Caribbean. By 1750 Jamaica was the most important of Britian's Caribbean colonies, having eclipsed Barbados in economic significane. The first colonists in the Caribbean were trying to recreate their metropolitan European societies in the region. In this respect, the goals and the world view of the early colonists in the Caribbean did not vary significantly from those of the colonists on the North American mainland. "The Caribbee planters," wrote the historian Richard Dunn, "began as peasant farmers not unlike the peasant farmers of Wigston Magna, Leicestershire, or Sudbury, Massachusetts. They cultivated the same staple crop--tobacco--as their cousins in Virginia and Maryland. They brought to the tropics the English common law, English political institutions, the English parish [local administrative unit], and the English church." These institutions survived for a very long time, but the social context in which they were introduced was rapidly altered by time and circumstances. Attempts to recreate microcosms of Europe were slowly abandoned in favor of a series of plantation societies using slave labor to produce large quantities of tropical staples for the European market. In the process of this transformation, complicated by war and trade, much was changed in the Caribbean. Source: U.S. Library of Congress
http://countrystudies.us/caribbean-islands/6.htm
A hearing (audiometric) test is part of an ear examination that evaluates a person's ability to hear by measuring the ability of sound to reach the brain.

The sounds we hear start as vibrations of air, fluid, and solid materials in our environment. The vibrations produce sound waves, which vibrate at a certain speed (frequency) and have a certain height (amplitude). The vibration speed of a sound wave determines how high or low a sound is (pitch). The height of the sound wave determines how loud the sound is (volume). Hearing happens when these sound waves travel through the ear and are turned into nerve impulses. These nerve impulses are sent to the brain, which "hears" them.
- Sound waves enter the ear through the ear canal (external ear) and strike the eardrum (tympanic membrane), which separates the ear canal and the middle ear.
- The eardrum vibrates, and the vibrations move to the bones of the middle ear. In response, the bones of the middle ear vibrate, magnifying the sound and sending it to the inner ear.
- The fluid-filled, curved space of the inner ear, sometimes called the labyrinth, contains the main sensory organ of hearing, the cochlea. Sound vibrations cause the fluid in the inner ear to move, which bends tiny hair cells (cilia) in the cochlea. The movement of the hair cells creates nerve impulses, which travel along the cochlear (auditory, or eighth cranial) nerve to the brain and are interpreted as sound.

Hearing tests help determine what kind of hearing loss you have by measuring your ability to hear sounds that reach the inner ear through the ear canal (air-conducted sounds) and sounds transmitted through the skull (bone-conducted sounds). Most hearing tests ask you to respond to a series of tones or words. But there are some hearing tests that do not require a response.

Why It Is Done
Hearing tests may be done:
- To screen babies and young children for hearing problems that might interfere with their ability to learn, speak, or understand language. The United States Preventive Services Task Force recommends that all newborns be screened for hearing loss.1 All 50 states require newborn hearing tests for all babies born in hospitals. Also, many health organizations and doctors' groups recommend routine screening. Talk to your doctor about whether your child has been or should be tested.
- To screen children and teens for hearing loss. Hearing should be checked by a doctor at each well-child visit. In children, normal hearing is important for proper language development. Some speech, behavior, and learning problems in children can be related to problems with hearing. For this reason, many schools routinely provide hearing tests when children first begin school. The American Academy of Pediatrics recommends a formal hearing test at ages 4, 5, 6, 8, and 10 years.
- As part of a routine physical exam. In general, unless hearing loss is suspected, only a simple whispered speech test is done during a routine physical exam.
- To evaluate possible hearing loss in anyone who has noticed a persistent hearing problem in one or both ears or has had difficulty understanding words in conversation.
- To screen for hearing problems in older adults.
Hearing loss in older adults is often mistaken for diminished mental capacity (for instance, if the person does not seem to listen or respond to conversation). - To screen for hearing loss in people who are repeatedly exposed to loud noises or who are taking certain antibiotics, such as gentamicin. - To find out the type and amount of hearing loss (conductive, sensorineural, or both). In conductive hearing loss, the movement of sound (conduction) is blocked or does not pass into the inner ear. In sensorineural hearing loss, sound reaches the inner ear, but a problem in the nerves of the ear or, in rare cases, the brain itself prevents proper hearing. How To Prepare Tell your doctor if you: - Have recently been exposed to any painfully loud noise or to a noise that made your ears ring. Avoid loud noises for 16 hours prior to having a thorough hearing test. - Are taking or have taken antibiotics that can damage hearing, such as gentamicin. - Have had any problems hearing normal conversations or noticed any other signs of possible hearing loss. - Have recently had a cold or ear infection. Before beginning any hearing tests, the health professional may check your ear canals for earwax and remove any hardened wax, which can interfere with your ability to hear the tones or words during testing. For tests in which you wear headphones, you will need to remove eyeglasses, earrings, or hair clips that interfere with the placement of the headphones. The health professional will press on each ear to find out whether the pressure from the headphones on your outer ear will cause the ear canal to close. If so, a thin plastic tube may be placed in the ear canal before the testing to keep your ear canal open. The headphones are then placed on your head and adjusted to fit. If you are wearing hearing aids, you may be asked to remove them for some of the tests. You may be asked to shampoo your hair before you have auditory brain stem response (ABR) testing. Talk to your doctor about any concerns you have about the need for a hearing test, its risks, how it will be done, or what the results will mean. To help you understand the importance of this test, fill out the medical test information form(What is a PDF document?). How It Is Done Hearing tests can be done in an audiometry laboratory by a hearing specialist (audiologist) or in a doctor's office, a school, or the workplace by a nurse, health professional, psychologist, speech therapist, or audiometric technician. Whispered speech test In a whispered speech test, the health professional will ask you to cover the opening of one ear with your finger. The health professional will stand 1 ft (0.3 m) to 2 ft (0.6 m) behind you and whisper a series of words. You will repeat the words that you hear. If you cannot hear the words at a soft whisper, the health professional will keep saying the words more loudly until you can hear them. Each ear is tested separately. Pure tone audiometry Pure tone audiometry uses a machine called an audiometer to play a series of tones through headphones. The tones vary in pitch (frequency, measured in hertz) and loudness (intensity, measured in decibels). The health professional will control the volume of a tone and reduce its loudness until you can no longer hear it. Then the tone will get louder until you can hear it again. You signal by raising your hand or pressing a button every time you hear a tone, even if the tone you hear is very faint. 
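The pure tone procedure just described (turn the tone down until it can no longer be heard, then back up until it is heard again) can be sketched as a tiny simulation. This is only an illustration: the 10 dB down / 5 dB up step sizes and the simulated listener are assumptions made for the example, not details given in the article.

```python
# Sketch of the "lower until inaudible, raise until audible again" routine
# described above. The listener is simulated by a hypothetical true threshold;
# the step sizes are illustrative assumptions, not values from the article.
def find_threshold(hears, start_db=60, down_step=10, up_step=5):
    """Lower the tone while it is heard, then raise it until it is heard again."""
    level = start_db
    while hears(level):          # descend while the tone is still audible
        level -= down_step
    while not hears(level):      # ascend until the tone is audible again
        level += up_step
    return level                 # estimated hearing threshold in dB

true_threshold = 32              # hypothetical listener
print(find_threshold(lambda db: db >= true_threshold))   # prints 35
```

In practice the tester repeats this at several frequencies and records the softest level reliably heard in each ear.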
The health professional will then repeat the test several times, using a higher-pitched tone each time. Each ear is tested separately. The headphones will then be removed, and a special vibrating device will be placed on the bone behind your ear. Again, you will signal each time you hear a tone. Tuning fork tests A tuning fork is a metal, two-pronged device that produces a tone when it vibrates. The health professional strikes the tuning fork to make it vibrate and produce a tone. These tests assess how well sound moves through your ear. Sometimes the tuning fork will be placed on your head or behind your ear. Depending on how you hear the sound, your health professional can tell if there is a problem with the nerves themselves or with sound getting to nerves. Speech reception and word recognition tests Speech reception and word recognition tests measure your ability to hear and understand normal conversation. In these tests, you are asked to repeat a series of simple words spoken with different degrees of loudness. A test called the spondee threshold test determines the level at which you can repeat at least half of a list of familiar two-syllable words (spondees). Otoacoustic emissions (OAE) testing Otoacoustic emissions (OAE) testing is often used to screen newborns for hearing problems. In this test, a small, soft microphone is placed in the baby's ear canal. Sound is then introduced through a small flexible probe inserted in the baby's ear. The microphone detects the inner's ear's response to the sound. This test cannot distinguish between conductive and sensorineural hearing loss. Auditory brain stem response (ABR) testing Auditory brain stem response (ABR) testing detects sensorineural hearing loss. In this test, electrodes are placed on your scalp and on each earlobe. Clicking noises are then sent through earphones. The electrodes monitor your brain's response to the clicking noises and record the response on a graph. This test is also called brain stem auditory evoked response (BAER) testing or auditory brain stem evoked potential (ABEP) testing. How It Feels There is normally no discomfort involved with a hearing test. There are no risks associated with hearing tests. A hearing test is part of an ear examination that evaluates a person's ability to hear. Sound is described in terms of frequency and intensity. Your hearing threshold is how loud the sound of a certain frequency must be for you to hear it. - Frequency, or pitch (whether a sound is low or high), is measured in vibrations per second, or hertz (Hz). The human ear can normally hear frequencies from a very low rumble of 16 Hz to a high-pitched whine of 20,000 Hz. The frequencies of normal conversations in a quiet place are 500 Hz to 2,000 Hz. - Intensity, or loudness, is measured in decibels (dB). The normal range (threshold or lower limit) of hearing is 0 dB to 25 dB. For children, the normal range is 0 dB to 15 dB. Normal results shows that you hear within these ranges in both ears. 
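Because intensity is described above in decibels, with normal thresholds of 0-25 dB for adults and 0-15 dB for children, a small self-contained sketch can make the numbers concrete. The 20 µPa reference pressure is the usual convention for dB SPL and the function names are mine; neither comes from the article.

```python
import math

P_REF = 20e-6  # standard reference sound pressure for dB SPL, in pascals

def sound_pressure_to_db(pressure_pa: float) -> float:
    """Convert a sound pressure in pascals to decibels (dB SPL)."""
    return 20 * math.log10(pressure_pa / P_REF)

def within_normal_threshold(threshold_db: float, is_child: bool = False) -> bool:
    """Check a measured hearing threshold against the normal ranges quoted above
    (0-25 dB for adults, 0-15 dB for children)."""
    upper = 15 if is_child else 25
    return 0 <= threshold_db <= upper

print(round(sound_pressure_to_db(0.002)))           # a ~0.002 Pa whisper is about 40 dB
print(within_normal_threshold(20))                  # True for an adult
print(within_normal_threshold(20, is_child=True))   # False for a child
```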
The following table relates the degree of hearing loss in adults (based on the hearing threshold, that is, how loud a sound must be for a person to hear it) to the ability to hear speech:

Degree of hearing loss | Ability to hear speech
Normal | No significant difficulty
Mild | Difficulty with faint or distant speech
Moderate | Difficulty with conversational speech
Moderate to severe | Speech must be loud; difficulty with group conversation
Severe | Difficulty with loud speech; understands only shouted or amplified speech
Profound | May not understand amplified speech

What Affects the Test
Reasons you may not be able to have the test or why the results may not be helpful include:
- Being unable to cooperate, follow directions, and understand speech well enough to respond during most tests. It may be difficult to conduct hearing tests on young children or on people who have physical or mental disabilities.
- Equipment problems, such as cracked or poorly fitting headphones, an uncalibrated audiometer, or background noise.
- Difficulty speaking or understanding the language of the tester.
- A recent cold or ear infection.
- Being around loud noises within 16 hours before the test.

What To Think About
Other types of tests may be used to evaluate hearing. These tests include:
- Acoustic immittance testing (tympanometry and acoustic reflex testing). This 2- to 3-minute test measures how well the eardrum moves in response to sound. The soft tip of a small instrument is inserted into the ear canal and adjusted to achieve a tight seal. Sound and air pressure are then directed toward the eardrum. The test is not painful, but slight changes in pressure may be felt or the tone may be heard.
- Vestibular tests (falling and past-pointing tests). These tests can detect problems with areas of the inner ear that help control balance and coordination. During these tests, the person tries to maintain balance and coordination while moving the arms and legs in certain ways, standing on one foot, standing heel-to-toe, and performing other maneuvers with the eyes open and closed. The health professional will protect the person from falling, and no special preparation is needed.

References
U.S. Preventive Services Task Force (2008). Universal screening for hearing loss in newborns: U.S. Preventive Services Task Force Recommendation Statement. Pediatrics, 122(1): 143-148. Also available online: http://www.uspreventiveservicestaskforce.org/uspstf/uspsnbhr.htm.

Other Works Consulted
American Academy of Pediatrics (2008). Recommendations for preventive pediatric health care. In Bright Futures: Guidelines for Health Supervision of Infants, Children, and Adolescents, 3rd ed., p. 591. Elk Grove Village, IL: American Academy of Pediatrics. Also available online: http://brightfutures.aap.org/pdfs/Guidelines_PDF/20-Appendices_PeriodicitySchedule.pdf.

By Healthwise Staff. Primary medical reviewer: Sarah Marshall, MD (Family Medicine); specialist medical reviewer: Steven T. Kmucha, MD (Otolaryngology). Last revised October 25, 2011. © Healthwise, Incorporated.
http://www.cigna.com/individualandfamilies/health-and-well-being/hw/medical-tests/hearing-tests-tv8475.html
Confederate States of America The Confederate States of America, also called the C.S.A and the Confederacy, was a decentralized confederation form of weak central government established by the eleven Southern states, which seceded from the United States in 1861. They followed the original government model of the American founding fathers, the Articles of Confederation, a government much like Switzerland today. The Southern states democratically withdrew from the Union by state conventions in the same way they had originally entered into the Union. Although the defense of the dying institution of slavery played a role in Southern secession for the small percentage of wealthy slaveholders, the high tariffs and import duties (the agricultural southern states generated most of the tax revenue for Washington) as well as regional animosity helped cause the conflict. The Lincoln Administration and the wealthy northern elite manufacturing, banking and railroad interests, which supported the Republican Party, could not have survived a low tariff nation importing from Europe and the Mississippi River transportation of goods from the Midwest being shipped in and out of New Orleans on the American southern border. The loss of Washington revenues would have crippled the United States, therefore this war like most others was all about economic and monetary issues rather than the slavery issue used by Washington to justify the genocidal war, which killed over 600,000 Americans. The war was also financed and promoted by Rothschild banking interests, which spent millions to promote different editorial positions and views in the same way that would take place in a more sophisticated manner with World War One in Europe. The Rothschilds had several objectives in mind for the Civil War like most other wars they have promoted. First was to make a profit from government, providing business loans and generating manufacturing profits off both sides of the Civil War. Second, was to destroy Southern political power that was inherently opposed to central banking. Third was to generate the necessary long-term monopoly of power by the pro-bank Lincoln and the Republican Party in order to create a third and lasting central banking entity in the United States that would be covertly controlled by Rothschild banking interests. The South, without an industrial base or major banking interest, blindly followed the Rothschild game plan of fiat currency inflation and loans to fund their war effort. The North, under Lincoln, followed the advice of Colonel Dick Taylor of Chicago and issued legal tender treasury notes with interest called Greenbacks in order to fund the war instead of paying the 20% plus interest rates demanded by the Rothschild's. In the end, the war destroyed Southern political and economic power and created the conditions for the establishment of the Federal Reserve in 1913. In addition, at the end of the war, an amendment to the Constitution made it illegal for states or individuals to pay debts to entities that loaned the Confederacy money. Some historians and economists believe the assassination of Abraham Lincoln was retribution for the losses sustained by the banking interests and comparisons have been made with the Kennedy assassination in 1963. When Robert E. Lee surrendered the Confederate Army of Northern Virginia in 1865, the Confederate States of America ceased to exist as a nation. Two limited constitutional republics went into the war based on principles of the founding fathers. 
But after the war, this vision of limited central government was replaced by a powerful central government dependent on banking interests and increasing levels of taxation combined with the beginnings of military aggression and empire. The rest is history. And not a pleasant one for those who believe civil societies function better when they are freer.
http://www.thedailybell.com/2391/Confederate-States-of-America.html
Science Fair Project Encyclopedia The Comstock Lode was a massive body of silver ore discovered under what is now Virginia City, Nevada in 1859. Between 1859 and 1878 it yielded $400 million in silver and gold. It is notable not just for the immense fortunes it generated and the large role those fortunes had in the growth of San Francisco, but also for the advances in mining technology that it spurred. In the early 1850s, '49ers on their way to California as part of the Gold Rush discovered small placer gold deposits in the vicinity of Dayton, Nevada. These deposits were followed up Six-Mile Canyon, where veins of gold-flecked decomposed quartz underlying a heavy blue-black sand were discovered. This sand was viewed as a nuisance until an assay determined that it was, in fact, a rich silver ore far more valuable than the gold ore beneath it. The deposits of this ore came to be collectively referred to as the 'Comstock Lode' after Henry Comstock who was the most conspicuous claim holder in the area. News of this discovery spread quickly and soon hundreds and then thousands of people flocked to the area. The ore was first extracted through surface diggings, but these were quickly exhausted and miners had to tunnel underground to reach ore bodies. Unlike most silver ore deposits which occur in long thin veins, those of the Comstock Lode occurred in discrete masses often hundreds of feet thick. The ore was so soft it could be removed by shovel. Although this allowed the ore to be easily excavated, the weakness of the surrounding material resulted in frequent and deadly cave-ins. The cave-in problem was solved by the method of square-set timbering invented by Philip Deidesheimer. Previously timber sets consisting of vertical members on either side of the diggings capped by a third member were used to support the excavation. However the Comstock ore bodies were too large for this method. Instead, as ore was removed it was replaced by timbers set as a cube six feet on a side. Thus the ore body would be progressively replaced with a timber lattice. Often these voids would be re-filled with waste rock from other diggings after ore removal was complete. As the depth of the diggings increased, the hemp ropes used to haul ore to the surface became impractical, as their self-weight became a significant fraction of their breaking load. The solution to this problem came from A. S. Hallidie in 1864 when he developed a flat woven wire rope. This wire rope went on to be used in San Francisco's famous cable cars. Intrusion of scalding-hot water into the mines was a large problem, and the expense of water removal increased as depths increased. In 1871 the Sutro Tunnel was driven up from the valley near Dayton through nearly four miles of solid rock to meet the Comstock mines approximately 1,650 feet beneath the surface. The purpose of the tunnel was to provide drainage and ventilation for the mines as well as gravity-assisted ore removal. However by the time it reached the Comstock area mines, most of the ore above 1,650 feet had already been removed and the lower workings were 1,500 feet deeper still. Although virtually no ore was removed through the tunnel, and the ventilation problems were solved at about the same time by the use of pneumatic drills, the drainage it provided greatly decreased the operating costs of the mines it serviced. Peak production from the Comstock occurred in 1877, with the mines producing over $14,000,000 of gold and $21,000,000 of silver that year. 
Production decreased rapidly thereafter, and by 1880 the Comstock was considered to be played out. The deepest level was reached in 1884 in the Mexican Winze, at 3,300 feet below the surface. Underground mining continued sporadically until 1922, when the last of the pumps was shut off, causing the mines to flood. Re-processing of mill tailings continued through the 1920s, and exploration in the area continued through the 1950s. Nevada is commonly called the 'Silver State' on account of the silver produced from the Comstock Lode. However, since 1878 Nevada has been a relatively minor silver producer, with most subsequent bonanzas consisting of more gold than silver. The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License.
http://www.all-science-fair-projects.com/science_fair_projects_encyclopedia/Comstock_Lode
One of the expected side effects of global warming is a global increase in sea level, or sea level rise. Sea levels are expected to increase for a number of reasons, including melting ice and the fact that warmer water occupies a greater volume. This is expected to affect low-lying land areas along coasts, including river deltas and barrier islands. Also see our Cryosphere page for the physical mechanisms of ice. Worldwide average sea levels appear to be rising at 3.1±0.4 millimeters per year.

According to the NRC climate stabilization targets report, global sea level has risen by about 0.2 meters since 1870. The sea level rise by 2100 is expected to be at least 0.6 meters due to thermal expansion of the ocean and melting of small ice caps and glaciers. However, ice loss is also occurring in parts of Greenland and Antarctica. If all the ice in Greenland were to melt, it would cause an additional sea level rise of 7.2 meters—and if all the ice in the West Antarctic Ice Sheet were to melt, it would cause a further rise of 4.8 meters (see below). However, the amount of sea level rise in the next century remains uncertain, because the rate of melting of these bodies of ice is hard to predict.

The 4th IPCC report, back in 2007, took a conservative stance and assumed that the Greenland and West Antarctic ice sheets would melt at a slow and more or less constant rate until 2100. Its conclusion was that about 75% of sea level rise would be caused by the oceans expanding as they warmed. The melting of small glaciers, ice caps and Greenland would account for most of the rest. The Antarctic, they believed, would actually provide a small net reduction in sea levels, with increases in snowfall more than enough to outweigh the effects of melting. They predicted an overall sea level rise of between 0.18 and 0.59 meters, with most of the uncertainty arising from different assumptions about what the world economy will do.

However, almost as soon as the 4th IPCC report was released, evidence started to accumulate suggesting that the melting of Greenland and the West Antarctic Ice Sheet was speeding up. For example, a graph from Skeptical Science shows Isabella Velicogna's estimates of the mass of the Greenland ice sheet: unfiltered data are blue crosses, data filtered to eliminate seasonal variations are shown as red crosses, and the best fit by a quadratic function is shown in green. The data came from the Gravity Recovery and Climate Experiment satellites—or GRACE for short: a remarkable project to measure small variations in the Earth's gravitational field from place to place with extreme accuracy. The big news, of course, was that the melting seems to be speeding up. The same sort of graph exists for Antarctica, again created by Velicogna.

More recently, Eric Rignot and coauthors have compared GRACE data to another way of keeping track of these ice sheets: satellites and radio echo soundings measure ice leaving these sheets, while regional atmospheric climate model data can be used to estimate the amount of snow being added. The difference should be the overall loss of ice. In Rignot's graphs, graph a is Greenland, graph b is Antarctica and graph c is the total of both. These graphs show not the total amount of ice, but the rate at which the amount of ice is changing, in gigatonnes per year. So a line sloping down means that the ice loss is accelerating at a constant rate.
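The graphs above report ice loss in gigatonnes per year, so it helps to know roughly how that translates into sea level. A commonly used rule of thumb (not stated in this article) is that 1 Gt of melted ice is about 1 km³ of water, and spreading that over an ocean area of roughly 3.61×10⁸ km² means that about 360 Gt of ice corresponds to about 1 mm of global mean sea-level rise. A minimal sketch of the conversion:

```python
# Rough conversion from ice-mass loss to global mean sea-level rise.
# Assumptions (not from the article): ocean area ~3.61e8 km^2, and 1 Gt of
# melted ice ~ 1 km^3 of water, so ~360 Gt of ice ~ 1 mm of sea level.
OCEAN_AREA_KM2 = 3.61e8

def gigatonnes_to_mm_slr(gigatonnes: float) -> float:
    volume_km3 = gigatonnes * 1.0          # 1 Gt of water is roughly 1 km^3
    rise_km = volume_km3 / OCEAN_AREA_KM2  # spread the volume over the ocean
    return rise_km * 1e6                   # kilometres -> millimetres

# A hypothetical year in which Greenland loses 280 Gt of ice would contribute:
print(f"{gigatonnes_to_mm_slr(280):.2f} mm")   # about 0.78 mm of sea-level rise
```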
By fitting a line to satellite and atmospheric data, Rignot's team found that over the last 18 years, Greenland has been losing an average of 22 gigatonnes more ice each year, while Antarctica has been losing an average of 14.5 gigatonnes more each year. Note also the black versus the red on the top two graphs: the GRACE data is in red, the other approach is in black, and they match fairly well, though of course not perfectly. In conclusion, Rignot says: "That ice sheets will dominate future sea level rise is not surprising—they hold a lot more ice mass than mountain glaciers. What is surprising is that this increased contribution by the ice sheets is already happening. If present trends continue, sea level is likely to be significantly higher than levels projected by the United Nations Intergovernmental Panel on Climate Change in 2007."

Indeed, most recent estimates of sea level rise take Greenland and the West Antarctic Ice Sheet into account as significant factors. One paper suggests an upper bound on sea level rise of 2 meters per century (if you max out everything) and a more realistic upper bound of 1 meter per century for this century (it could accelerate later). Another paper aims to estimate the sea level rise with a greater variety of factors included in the analysis than is generally done. The authors say their estimates are in line with past sea level responses to temperature change, and they suggest that estimates based on ice and ocean thermal responses alone may be misleading. With six different IPCC radiative forcing scenarios they estimate a sea level rise of 0.6–1.6 meters, and are confident the rise will be between 0.59 and 1.8 meters.

Worldwide, the NRC Climate Stabilization Targets report estimates that a 0.6 meter sea level rise would displace 3 million people and raise the risk of flood for millions more. Recall that they estimated 0.6 meters of sea level rise by 2100 with no melting of Greenland and the Antarctic. Statistical data on the human impact of sea level rise is scarce. A study in the April 2007 issue of Environment and Urbanization reports that 634 million people live in coastal areas within 30 feet (9.1 m) of sea level. The study also reported that about two thirds of the world's cities with over five million people are located in these low-lying coastal areas. … A sea-level rise of just 400 mm in the Bay of Bengal would put 11 percent of Bangladesh's coastal land underwater, creating 7 to 10 million climate refugees. According to the UNEP, 1.5 meters of sea level rise would displace 18 million people in Bangladesh.

There is a website that lets you see how coastlines worldwide would change with different amounts of sea level rise. Strictly speaking, it shows what would happen if the level of various bodies of water rose; for example, it shows what the Caspian Sea would look like if its level rose, even though this is a lake whose level would be unaffected by sea level rise. If the entire Greenland ice sheet were to melt, it would lead to a global sea level rise of 7.2 meters.
This would inundate most of the world’s coastal cities and remove several small island countries from the face of the Earth, since island nations such as Tuvalu and Maldives have a maximum altitude below or just above this level: However, according to the abstract of the following paper, it appears that these authors predict only an 0.16 meter sea level due to Greenland ice melting by 2080: The following paper shows that the ice loss, which has been well‐documented over southern portions of Greenland, is now spreading up along the northwest coast, with this acceleration likely starting in late 2005. The melting of ice sheets is not a constant, but accelerating with time, i.e., that the GRACE observations are better represented by a quadratic trend than by a linear one, implying that the ice sheets contribution to sea level becomes larger with time. In Greenland, the mass loss increased from 137 Gt/yr in 2002–2003 to 286 Gt/yr in 2007–2009, i.e., an acceleration of −30 ± 11 Gt/yr2 in 2002–2009. It is estimated that the volume of the Antarctic ice sheet is about km3. The weight of the ice has caused the underlying rock to sink by between 0.5 and 1 kilometers. If all the ice in Antarctica melted, it would raise sea levels by 61.1 meters: The West Antarctic Ice Sheet (or WAIS) contains just under 10% of this, or km3: Large parts of the WAIS sit on a bed which is below sea level and slopes downward inland. This slope, and the low isostatic head, mean that the ice sheet is theoretically unstable: a small retreat could in theory destabilize the entire WAIS leading to rapid disintegration. However, current computer models do not include the physics necessary to simulate this process, and observations do not provide guidance, so predictions as to its rate of retreat remain uncertain. In January 2006, in a UK government-commissioned report, the head of the British Antarctic Survey, Chris Rapley, warned that this huge west Antarctic ice sheet may be starting to disintegrate. Rapley said a previous Intergovernmental Panel on Climate Change (IPCC) report that played down the worries of the ice sheet’s stability should be revised. “The last IPCC report characterized Antarctica as a slumbering giant in terms of climate change,” he wrote. “I would say it is now an awakened giant. There is real concern.” (Note that the IPCC report did not use the words “slumbering giant”.) Rapley said, “Parts of the Antarctic ice sheet that rest on bedrock below sea level have begun to discharge ice fast enough to make a significant contribution to sea level rise. Understanding the reason for this change is urgent in order to be able to predict how much ice may ultimately be discharged and over what timescale. Current computer models do not include the effect of liquid water on ice sheet sliding and flow, and so provide only conservative estimates of future behaviour.” It has been argued that a collapse of the WAIS could raise global sea levels by approximately 3.3 meters: However, these authors claim there would be important regional variations, with the maximum increase concentrated along the Pacific and Atlantic seaboard of the United States, where the value is about 25% greater than the global mean, even for the case of a partial collapse. If the entire West Antarctic Ice Sheet were to melt, this would contribute 4.8 m to global sea level: J. L. Bamber, R.E.M. Riva, B.L.A. Vermeersen and A.M. 
LeBroq,, Reassessment of the potential sea-level rise from a collapse of the West Antarctic Ice Sheet (supporting online material), Science 324 (2009), 901. Rob Young, Orrin Pilkey, How High Will Seas Rise? Get Ready for Seven Feet, Yale Environment 360, 14 Jan 2010. Indications that the West Antarctic Ice Sheet is losing mass at an increasing rate come from the Amundsen Sea sector, and three glaciers in particular: the Pine Island, Thwaites and Smith Glaciers: Data reveals they are losing more ice than is being replaced by snowfall. Total ice discharge from these glaciers increased 30% in 12 recent years, and the net mass loss increased 170% from 39 ± 15 Gt/yr to 105 ± 27 Gt/yr. The melting of these three glaciers alone is now contributing an estimated 0.24 millimetres per year to the rise in the worldwide sea level (see the article by Jenny Hogan above). More generally, there has been substantial increase in Antarctic ice mass loss in the ten years 1996-2006, with glacier acceleration a primary cause: In 1996 the net mass loss was 78 ± 78 gigatons/year. By 2006 this had risen to 153 ± 78 gigatons/year. Another estimate of West-Antartic ice loss: Velicogna - see reference under Greenland - estimates: In Antarctica the mass loss increased from 104 Gt/yr in 2002–2006 to 246 Gt/yr in 2006–2009, i.e., an acceleration of −26 ± 14 Gt/yr2 in 2002–2009. The observed acceleration in ice sheet mass loss helps reconcile GRACE ice mass estimates obtained for different time periods. A recent article about mass loss from the Canadian Arctic Archipelago: The CSIRO or Commonwealth Scientific and Industrial Research Organisation of Australia has a project on sea level changes. Their report on Historical sea level changes is on our recommended reading list. Here is some research from their project on sea levels: Research by Australian climate scientists has shown that global sea level has been rising at an increasing rate over the past 130 years. Using information from tide gauges and measurements from satellites, Dr John Church and Dr Neil White estimated changes in global mean sea levels since 1870. Their work, published in the science journal Geophysical Research Letters (6 January), indicates an acceleration in the rate of sea-level rise that had not been detected previously. ‘Although predicted by models, this is the first time a 20th century acceleration has actually been detected,’ Dr Church says. ‘Our research provides added confidence in sea-level rise projections published by the Intergovernmental Panel on Climate Change Third Assessment Report. ‘If the acceleration over the past 130 year period continues, we would expect sea level to be 280-340mm above its 1990 levels by 2100. This is consistent with the projections in the Intergovernmental Panel on Climate Change Third Assessment Report.’ The Copenhagen Diagnosis, written in 2009, was intended to serve as an interim evaluation of the evolving science before the 5th IPCC report, which is not due for completion until 2013. Its executive summary says, among other things: Current sea-level rise underestimates: Satellites show great global average sea-level rise (3.4 mm/yr over the past 15 years) to be 80% above past IPCC predictions. This acceleration in sea-level rise is consistent with a doubling in contribution from melting of glaciers, ice caps and the Greenland and West- Antarctic ice-sheets. 
Sea-level prediction revised: By 2100, global sea-level is likely to rise at least twice as much as projected by Working Group 1 of the IPCC AR4, for unmitigated emissions it may well exceed 1 meter. The upper limit has been estimated as 2 meters sea-level rise by 2100. Sea-level will continue to rise for centuries after global temperature have been stabilized and several meters of sea level rise must be expected over the next few centuries. There are several major “fast ice” dynamical effects that could accelerate mass loss. There is paleoclimatic evidence for abrupt sea level rise, see e.g. This effect is a mechanical instability of the entire ice sheet due to basal lubrication from meltwater. It is mostly mentioned in the context of the Greenland ice sheet. However, it has been argued that the Zwally effect may be self-limiting in the case of steady meltwater flow, and that a sustained acceleration in ice flow requires an increase in water input variability. For the West Antarctic ice sheet (WAIS), the important effects are often associated with floating ice shelves. Ice shelves can disintegrate due to warm ocean water underneath, or surface melt ponds “drilling” down throught the shelf and fragmenting it. When a shelf is removed, the land ice sheet it was “buttressing”, or holding back, can slide more rapidly into the ocean. The latter is called the Larsen B scenario. A. Shepherd, D. Wingham, T. Payne and P. Skvarca, Larsen Ice Shelf Has Progressively Thinned Science 302 (2003) no. 5646 pp. 856-859; T. A. Scambos, C. Hulbe; M. Fahnestock and J. Bohlander, The link between climate warming and break-up of ice shelves in the Antarctic Peninsula, Journal of Glaciology 46 (2000) no. 154 pp. 516-530 E. Rignot et al , Accelerated ice discharge from the Antarctic Peninsula following the collapse of Larsen B ice shelf, Geophysical Research Letters 31 (2004) Another concern is the underlying bed topography. The WAIS rests on an “upsloping”, or “foredeepened” bed, meaning that the bed slopes up toward the sea, or down toward the center of the ice sheet. There has been a debate for almost 40 years about whether ice sheets resting on such beds are particularly unstable, which is still unresolved. Due to the Archimedes’ principle the meltdown of ice that is freely floating will not increase the sea level, in a first appoximation. Therefore all estimates need to include ice residing on land only. See also: However, melting sea ice does change the sea level, because fresh ice becomes fresh water, which is less dense than sea water. But this effect is very small. Noerdlinger, P. D. and Brower, K. R. The melting of floating ice raises the ocean level. Geophysical Journal International 170 (2007) Jenkins, A., and D. Holland Melting of floating ice and sea level rise, Geophys. Res. Lett. 34 (2007) Summarizing, there are mechanical effects by which ice loss can be accelerated other than simply melting the whole ice sheet from the top down, i.e. by dumping ice directly into the sea. If you hear about the Greenland Ice Sheet (GIS) or West Antarctic Ice Sheet (WAIS) disappearing within centuries, it’s because these effects are being invoked. But it’s still unknown how significant these effects are. 
For example, the Zwally effect in the GIS may be compensated by more efficient subglacial drainage relieving pressure at the base of the ice sheet, as described in Although there is paleoclimate evidence for abrupt sea level rise, we heard from Nathan Urban that he doesn’t know glaciologists who think that GIS or WAIS could disappear within one century. The SeaRISE project (see references below) is an attempt to determine how fast you could lose an ice sheet if all of these effects are strong. Maybe SeaRISE will get a much higher number of sea level rise with more explicit ice sheet modeling, but Nathan Urban doesn’t consider it to be probable. How much sea level rise should we expect from Greenland and the West Antarctic Ice Sheet within the next, say, 10 years? What are the best adaptive measures? Floating cities? Has there been a time in history when the sea level was significantly higher than today? Sea level rise, Wikipedia. National Oceanography Centre, Measuring sea-level rise in the Falklands, The National Oceanography Centre, 20 October, 2010. Michael D. Lemonick, The Secret of Sea Level Rise: It Will Vary Greatly by Region, Yale Environment 360, 22 March, 2010. Michael D. Lemonick, Understanding Sea-level Rise and Variability, book based on the World Climate Research Programme workshop on Understanding Sea-Level Rise and Variability held in Paris in 2006, CSIRO, expected release Sept. 2010. NOAA’s Arctic Report Card provides recent data for changes in Arctic atmosphere, sea ice, ocean, land, Greeenland, temperature and biology. The SeaRISE project is attempt to get an upper bound from state of the art ice models. Abstract: We propose a simple relationship linking global sea-level variations on time scales of decades to centuries to global mean temperature. This relationship is tested on synthetic data from a global climate model for the past millennium and the next century. When applied to observed data of sea level and temperature for 1880–2000, and taking into account known anthropogenic hydrologic contributions to sea level, the correlation is >0.99, explaining 98% of the variance. For future global temperature scenarios of the Intergovernmental Panel on Climate Change’s Fourth Assessment Report, the relationship projects a sea-level rise ranging from 75 to 190 cm for the period 1990–2100.
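The abstract above describes a semi-empirical relationship between sea level and global mean temperature. Below is a minimal sketch of one common form of such a model, in which the rate of rise is proportional to warming above a baseline, dH/dt = a·(T − T0). The parameter value and the temperature pathway are hypothetical, chosen only to show how such a relationship is integrated over a scenario; this is not the authors' actual calibration.

```python
# Minimal sketch of a semi-empirical sea-level model: dH/dt = a * (T - T0).
# The sensitivity a and the temperature pathway below are hypothetical values
# used only to illustrate the integration, not the published calibration.

def project_sea_level(a_mm_per_yr_per_K, T0, temps, dt_years=1.0):
    """Integrate dH/dt = a*(T - T0) over a series of annual temperature anomalies (K)."""
    H = 0.0
    for T in temps:
        H += a_mm_per_yr_per_K * (T - T0) * dt_years
    return H  # total rise in millimetres

# Hypothetical scenario: warming ramps linearly from 0.6 K to 3.0 K over 1990-2100.
years = 111
temps = [0.6 + (3.0 - 0.6) * i / (years - 1) for i in range(years)]
print(f"projected rise: {project_sea_level(3.4, 0.0, temps):.0f} mm")   # about 0.7 m
```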
http://www.azimuthproject.org/azimuth/show/Sea+level+rise
Going for Gold! Demonstrations to capture the student's imagination, by Declan Fleming of Pate's Grammar School, Cheltenham. In this issue: Going for Gold!

Gold is the most malleable and ductile of all metals. It can be hammered to a few hundred atoms thick, and we don't have to panic too much about wasting small quantities when demonstrating some of its interesting properties. This is a striking demonstration that is great for A-level students when discussing metallic bonding and/or intermolecular forces. On its simplest level, this demonstration lets us see another example of 'like dissolves like' - two chemicals that bond in similar ways are miscible with each other, and as mercury is a liquid, it has the capacity to mix with gold. Mercury goes for gold - a great example of like dissolving like. See the video on this page for how to perform the demonstration.

To explain both of these properties, it's over to Einstein! Bohr showed us that the speed of a 1s electron is proportional to its atom's atomic number. When we get down to gold and mercury, our electron is travelling at around 60% of the speed of light. It experiences nearly a 25% relativistic increase in mass and a corresponding reduction in Bohr radius. The upshot of this is a smaller gap between the 5d and the incomplete 6s level in gold when compared with silver. This is enough for promotion to cause the absorption of blue rather than UV light, and gives gold its colour. This contracted 6s orbital behaves in some ways like the 1s orbital in H and He. Diatomic helium would have a bond order of zero after filling the first antibonding orbital. Similarly, whilst diatomic gold exists in the gas phase (cf H2), mercury is exclusively monatomic (cf He2) and can be thought of almost as a 'pseudo noble gas' with predominantly van der Waals interactions. This explains its low melting point and its poorer conductivity when compared with gold.

Kit:
- Gold leaf. Care! Many products from craft stores are substitutes like Dutch metal (Cu 84%, Zn 16%)
- Mercury. If you don't have any, you might find the physics dept will have some. You only need a little!
- A microscope slide
- A teat pipette
- A small plastic/wood tray to contain any mercury
- Optional: colloidal gold - available from chemical suppliers or health food shops

Remove any rings and wear gloves when handling mercury. Place the microscope slide in the tray. It may help to fix the slide in position with a small blob of Blu-Tack under one side in order to make a shallow ramp. Place a piece of gold leaf onto the ramp. Handling the leaf can be difficult as it tends to stick to everything and is extremely delicate - a brush can help. Craft stores sell specialist brushes for the purpose, but this is really only necessary for gilding. It doesn't matter for the purposes of this demonstration whether the gold is damaged.

In front of the audience
The demonstration is very simple. Release a small drop of mercury onto the top of the slide. Mercury tends to fall out of pipettes, so take only enough to make a small bead <5 mm in diameter. As the mercury rolls down the slope, it will gather up and dissolve/amalgamate with the gold.

Extension ideas/other discussion points
If you did happen to find yourself with some gold leaf that turned out not to be gold, you'll find this demo probably won't work.
Whilst mercury will happily amalgamate with single electron-donors like silver, gold and the alkali metals to form a 3-electron bond, the two electrons offered by the copper in Dutch metal would result in a 4-electron He2-type situation, and the mercury won't want to know about it! Colloidal gold is very affordable and can show your students that gold doesn't have to be gold in colour. Indeed, its colour very much depends on its particle size, much in the same way that the size of the extended conjugated system in the chromophore of a dye will affect its colour. You may be tempted to try to distil off the mercury - I've found this rather problematic given the small scale. The gold that crystallises, no longer a few atoms thick, is hard to spot! This should only be attempted in an efficient fume cupboard.

1. L. Norrby, J. Chem. Educ., 1991, 68, 110 (DOI 10.1021/ed068p110)

Mercury is toxic. It should always be handled over a tray to contain spillages, and it should not be left exposed to the open air for long. Small spills can be mopped up with a hot paste of 1:1 calcium hydroxide and flowers of sulfur in water. Dispose of mercury waste with your other mercury-containing waste (eg broken thermometers) for eventual disposal via a registered waste contractor.
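As a back-of-the-envelope check on the relativistic argument above, the Bohr-model estimate v/c ≈ Z·α (where α is the fine-structure constant, roughly 1/137) reproduces the figures quoted for gold and mercury. The short script below is only an illustration of that estimate, not part of the original article.

```python
# Bohr-model estimate of the 1s electron speed, v/c ~ Z * alpha, and the
# corresponding relativistic mass increase and orbital contraction.
ALPHA = 1 / 137.036   # fine-structure constant

def one_s_electron(Z):
    beta = Z * ALPHA                     # speed as a fraction of the speed of light
    gamma = 1 / (1 - beta**2) ** 0.5     # relativistic factor (mass increase)
    return beta, gamma

for name, Z in [("silver", 47), ("gold", 79), ("mercury", 80)]:
    beta, gamma = one_s_electron(Z)
    print(f"{name:7s}: v = {beta:.0%} of c, mass up ~{gamma - 1:.0%}, "
          f"orbital radius contracted by ~{1 - 1/gamma:.0%}")
# Gold comes out at roughly 58% of c with a ~22% mass increase, in line with
# the "around 60%" and "nearly 25%" figures quoted above.
```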
http://www.rsc.org/Education/EiC/issues/2012May/going-for-gold.asp
Metals differ so widely in hardness, ductility (the potentiality of being drawn into wire), malleability, tensile strength, density, and melting point that a definite line of distinction between them and the nonmetals cannot be drawn. The hardest elemental metal is chromium; the softest, cesium. Copper, gold, platinum, and silver are especially ductile. Most metals are malleable; gold, silver, copper, tin, and aluminum are extremely so. Some metals exhibiting great tensile strength are copper, iron, and platinum. Three metals (lithium, potassium, and sodium) have densities of less than one gram per cubic centimeter at ordinary temperatures and are therefore lighter than water. Some heavy metals, beginning with the most dense, are osmium, iridium, platinum, gold, tungsten, uranium, tantalum, mercury, hafnium, lead, and silver. For many industrial uses, the melting points of the metals are important. Tungsten fuses, or melts, only at extremely high temperatures (3,370°C), while cesium has a melting point of 28.5°C. The best metallic conductor of electricity is silver. Copper, gold, and aluminum follow in the order named. All metals are relatively good conductors of heat; silver, copper, and aluminum are especially conductive. The radioactive metal uranium is used in reactor piles to generate steam and electric power. Plutonium, another radioactive element, is used in nuclear weapons and nuclear reactors as well as in pacemakers. Some of the radioactive metals not found in nature, e.g., fermium and seaborgium, are produced by nuclear bombardment. Some elements, e.g., arsenic and antimony, exhibit both metallic and nonmetallic properties and are called metalloids. Furthermore, although all metals form crystals, this is also characteristic of certain nonmetals, e.g., carbon and sulfur. Chemically, the metals differ from the nonmetals in that they form positive ions and basic oxides and hydroxides. Upon exposure to moist air, a great many undergo corrosion, i.e., enter into a chemical reaction; e.g., iron rusts when exposed to moist air, the oxygen of the atmosphere uniting with the metal to form the oxide of the metal. Aluminum and zinc do not appear to be affected, but in fact a thin coating of the oxide is formed almost at once, stopping further action and appearing unnoticeable because of its close resemblance to the metal. Tin, lead, and copper react slowly under ordinary conditions. Silver is affected by compounds such as sulfur dioxide and becomes tarnished when exposed to air containing them. The metals are combined with nonmetals in their salts, as in carbides, carbonates, chlorides, nitrates, phosphates, silicates, sulfides, and sulfates.

The Electromotive Series
On the basis of their ability to be oxidized, i.e., lose electrons, metals can be arranged in a list called the electromotive series, or replacement series. Metals toward the beginning of the series, like cesium and lithium, are more readily oxidized than those toward the end, like silver and gold. In general, a metal will replace any other metal, or hydrogen, in a compound that it precedes in the series, and under ordinary circumstances it will be replaced by any metal, or hydrogen, that it follows.

Metals in the Periodic Table
Metals fall into groups in the periodic table determined by similar arrangements of their orbital electrons and a consequent similarity in chemical properties.
Groups of similar metals include the alkali metals (Group 1 in the periodic table), the alkaline-earth metals (Group 2 in the periodic table), and the rare-earth metals (the lanthanide and actinide series of Group 3). Most metals other than the alkali metals and the alkaline earth metals are called transition metals (see transition elements). The oxidation states, or valence, of the metal ions vary from +1 for the alkali metals to as much as +7 for some transition metals. Although a few metals occur uncombined in nature, the great majority are found combined in their ores. The separation of metals from their ores is called extractive metallurgy. Metals are mixed with each other in definite amounts to form alloys; a mixture of mercury and another metal is called an amalgam. Bronze is an alloy of copper and tin, and brass contains copper and zinc. Steel is an alloy of iron and other metals with carbon added for hardness. Since metals form positive ions readily, i.e., they donate their orbital electrons, they are used in chemistry as reducing agents (see oxidation and reduction). Finely divided metals or their oxides are often used as surface catalysts. Iron and iron oxides catalyze the conversion of hydrogen and nitrogen to ammonia in the Haber process. Finely divided catalytic platinum or nickel is used in the hydrogenation of unsaturated oils. Metal ions orient electron-rich groups called ligands around themselves, forming complex ions. Metal ions are important in many biological functions, including enzyme and coenzyme action, nucleic acid synthesis, and transport across membranes. For the uses of specific metals, see separate articles. Any chemical element with valence electrons in two shells instead of only one. This structure gives them their outstanding ability to form ions containing more than one atom (complex ions, or coordination compounds), with a central atom or ion (often of a transition metal) surrounded by ligands in a regular arrangement. Theories on the bonding in these ions are still being refined. The elements in the periodic table from scandium to copper (atomic numbers 21–29), yttrium to silver (39–47), and lanthanum to gold (57–79, including the lanthanide series) are frequently designated the three main transition series. (Those in the actinide series and beyond, 89–111, also qualify.) All are metals, many of major economic or industrial importance (e.g., iron, gold, nickel, titanium). Most are dense, hard, and brittle, conduct heat and electricity well, have high melting points, and form alloys with each other and other metals. Their electronic structure lets them form compounds at various valences. Many of these compounds are coloured and paramagnetic (see paramagnetism) and (as do the metals themselves) often act as catalysts. Seealso rare earth metal. Learn more about transition element with a free trial on Britannica.com. Used metals that are an important source of industrial metals and alloys, particularly in the production of steel, copper, lead, aluminum, and zinc. Smaller amounts of tin, nickel, magnesium, and precious metals are also recovered from scrap. Impurities consisting of such organic materials as wood, plastic, paint, and fabric can be burned off. Scrap is usually blended and remelted to produce alloys similar to or more complex than those from which the scrap was derived. Seealso recycling. Learn more about scrap metal with a free trial on Britannica.com. 
Rare earth metal: Any of a large class of chemical elements including scandium (atomic number 21), yttrium (39), and the 15 elements from 57 (lanthanum) to 71 (see lanthanides). The rare earths themselves are pure or mixed oxides of these metals, originally thought to be quite scarce; however, cerium, the most plentiful, is three times as abundant as lead in the Earth's crust. The metals never occur free, and the pure oxides never occur in minerals. These metals are similar chemically because their atomic structures are generally similar; all form compounds in which they have valence 3, including stable oxides, carbides, and borides.

Metal point (silverpoint): Method of drawing with a small sharpened metal rod (of lead, copper, gold, or most commonly silver) on specially prepared paper or parchment. Silverpoint produces a fine gray line that oxidizes to a light brown; the technique is best suited for small-scale work. It first appeared in medieval Italy and achieved great popularity in the 15th century. Albrecht Dürer and Leonardo da Vinci were its greatest exponents. It went out of fashion in the 17th century with the rise of the graphite pencil but was revived in the 18th century by the miniaturists and in the 20th century by Joseph Stella.

Metal fatigue: Weakened condition of metal parts of machines, vehicles, or structures caused by repeated stresses or loadings, ultimately resulting in fracture under a stress much weaker than that necessary to cause fracture in a single application. Fatigue-resistant metals have been developed and their performance improved by surface treatments, and fatigue stresses have been significantly reduced in aircraft and other applications by designing to avoid stress concentrations.

Metal: Any of a class of substances with, to some degree, the following properties: good heat and electricity conduction, malleability, ductility, high light reflectivity, and capacity to form positive ions in solution and hydroxides rather than acids when their oxides meet water. About three-quarters of the elements are metals; these are usually fairly hard and strong crystalline (see crystal) solids with high chemical reactivity that readily form alloys with each other. Metallic properties increase from lighter to heavier elements in each vertical group of the periodic table and from right to left in each row. The most abundant metals are aluminum, iron, calcium, sodium, potassium, and magnesium. The vast majority are found as ores rather than free. The cohesiveness of metals in a crystalline structure is attributed to metallic bonding: the atoms are packed close together, with their very mobile outermost electrons all shared throughout the structure. Metals fall into the following classifications (not mutually exclusive and most not rigidly defined): alkali metals, alkaline earth metals, transition elements, noble (precious) metals, platinum metals, lanthanide (rare earth) metals, actinide metals, light metals, and heavy metals. Many have essential roles in nutrition or other biochemical functions, often in trace amounts, and many are toxic as both elements and compounds (see mercury poisoning, lead poisoning).

Heavy metal (music): Type of rock music marked by highly amplified, distorted "power chords" on electric guitar, a hard beat, thumping bass, and often dark lyrics.
It evolved in Britain and the U.S. in the late 1960s from the heavy, blues-oriented music of Steppenwolf, Jimi Hendrix, and others. In the 1970s the genre was defined by the music of bands such as Led Zeppelin, Black Sabbath, Kiss, AC/DC, and Aerosmith. After a period of decline, a new generation of bands such as Def Leppard, Iron Maiden, Mötley Crüe, and Van Halen revived heavy metal in the 1980s, along with the careers of many of its pioneers, including Ozzy Osbourne of Black Sabbath.

Alkaline earth metal: Any of the six chemical elements in the second leftmost group of the periodic table (beryllium, magnesium, calcium, strontium, barium, and radium). Their name harks back to medieval alchemy. Their atoms have two electrons in the outermost shell, so they react readily, form numerous compounds, and are never found free in nature.

Alkali metal: Any of the six chemical elements in the leftmost group of the periodic table (lithium, sodium, potassium, rubidium, cesium, and francium). They form alkalies when they combine with other elements. Because their atoms have only one electron in the outermost shell, they are very reactive chemically (they react rapidly, even violently, with water), form numerous compounds, and are never found free in nature.

In chemistry, a metal (Greek: Metallo, Μέταλλο) is defined as an element that readily loses electrons to form positive ions (cations) and forms metallic bonds with other metal atoms (forming ionic bonds with non-metals). Metals are sometimes described as a lattice of positive ions surrounded by a cloud of delocalized electrons. They are one of the three groups of elements as distinguished by their ionization and bonding properties, along with the metalloids and nonmetals. On the periodic table, a diagonal line drawn from boron (B) to polonium (Po) separates the metals from the nonmetals. Most elements on this line are metalloids, sometimes called semi-metals; elements to the lower left are metals; elements to the upper right are nonmetals. An alternative definition of metals is that they have overlapping conduction bands and valence bands in their electronic structure. This definition opens up the category for metallic polymers and other organic metals, which have been made by researchers and employed in high-tech devices. These synthetic materials often have the characteristic silvery-grey reflectiveness (luster) of elemental metals. Painting, anodising or plating metals are good ways to prevent their corrosion. However, a more reactive metal in the electrochemical series must be chosen for coating, especially when chipping of the coating is expected. Water and the two metals form an electrochemical cell, and if the coating is less reactive than the metal it covers, the coating actually promotes corrosion.
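The coating rule just stated (choose a coating more reactive than the metal it protects) can be expressed as a simple comparison against a reactivity ranking. The sketch below is an illustration only; the short series and the example metals are assumptions, not data from the text.

```python
# Minimal sketch of the sacrificial-coating rule described above: a protective
# coating should be MORE reactive than the base metal, otherwise a chip in the
# coating sets up an electrochemical cell that corrodes the base metal instead.
# The abbreviated ranking below is an assumed activity series (most reactive first).
REACTIVITY = ["Mg", "Zn", "Fe", "Sn", "Cu"]

def is_sacrificial(coating: str, base: str) -> bool:
    """True if the coating metal is more reactive than the base metal it protects."""
    return REACTIVITY.index(coating) < REACTIVITY.index(base)

print(is_sacrificial("Zn", "Fe"))  # True: galvanized (zinc-coated) iron stays protected
print(is_sacrificial("Sn", "Fe"))  # False: a scratched tin coating can accelerate rusting
```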
Metals in general have superior electric and thermal conductivity, high luster and density, and the ability to be deformed under stress without cleaving. While there are several metals that have low density, hardness, and melting points, these (the alkali and alkaline earth metals) are extremely reactive, and are rarely encountered in their elemental, metallic form. Lithium is the least dense solid element and osmium is the densest. The metals of groups IA and IIA are referred to as the light metals because they are exceptions to this generalization. The high density of most metals is due to the tightly packed crystal lattice of the metallic structure. The strength of metallic bonds for different metals reaches a maximum around the center of the transition series, as those elements have large numbers of delocalized electrons in the metallic bond. However, other factors (such as atomic radius, nuclear charge, number of bonding orbitals, overlap of orbital energies, and crystal form) are involved as well. When the planes of an ionically bonded crystal are slid past one another, the resultant change in location shifts ions of the same charge into close proximity, resulting in the cleavage of the crystal. Covalently bonded crystals can only be deformed by breaking the bonds between atoms, thereby resulting in fragmentation of the crystal. In the metallic bond, by contrast, the outer electrons of the metal atoms form a gas of nearly free electrons, moving as an electron gas in a background of positive charge formed by the ion cores. Good mathematical predictions for electrical conductivity, as well as the electrons' contribution to the heat capacity and heat conductivity of metals, can be calculated from the free electron model, which does not take the detailed structure of the ion lattice into account. The most important consequence of the periodic potential is the formation of a small band gap at the boundary of the Brillouin zone. Mathematically, the potential of the ion cores is treated in the nearly-free electron model.
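As a rough illustration of the free electron model mentioned above, the Drude expression sigma = n * e^2 * tau / m gives an order-of-magnitude estimate of a metal's electrical conductivity. The sketch below is not from the source; the carrier density and relaxation time used for copper are assumed textbook values.

```python
# Rough Drude (free-electron) estimate of electrical conductivity: sigma = n * e^2 * tau / m.
# The copper numbers (carrier density n, relaxation time tau) are assumed textbook
# order-of-magnitude values, not figures taken from this article.
E_CHARGE = 1.602e-19   # electron charge, coulombs
E_MASS = 9.109e-31     # electron mass, kilograms

def drude_conductivity(n_per_m3: float, tau_s: float) -> float:
    """Free-electron conductivity in siemens per metre."""
    return n_per_m3 * E_CHARGE**2 * tau_s / E_MASS

sigma_cu = drude_conductivity(8.5e28, 2.5e-14)   # assumed values for copper
print(f"Estimated conductivity of copper: {sigma_cu:.1e} S/m")  # roughly 6e7 S/m
```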
In alchemy, a base metal was a common and inexpensive metal, as opposed to precious metals, mainly gold and silver. A longtime goal of the alchemists was the transmutation of base metals into precious metals. Chemically, the precious metals are less reactive than most elements, have high luster and high electrical conductivity. Historically, precious metals were important as currency, but are now regarded mainly as investment and industrial commodities. Gold, silver, platinum and palladium each have an ISO 4217 currency code. The best-known precious metals are gold and silver. While both have industrial uses, they are better known for their uses in art, jewelry, and coinage. Other precious metals include the platinum group metals: ruthenium, rhodium, palladium, osmium, iridium, and platinum, of which platinum is the most widely traded. Plutonium and uranium could also be considered precious metals. The demand for precious metals is driven not only by their practical use, but also by their role as investments and a store of value. Palladium was, as of summer 2006, valued at a little under half the price of gold, and platinum at around twice that of gold. Silver is substantially less expensive than these metals, but is often traditionally considered a precious metal for its role in coinage and jewelry. Metals are often extracted from the Earth by means of mining, resulting in ores that are relatively rich sources of the requisite elements. Ore is located by prospecting techniques, followed by the exploration and examination of deposits. Mineral sources are generally divided into surface mines, which are mined by excavation using heavy equipment, and subsurface mines. Once the ore is mined, the metals must be extracted, usually by chemical or electrolytic reduction. Pyrometallurgy uses high temperatures to convert ore into raw metals, while hydrometallurgy employs aqueous chemistry for the same purpose; the methods used depend on the metal and its contaminants. Metallurgy is a domain of materials science that studies the physical and chemical behavior of metallic elements, their intermetallic compounds, and their mixtures, which are called alloys. Metals are good conductors, making them valuable in electrical appliances and for carrying an electric current over a distance with little energy lost. Electrical power grids rely on metal cables to distribute electricity. Home electrical systems, for the most part, are wired with copper wire for its good conducting properties. The thermal conductivity of metal is useful for containers to heat materials over a flame. Metal is also used for heat sinks to protect sensitive equipment from overheating. The high reflectivity of some metals is important in the construction of mirrors, including precision astronomical instruments. This last property can also make metallic jewelry aesthetically appealing. Some metals have specialized uses; radioactive metals such as uranium and plutonium are used in nuclear power plants to produce energy via nuclear fission. Mercury is a liquid at room temperature and is used in switches to complete a circuit when it flows over the switch contacts. Shape memory alloys are used for applications such as pipes, fasteners and vascular stents.
http://www.reference.com/browse/metal
Mexico, A Brief History (The International History Project)

Before the Spanish arrival in 1519, Mexico was occupied by a large number of Indian groups with very different social and economic systems. In general the tribes in the arid north were relatively small groups of hunters and gatherers who roamed extensive areas of sparsely vegetated deserts and steppes. These people are often referred to as Chichimecs, though they were a mixture of several linguistically distinctive cultural groups. In the rest of the country the natives were agriculturalists, which allowed the support of dense populations. Among these were the Maya of the Yucatan, Totonac, Huastec, Otomi, Mixtecs, Zapotecs, Tlaxcalans, Tarascans, and Aztecs. A number of these groups developed high civilizations with elaborate urban centers used for religious, political, and commercial purposes. The Mayan cities of Chichen Itza, Uxmal, and Palenque, the Aztec capital of Tenochtitlan, Tzintzuntzan of the Tarascans, and Monte Alban of the Zapotecs are examples. By AD 1100 the Toltecs had conquered much of central and southern Mexico and had established their capital at Tula in the Mesa Central. They also built the city of Teotihuacan near present-day Mexico City. At about the same time, the Zapotecs controlled the Oaxaca Valley and parts of the Southern Highlands. The cities they built at Mitla and Monte Alban remain, though they were taken over by the Mixtecs prior to the arrival of the Spanish. When the Spanish arrived in central Mexico, the Aztecs controlled most of the Mesa Central through a state tribute system that extracted taxes and political servility from conquered tribal groups. The Aztecs migrated into the Mesa Central from the north and fulfilled a tribal prophecy by establishing a city where an eagle with a snake in its beak rested on a cactus. This became the national symbol of Mexico and adorns the country's flag and official seal. The Aztecs founded the city of Tenochtitlan in the early 1300s, and it became the capital of their empire. The Tlaxcalans to the east, the Tarascans on the west, and the Chichimecs in the north were outside the Aztec domain and frequently warred with them. The nation's name derives from the Aztecs' war god, Mexitli. From the time of Hernando Cortez's conquest until 1821, Mexico was a colony of Spain. Cortez first entered the Valley of Mexico on the Mesa Central in 1519 after marching overland from Veracruz, the town he had founded on the Gulf Coastal Plain. With fewer than 200 soldiers and a few horses, the initial conquest of the Aztecs was possible only with the assistance of the large Indian armies Cortez assembled from among the Aztecs' enemies. After a brief initial success at Tenochtitlan, the Spanish were driven from the city on the Noche Triste but returned in 1521 to destroy the city and to overwhelm the Aztecs. Within a short time the rest of central and southern Mexico and much of Central America were conquered from Mexico City. The Spanish usurped the Indian lands and redistributed them among themselves, first as encomiendas, a system of tribute grants, and later as haciendas, or land grants. During the early contact with Indians, millions died from such European diseases as measles and smallpox, for which the natives had no immunity. Central Mexico did not regain its pre-Columbian population numbers until perhaps 1900. Along with other Spanish colonies in the New World, Mexico fought for and gained its independence in the early 1800s. On Sept.
16, 1810, in the town of Dolores Hidalgo, the priest Miguel Hidalgo y Costilla rang his church's bells and exhorted the local Indians to "recover from the hated Spaniards the land stolen from your forefathers. . ." This is celebrated as Mexican Independence Day. Padre Hidalgo was hanged in July 1811. Hidalgo was succeeded by Jose Maria Morelos y Pavon, another parish priest but a more able leader than his predecessor. Morelos called a national congress, which on Nov. 6, 1812, officially declared Mexico to be independent from Spain. Morelos was executed by a Spanish firing squad in 1815, but his army, led by Vicente Guerrero, continued fighting until 1821. Because of weaknesses and political divisions in Spain, the revolutionary movement gained strength. Agustin de Iturbide, a royalist officer, joined forces with Guerrero and drafted the Plan of Iguala, which provided for national independence under a constitutional monarchy--the Mexican Empire. Not surprisingly, Iturbide was crowned emperor of Mexico in July 1822, and the newly formed empire lasted less than a year. Iturbide was exiled from the country but returned and was executed. General Antonio Lopez de Santa Anna then emerged as the dominant political force for some 30 years. Santa Anna was president of Mexico when Texas revolted and during the Mexican-American War of 1846. After nearly a half century of independence, Mexico had made relatively little economic or political progress, and the peasantry continued to suffer. In 1858 Benito Juarez, a Zapotec from Oaxaca, became president. He attempted to eliminate the role of the Roman Catholic church in the nation by appropriating its land and prerogatives. In 1859 the Ley Lerdo was issued--separating church and state, abolishing monastic orders, and nationalizing church property. Juarez had anticipated that Indians and peasants would reacquire the 50 percent of the nation's land formerly held by the church, but the properties were quickly purchased by the elite. Because of the many years of economic and political chaos that had elapsed, Mexico was financially insolvent. In 1861 Juarez announced a suspension of payment on foreign loans, and the British, Spanish, and French occupied Veracruz in order to collect the Mexican debts. The British and Spanish quickly withdrew, but France overthrew the Mexican government and in 1864 declared Mexico an empire with Maximilian I of Austria as emperor. During the war with the French, the Mexican armies won a major battle on May 5, 1862, despite being severely outnumbered and underarmed. That victory is celebrated as Cinco de Mayo, a national holiday. Because of its own Civil War, the United States was unable to enforce its Monroe Doctrine, which prohibited European involvement in the Americas. At the close of the Civil War, however, the United States threatened to send troops into Mexico, and the French army withdrew from the country. Maximilian was executed by the Mexicans in 1867. After the fall of the French and several years of turmoil, Porfirio Diaz emerged to become president in 1877 and, except for four years, ruled as an absolute dictator until 1910. During his reign Diaz encouraged foreign investment and attempted to modernize the nation. He helped to increase the GDP fivefold, expanded both exports and imports, saw gold and silver production increase from 25 million to 160 million dollars a year, and built more than 15,000 miles (24,000 kilometers) of railway.
During this same period Diaz lined his pockets and gave away huge concessions of land to friends and foreign speculators. By 1910 more than 95 percent of rural families had become landless--debt peons because of government expropriation of communally held farm villages. Because Diaz had surrounded himself with friends and cronies who gained economic and political power, there was little opportunity for outsiders in the process even if they were upper-class Mexicans. Francisco Madero, born into a wealthy mining and ranching family of northern Mexico, is credited with instigating the Mexican Revolution. After the fraudulent election of 1910, Madero led a revolutionary movement that in 1911 captured the isolated border city of Ciudad Juarez. An old man in ill health, Diaz was forced to resign, and Madero was elected president on a platform promising social reform. Madero was idealistic but politically inept. As a result his presidency was short-lived and chaotic. Felix Diaz, Porfirio's nephew, and Gen. Victoriano Huerta joined together in a rebellion that ousted Madero. He and his vice-president, Pino Suarez, were executed by the military in February 1913. Huerta became president, but counterrevolutions broke out in the north. They were led by Gen. Venustiano Carranza, a follower of Madero and governor of Coahuila, with Pancho Villa and Gen. Alvaro Obregon. Peasants in the south, disillusioned with Madero's ineffectiveness, rallied behind the charismatic Indian revolutionary Emiliano Zapata. While the northern revolutionaries were largely interested in access to power, Zapata and his followers, the zapatistas, demanded land and liberty for the peasantry. During the next few years disorder and chaos reigned. In 1915 Carranza overthrew Huerta to become president, but in the process he alienated Villa, among others. Zapata was killed shortly after Carranza came to power, but his ideal of agrarian reform became a cornerstone of the revolution. Villa returned to Chihuahua and raided border towns in the southwestern United States, including Columbus, N.M., where a number of Americans were killed. American Gen. John J. Pershing was sent into Mexico to capture Villa but was unsuccessful. (See also Villa, Pancho; Zapata, Emiliano.) The major accomplishment of the Carranza period was the Constitution of 1917, which sought to destroy the feudalism that had existed in Mexico for 400 years. After Carranza's assassination in 1920, General Obregon ascended to the presidency. A strong individual, he was both willing and able to push through social reforms. His successor in 1924 was Gen. Plutarco Elias Calles, a longtime political ally. Calles was vigorously antichurch and was also unfriendly to foreign capital investment. Only through diplomatic intervention was Calles persuaded to reopen churches that had closed and to become less hostile to the foreign governments he had alienated. Obregon was elected to a second term in 1928 but was assassinated that same year. Calles, who had founded the National Revolutionary party, the predecessor of the Institutional Revolutionary party (PRI) that still controls the nation, filled the office of interim president with three successive puppet presidents. The election of Gen. Lazaro Cardenas in 1934 changed the politics of the nation. Cardenas expelled Calles and developed a vigorous six-year plan to modernize the country.
He redistributed more land than did all of his predecessors combined, built rural schools, nationalized the petroleum industry and strengthened the unions. Miguel Aleman Valdes, president from 1946 to 1952, was responsible for massive public-works projects, including irrigation schemes in the northwest and hydroelectric power in the south. Luis Echeverria Alvarez (1970-76) devalued the peso after nearly 25 years of parity with the United States dollar. Jose Lopez Portillo (1976-82) directed the frantic economic growth of the oil boom. Miguel de la Madrid Hurtado (1982-88) inherited an economy that had been transformed by a rapid decrease in international oil prices as well as huge foreign debts. In July 1988 the PRI candidate Carlos Salinas de Gortari was elected president in a vote marred by charges of widespread fraud. In 1991, President Salinas ordered the immediate closing of Mexico City's largest government-operated refinery in a move to combat the city's air-pollution crisis. The giant refinery would be replaced by public parks and green spaces. Salinas was succeeded in 1994 by Ernesto Zedillo Ponce de Leon, who was elected after the leading candidate, Luis Donaldo Colosio, was assassinated. In July 1996 Zedillo and the country's main opposition parties signed a landmark agreement toward political reform. The pact eliminated the PRI's control of election procedures and ballot counting and placed limits on campaign spending. The agreement added 17 new amendments to Mexico's constitution. Sixty-eight years of uninterrupted legislative rule by PRI came to an end in July 1997 as Mexican voters handed control of the country's lower house of parliament to two opposition parties nominally allied against the PRI. While the PRI won the largest individual share of the vote, finishing with approximately 39 percent of the vote, it failed to win an independent majority in the lower house for the first time since 1929. The National Action party (PAN) finished second in the election, capturing 27 percent of the vote, and the Party of the Democratic Revolution (PRD) won nearly 26 percent of the vote. Political analysts were nearly unanimous in stating that the election results indicated that the Mexican political system had begun to move away from thinly veiled one-party rule and toward genuine multiparty democracy. Nowhere was the changing political atmosphere more evident than in Mexico City, where PRD leader Cuahtemoc Cardenas Solorzano won a landslide victory in the mayoral race.

Recent Relations with the United States

Relations between the United States and Mexico fluctuated in the 20th century. A long-standing border dispute was settled in 1963, and in 1992 the two countries, along with Canada, signed the broadest trade agreement ever reached between them. The continent-wide North American Free Trade Agreement (NAFTA) went into effect in 1994. In addition, the United States and Mexico have worked cooperatively to deal with the flow of illegal narcotics traffic from Mexico to the United States. The question of illegal immigration and the treatment of illegals in the United States is also a source of irritation between the nations.
Unrest in Chiapas

Tensions between pro-government paramilitary forces and the anti-government Zapatista National Liberation Army (EZLN) exploded during the first two weeks of January in 1994, when EZLN guerrillas staged an uprising throughout Chiapas in protest against the Mexican government's treatment of Mexico's large but impoverished Indian community. The 1994 uprising left at least 140 people dead, and it sparked an ongoing struggle that raged throughout the southern state of Chiapas and claimed between 300 and 600 lives in the ensuing years. In December 1997 the village of Acteal in Chiapas became the site of one of the bloodiest massacres in recent Mexican history, as an armed paramilitary group slaughtered at least 45 people and wounded dozens more. Although the motive for the attack, as well as the identities of the perpetrators, remained unclear, villagers from Acteal suggested that pro-government guerrillas had staged the attack to retaliate for Acteal's support of the EZLN, noting that Acteal's villagers had been strong supporters of the anti-government peasant rebellion that began in Chiapas in 1994.

A project by History World International
http://history-world.org/mexico.htm
Periodontitis means "inflammation around the tooth" – it is a serious gum infection that damages the soft tissue and bone that support the tooth. All periodontal diseases, including periodontitis, are infections which affect the periodontium. The periodontium comprises the tissues around a tooth – the tissues that support the tooth. With periodontitis, the alveolar bone around the teeth is slowly and progressively lost. Microorganisms, such as bacteria, stick to the surface of the tooth and multiply – an overactive immune system reacts with inflammation. Untreated periodontitis will eventually result in tooth loss, and may increase the risk of stroke, heart attack and other health problems. Bacterial plaque, a sticky, colorless membrane that develops over the surface of teeth, is the most common cause of periodontal disease. In dentistry, periodontics deals with the prevention, diagnosis and treatment of diseases involving the gums and structures which support teeth. There are eight dental specialties, of which periodontics is one. If you want dental implants, you see a periodontist. In most cases, periodontitis is preventable. It is usually caused by poor dental hygiene. According to Medilexicon's medical dictionary, periodontitis is: 1. Inflammation of the periodontium. 2. A chronic inflammatory disease of the periodontium occurring in response to bacterial plaque on the adjacent teeth; characterized by gingivitis, destruction of the alveolar bone and periodontal ligament, apical migration of the epithelial attachment resulting in the formation of periodontal pockets, and ultimately loosening and exfoliation of the teeth.

What is the difference between periodontitis and gingivitis?

Gingivitis occurs before periodontitis. Gingivitis usually refers to gum inflammation while periodontitis refers to gum disease and the destruction of tissue and/or bone. Initially, with gingivitis, bacterial plaque accumulates on the surface of the tooth, causing the gums to go red and inflamed; teeth may bleed when brushing them. Even though the gums are irritated and bothersome, the teeth are not loose. There is no irreversible damage to bone or surrounding tissue. Untreated gingivitis can progress to periodontitis. With periodontitis, the gum and bone pull away from the teeth, forming large pockets. Debris collects in the spaces between the gums and teeth, and infects the area. The patient's immune system attacks bacteria as the plaque spreads below the gum line. Bone and connective tissue that hold the tooth start to break down – this is caused by toxins produced by the bacteria. Teeth become loose and can fall out. Put simply, periodontitis involves irreversible changes to the supporting structures of the teeth, while gingivitis does not.

What are the signs and symptoms of periodontitis?

A symptom is something we feel and describe to the doctor, while a sign is something others, including the doctor, can see. For example, pain is a symptom while redness or inflammation is a sign.
Signs and symptoms of periodontitis can include:
- Inflamed (swollen) gums; gum swelling recurs
- Gums are bright red, sometimes purple
- Gums hurt when touched
- Gums recede, making teeth look longer
- Extra spaces appear between the teeth
- Pus may appear between the teeth and gums
- Bleeding when brushing teeth
- Bleeding when flossing
- Metallic taste in the mouth
- Halitosis (bad breath)
- Loose teeth
- The patient's "bite" feels different because the teeth do not fit together the same way

What are the causes of periodontitis?
- Dental plaque forms on teeth – this is a pale-yellow biofilm that develops naturally on teeth. It is formed by bacteria that try to attach themselves to the tooth's smooth surface.
- Brushing teeth gets rid of plaque, but it soon builds up again within a day or so.
- If it is not removed, within two or three days it hardens into tartar. Tartar is much harder to remove than plaque. Another name for tartar is calculus. Getting rid of tartar requires a professional – you cannot do it yourself.
- Plaque can gradually and progressively damage teeth and surrounding tissue. At first, the patient may develop gingivitis – inflammation of the gum around the base of the teeth.
- Persistent gingivitis can result in pockets developing between the teeth and gums. These pockets fill up with bacteria.
- Bacterial toxins and our immune system's response to infection start destroying the bone and connective tissue that hold teeth in place. Eventually the teeth start becoming loose, and can even fall out.

What are the risk factors for periodontitis?

A risk factor is something that increases the risk of developing a condition or disease. For example, obesity is a risk factor for type 2 diabetes – this means that obese people have a higher chance of developing diabetes. The following risk factors are linked to a higher risk of periodontitis:
- Smoking – regular smokers are much more likely to develop gum problems. Smoking also undermines the efficacy of treatments.
- Hormonal changes in females – puberty, pregnancy, and the menopause are moments in life when a female's hormones undergo changes. Such changes raise the risk of developing gum diseases.
- Diabetes – patients who live with diabetes have a much higher incidence of gum disease than other individuals of the same age.
- AIDS – people with AIDS have more gum diseases.
- Cancer – cancer, and some cancer treatments, can make gum diseases more of a problem.
- Some drugs – some medications that reduce saliva are linked to gum disease risk.
- Genetics – some people are more genetically susceptible to gum diseases.

A qualified dentist should find it fairly straightforward to diagnose periodontitis. The dentist will ask the patient questions regarding symptoms and carry out an examination of his/her mouth. The dentist will examine the patient's mouth using a periodontal probe – a thin, silver stick-like object with a bend at one end. The probe is inserted next to the tooth, under the gum line. If the tooth is healthy, the probe should not slide far below the gum line. In cases of periodontitis, the probe will reach deeper under the gum line. (Figure caption: two types of periodontal probes – 1. Michigan O Probe; 2. Naber's Probe.) The dentist may order an X-ray to see what condition the jaw bone and teeth are in.

What are the treatment options for periodontitis?

The main aim of the periodontist, dentist or dental hygienist, when treating periodontitis, is to clean out bacteria from the pockets around the teeth and prevent further destruction of bone and tissue.
For best treatment results, the patient must maintain good oral hygiene and care. This involves brushing teeth at least twice a day and flossing once per day. If there is enough space between the teeth, an interdental brush (Proxi-brush) is recommended. Soft-picks can be used when the space between the teeth is smaller. Patients with arthritis and others with dexterity problems may find that using an electric toothbrush is better for a thorough clean. It is important that the patient understands that periodontitis is a chronic (long-term) inflammatory disease – this means oral hygiene must be maintained for life. This will also involve regular visits to a dentist or dental hygienist. It is important to remove plaque and calculus (tartar) to restore periodontal health. The healthcare professional will clean below the gumline non-surgically; this procedure is called scaling and debridement. Sometimes an ultrasonic device may be used. In the past, root planing was used (the cemental layer was removed, as well as calculus). Medications that may be used alongside these procedures include:
- Prescription antimicrobial mouthrinse – for example chlorhexidine. It controls bacteria when treating gum disease, as well as after surgery. Patients use it like they would a regular mouthwash.
- Antiseptic "chip" – this is a small piece of gelatin which is filled with chlorhexidine. It controls bacteria and reduces periodontal pocket size. This medication is placed in the pockets after root planing. The medication is slowly released over time.
- Antibiotic gel – a gel that contains doxycycline, an antibiotic. This medication controls bacteria and shrinks periodontal pockets. It is placed in the pockets after scaling and root planing. It is a slow-release medication.
- Antibiotic microspheres – minuscule particles containing minocycline, an antibiotic. Also used to control bacteria and reduce periodontal pocket size. They are placed into pockets after scaling and root planing. A slow-release medication.
- Enzyme suppressant – keeps destructive enzymes in check with a low dose of doxycycline. Some enzymes can break down gum tissue; this medication holds back the body's enzyme response. Taken orally as a pill, and used with scaling and root planing.
- Oral antibiotics – either in capsule or tablet form, taken orally. They are used short-term for the treatment of acute or locally persistent periodontal infection.

If good oral hygiene and non-surgical treatments are not enough, the following surgical interventions may be required:
- Flap surgery – the healthcare professional performs flap surgery to remove calculus in deep pockets, or to reduce the pocket so that keeping it clean is easier. The gums are lifted back and the tartar is removed. The gums are then sutured back into place so they fit closely to the tooth. After surgery, the gums will heal and fit tightly around the tooth. In some cases the teeth may eventually seem longer than they used to.
- Bone and tissue grafts – this procedure helps regenerate bone or gum tissue that has been destroyed. With dental bone grafting, new natural or synthetic bone is placed where bone was lost, promoting bone growth. In a procedure called guided tissue regeneration, a small piece of mesh-like material is inserted between the gum tissue and bone.
This stops the gum from growing into bone space, giving the bone and connective tissue a chance to regrow. The dentist may also use special proteins (growth factors) that help the body regrow bone naturally. The dental professional may suggest a soft tissue graft – tissue taken from another part of the mouth, or synthetic material, is used to cover exposed tooth roots. Experts say it is not possible to predict how successful these procedures are – each case is different. Treatment results also depend on how advanced the disease is, how well the patient adheres to a good oral hygiene program, as well as other factors, such as smoking status.

What are the complications of periodontitis?

The most common complication from periodontitis is the loss of teeth. However, patients with periodontitis are also at a higher risk of having respiratory problems, stroke, coronary artery disease, and low birth weight babies. Pregnant women with bacterial infections that cause moderate-to-severe periodontal disease have a higher risk of having a premature baby.
http://www.wtnperioblog.com/gum-disease/
Bank swallows, or sand martins as they are known in Europe and Asia, are one of the few small passerine birds that have an almost cosmopolitan distribution. They migrate between discrete breeding and wintering ranges. Bank swallow distribution in the breeding range is most limited by suitable nesting habitat. Winter distribution is influenced by appropriate foraging areas. (Garrison, 1999) In the Americas, bank swallows breed throughout much of Alaska and Canada to the maritime provinces and south to the mid-Atlantic United States, throughout much of the Appalachian chain, along the Ohio River Valley to Missouri, west throughout much of Kansas, along the Rocky Mountain Chain into New Mexico, and in the mountainous regions of Utah, Nevada, and northeastern California. They also breed along the Rio Grande river in Texas and northern Mexico. In winter, American populations migrate to throughout South America and along the western coastal slopes of Mexico. They are rare visitors to some Antillean islands in winter. (Garrison, 1999) In the Old World, bank swallows (or sand martins) breed throughout northern Eurasia, from the British Isles, across Scandinavia, northern Russia, and Siberia, and as far south as the Mediterranean, Middle East, the Nile River valley, northern, coastal Africa, northwestern Africa, Iran, Afghanistan, India, and Pakistan to as far east as southeastern China and Japan. They winter throughout the Arab Peninsula and Africa, including Madagascar. They can also be found throughout much of southern and southeastern Asia in winter, including the Philippine Islands. (Garrison, 1999) Their scientific name (Riparia riparia) refers to the preferred breeding habitat of bank swallows. They nest in small to large colonies in soft banks or bluffs along rivers, streams, and coastal areas. They prefer the eroding banks of low-gradient, meandering rivers and streams. They also use sandy coastal bluffs or cliffs. Man-made habitats are now also used, including gravel pits, quarries, and road cuts. They are found from sea level to 2100 meters elevation, but most populations occur in lowland river valleys and coastal areas. Important foraging habitats include wetlands, large bodies of water, grasslands, agricultural areas, and open woodlands. Bank swallows mainly migrate along large bodies of open water, such as marshes, coastal areas, estuaries, and large rivers. In winter they are seen mainly in open habitats with large bodies of water and grasslands, savannas, or agricultural areas. (Garrison, 1999) Bank swallow populations worldwide vary slightly in plumage color and size, but variation seems to be clinal. Their ability to disperse over very large distances suggests that gene flow can occur at continental levels at least. At one point 8 subspecies were recognized, but currently only 3 subspecies worldwide are recognized: R. r. riparia, a cosmopolitan subspecies, R. r. diluta a subspecies found throughout northern and central Asia, and R. r. shelleyi, found from Egypt to northeastern Africa. (Garrison, 1999) Bank swallows are smallish swallows with grayish-brown plumage on the head, back, wings, and tail. The flight feathers of the wings and tail have a slightly darker plumage color and there is a brown band that stretches across the breast. The chin, throat, belly, and undertail coverts are white. Juveniles may have buffy or whitish upperparts and a pink wash to the throat. Their tails are slightly notched. 
(Garrison, 1999) Bank swallows can be confused with other, small brownish swallows. In North America this includes northern rough-winged swallows (Stelgidopteryx serripennis), which lacks the breast band, and juvenile tree swallows (Tachycineta bicolor), which are larger and differ in some plumage characteristics. In South America they may be confused with brown-chested martins (Progne tapera), which are much larger (30-40 g). Bank swallows may also be distinguished by their voice and their flight pattern: they hold their wings at a sharp angle in flight and use quick, flicking wing beats. (Garrison, 1999) Average daily metabolic rates for bank swallows have been measured at 8.99 to 11.55 cm3 CO2/g/hr. (Garrison, 1999) Bank swallows are monogamous and defend their nesting site together. Males begin to excavate burrows when they arrive on their breeding grounds. Preferred burrow sites are in soft, but stable soils, most often higher on banks or slopes. Burrows are dug perpendicular to the bank face and average 58.8 cm in length when complete. Once the nest burrow is about 30 cm long, they will begin to sit in the entrance and sing to attract females. They will also perform flight displays outside of the burrow entrance to attract females. The pair bond is formed as a female begins to sing in response to the male and perch near the burrow. Males and females will sleep together in the nest burrow and most copulations occur there. (Garrison, 1999) Both sexes, however, will attempt extra-pair copulations. Male bank swallows assess female mass via flight characteristics, such as speed of ascent, in order to determine which females are most likely to be in a pre-laying or laying condition. Females that are heaviest are also at their most fertile condition, making them the best targets for attempts at extra-pair copulations. However, both sexes also guard their mates so extra-pair copulations may not be terribly common. (Garrison, 1999; Jones, 1986) Once a mated pair is formed at an excavated burrow, females will begin building a nest in the burrow, along with helping with any additional excavation. Nests are lined with grass, feathers, and other fine materials in the area. Females begin to lay eggs as early as April and into July in some areas. Most pairs attempt only 1 clutch per year, unless their first clutch is destroyed early in the nesting season. Females lay from 1 to 9, but usually 4 to 5, white eggs every day until the full clutch size is reached. Females begin incubating the clutch 1 to 2 days before all eggs are laid. Incubation takes 13 to 16 days and eggs hatch over the course of several days. Hatching in colonies is generally synchronous. Fledging occurs at around 20 days after hatching and parents continue to feed their young for 3 to 5 days after fledging. Once they become independent, young bank swallows gather in flocks of juveniles and adults. They are forced away from their natal burrow by their parents, but often gather in small groups at other burrows to rest. Males and females can breed in their first year after hatching. (Garrison, 1999) Male and female bank swallows share in incubating young, which allows them to lay eggs earlier in the season, when the weather is colder, than other swallow species (Hirundinidae) in which females only incubate eggs (such as Hirundo rustica). However, females do most incubation. Both parents sleep in the nest burrow at night. Young are altricial at hatching and parents brood them for 7 to 10 days. 
Both parents feed the young and help to protect them from predators until they are 23 to 25 days old, a few days after they have left the nest burrow. (Garrison, 1999; Turner, 1982) The yearly recruitment of bank swallows may be strongly influenced by conditions in the wintering habitat, which influence survival of juveniles. A study of a Hungarian population that winters in the Sahel region of Africa, suggested that winter, Sahelian rainfall was related to adult population size in the following year on the breeding range. Average annual mortality estimates for adults are approximately 60%, mortality in juveniles is higher. Two bank swallows lived to 9 years old in the wild. (Garrison, 1999; Szep, 1994) Bank swallows are susceptible to the effects of unseasonably cold weather, which makes it difficult for them to find insect prey and meet their energy demands. Nestlings also die when burrows collapse. (Garrison, 1999) Bank swallows are gregarious, living and breeding in colonies. Although colonial living has been demonstrated to increase exposure to parasites and increase levels of competition for food, nesting resources, and mates, bank swallows that live in larger colonies are more successful at detecting and defending against diurnal avian predators. Bank swallows may engage in communal preening and roosting. They will also sunbathe in groups. They are very social and will roost in direct contact with others. Bank swallows will cooperate to mob predators and may forage together. (Hoogland and Sherman, 1976) Bank swallows migrate fairly long distances, often in flocks with other swallows (Hirundinidae). They arrive on their breeding grounds in early spring and leave in late summer. On their winter range, bank swallow populations may be nomadic. Migration seems to occur primarily along waterways. They forage from early in the morning through dusk. Like most swallows (Hirudinidae), bank swallows are fast and agile in flight. Their flight is described as "fluttery," with rapid wing beats and short glides. Their wings are held bent in flight, unlike most other swallows. They will forcefully hit, and then bounce off of, the surface of water to drink, gather nesting material, grab an insect, or bathe themselves. Bank swallows are ungainly on the ground and are mainly seen perching or in flight. (Garrison, 1999) Nesting pairs defend only their nest burrow and the area immediately around the burrow. Home range sizes are not reported. (Garrison, 1999) Young bank swallows use a food-begging call and a signature call to their parents at the nest. Parents recognize the calls of their own offspring. Parents respond with a feeding call when they return to the nest to feed their young, the feeding call is described as a set of sweet notes. Contact calls are the most commonly used call and are described as a raspy or strident "tschr." Males also sing to advertise territories and attract females for mating. Males can sing at the nest and in flight. The song sounds like a rapid repetition of the contact call, giving it a chattering quality. Bank swallows also use warning and alarm calls when they observe predators. Warning calls are given to colony-mates while alarm calls are directed at predators when they are being mobbed. (Garrison, 1999) Males also perform display flights to attract females. (Garrison, 1999) Bank swallows eat almost exclusively insects that they catch in flight. Insect prey are generally flying insects, although occasionally they take terrestrial or aquatic insects or insect larvae. 
Most foraging occurs over bodies of water or large areas of short-growing vegetation, such as meadows, agricultural fields, or wetlands. They sometimes forage over forest canopies. Bank swallows drink in flight as well, by skimming the water surface with their lower mandible. The size of colonies may impact whether individuals can get information on the location of prey from other individuals. In North America, swallows in relatively small colonies (5-55 pairs) did not transmit information on foraging to others. In Hungary, however, swallows in a large colony (2100 pairs) foraged synchronously and seemed to transmit information on foraging to other colony members. Breeding adults generally forage within 200 m of their nest, although they may have to forage farther away. If foraging distances are higher, parents return to nests with larger food boluses. (Garrison, 1999; Hoogland and Sherman, 1976) Bank swallows forage from dawn to dusk. One study of stomach contents suggested 99.8% of bank swallow diet is insects, with approximately 33.5% ants, bees, and wasps (Hymenoptera), 26.6% flies (Diptera), 17.9% beetles (Coleoptera), 10.5% mayflies (Ephemeroptera), 8% bugs (Hemiptera), 2.1% dragonflies (Odonata), and 1.2% moths and butterflies (Lepidoptera). Other studies yielded similar results, although proportions of prey varied by region and season. (Garrison, 1999) Bank swallows that live in larger colonies are better able to detect and defend against avian predators. They cooperate to mob predators that threaten their colony. Most predation is on nestlings and eggs in burrows. Eurasian badgers (Meles meles) have been observed excavating burrows and it is likely that other terrestrial mammals attempt to take advantage of bank swallow colonies. Snakes are important predators of nestlings, including gopher snakes (Pituophis melanoleucus) and black rat snakes (Pantherophis obsoletus) in North America. American kestrels (Falco sparverius), hobbies (Falco subbuteo), and other bird-specialist raptors will attempt to take flying adults and inexperienced fledglings. Bank swallows are often unsuccessful in deterring predators via mobbing. They have been observed deterring predation by blue jays (Cyanocitta cristata), however. (Garrison, 1999; Hoogland and Sherman, 1976) Bank swallows that live in larger colonies suffer higher rates of flea infestation and nestlings with fleas had lower body masses than nestlings without fleas. Flea species include Ceratophyllus styx, Celsus celsus, and Ceratophyllus riparius. Larval blowflies parasitize bank swallows as well, including Protocalliphora splendida, Protocalliphora braueri, Protocalliphora hirundo, Protocalliphora metallica, Protocalliphora sialia, and Protocalliphora chrysorrhoea. This last species seems to be restricted to the nests of bank swallows throughout the Holarctic. Mites (Liponyssus sylviarum, Atricholaelaps glasgowi), lice (Myrsidea dissimilis), feather lice (Mallophaga), and nematodes (Acuaria attenuata) are also found in bank swallows. (Garrison, 1999; Hoogland and Sherman, 1976) Bank swallows are important predators of flying insects, especially where they concentrate around breeding colonies. European starlings and house sparrows may take over their burrows. Other sand and bank burrowing birds, such as kingfishers, barn owls, northern rough-winged swallows, and cliff swallows are tolerated by bank swallows. 
(Garrison, 1999) Through their predation on flying insects, bank swallows can help to control populations of pest insects, such as mosquitoes and agricultural pests. (Garrison, 1999) There are no known adverse effects of bank swallows on humans. Bank swallows are widespread and population sizes are large. The IUCN considers them "least concern." However, local populations are impacted by loss of nesting habitat. In California they are listed as threatened, they are considered sensitive in Oregon, and a species of special concern in Kentucky. Bank swallows are fairly tolerant of human activities and will even nest in active quarries. (BirdLife International 2008, 2008; Garrison, 1999) Tanya Dewey (author), Animal Diversity Web.
References:
BirdLife International. 2008. "Riparia riparia" (On-line). IUCN Red List of Threatened Species. Accessed April 01, 2009 at http://www.iucnredlist.org/details/147936.
Garrison, B. 1999. Riparia riparia. Birds of North America, 414: 1-20. Accessed March 30, 2009 at http://bna.birds.cornell.edu.proxy.lib.umich.edu/bna/species/414.
Hoogland, J., P. Sherman. 1976. Advantages and disadvantages of bank swallow (Riparia riparia) coloniality. Ecological Monographs, 46: 33-58. Accessed March 30, 2009 at http://www.jstor.org/pss/1942393.
Jones, G. 1986. Sexual chases in sand martins (Riparia riparia): cues for males to increase their reproductive success. Behavioral Ecology and Sociobiology, 19: 179-185.
NatureServe. 2008. "Riparia riparia" (On-line). NatureServe Explorer 2008. Accessed March 30, 2009 at http://www.natureserve.org/explorer/.
Szep, T. 1994. Relationship between west African rainfall and the survival of central European Sand Martins Riparia riparia. Ibis, 137: 162-168.
Turner, A. 1982. Timing of laying by swallows (Hirundo rustica) and sand martins (Riparia riparia). Journal of Animal Ecology, 51: 29-46.
http://animaldiversity.ummz.umich.edu/site/accounts/information/Riparia_riparia.html
Water is a commodity, and water rights can be freely traded in an open market. Proponents of the free market approach argue that it leads to the most efficient allocation of water resources, as it would for any other commodity. However, unlike many commodities, water is critical for human life, for many human activities, and for environmental resources. When such an essential commodity becomes scarce, as frequently happens in Australia, a land prone to sudden and dramatic droughts, severe problems can occur quickly. In Australia's Murray-Darling Basin, the country's largest agricultural region, the government had historically controlled the distribution of water rights. Under these controls, however, a select few controlled a large share of the water. To resolve this problem of overallocation, a free market approach was put in place in the early 1990s. Crase et al. summarize the advantages and possible pitfalls of the free market approach in the Murray-Darling Basin. They suggest that making water rights available in an open market generally had positive outcomes for the region: the approach replaced state controls that had allocated water inefficiently and created a situation in which supply and demand dictate price, to which farmers appear to respond efficiently. However, the authors note that the free market approach could lead to speculation, in which people who have little practical use for water rights hoard them to drive up the price, leaving less water available for others who might need it. In conclusion, the authors advocate the free market-based approach but caution that such a system also has the potential to create economic, social, environmental, and ecological problems. More information: Crase et al., Enhancing agrienvironmental outcomes: Market-based approaches to water in Australia's Murray-Darling Basin, Water Resources Research, doi:10.1029/2012WR012140, 2012. http://dx.doi.org/10.1029/2012WR012140
http://phys.org/news/2012-10-pros-cons-case-australia.html
Origins of the American Civil War
Historians debating the origins of the American Civil War focus on the reasons seven states declared their secession from the U.S. and joined to form the Confederate States of America (the "Confederacy"). The main explanation is slavery, especially Southern anger at the attempts by Northern antislavery political forces to block the expansion of slavery into the western territories. Southern slave owners held that such a restriction on slavery would violate the principle of states' rights. Abraham Lincoln won the 1860 presidential election without being on the ballot in ten of the Southern states. His victory triggered declarations of secession by seven slave states of the Deep South, and their formation of the Confederate States of America, even before Lincoln took office. Neither the nationalists (in the North and elsewhere) nor any foreign government recognized the secessions, and the U.S. government in Washington refused to abandon its forts that were in territory claimed by the Confederacy. War began in April 1861 when Confederates attacked Fort Sumter, a major U.S. fortress in South Carolina, the state that had been the first to declare its independence. As a panel of historians emphasized in 2011, "while slavery and its various and multifaceted discontents were the primary cause of disunion, it was disunion itself that sparked the war." States' rights and the tariff issue became entangled in the slavery issue, and were intensified by it. Other important factors were party politics, abolitionism, Southern nationalism, Northern nationalism, expansionism, sectionalism, economics and modernization in the Antebellum period. The United States had become a nation of two distinct regions. The free states in New England, the Northeast, and the Midwest had a rapidly growing economy based on family farms, industry, mining, commerce and transportation, with a large and rapidly growing urban population. Their growth was fed by a high birth rate and large numbers of European immigrants, especially Irish, British and German. The South was dominated by a settled plantation system based on slavery. There was some rapid growth taking place in the Southwest (e.g., Texas), based on high birth rates and high migration from the Southeast, but it had a much lower immigration rate from Europe. The South also had fewer large cities, and little manufacturing except in border areas. Slave owners controlled politics and economics, though about 70% of Southern whites owned no slaves and usually were engaged in subsistence agriculture. Overall, the Northern population was growing much more quickly than the Southern population, which made it increasingly difficult for the South to continue to influence the national government. By the time of the 1860 election, the heavily agricultural southern states as a group had fewer Electoral College votes than the rapidly industrializing northern states, and Lincoln was able to win the presidency without even being on the ballot in ten Southern states. Southerners felt a loss of federal concern for Southern pro-slavery political demands and sensed that the "Slaveocracy's" continued domination of the Federal government was on the wane. This political calculus provided a very real basis for Southerners' worry about the relative political decline of their region due to the North growing much faster in terms of population and industrial output. 
In the interest of maintaining unity, politicians had mostly moderated opposition to slavery, resulting in numerous compromises such as the Missouri Compromise of 1820. After the Mexican-American War, the issue of slavery in the new territories led to the Compromise of 1850. While the compromise averted an immediate political crisis, it did not permanently resolve the issue of the Slave power (the power of slaveholders to control the national government on the slavery issue). Part of the 1850 compromise was the Fugitive Slave Law of 1850, requiring that Northerners assist Southerners in reclaiming fugitive slaves, which many Northerners found to be extremely offensive. Amid the emergence of increasingly virulent and hostile sectional ideologies in national politics, the collapse of the old Second Party System in the 1850s hampered efforts of the politicians to reach yet one more compromise. The compromise that was reached (the 1854 Kansas-Nebraska Act) outraged too many northerners, and led to the formation of the Republican Party, the first major party with no appeal in the South. The industrializing North and agrarian Midwest became committed to the economic ethos of free-labor industrial capitalism. Arguments that slavery was undesirable for the nation had long existed, and early in U.S. history were made even by some prominent Southerners. After 1840, abolitionists denounced slavery as not only a social evil but a moral wrong. Many Northerners, especially leaders of the new Republican Party, considered slavery a great national evil and believed that a small number of Southern owners of large plantations controlled the national government with the goal of spreading that evil. Southern defenders of slavery, for their part, increasingly came to contend that blacks actually benefited from slavery, an assertion that alienated Northerners even further. Early Republic At the time of the American Revolution, the institution of slavery was firmly established in the American colonies. It was most important in the six southern states from Maryland to Georgia, but the total of a half million slaves were spread out through all of the colonies. In the South 40% of the population was made up of slaves, and as Americans moved into Kentucky and the rest of the southwest fully one-sixth of the settlers were slaves. By the end of the war, the New England states provided most of the American ships that were used in the foreign slave trade while most of their customers were in Georgia and the Carolinas. During this time many Americans found it difficult to reconcile slavery with their interpretation of Christianity and the lofty sentiments that flowed from the Declaration of Independence. A small antislavery movement, led by the Quakers, had some impact in the 1780s and by the late 1780s all of the states except for Georgia had placed some restrictions on their participation in slave trafficking. Still, no serious national political movement against slavery developed, largely due to the overriding concern over achieving national unity. When the Constitutional Convention met, slavery was the one issue "that left the least possibility of compromise, the one that would most pit morality against pragmatism. 
In the end, while many would take comfort in the fact that the word slavery never occurs in the Constitution, critics note that the three-fifths clause provided slaveholders with extra representatives in Congress, the requirement of the federal government to suppress domestic violence would dedicate national resources to defending against slave revolts, a twenty-year delay in banning the import of slaves allowed the South to fortify its labor needs, and the amendment process made the national abolition of slavery very unlikely in the foreseeable future. With the outlawing of the African slave trade on January 1, 1808, many Americans felt that the slavery issue was resolved. Any national discussion that might have continued over slavery was drowned out by the years of trade embargoes, maritime competition with Great Britain and France, and, finally, the War of 1812. The one exception to this quiet regarding slavery was the New Englanders' association of their frustration with the war with their resentment of the three-fifths clause that seemed to allow the South to dominate national politics. In the aftermath of the American Revolution, the northern states (north of the Mason-Dixon Line separating Pennsylvania and Maryland) abolished slavery by 1804. In the 1787 Northwest Ordinance, Congress (still under the Articles of Confederation) barred slavery from the Mid-Western territory north of the Ohio River, but when the U.S. Congress organized the southern territories acquired through the Louisiana Purchase, the ban on slavery was omitted. Missouri Compromise In 1819 Congressman James Tallmadge, Jr. of New York initiated an uproar in the South when he proposed two amendments to a bill admitting Missouri to the Union as a free state. The first barred slaves from being moved to Missouri, and the second would free all Missouri slaves born after admission to the Union at age 25. With the admission of Alabama as a slave state in 1819, the U.S. was equally divided with 11 slave states and 11 free states. The admission of the new state of Missouri as a slave state would give the slave states a majority in the Senate; the Tallmadge Amendment would give the free states a majority. The Tallmadge amendments passed the House of Representatives but failed in the Senate when five Northern Senators voted with all the Southern senators. The question was now the admission of Missouri as a slave state, and many leaders shared Thomas Jefferson's fear of a crisis over slavery—a fear that Jefferson described as "a fire bell in the night". The crisis was solved by the Compromise of 1820, which admitted Maine to the Union as a free state at the same time that Missouri was admitted as a slave state. The Compromise also banned slavery in the Louisiana Purchase territory north and west of the state of Missouri along the line of 36–30. The Missouri Compromise quieted the issue until its limitations on slavery were repealed by the Kansas Nebraska Act of 1854. In the South, the Missouri crisis reawakened old fears that a strong federal government could be a fatal threat to slavery. The Jeffersonian coalition that united southern planters and northern farmers, mechanics and artisans in opposition to the threat presented by the Federalist Party had started to dissolve after the War of 1812. 
It was not until the Missouri crisis that Americans became aware of the political possibilities of a sectional attack on slavery, and it was not until the mass politics of the Jackson Administration that this type of organization around this issue became practical. Nullification Crisis The American System, advocated by Henry Clay in Congress and supported by many nationalist supporters of the War of 1812 such as John C. Calhoun, was a program for rapid economic modernization featuring protective tariffs, internal improvements at Federal expense, and a national bank. The purpose was to develop American industry and international commerce. Since iron, coal, and water power were mainly in the North, this tax plan was doomed to cause rancor in the South where economies were agriculture-based. Southerners claimed it demonstrated favoritism toward the North. The nation suffered an economic downturn throughout the 1820s, and South Carolina was particularly affected. The highly protective Tariff of 1828 (also called the "Tariff of Abominations"), designed to protect American industry by taxing imported manufactured goods, was enacted into law during the last year of the presidency of John Quincy Adams. Opposed in the South and parts of New England, the expectation of the tariff’s opponents was that with the election of Andrew Jackson the tariff would be significantly reduced. By 1828 South Carolina state politics increasingly organized around the tariff issue. When the Jackson administration failed to take any actions to address their concerns, the most radical faction in the state began to advocate that the state declare the tariff null and void within South Carolina. In Washington, an open split on the issue occurred between Jackson and his vice-president John C. Calhoun, the most effective proponent of the constitutional theory of state nullification through his 1828 "South Carolina Exposition and Protest". Congress enacted a new tariff in 1832, but it offered the state little relief, resulting in the most dangerous sectional crisis since the Union was formed. Some militant South Carolinians even hinted at withdrawing from the Union in response. The newly elected South Carolina legislature then quickly called for the election of delegates to a state convention. Once assembled, the convention voted to declare null and void the tariffs of 1828 and 1832 within the state. President Andrew Jackson responded firmly, declaring nullification an act of treason. He then took steps to strengthen federal forts in the state. Violence seemed a real possibility early in 1833 as Jacksonians in Congress introduced a "Force Bill" authorizing the President to use the Federal army and navy in order to enforce acts of Congress. No other state had come forward to support South Carolina, and the state itself was divided on willingness to continue the showdown with the Federal government. The crisis ended when Clay and Calhoun worked to devise a compromise tariff. Both sides later claimed victory. Calhoun and his supporters in South Carolina claimed a victory for nullification, insisting that it had forced the revision of the tariff. Jackson's followers, however, saw the episode as a demonstration that no single state could assert its rights by independent action. Calhoun, in turn, devoted his efforts to building up a sense of Southern solidarity so that when another standoff should come, the whole section might be prepared to act as a bloc in resisting the federal government. 
As early as 1830, in the midst of the crisis, Calhoun identified the right to own slaves as the chief southern minority right being threatened: I consider the tariff act as the occasion, rather than the real cause of the present unhappy state of things. The truth can no longer be disguised, that the peculiar domestick [sic] institution of the Southern States and the consequent direction which that and her soil have given to her industry, has placed them in regard to taxation and appropriations in opposite relation to the majority of the Union, against the danger of which, if there be no protective power in the reserved rights of the states they must in the end be forced to rebel, or, submit to have their paramount interests sacrificed, their domestic institutions subordinated by Colonization and other schemes, and themselves and children reduced to wretchedness. The issue appeared again after 1842's Black Tariff. A period of relative free trade after 1846's Walker Tariff reduction followed until 1860, when the protectionist Morrill Tariff was introduced by the Republicans, fueling Southern anti-tariff sentiments once again. Gag Rule debates From 1831 to 1836 William Lloyd Garrison and the American Anti-Slavery Society (AA-SS) initiated a campaign to petition Congress in favor of ending slavery in the District of Columbia and all federal territories. Hundreds of thousands of petitions were sent with the number reaching a peak in 1835. The House passed the Pinckney Resolutions on May 26, 1836. The first of these resolutions stated that Congress had no constitutional authority to interfere with slavery in the states and the second that it "ought not" do so in the District of Columbia. The third resolution, known from the beginning as the "gag rule", provided that: All petitions, memorials, resolutions, propositions, or papers, relating in any way, or to any extent whatsoever, to the subject of slavery or the abolition of slavery, shall, without being either printed or referred, be laid on the table and that no further action whatever shall be had thereon. The first two resolutions passed by votes of 182 to 9 and 132 to 45. The gag rule, supported by Northern and Southern Democrats as well as some Southern Whigs, was passed with a vote of 117 to 68. Former President John Quincy Adams, who was elected to the House of Representatives in 1830, became an early and central figure in the opposition to the gag rules. He argued that they were a direct violation of the First Amendment right "to petition the Government for a redress of grievances". A majority of Northern Whigs joined the opposition. Rather than suppress anti-slavery petitions, however, the gag rules only served to offend Americans from Northern states, and dramatically increase the number of petitions. Since the original gag was a resolution, not a standing House Rule, it had to be renewed every session and the Adams' faction often gained the floor before the gag could be imposed. However in January 1840, the House of Representatives passed the Twenty-first Rule, which prohibited even the reception of anti-slavery petitions and was a standing House rule. Now the pro-petition forces focused on trying to revoke a standing rule. The Rule raised serious doubts about its constitutionality and had less support than the original Pinckney gag, passing only by 114 to 108. 
Throughout the gag period, Adams' "superior talent in using and abusing parliamentary rules" and skill in baiting his enemies into making mistakes enabled him to evade the rule and debate the slavery issues. The gag rule was finally rescinded on December 3, 1844, by a strongly sectional vote of 108 to 80, all the Northern and four Southern Whigs voting for repeal, along with 55 of the 71 Northern Democrats.
Antebellum South and the Union
There had been a continuing contest between the states and the national government over the power of the latter—and over the loyalty of the citizenry—almost since the founding of the republic. The Kentucky and Virginia Resolutions of 1798, for example, had defied the Alien and Sedition Acts, and at the Hartford Convention, New England voiced its opposition to President James Madison and the War of 1812, and discussed secession from the Union.
Southern culture
Although only a minority of free Southerners owned slaves (and, in turn, only a similarly small minority of those slaveholders owned the vast majority of slaves), Southerners of all classes nevertheless defended the institution of slavery– threatened by the rise of free labor abolitionist movements in the Northern states– as the cornerstone of their social order. Based on a system of plantation slavery, the social structure of the South was far more stratified and patriarchal than that of the North. In 1850 there were around 350,000 slaveholders in a total free Southern population of about six million. Among slaveholders, the concentration of slave ownership was unevenly distributed. Perhaps around 7 percent of slaveholders owned roughly three-quarters of the slave population. The largest slaveholders, generally owners of large plantations, represented the top stratum of Southern society. They benefited from economies of scale and needed large numbers of slaves on big plantations to produce profitable labor-intensive crops like cotton. This plantation-owning elite, known as "slave magnates", was comparable to the millionaires of the following century. In the 1850s, as large plantation owners outcompeted smaller farmers, more slaves were owned by fewer planters. Yet, while the proportion of the white population consisting of slaveholders was on the decline on the eve of the Civil War—perhaps falling to below a quarter of free southerners in 1860—poor whites and small farmers generally accepted the political leadership of the planter elite. Several factors helped explain why slavery was not under serious threat of internal collapse from any moves for democratic change initiated from the South. First, given the opening of new territories in the West for white settlement, many non-slaveowners also perceived a possibility that they, too, might own slaves at some point in their life. Second, small free farmers in the South often embraced hysterical racism, making them unlikely agents for internal democratic reforms in the South. The principle of white supremacy, accepted by almost all white southerners of all classes, made slavery seem legitimate, natural, and essential for a civilized society. White racism in the South was sustained by official systems of repression such as the "slave codes" and elaborate codes of speech, behavior, and social practices illustrating the subordination of blacks to whites. For example, the "slave patrols" were among the institutions bringing together southern whites of all classes in support of the prevailing economic and racial order. 
Serving as slave "patrollers" and "overseers" offered white southerners positions of power and honor. These positions gave even poor white southerners the authority to stop, search, whip, maim, and even kill any slave traveling outside his or her plantation. Slave "patrollers" and "overseers" also won prestige in their communities. Policing and punishing blacks who transgressed the regimentation of slave society was a valued community service in the South, where the fear of free blacks threatening law and order figured heavily in the public discourse of the period. Third, many small farmers with a few slaves and yeomen were linked to elite planters through the market economy. In many areas, small farmers depended on local planter elites for vital goods and services including (but not limited to) access to cotton gins, access to markets, access to feed and livestock, and even for loans (since the banking system was not well developed in the antebellum South). Southern tradesmen often depended on the richest planters for steady work. Such dependency effectively deterred many white non-slaveholders from engaging in any political activity that was not in the interest of the large slaveholders. Furthermore, whites of varying social class, including poor whites and "plain folk" who worked outside or in the periphery of the market economy (and therefore lacked any real economic interest in the defense of slavery) might nonetheless be linked to elite planters through extensive kinship networks. Since inheritance in the South was often unequitable (and generally favored eldest sons), it was not uncommon for a poor white person to be perhaps the first cousin of the richest plantation owner of his county and to share the same militant support of slavery as his richer relatives. Finally, there was no secret ballot at the time anywhere in the United States – this innovation did not become widespread in the U.S. until the 1880s. For a typical white Southerner, this meant that so much as casting a ballot against the wishes of the establishment meant running the risk of social ostracization. Thus, by the 1850s, Southern slaveholders and non-slaveholders alike felt increasingly encircled psychologically and politically in the national political arena because of the rise of free soilism and abolitionism in the Northern states. Increasingly dependent on the North for manufactured goods, for commercial services, and for loans, and increasingly cut off from the flourishing agricultural regions of the Northwest, they faced the prospects of a growing free labor and abolitionist movement in the North. Militant defense of slavery With the outcry over developments in Kansas strong in the North, defenders of slavery— increasingly committed to a way of life that abolitionists and their sympathizers considered obsolete or immoral— articulated a militant pro-slavery ideology that would lay the groundwork for secession upon the election of a Republican president. Southerners waged a vitriolic response to political change in the North. Slaveholding interests sought to uphold their constitutional rights in the territories and to maintain sufficient political strength to repulse "hostile" and "ruinous" legislation. Behind this shift was the growth of the cotton industry, which left slavery more important than ever to the Southern economy. 
Reactions to the popularity of Uncle Tom's Cabin (1852) by Harriet Beecher Stowe (whom Abraham Lincoln reputedly called "the little woman that started this great war") and the growth of the abolitionist movement (pronounced after the founding of The Liberator in 1831 by William Lloyd Garrison) inspired an elaborate intellectual defense of slavery. Increasingly vocal (and sometimes violent) abolitionist movements, culminating in John Brown's raid on Harpers Ferry in 1859 were viewed as a serious threat, and—in the minds of many Southerners—abolitionists were attempting to foment violent slave revolts as seen in Haiti in the 1790s and as attempted by Nat Turner some three decades prior (1831). After J. D. B. DeBow established De Bow's Review in 1846, it grew to become the leading Southern magazine, warning the planter class about the dangers of depending on the North economically. De Bow's Review also emerged as the leading voice for secession. The magazine emphasized the South's economic inequality, relating it to the concentration of manufacturing, shipping, banking and international trade in the North. Searching for Biblical passages endorsing slavery and forming economic, sociological, historical and scientific arguments, slavery went from being a "necessary evil" to a "positive good". Dr. J.H. Van Evrie's book Negroes and Negro slavery: The First an Inferior Race: The Latter Its Normal Condition– setting out the arguments the title would suggest– was an attempt to apply scientific support to the Southern arguments in favor of race based slavery. Latent sectional divisions suddenly activated derogatory sectional imagery which emerged into sectional ideologies. As industrial capitalism gained momentum in the North, Southern writers emphasized whatever aristocratic traits they valued (but often did not practice) in their own society: courtesy, grace, chivalry, the slow pace of life, orderly life and leisure. This supported their argument that slavery provided a more humane society than industrial labor. In his Cannibals All!, George Fitzhugh argued that the antagonism between labor and capital in a free society would result in "robber barons" and "pauper slavery", while in a slave society such antagonisms were avoided. He advocated enslaving Northern factory workers, for their own benefit. Abraham Lincoln, on the other hand, denounced such Southern insinuations that Northern wage earners were fatally fixed in that condition for life. To Free Soilers, the stereotype of the South was one of a diametrically opposite, static society in which the slave system maintained an entrenched anti-democratic aristocracy. Southern fears of modernization According to the historian James M. McPherson, exceptionalism applied not to the South but to the North after the North phased out slavery and launched an industrial revolution that led to urbanization, which in turn led to increased education, which in its own turn gave ever-increasing strength to various reform movements but especially abolitionism. The fact that seven immigrants out of eight settled in the North (and the fact that most immigrants viewed slavery with disfavor), compounded by the fact that twice as many whites left the South for the North as vice versa, contributed to the South's defensive-aggressive political behavior. The Charleston Mercury read that on the issue of slavery the North and South "are not only two Peoples, but they are rival, hostile Peoples." As De Bow's Review said, "We are resisting revolution.... 
We are not engaged in a Quixotic fight for the rights of man.... We are conservative." Southern fears of modernity Allan Nevins argued that the Civil War was an "irrepressible" conflict, adopting a phrase first used by U.S. Senator and Abraham Lincoln's Secretary of State William H. Seward. Nevins synthesized contending accounts emphasizing moral, cultural, social, ideological, political, and economic issues. In doing so, he brought the historical discussion back to an emphasis on social and cultural factors. Nevins pointed out that the North and the South were rapidly becoming two different peoples, a point made also by historian Avery Craven. At the root of these cultural differences was the problem of slavery, but fundamental assumptions, tastes, and cultural aims of the regions were diverging in other ways as well. More specifically, the North was rapidly modernizing in a manner threatening to the South. Historian McPherson explains: When secessionists protested in 1861 that they were acting to preserve traditional rights and values, they were correct. They fought to preserve their constitutional liberties against the perceived Northern threat to overthrow them. The South's concept of republicanism had not changed in three-quarters of a century; the North's had.... The ascension to power of the Republican Party, with its ideology of competitive, egalitarian free-labor capitalism, was a signal to the South that the Northern majority had turned irrevocably towards this frightening, revolutionary future. Harry L. Watson has synthesized research on antebellum southern social, economic, and political history. Self-sufficient yeomen, in Watson's view, "collaborated in their own transformation" by allowing promoters of a market economy to gain political influence. Resultant "doubts and frustrations" provided fertile soil for the argument that southern rights and liberties were menaced by Black Republicanism. J. Mills Thornton III, explained the viewpoint of the average white Alabamian. Thornton contends that Alabama was engulfed in a severe crisis long before 1860. Deeply held principles of freedom, equality, and autonomy, as expressed in republican values appeared threatened, especially during the 1850s, by the relentless expansion of market relations and commercial agriculture. Alabamians were thus, he judged, prepared to believe the worst once Lincoln was elected. Sectional tensions and the emergence of mass politics The politicians of the 1850s were acting in a society in which the traditional restraints that suppressed sectional conflict in the 1820s and 1850s– the most important of which being the stability of the two-party system– were being eroded as this rapid extension of mass democracy went forward in the North and South. It was an era when the mass political party galvanized voter participation to an unprecedented degree, and a time in which politics formed an essential component of American mass culture. Historians agree that political involvement was a larger concern to the average American in the 1850s than today. Politics was, in one of its functions, a form of mass entertainment, a spectacle with rallies, parades, and colorful personalities. Leading politicians, moreover, often served as a focus for popular interests, aspirations, and values. Historian Allan Nevins, for instance, writes of political rallies in 1856 with turnouts of anywhere from twenty to fifty thousand men and women. Voter turnouts even ran as high as 84% by 1860. 
An abundance of new parties emerged 1854–56, including the Republicans, People's party men, Anti-Nebraskans, Fusionists, Know-Nothings, Know-Somethings (anti-slavery nativists), Maine Lawites, Temperance men, Rum Democrats, Silver Gray Whigs, Hindus, Hard Shell Democrats, Soft Shells, Half Shells and Adopted Citizens. By 1858, they were mostly gone, and politics divided four ways. Republicans controlled most Northern states with a strong Democratic minority. The Democrats were split North and South and fielded two tickets in 1860. Southern non-Democrats tried different coalitions; most supported the Constitutional Union party in 1860. Many Southern states held constitutional conventions in 1851 to consider the questions of nullification and secession. With the exception of South Carolina, whose convention election did not even offer the option of "no secession" but rather "no secession without the collaboration of other states", the Southern conventions were dominated by Unionists who voted down articles of secession. Historians today generally agree that economic conflicts were not a major cause of the war. While an economic basis to the sectional crisis was popular among the “Progressive school” of historians from the 1910s to the 1940s, few professional historians now subscribe to this explanation. According to economic historian Lee A. Craig, "In fact, numerous studies by economic historians over the past several decades reveal that economic conflict was not an inherent condition of North-South relations during the antebellum era and did not cause the Civil War." When numerous groups tried at the last minute in 1860–61 to find a compromise to avert war, they did not turn to economic policies. The three major attempts at compromise, the Crittenden Compromise, the Corwin Amendment and the Washington Peace Conference, addressed only the slavery-related issues of fugitive slave laws, personal liberty laws, slavery in the territories and interference with slavery within the existing slave states. Economic value of slavery to the South Historian James L. Huston emphasizes the role of slavery as an economic institution. In October 1860 William Lowndes Yancey, a leading advocate of secession, placed the value of Southern-held slaves at $2.8 billion. Huston writes: Understanding the relations between wealth, slavery, and property rights in the South provides a powerful means of understanding southern political behavior leading to disunion. First, the size dimensions of slavery are important to comprehend, for slavery was a colossal institution. Second, the property rights argument was the ultimate defense of slavery, and white southerners and the proslavery radicals knew it. Third, the weak point in the protection of slavery by property rights was the federal government.... Fourth, the intense need to preserve the sanctity of property rights in Africans led southern political leaders to demand the nationalization of slavery– the condition under which slaveholders would always be protected in their property holdings. The cotton gin greatly increased the efficiency with which cotton could be harvested, contributing to the consolidation of "King Cotton" as the backbone of the economy of the Deep South, and to the entrenchment of the system of slave labor on which the cotton plantation economy depended. 
The tendency of monoculture cotton plantings to lead to soil exhaustion created a need for cotton planters to move their operations to new lands, and therefore to the westward expansion of slavery from the Eastern seaboard into new areas (e.g., Alabama, Mississippi, and beyond to East Texas).
Regional economic differences
The South, Midwest, and Northeast had quite different economic structures. They traded with each other and each became more prosperous by staying in the Union, a point many businessmen made in 1860–61. However, Charles A. Beard in the 1920s made a highly influential argument to the effect that these differences caused the war (rather than slavery or constitutional debates). He saw the industrial Northeast forming a coalition with the agrarian Midwest against the Plantation South. Critics challenged his image of a unified Northeast and said that the region was in fact highly diverse with many different competing economic interests. In 1860–61, most business interests in the Northeast opposed war. After 1950, only a few mainstream historians accepted the Beard interpretation, though it was accepted by libertarian economists. As historian Kenneth Stampp, who abandoned Beardianism after 1950, sums up the scholarly consensus: "Most historians...now see no compelling reason why the divergent economies of the North and South should have led to disunion and civil war; rather, they find stronger practical reasons why the sections, whose economies neatly complemented one another, should have found it advantageous to remain united."
Free labor vs. pro-slavery arguments
Historian Eric Foner argued that a free-labor ideology, which emphasized economic opportunity, dominated thinking in the North. By contrast, Southerners described free labor as "greasy mechanics, filthy operators, small-fisted farmers, and moonstruck theorists". They strongly opposed the homestead laws that were proposed to give free farms in the west, fearing the small farmers would oppose plantation slavery. Indeed, opposition to homestead laws was far more common in secessionist rhetoric than opposition to tariffs. Southerners such as Calhoun argued that slavery was "a positive good", and that slaves were more civilized and morally and intellectually improved because of slavery.
Religious conflict over the slavery question
A body of scholarship led by Mark Noll has highlighted the fact that the American debate over slavery became a shooting war in part because the two sides reached diametrically opposite conclusions based on reading the same authoritative source of guidance on moral questions: the King James Version of the Bible. After the American Revolution and the disestablishment of government-sponsored churches, the U.S. experienced the Second Great Awakening, a massive Protestant revival. Without centralized church authorities, American Protestantism was heavily reliant on the Bible, which was read in the standard 19th-century Reformed hermeneutic of "common sense", literal interpretation as if the Bible were speaking directly about the modern American situation instead of events that occurred in a much different context, millennia ago. By the mid-19th century this form of religion and Bible interpretation had become a dominant strand in American religious, moral and political discourse, almost serving as a de facto state religion. 
The problem that this caused for resolving the slavery question was that the Bible, interpreted under these assumptions, seemed to clearly suggest that slavery was Biblically justified: - The pro-slavery South could point to slaveholding by the godly patriarch Abraham (Gen 12:5; 14:14; 24:35–36; 26:13–14), a practice that was later incorporated into Israelite national law (Lev 25:44–46). It was never denounced by Jesus, who made slavery a model of discipleship (Mk 10:44). The Apostle Paul supported slavery, counseling obedience to earthly masters (Eph 6:5–9; Col 3:22–25) as a duty in agreement with "the sound words of our Lord Jesus Christ and the teaching which accords with godliness" (1 Tim 6:3). Because slaves were to remain in their present state unless they could win their freedom (1 Cor 7:20–24), he sent the fugitive slave Onesimus back to his owner Philemon (Phlm 10–20). The abolitionist north had a difficult time matching the pro-slavery south passage for passage. [...] Professor Eugene Genovese, who has studied these biblical debates over slavery in minute detail, concludes that the pro-slavery faction clearly emerged victorious over the abolitionists except for one specious argument based on the so-called Curse of Ham (Gen 9:18–27). For our purposes, it is important to realize that the South won this crucial contest with the North by using the prevailing hermeneutic, or method of interpretation, on which both sides agreed. So decisive was its triumph that the South mounted a vigorous counterattack on the abolitionists as infidels who had abandoned the plain words of Scripture for the secular ideology of the Enlightenment. Protestant churches in the U.S., unable to agree on what God's Word said about slavery, ended up with schisms between Northern and Southern branches: the Methodists in 1844, the Baptists in 1845, and the Presbyterians in 1857. These splits presaged the subsequent split in the nation: "The churches played a major role in the dividing of the nation, and it is probably true that it was the splits in the churches which made a final split of the national inevitable." The conflict over how to interpret the Bible was central: - The theological crisis occasioned by reasoning like [conservative Presbyterian theologian James H.] Thornwell's was acute. Many Northern Bible-readers and not a few in the South felt that slavery was evil. They somehow knew the Bible supported them in that feeling. Yet when it came to using the Bible as it had been used with such success to evangelize and civilize the United States, the sacred page was snatched out of their hands. Trust in the Bible and reliance upon a Reformed, literal hermeneutic had created a crisis that only bullets, not arguments, could resolve. - The question of the Bible and slavery in the era of the Civil War was never a simple question. The issue involved the American expression of a Reformed literal hermeneutic, the failure of hermeneutical alternatives to gain cultural authority, and the exercise of deeply entrenched intuitive racism, as well as the presence of Scripture as an authoritative religious book and slavery as an inherited social-economic relationship. The North– forced to fight on unfriendly terrain that it had helped to create– lost the exegetical war. The South certainly lost the shooting war. But constructive orthodox theology was the major loser when American believers allowed bullets instead of hermeneutical self-consciousness to determine what the Bible said about slavery. 
For the history of theology in America, the great tragedy of the Civil War is that the most persuasive theologians were the Rev. Drs. William Tecumseh Sherman and Ulysses S. Grant. There were many causes of the Civil War, but the religious conflict, almost unimaginable in modern America, cut very deep at the time. Noll and others highlight the significance of the religion issue for the famous phrase in Lincoln's second inaugural: "Both read the same Bible and pray to the same God, and each invokes His aid against the other."
The Territorial Crisis and the United States Constitution
Between 1803 and 1854, the United States achieved a vast expansion of territory through purchase, negotiation and conquest. Of the states carved out of these territories by 1845, all had entered the Union as slave states: Louisiana, Missouri, Arkansas, Florida and Texas, as well as the southern portions of Alabama and Mississippi. And with the conquest of northern Mexico, including California, in 1848, slaveholding interests looked forward to the institution flourishing in these lands as well. Southerners also anticipated garnering slaves and slave states in Cuba and Central America. Northern free soil interests vigorously sought to curtail any further expansion of slave soil. It was over these territorial disputes that the proslavery and antislavery forces collided. The existence of slavery in the southern states was far less politically polarizing than the explosive question of the territorial expansion of the institution in the west. Moreover, Americans were informed by two well-established readings of the Constitution regarding human bondage: that the slave states had complete autonomy over the institution within their boundaries, and that the domestic slave trade – trade among the states – was immune to federal interference. The only feasible strategy available to attack slavery was to restrict its expansion into the new territories. Slaveholding interests fully grasped the danger that this strategy posed to them. Both the South and the North believed: “The power to decide the question of slavery for the territories was the power to determine the future of slavery itself.” By 1860, four doctrines had emerged to answer the question of federal control in the territories, and they all claimed to be sanctioned by the Constitution, implicitly or explicitly. Two of the “conservative” doctrines emphasized the written text and historical precedents of the founding document, while the other two doctrines developed arguments that transcended the Constitution. One of the “conservative” theories, represented by the Constitutional Union Party, argued that the historical designation of free and slave apportionments in territories should become a Constitutional mandate. The Crittenden Compromise of 1860 was an expression of this view. The second doctrine of Congressional preeminence, championed by Abraham Lincoln and the Republican Party, insisted that the Constitution did not bind legislators to a policy of balance – that slavery could be excluded altogether in a territory at the discretion of Congress – with one caveat: the due process clause of the Fifth Amendment must apply. In other words, Congress could restrict human bondage, but never establish it. The Wilmot Proviso announced this position in 1846. Of the two doctrines that rejected federal authority, one was articulated by the northern Democrat, Illinois Senator Stephen A. 
Douglas, and the other by southern Democrats Senator Jefferson Davis of Mississippi and Senator John C. Breckinridge of Kentucky. Douglas devised the doctrine of territorial or “popular” sovereignty, which declared that the settlers in a territory had the same rights as states in the Union to establish or disestablish slavery – a purely local matter. Congress, having created the territory, was barred, according to Douglas, from exercising any authority in domestic matters. To do so would violate historic traditions of self-government, implicit in the US Constitution. The Kansas-Nebraska Act of 1854 legislated this doctrine. The fourth in this quartet is the theory of state sovereignty (“states’ rights”), also known as the “Calhoun doctrine” after the South Carolinian political theorist and statesman John C. Calhoun. Rejecting the arguments for federal authority or self-government, state sovereignty would empower states to promote the expansion of slavery as part of the Federal Union under the US Constitution – and not merely as an argument for secession. The basic premise was that all authority regarding matters of slavery in the territories resided in each state. The role of the federal government was merely to enable the implementation of state laws when residents of the states entered the territories. Calhoun asserted that the federal government in the territories was only the agent of the several sovereign states, and hence incapable of forbidding the bringing into any territory of anything that was legal property in any state. State sovereignty, in other words, gave the laws of the slaveholding states extra-jurisdictional effect. “States’ rights” was an ideology formulated and applied as a means of advancing slave state interests through federal authority. As historian Thomas L Krannawitter points out, “[T]he Southern demand for federal slave protection represented a demand for an unprecedented expansion of federal power.” By 1860, these four doctrines comprised the major ideologies presented to the American public on the matters of slavery, the territories and the US Constitution. Antislavery movements in the North gained momentum in the 1830s and 1840s, a period of rapid transformation of Northern society that inspired a social and political reformism. Many of the reformers of the period, including abolitionists, attempted in one way or another to transform the lifestyle and work habits of labor, helping workers respond to the new demands of an industrializing, capitalistic society. Antislavery, like many other reform movements of the period, was influenced by the legacy of the Second Great Awakening, a period of religious revival in the new country stressing the reform of individuals which was still relatively fresh in the American memory. Thus, while the reform spirit of the period was expressed by a variety of movements with often-conflicting political goals, most reform movements shared a common feature in their emphasis on the Great Awakening principle of transforming the human personality through discipline, order, and restraint. "Abolitionist" had several meanings at the time. The followers of William Lloyd Garrison, including Wendell Phillips and Frederick Douglass, demanded the "immediate abolition of slavery", hence the name. A more pragmatic group of abolitionists, like Theodore Weld and Arthur Tappan, wanted immediate action, but that action might well be a program of gradual emancipation, with a long intermediate stage. 
"Antislavery men", like John Quincy Adams, did what they could to limit slavery and end it where possible, but were not part of any abolitionist group. For example, in 1841 Adams represented the Amistad African slaves in the Supreme Court of the United States and argued that they should be set free. In the last years before the war, "antislavery" could mean the Northern majority, like Abraham Lincoln, who opposed expansion of slavery or its influence, as by the Kansas-Nebraska Act, or the Fugitive Slave Act. Many Southerners called all these abolitionists, without distinguishing them from the Garrisonians. James M. McPherson explains the abolitionists' deep beliefs: "All people were equal in God's sight; the souls of black folks were as valuable as those of whites; for one of God's children to enslave another was a violation of the Higher Law, even if it was sanctioned by the Constitution." Stressing the Yankee Protestant ideals of self-improvement, industry, and thrift, most abolitionists– most notably William Lloyd Garrison– condemned slavery as a lack of control over one's own destiny and the fruits of one's labor. The experience of the fifty years… shows us the slaves trebling in numbers—slaveholders monopolizing the offices and dictating the policy of the Government—prostituting the strength and influence of the Nation to the support of slavery here and elsewhere—trampling on the rights of the free States, and making the courts of the country their tools. To continue this disastrous alliance longer is madness.… Why prolong the experiment? Abolitionists also attacked slavery as a threat to the freedom of white Americans. Defining freedom as more than a simple lack of restraint, antebellum reformers held that the truly free man was one who imposed restraints upon himself. Thus, for the anti-slavery reformers of the 1830s and 1840s, the promise of free labor and upward social mobility (opportunities for advancement, rights to own property, and to control one's own labor), was central to the ideal of reforming individuals. Controversy over the so-called Ostend Manifesto (which proposed the U.S. annexation of Cuba as a slave state) and the Fugitive Slave Act kept sectional tensions alive before the issue of slavery in the West could occupy the country's politics in the mid-to-late 1850s. Antislavery sentiment among some groups in the North intensified after the Compromise of 1850, when Southerners began appearing in Northern states to pursue fugitives or often to claim as slaves free African Americans who had resided there for years. Meanwhile, some abolitionists openly sought to prevent enforcement of the law. Violation of the Fugitive Slave Act was often open and organized. In Boston– a city from which it was boasted that no fugitive had ever been returned– Theodore Parker and other members of the city's elite helped form mobs to prevent enforcement of the law as early as April 1851. A pattern of public resistance emerged in city after city, notably in Syracuse in 1851 (culminating in the Jerry Rescue incident late that year), and Boston again in 1854. But the issue did not lead to a crisis until revived by the same issue underlying the Missouri Compromise of 1820: slavery in the territories. Arguments for and against slavery William Lloyd Garrison, a prominent abolitionist, was motivated by a belief in the growth of democracy. 
Because the Constitution had a three-fifths clause, a fugitive slave clause and a 20-year extension of the Atlantic slave trade, Garrison once publicly burned a copy of the U.S. Constitution and called it "a covenant with death and an agreement with hell". In 1854, he said:
"I am a believer in that portion of the Declaration of American Independence in which it is set forth, as among self-evident truths, "that all men are created equal; that they are endowed by their Creator with certain inalienable rights; that among these are life, liberty, and the pursuit of happiness." Hence, I am an abolitionist. Hence, I cannot but regard oppression in every form—and most of all, that which turns a man into a thing—with indignation and abhorrence."
By contrast, Confederate Vice President Alexander Stephens argued in his 1861 "Cornerstone Speech":
"(Thomas Jefferson's) ideas, however, were fundamentally wrong. They rested upon the assumption of the equality of races. This was an error.... Our new government is founded upon exactly the opposite idea; its foundations are laid, its corner-stone rests, upon the great truth that the negro is not equal to the white man; that slavery—subordination to the superior race—is his natural and normal condition."
"Free soil" movement
The assumptions, tastes, and cultural aims of the reformers of the 1830s and 1840s anticipated the political and ideological ferment of the 1850s. A surge of working class Irish and German Catholic immigration provoked reactions among many Northern Whigs, as well as Democrats. Growing fears of labor competition for white workers and farmers because of the growing number of free blacks prompted several northern states to adopt discriminatory "Black Codes". In the Northwest, although farm tenancy was increasing, the number of free farmers was still double that of farm laborers and tenants. Moreover, although the expansion of the factory system was undermining the economic independence of the small craftsman and artisan, industry in the region, still one largely of small towns, remained concentrated in small-scale enterprises. Arguably, social mobility was on the verge of contracting in the urban centers of the North, but long-cherished ideas of opportunity, "honest industry" and "toil" were at least close enough in time to lend plausibility to the free labor ideology. In the rural and small-town North, the picture of Northern society (framed by the ethos of "free labor") corresponded to a large degree with reality. Propelled by advancements in transportation and communication– especially steam navigation, railroads, and telegraphs– the two decades before the Civil War were a period of rapid expansion in the population and economy of the Northwest. Combined with the rise of Northeastern and export markets for their products, the social standing of farmers in the region substantially improved. The small towns and villages that emerged as the Republican Party's heartland showed every sign of vigorous expansion. Their vision for an ideal society was of small-scale capitalism, with white American laborers entitled to the chance of upward mobility (opportunities for advancement, rights to own property, and to control their own labor). Many free-soilers demanded that the slave labor system and free black settlers (and, in places such as California, Chinese immigrants) should be excluded from the Great Plains to guarantee the predominance there of the free white laborer. Opposition to the 1847 Wilmot Proviso helped to consolidate the "free-soil" forces. 
The next year, radical New York Democrats known as Barnburners, members of the Liberty Party, and anti-slavery Whigs held a convention at Buffalo, New York, in August, forming the Free-Soil Party. The party nominated former President Martin Van Buren and Charles Francis Adams, Sr., for President and Vice President, respectively. The party opposed the expansion of slavery into territories where it had not yet existed, such as Oregon and the ceded Mexican territory.
Relating Northern and Southern positions on slavery to basic differences in labor systems, but insisting on the role of culture and ideology in coloring these differences, Eric Foner's book Free Soil, Free Labor, Free Men (1970) went beyond the economic determinism of Charles A. Beard (a leading historian of the 1930s). Foner emphasized the importance of free labor ideology to Northern opponents of slavery, pointing out that the moral concerns of the abolitionists were not necessarily the dominant sentiments in the North. Many Northerners (including Lincoln) also opposed slavery because they feared that black labor might spread to the North and threaten the position of free white laborers. In this sense, Republicans and the abolitionists were able to appeal to powerful emotions in the North through a broader commitment to "free labor" principles. The "Slave Power" idea had a far greater appeal to Northern self-interest than arguments based on the plight of black slaves in the South. If the free labor ideology of the 1830s and 1840s depended on the transformation of Northern society, its entry into politics depended on the rise of mass democracy, in turn propelled by far-reaching social change. Its chance would come by the mid-1850s with the collapse of the traditional two-party system, which had long suppressed sectional conflict.
Slavery question in territories acquired from Mexico
Soon after the Mexican War started, and long before negotiation of the new US-Mexico border, the question of slavery in the territories to be acquired polarized the Northern and Southern United States in the most bitter sectional conflict up to that time. The resulting deadlock lasted four years, during which the Second Party System broke up, Mormon pioneers settled Utah, the California Gold Rush settled California, and New Mexico, under a federal military government, turned back Texas's attempt to assert control over territory Texas claimed as far west as the Rio Grande. Eventually the Compromise of 1850 preserved the Union, but only for another decade. Proposals included:
- The Wilmot Proviso, banning slavery in any new territory to be acquired from Mexico, not including Texas, which had been annexed the previous year. It was passed by the United States House of Representatives in August 1846 and February 1847 but not by the Senate. A later effort to attach the proviso to the Treaty of Guadalupe Hidalgo also failed.
- Failed amendments to the Wilmot Proviso by William W. Wick and then Stephen Douglas extending the Missouri Compromise line (36°30' parallel north) west to the Pacific, allowing slavery in most of present-day New Mexico and Arizona, Las Vegas, Nevada, and Southern California, as well as any other territories that might be acquired from Mexico. The line was again proposed by the Nashville Convention of June 1850.
- Popular sovereignty, developed by Lewis Cass and Douglas as the eventual Democratic Party position, letting each territory decide whether to allow slavery.
Yancey's "Alabama Platform", endorsed by the Alabama and Georgia legislatures and by Democratic state conventions in Florida and Virginia, called for no restrictions on slavery in the territories either by the federal government or by territorial governments before statehood, opposition to any candidates supporting either the Wilmot Proviso or popular sovereignty, and federal legislation overruling Mexican anti-slavery laws. - General Zachary Taylor, who became the Whig candidate in 1848 and then President from March 1849 to July 1850, proposed after becoming President that the entire area become two free states, called California and New Mexico but much larger than the eventual ones. None of the area would be left as an unorganized or organized territory, avoiding the question of slavery in the territories. - The Mormons' proposal for a State of Deseret incorporating most of the area of the Mexican Cession but excluding the largest non-Mormon populations in Northern California and central New Mexico was considered unlikely to succeed in Congress, but nevertheless in 1849 President Zachary Taylor sent his agent John Wilson westward with a proposal to combine California and Deseret as a single state, decreasing the number of new free states and the erosion of Southern parity in the Senate. - The Compromise of 1850, proposed by Henry Clay in January 1850, guided to passage by Douglas over Northern Whig and Southern Democrat opposition, and enacted September 1850, admitted California as a free state including Southern California and organized Utah Territory and New Mexico Territory with slavery to be decided by popular sovereignty. Texas dropped its claim to the disputed northwestern areas in return for debt relief, and the areas were divided between the two new territories and unorganized territory. El Paso where Texas had successfully established county government was left in Texas. No southern territory dominated by Southerners (like the later short-lived Confederate Territory of Arizona) was created. Also, the slave trade was abolished in Washington, D.C. (but not slavery itself), and the Fugitive Slave Act was strengthened. States' rights States' rights was an issue in the 19th century for those who felt that the federal government was superseded by the authority of the individual states and was in violation of the role intended for it by the Founding Fathers of the United States. Kenneth M. Stampp notes that each section used states' rights arguments when convenient, and shifted positions when convenient. For example, the Fugitive Slave Act of 1850 was justified by its supporters as a state's right to have its property laws respected by other states, and was resisted by northern legislatures in the form of state personal liberty laws that placed state laws above the federal mandate. States’ rights and slavery Arthur M. Schlesinger, Jr. noted that the states' rights “never had any real vitality independent of underlying conditions of vast social, economic, or political significance.” He further elaborated: From the close of the nullification episode of 1832–1833 to the outbreak of the Civil War, the agitation of state rights was intimately connected with the new issue of growing importance, the slavery question, and the principle form assumed by the doctrine was the right of secession. The pro-slavery forces sought refuge in the state rights position as a shield against federal interference with pro-slavery projects.... 
As a natural consequence, anti-slavery legislatures in the North were led to lay great stress on the national character of the Union and the broad powers of the general government in dealing with slavery. Nevertheless, it is significant to note that when it served anti-slavery purposes better to lapse into state rights dialectic, northern legislatures did not hesitate to be inconsistent.
Echoing Schlesinger, Forrest McDonald wrote that "the dynamics of the tension between federal and state authority changed abruptly during the late 1840s" as a result of the acquisition of territory in the Mexican War. McDonald states:
And then, as a by-product or offshoot of a war of conquest, slavery – a subject that leading politicians had, with the exception of the gag rule controversy and Calhoun's occasional outbursts, scrupulously kept out of partisan debate – erupted as the dominant issue in that arena. So disruptive was the issue that it subjected the federal Union to the greatest strain the young republic had yet known.
States' rights and minority rights
States' rights theories gained strength from the awareness that the Northern population was growing much faster than the population of the South, so it was only a matter of time before the North controlled the federal government. Acting as a "conscious minority", Southerners hoped that a strict constructionist interpretation of the Constitution would limit federal power over the states, and that a defense of states' rights against federal encroachments, or even nullification or secession, would save the South. Before 1860, most presidents were either Southern or pro-South. The North's growing population would mean the election of pro-North presidents, and the addition of free-soil states would end Southern parity with the North in the Senate. As the historian Allan Nevins described Calhoun's theory of states' rights, "Governments, observed Calhoun, were formed to protect minorities, for majorities could take care of themselves".
Until the 1860 election, the South's interests nationally were entrusted to the Democratic Party. In 1860, the Democratic Party split into Northern and Southern factions as the result of a "bitter debate in the Senate between Jefferson Davis and Stephen Douglas". The debate was over resolutions proposed by Davis "opposing popular sovereignty and supporting a federal slave code and states' rights", which carried over to the national convention in Charleston. Davis defined equality in terms of the equal rights of states, and opposed the declaration that all men are created equal. Jefferson Davis stated that a "disparaging discrimination" and a fight for "liberty" against "the tyranny of an unbridled majority" gave the Confederate states a right to secede. In 1860, Congressman Laurence M. Keitt of South Carolina said, "The anti-slavery party contend that slavery is wrong in itself, and the Government is a consolidated national democracy. We of the South contend that slavery is right, and that this is a confederate Republic of sovereign States."
Stampp pointed to Confederate Vice President Alexander Stephens and his A Constitutional View of the Late War Between the States as an example of a Southern leader who said that slavery was the "cornerstone of the Confederacy" when the war began and then, after Southern defeat, said that the war was not about slavery but about states' rights. Stampp said that Stephens became one of the most ardent defenders of the Lost Cause.
As one historian has summarized the contradiction: "To the old Union they had said that the Federal power had no authority to interfere with slavery issues in a state. To their new nation they would declare that the state had no power to interfere with a federal protection of slavery. Of all the many testimonials to the fact that slavery, and not states rights, really lay at the heart of their movement, this was the most eloquent of all."
The Compromise of 1850
The victory of the United States over Mexico resulted in the addition of large new territories conquered from that country. Controversy over whether these territories would be slave or free raised the risk of a war between slave and free states, and Northern support for the Wilmot Proviso, which would have banned slavery in the conquered territories, increased sectional tensions. The controversy was temporarily resolved by the Compromise of 1850, which allowed the territories of Utah and New Mexico to decide for or against slavery, but also allowed the admission of California as a free state, reduced the size of the slave state of Texas by adjusting the boundary, and ended the slave trade (but not slavery itself) in the District of Columbia. In return, the South got a stronger fugitive slave law than the version mentioned in the Constitution. The Fugitive Slave Law would reignite controversy over slavery.
Fugitive Slave Law issues
The Fugitive Slave Law of 1850 required that Northerners assist Southerners in reclaiming fugitive slaves, which many Northerners found extremely offensive. Anthony Burns was among the fugitive slaves captured and returned in chains to slavery as a result of the law. Harriet Beecher Stowe's best-selling novel Uncle Tom's Cabin greatly increased opposition to the Fugitive Slave Law.
Kansas-Nebraska Act (1854)
Most people thought the Compromise had ended the territorial issue, but Stephen A. Douglas reopened it in 1854, in the name of democracy. Douglas proposed the Kansas-Nebraska Bill with the intention of opening up vast new high-quality farm lands to settlement. As a Chicagoan, he was especially interested in the railroad connections from Chicago into Kansas and Nebraska, but that was not a controversial point. More importantly, Douglas firmly believed in democracy at the grass roots—that actual settlers had the right to decide on slavery, not politicians from other states. His bill provided that popular sovereignty, through the territorial legislatures, should decide "all questions pertaining to slavery", thus effectively repealing the Missouri Compromise.
The ensuing public reaction against the bill, seen as an effort to repeal the Missouri Compromise, created a firestorm of protest in the Northern states. However, the popular reaction in the first month after the bill's introduction failed to foreshadow the gravity of the situation. As Northern papers initially ignored the story, Republican leaders lamented the lack of a popular response. Eventually, the popular reaction did come, but the leaders had to spark it. Chase's "Appeal of the Independent Democrats" did much to arouse popular opinion. In New York, William H. Seward finally took it upon himself to organize a rally against the Nebraska bill, since none had arisen spontaneously. Press such as the National Era, the New York Tribune, and local free-soil journals condemned the bill.
The Lincoln-Douglas debates of 1858 drew national attention to the issue of slavery expansion.
Founding of the Republican Party (1854)
Convinced that Northern society was superior to that of the South, and increasingly persuaded of the South's ambitions to extend slave power beyond its existing borders, Northerners were embracing a viewpoint that made conflict likely; however, conflict required the ascendancy of a political group to express the views of the North, such as the Republican Party. The Republican Party – campaigning on the popular, emotional issue of "free soil" in the frontier – captured the White House after just six years of existence.
The Republican Party grew out of the controversy over the Kansas-Nebraska legislation. Once the Northern reaction against the Kansas-Nebraska Act took place, its leaders acted to advance another political reorganization. Henry Wilson declared the Whig Party dead and vowed to oppose any efforts to resurrect it. Horace Greeley's Tribune called for the formation of a new Northern party, and Benjamin Wade, Chase, Charles Sumner, and others spoke out for the union of all opponents of the Nebraska Act. The National Era's Gamaliel Bailey was involved in calling a caucus of anti-slavery Whig and Democratic Party Congressmen in May. Meeting in a Ripon, Wisconsin, Congregational Church on February 28, 1854, some thirty opponents of the Nebraska Act called for the organization of a new political party and suggested that "Republican" would be the most appropriate name (to link their cause to the defunct Republican Party of Thomas Jefferson). These founders also took a leading role in the creation of the Republican Party in many northern states during the summer of 1854. While conservatives and many moderates were content merely to call for the restoration of the Missouri Compromise or a prohibition of slavery extension, radicals advocated repeal of the Fugitive Slave Laws and rapid abolition in existing states. The term "radical" has also been applied to those who objected to the Compromise of 1850, which extended slavery in the territories.
But without the benefit of hindsight, the 1854 elections would seem to indicate the possible triumph of the Know-Nothing movement rather than anti-slavery, with the Catholic/immigrant question replacing slavery as the issue capable of mobilizing mass appeal. Know-Nothings, for instance, captured the mayoralty of Philadelphia with a majority of over 8,000 votes in 1854. Even after opening up immense discord with his Kansas-Nebraska Act, Senator Douglas began speaking of the Know-Nothings, rather than the Republicans, as the principal danger to the Democratic Party.
When Republicans spoke of themselves as a party of "free labor", they appealed to a rapidly growing, primarily middle-class base of support, not permanent wage earners or the unemployed (the working class). When they extolled the virtues of free labor, they were merely reflecting the experiences of millions of men who had "made it" and millions of others who had a realistic hope of doing so. Like the Tories in England, the Republicans in the United States would emerge as the nationalists, homogenizers, imperialists, and cosmopolitans. Those who had not yet "made it" included Irish immigrants, who made up a large and growing proportion of Northern factory workers. Republicans often saw the Catholic working class as lacking the qualities of self-discipline, temperance, and sobriety essential for their vision of ordered liberty.
Republicans insisted that there was a high correlation between education, religion, and hard work—the values of the "Protestant work ethic"—and Republican votes. "Where free schools are regarded as a nuisance, where religion is least honored and lazy unthrift is the rule", read an editorial of the pro-Republican Chicago Democratic Press after James Buchanan's defeat of John C. Frémont in the 1856 presidential election, "there Buchanan has received his strongest support".
Ethno-religious, socio-economic, and cultural fault lines ran throughout American society, but they were becoming increasingly sectional, pitting Yankee Protestants, with a stake in the emerging industrial capitalism and American nationalism, against those tied to Southern slaveholding interests. For example, the historian Don E. Fehrenbacher, in his Prelude to Greatness: Lincoln in the 1850s, noted how Illinois was a microcosm of the national political scene, pointing out voting patterns that bore striking correlations to regional patterns of settlement. Those areas settled from the South were staunchly Democratic, while those settled by New Englanders were staunchly Republican. In addition, a belt of border counties was known for its political moderation and traditionally held the balance of power. Intertwined with religious, ethnic, regional, and class identities, the issues of free labor and free soil were thus easy to play on.
Events during the next two years in "Bleeding Kansas" sustained the popular fervor originally aroused among some elements in the North by the Kansas-Nebraska Act. Free-State settlers from the North were encouraged by press and pulpit and the powerful organs of abolitionist propaganda. Often they received financial help from such organizations as the Massachusetts Emigrant Aid Company. Those from the South often received financial contributions from the communities they left. Southerners sought to uphold their constitutional rights in the territories and to maintain sufficient political strength to repulse "hostile and ruinous legislation". While the Great Plains were largely unfit for the cultivation of cotton, informed Southerners demanded that the West be open to slavery, often—perhaps most often—with minerals in mind. Brazil, for instance, was an example of the successful use of slave labor in mining. In the middle of the 18th century, diamond mining supplemented gold mining in Minas Gerais and accounted for a massive transfer of masters and slaves from Brazil's northeastern sugar region. Southern leaders knew a good deal about this experience. It was even promoted in the pro-slavery DeBow's Review as far back as 1848.
Fragmentation of the American party system
"Bleeding Kansas" and the elections of 1856
In Kansas around 1855, the slavery issue reached a condition of intolerable tension and violence. But this was in an area where an overwhelming proportion of settlers were merely land-hungry Westerners indifferent to the public issues. The majority of the inhabitants were not concerned with sectional tensions or the issue of slavery. Instead, the tension in Kansas began as a contention between rival claimants. During the first wave of settlement, no one held titles to the land, and settlers rushed to occupy newly opened land fit for cultivation. While the tension and violence did emerge as a pattern pitting Yankee and Missourian settlers against each other, there is little evidence of any ideological divides on the questions of slavery.
Instead, the Missouri claimants, thinking of Kansas as their own domain, regarded the Yankee squatters as invaders, while the Yankees accused the Missourians of grabbing the best land without honestly settling on it. However, the 1855–56 violence in "Bleeding Kansas" did reach an ideological climax after John Brown – regarded by followers as the instrument of God's will to destroy slavery – entered the melee. His killing of five pro-slavery settlers (the so-called "Pottawatomie Massacre", during the night of May 24, 1856) resulted in some irregular, guerrilla-style strife. Aside from John Brown's fervor, the strife in Kansas often involved only armed bands more interested in land claims or loot.
Of greater importance than the civil strife in Kansas, however, was the reaction against it nationwide and in Congress. In both North and South, the belief was widespread that the aggressive designs of the other section were epitomized by (and responsible for) what was happening in Kansas. Consequently, "Bleeding Kansas" emerged as a symbol of sectional controversy.
Indignant over the developments in Kansas, the Republicans—the first entirely sectional major party in U.S. history—entered their first presidential campaign with confidence. Their nominee, John C. Frémont, was a generally safe candidate for the new party. Although his nomination upset some of their Nativist Know-Nothing supporters (his mother was a Catholic), the nomination of the famed explorer of the Far West with no political record was an attempt to woo ex-Democrats. The other two Republican contenders, William H. Seward and Salmon P. Chase, were seen as too radical. Nevertheless, the campaign of 1856 was waged almost exclusively on the slavery issue—pitted as a struggle between democracy and aristocracy—focusing on the question of Kansas. The Republicans condemned the Kansas-Nebraska Act and the expansion of slavery, but they advanced a program of internal improvements combining the idealism of anti-slavery with the economic aspirations of the North. The new party rapidly developed a powerful partisan culture, and energetic activists drove voters to the polls in unprecedented numbers. People reacted with fervor. Young Republicans organized the "Wide Awake" clubs and chanted "Free Soil, Free Labor, Free Men, Frémont!" With Southern fire-eaters and even some moderates uttering threats of secession if Frémont won, the Democratic candidate, Buchanan, benefited from apprehensions about the future of the Union.
Dred Scott decision (1857) and the Lecompton Constitution
The Lecompton Constitution and Dred Scott v. Sandford were both part of the Bleeding Kansas controversy over slavery that resulted from the Kansas-Nebraska Act, which was Stephen Douglas' attempt at replacing the Missouri Compromise ban on slavery in the Kansas and Nebraska territories with popular sovereignty, under which the people of a territory could vote either for or against slavery. The Lecompton Constitution, which would have allowed slavery in Kansas, was the result of massive vote fraud by the pro-slavery Border Ruffians. Douglas defeated the Lecompton Constitution because it was supported only by the pro-slavery minority in Kansas, and Douglas believed in majority rule. Douglas hoped that both South and North would support popular sovereignty, but the opposite was true. Neither side trusted Douglas. The Supreme Court decision of 1857 in Dred Scott v. Sandford added to the controversy.
Chief Justice Roger B. Taney's decision said that slaves were "so far inferior that they had no rights which the white man was bound to respect", and that slavery could spread into the territories even if the majority of people in the territories were anti-slavery. Lincoln warned that "the next Dred Scott decision" could threaten Northern states with slavery.
Buchanan, Republicans and anti-administration Democrats
President James Buchanan decided to end the troubles in Kansas by urging Congress to admit Kansas as a slave state under the Lecompton Constitution. Kansas voters, however, soundly rejected this constitution—albeit with a measure of fraud on both sides—by more than 10,000 votes. As Buchanan directed his presidential authority to this goal, he further angered the Republicans and alienated members of his own party. The Douglasites saw this scheme as an attempt to pervert the principle of popular sovereignty on which the Kansas-Nebraska Act was based, prompting their break with the administration. Nationwide, conservatives were incensed, feeling as though the principles of states' rights had been violated. Even in the South, ex-Whigs and border-state Know-Nothings—most notably John Bell and John J. Crittenden (key figures in later sectional controversies)—urged the Republicans to oppose the administration's moves and take up the demand that the territories be given the power to accept or reject slavery.
As the schism in the Democratic party deepened, moderate Republicans argued that an alliance with anti-administration Democrats, especially Stephen Douglas, would be a key advantage in the 1860 elections. Some Republican observers saw the controversy over the Lecompton Constitution as an opportunity to peel off Democratic support in the border states, where Frémont had picked up little support. After all, the border states had often gone for Whigs with a Northern base of support in the past without prompting threats of Southern withdrawal from the Union. Among the proponents of this strategy was The New York Times, which called on the Republicans to downplay opposition to popular sovereignty in favor of a compromise policy calling for "no more slave states" in order to quell sectional tensions. The Times maintained that for the Republicans to be competitive in the 1860 elections, they would need to broaden their base of support to include all voters who for one reason or another were upset with the Buchanan Administration.
Indeed, pressure was strong for an alliance that would unite the growing opposition to the Democratic Administration. But such an alliance was no novel idea; it would essentially entail transforming the Republicans into the national, conservative, Union party of the country. In effect, this would be a successor to the Whig party. Republican leaders, however, staunchly opposed any attempts to modify the party position on slavery, appalled by what they considered a surrender of their principles when, for example, all ninety-two Republican members of Congress voted for the Crittenden-Montgomery bill in 1858. Although this compromise measure blocked Kansas' entry into the Union as a slave state, the fact that it called for popular sovereignty, rather than outright opposition to the expansion of slavery, was troubling to the party leaders. In the end, the Crittenden-Montgomery bill did not forge a grand anti-administration coalition of Republicans, ex-Whig Southerners in the border states, and Northern Democrats.
Instead, the Democratic Party merely split along sectional lines. Anti-Lecompton Democrats complained that a new, pro-slavery test had been imposed upon the party. The Douglasites, however, refused to yield to administration pressure. Like the anti-Nebraska Democrats, who were now members of the Republican Party, the Douglasites insisted that they—not the administration—commanded the support of most northern Democrats.
Extremist sentiment in the South advanced dramatically as the Southern planter class perceived that its hold on the executive, legislative, and judicial apparatus of the central government was waning. It also grew increasingly difficult for Southern Democrats to manipulate power in many of the Northern states through their allies in the Democratic Party. Historians have emphasized that the sense of honor was a central concern of upper-class white Southerners. The idea of being treated like a second-class citizen was anathema and could not be tolerated by an honorable Southerner. The anti-slavery position held that slavery was a negative or evil phenomenon that damaged the rights of white men and the prospects of republicanism. To the white South this rhetoric made Southerners second-class citizens because it trampled their constitutional rights to take their property anywhere.
Assault on Sumner (1856)
On May 19, 1856, Massachusetts Senator Charles Sumner gave a long speech in the Senate entitled "The Crime Against Kansas", which condemned the Slave Power as the evil force behind the nation's troubles. Sumner said the Southerners had committed a "crime against Kansas", singling out Senator Andrew P. Butler of South Carolina:
- "Not in any common lust for power did this uncommon tragedy have its origin. It is the rape of a virgin Territory, compelling it to the hateful embrace of slavery; and it may be clearly traced to a depraved desire for a new Slave State, hideous offspring of such a crime, in the hope of adding to the power of slavery in the National Government."
Sumner cast the South Carolinian as having "chosen a mistress [the harlot slavery]... who, though ugly to others, is always lovely to him, though polluted in the sight of the world is chaste in his sight." According to Hoffer (2010), "It is also important to note the sexual imagery that recurred throughout the oration, which was neither accidental nor without precedent. Abolitionists routinely accused slaveholders of maintaining slavery so that they could engage in forcible sexual relations with their slaves."
Three days later, Sumner, working at his desk on the Senate floor, was beaten almost to death by Congressman Preston S. Brooks, Butler's nephew. Sumner took years to recover; he became the martyr to the antislavery cause who said the episode proved the barbarism of slave society. Brooks was lauded as a hero upholding Southern honor. The episode further polarized North and South, strengthened the new Republican Party, and added a new element of violence on the floor of Congress.
Emergence of Lincoln
Republican Party structure
Despite their significant loss in the election of 1856, Republican leaders realized that even though they appealed only to Northern voters, they needed to win only two more states, such as Pennsylvania and Illinois, to win the presidency in 1860. As the Democrats were grappling with their own troubles, leaders in the Republican party fought to keep elected members focused on the issue of slavery in the West, which allowed them to mobilize popular support.
Chase wrote Sumner that if the conservatives succeeded, it might be necessary to recreate the Free Soil Party. He was also particularly disturbed by the tendency of many Republicans to eschew moral attacks on slavery in favor of political and economic arguments. The controversy over slavery in the West was still not creating a fixation on the issue of slavery. Although the old restraints on sectional tensions were being eroded with the rapid extension of mass politics and mass democracy in the North, the perpetuation of conflict over the issue of slavery in the West still required the efforts of radical Democrats in the South and radical Republicans in the North. They had to ensure that the sectional conflict would remain at the center of the political debate.
William Seward contemplated this potential in the 1840s, when the Democrats were the nation's majority party, usually controlling Congress, the presidency, and many state offices. The country's institutional structure and party system allowed slaveholders to prevail in more of the nation's territories and to garner a great deal of influence over national policy. With growing popular discontent with the unwillingness of many Democratic leaders to take a stand against slavery, and growing consciousness of the party's increasingly pro-Southern stance, Seward became convinced that the only way for the Whig Party to counteract the Democrats' strong monopoly of the rhetoric of democracy and equality was for the Whigs to embrace anti-slavery as a party platform. To increasing numbers of Northerners, the Southern labor system appeared contrary to the ideals of American democracy.
Republicans believed in the existence of "the Slave Power Conspiracy", which had seized control of the federal government and was attempting to pervert the Constitution for its own purposes. The "Slave Power" idea gave the Republicans the anti-aristocratic appeal with which men like Seward had long wished to be associated politically. By fusing older anti-slavery arguments with the idea that slavery posed a threat to Northern free labor and democratic values, it enabled the Republicans to tap into the egalitarian outlook which lay at the heart of Northern society. In this sense, during the 1860 presidential campaign, Republican orators even cast "Honest Abe" as an embodiment of these principles, repeatedly referring to him as "the child of labor" and "son of the frontier", who had proved how "honest industry and toil" were rewarded in the North. Although Lincoln had been a Whig, the "Wide Awakes" (members of the Republican clubs) used replicas of rails that he had split to remind voters of his humble origins.
In almost every northern state, organizers attempted to have a Republican Party or an anti-Nebraska fusion movement on the ballot in 1854. In areas where the radical Republicans controlled the new organization, the comprehensive radical program became party policy. Just as they helped organize the Republican Party in the summer of 1854, the radicals played an important role in the national organization of the party in 1856. Republican conventions in New York, Massachusetts, and Illinois adopted radical platforms. Radical platforms in such states as Wisconsin, Michigan, Maine, and Vermont usually called for the divorce of the government from slavery, the repeal of the Fugitive Slave Laws, and no more slave states, as did platforms in Pennsylvania, Minnesota, and Massachusetts when radical influence was high.
Conservatives at the Republican 1860 nominating convention in Chicago were able to block the nomination of William Seward, who had an earlier reputation as a radical (but by 1860 had been criticized by Horace Greeley as being too moderate). Other candidates had earlier joined or formed parties opposing the Whigs and had thereby made enemies of many delegates. Lincoln was selected on the third ballot. However, conservatives were unable to bring about the resurrection of "Whiggery". The convention's resolutions regarding slavery were roughly the same as they had been in 1856, but the language appeared less radical. In the following months, even Republican conservatives like Thomas Ewing and Edward Baker embraced the platform language that "the normal condition of territories was freedom". All in all, the organizers had done an effective job of shaping the official policy of the Republican Party.
Southern slaveholding interests now faced the prospect of a Republican President and the entry of new free states that would alter the nation's balance of power between the sections. To many Southerners, the resounding defeat of the Lecompton Constitution foreshadowed the entry of more free states into the Union. Dating back to the Missouri Compromise, the Southern region had desperately sought to maintain an equal balance of slave states and free states so as to be competitive in the Senate. Since the last slave state was admitted in 1845, five more free states had entered. The tradition of maintaining a balance between North and South had been abandoned in favor of the addition of more free-soil states.
Sectional battles over federal policy in the late 1850s
Lincoln-Douglas Debates
The Lincoln-Douglas Debates were a series of seven debates in 1858 between Stephen Douglas, United States Senator from Illinois, and Abraham Lincoln, the Republican who sought to replace Douglas in the Senate. The debates were mainly about slavery. Douglas defended his Kansas-Nebraska Act, which replaced the Missouri Compromise ban on slavery in the Louisiana Purchase territory north and west of Missouri with popular sovereignty, which allowed residents of territories such as Kansas to vote either for or against slavery. Douglas put Lincoln on the defensive by accusing him of being a Black Republican abolitionist, but Lincoln responded by asking Douglas to reconcile popular sovereignty with the Dred Scott decision. Douglas' Freeport Doctrine held that residents of a territory could keep slavery out by refusing to pass a slave code and other laws needed to protect slavery. The Freeport Doctrine, and the fact that Douglas had helped defeat the pro-slavery Lecompton Constitution, made him unpopular in the South, which led to the 1860 split of the Democratic Party into Northern and Southern wings. The Democrats retained control of the Illinois legislature, and Douglas thus retained his seat in the U.S. Senate (at that time United States Senators were elected by the state legislatures, not by popular vote); however, Lincoln's national profile was greatly raised, paving the way for his election as president of the United States two years later.
In The Rise of American Civilization (1927), Charles and Mary Beard argued that slavery was not so much a social or cultural institution as an economic one (a labor system). The Beards cited inherent conflicts between Northeastern finance, manufacturing, and commerce and Southern plantations, which competed to control the federal government so as to protect their own interests.
According to the economic determinists of the era, both groups used arguments over slavery and states' rights as a cover. Recent historians have rejected the Beardian thesis, but their economic determinism has influenced subsequent historians in important ways. Modernization theorists, such as Raimondo Luraghi, have argued that as the Industrial Revolution was expanding on a worldwide scale, the days of wrath were coming for a series of agrarian, pre-capitalistic, "backward" societies throughout the world, from the Italian and American South to India. But most American historians point out that the South was highly developed and on average about as prosperous as the North.
Panic of 1857 and sectional realignments
A few historians believe that the serious financial panic of 1857 and the economic difficulties leading up to it strengthened the Republican Party and heightened sectional tensions. Before the panic, strong economic growth was being achieved under relatively low tariffs, and much of the nation concentrated on growth and prosperity. The iron and textile industries, however, were facing acute, worsening trouble each year after 1850. By 1854, stocks of iron were accumulating in world markets. Iron prices fell, forcing many American iron mills to shut down.
Republicans urged western farmers and northern manufacturers to blame the depression on the domination of the low-tariff economic policies of southern-controlled Democratic administrations. However, the depression revived suspicion of Northeastern banking interests in both the South and the West. Eastern demand for western farm products shifted the West closer to the North. As the "transportation revolution" (canals and railroads) went forward, an increasingly large share and absolute amount of wheat, corn, and other staples of western producers – once difficult to haul across the Appalachians – went to markets in the Northeast. The depression emphasized the value of western markets for eastern goods and of homesteaders who would furnish markets and respectable profits. Aside from the land issue, economic difficulties strengthened the Republican case for higher tariffs for industries in response to the depression. This issue was important in Pennsylvania and perhaps New Jersey.
Southern response
Meanwhile, many Southerners grumbled over "radical" notions of giving land away to farmers that would "abolitionize" the area. While the ideology of Southern sectionalism was well developed before the Panic of 1857 by figures like J.D.B. DeBow, the panic helped convince even more cotton barons that they had grown too reliant on Eastern financial interests. Thomas Prentice Kettell, a former editor of the Democratic Review, was another commentator popular in the South who enjoyed a great degree of prominence between 1857 and 1860. In his book Southern Wealth and Northern Profits, Kettell gathered an array of statistics to show that the South produced vast wealth, while the North, with its dependence on raw materials, siphoned off the wealth of the South. Arguing that sectional inequality resulted from the concentration of manufacturing in the North, and from the North's supremacy in communications, transportation, finance, and international trade, his ideas paralleled old physiocratic doctrines that all profits of manufacturing and trade come out of the land. Political sociologists, such as Barrington Moore, have noted that these forms of romantic nostalgia tend to crop up whenever industrialization takes hold.
Such Southern hostility to the free farmers gave the North an opportunity for an alliance with Western farmers. After the political realignments of 1857–58—manifested by the emerging strength of the Republican Party and its networks of local support nationwide—almost every issue was entangled with the controversy over the expansion of slavery in the West. While questions of tariffs, banking policy, public land, and subsidies to railroads did not always unite all elements in the North and the Northwest against the interests of slaveholders in the South under the pre-1854 party system, they were now translated into terms of sectional conflict, with the expansion of slavery in the West involved. As the depression strengthened the Republican Party, slaveholding interests were becoming convinced that the North had aggressive and hostile designs on the Southern way of life. The South was thus increasingly fertile ground for secessionism. The Republicans' Whig-style, personality-driven "hurrah" campaign helped stir hysteria in the slave states upon the emergence of Lincoln and intensify divisive tendencies, while Southern "fire-eaters" gave credence to notions of the slave power conspiracy among Republican constituencies in the North and West.
New Southern demands to re-open the African slave trade further fueled sectional tensions. From the early 1840s until the outbreak of the Civil War, the cost of slaves had been rising steadily. Meanwhile, the price of cotton was experiencing market fluctuations typical of raw commodities. After the Panic of 1857, the price of cotton fell while the price of slaves continued its steep rise. At the 1858 Southern commercial convention, William L. Yancey of Alabama called for the reopening of the African slave trade. Only the delegates from the states of the Upper South, who profited from the domestic trade, opposed the reopening of the slave trade, since they saw it as a potential form of competition. The convention in 1858 wound up voting to recommend the repeal of all laws against slave imports, despite some reservations.
John Brown and Harpers Ferry (1859)
On October 16, 1859, radical abolitionist John Brown led an attempt to start an armed slave revolt by seizing the U.S. Army arsenal at Harper's Ferry, Virginia (now West Virginia). Brown and twenty followers, both whites (including two of Brown's sons) and blacks (three free blacks, one freedman, and one fugitive slave), planned to seize the armory and use the weapons stored there to arm black slaves in order to spark a general uprising by the slave population. Although the raiders were initially successful in cutting the telegraph line and capturing the armory, they allowed a passing train to continue on to Washington, D.C., where the authorities were alerted to the attack. By October 17 the raiders were surrounded in the armory by the militia and other locals. Robert E. Lee (then a colonel in the U.S. Army) led a company of U.S. Marines in storming the armory on October 18. Ten of the raiders were killed, including both of Brown's sons; Brown himself, along with half a dozen of his followers, was captured; four of the raiders escaped immediate capture. Six locals were killed and nine injured; the Marines suffered one dead and one injured. The local slave population failed to join in Brown's attack. Brown was subsequently hanged for treason (against the Commonwealth of Virginia), as were six of his followers.
The raid became a cause célèbre in both the North and the South, with Brown vilified by Southerners as a bloodthirsty fanatic but celebrated by many Northern abolitionists as a martyr to the cause of freedom.
Elections of 1860
Initially, William H. Seward of New York, Salmon P. Chase of Ohio, and Simon Cameron of Pennsylvania were the leading contenders for the Republican presidential nomination. But Abraham Lincoln, a former one-term House member who had gained fame amid the Lincoln-Douglas Debates of 1858, had fewer political opponents within the party and outmaneuvered the other contenders. On May 16, 1860, he received the Republican nomination at the party's convention in Chicago, Illinois.
The schism in the Democratic Party over the Lecompton Constitution and Douglas' Freeport Doctrine caused Southern "fire-eaters" to oppose front-runner Stephen A. Douglas' bid for the Democratic presidential nomination. Douglas had defeated the pro-slavery Lecompton Constitution for Kansas because the majority of Kansans were antislavery, and Douglas' popular sovereignty doctrine would allow the majority to vote slavery up or down as they chose. Douglas' Freeport Doctrine alleged that the antislavery majority of Kansans could thwart the Dred Scott decision, which allowed slavery, by withholding legislation for a slave code and other laws needed to protect slavery. As a result, Southern extremists demanded a slave code for the territories and used this issue to divide the northern and southern wings of the Democratic Party. Southerners left the party and in June nominated John C. Breckinridge, while Northern Democrats supported Douglas. The Southern planter class thereby lost a considerable measure of sway in national politics.
Because of the Democrats' division, the Republican nominee faced a divided opposition. Adding to Lincoln's advantage, ex-Whigs from the border states had earlier formed the Constitutional Union Party, nominating John Bell for President. Thus, party nominees waged regional campaigns. Douglas and Lincoln competed for Northern votes, while Bell, Douglas, and Breckinridge competed for Southern votes.
"Vote yourself a farm – vote yourself a tariff" could have been a slogan for the Republicans in 1860. In sum, business was to support the farmers' demands for land (popular also in industrial working-class circles) in return for support for a higher tariff. To an extent, the elections of 1860 bolstered the political power of new social forces unleashed by the Industrial Revolution. In February 1861, after seven states had departed the Union (four more would depart in April–May 1861; in late April, Maryland was unable to secede because it was put under martial law), Congress had a strong northern majority and passed the Morrill Tariff Act (signed by Buchanan), which increased duties and provided the government with funds needed for the war.
Split in the Democratic Party
The Alabama extremist William Lowndes Yancey's demand for a federal slave code for the territories split the Democratic Party between North and South, which made the election of Lincoln possible. Yancey tried to make his demand for a slave code moderate enough to get Southern support and yet extreme enough to enrage Northerners and split the party. He demanded that the party support a slave code for the territories if it later became necessary, so that the demand would be conditional enough to win Southern support.
His tactic worked, and lower South delegates left the Democratic Convention at Institute Hall in Charleston, South Carolina, and walked over to Military Hall. The South Carolina extremist Robert Barnwell Rhett hoped that the lower South would completely break with the Northern Democrats and attend a separate convention at Richmond, Virginia, but lower South delegates gave the national Democrats one last chance at unification by going to the convention at Baltimore, Maryland, before the split became permanent. The end result was that John C. Breckinridge became the candidate of the Southern Democrats, and Stephen Douglas became the candidate of the Northern Democrats. Yancey's earlier attempt at demanding a slave code for the territories was his 1848 Alabama Platform, a response to the Northern Wilmot Proviso's attempted ban on slavery in territories conquered from Mexico. Both the Alabama Platform and the Wilmot Proviso failed, but Yancey learned to be less overtly radical in order to get more support. Southerners thought they were merely demanding equality, in that they wanted Southern property in slaves to get the same (or more) protection as Northern forms of property.
Southern secession
With the emergence of the Republicans as the nation's first major sectional party by the mid-1850s, politics became the stage on which sectional tensions were played out. Although much of the West – the focal point of sectional tensions – was unfit for cotton cultivation, Southern secessionists read the political fallout as a sign that their power in national politics was rapidly weakening. Before, the slave system had been buttressed to an extent by the Democratic Party, which was increasingly seen as representing a more pro-Southern position that unfairly permitted Southerners to prevail in the nation's territories and to dominate national policy before the Civil War. But the Democrats suffered a significant reverse in the electoral realignment of the mid-1850s. 1860 was a critical election that marked a stark change in existing patterns of party loyalties among groups of voters; Abraham Lincoln's election was a watershed in the balance of power of competing national and parochial interests and affiliations. Once the election returns were certain, a special South Carolina convention declared "that the Union now subsisting between South Carolina and other states under the name of the 'United States of America' is hereby dissolved", heralding the secession of six more cotton states by February and the formation of an independent nation, the Confederate States of America.
Lipset (1960) examined the secessionist vote in each Southern state in 1860–61. In each state he divided the counties into those with a high, medium, or low proportion of slaves. He found that in the 181 high-slavery counties, the vote was 72% for secession; in the 205 low-slavery counties, the vote was only 37% for secession; and in the 153 middle counties, the vote for secession was in the middle, at 60%.
Both the outgoing Buchanan administration and the incoming Lincoln administration refused to recognize the legality of secession or the legitimacy of the Confederacy. After Lincoln called for troops, four more slave states of the Upper South, where cotton was far less important, seceded. Disputes over the route of a proposed transcontinental railroad affected the timing of the Kansas-Nebraska Act.
The timing of the completion of a railroad from Georgia to South Carolina also was important, in that it allowed influential Georgians to declare their support for secession in South Carolina at a crucial moment. South Carolina secessionists feared that if they seceded first, they would be as isolated as they had been during the Nullification Crisis. Support from Georgians was quickly followed by support for secession in the same South Carolina state legislature that had previously preferred a cooperationist approach, as opposed to separate state secession.
The Totten system of forts (including Forts Sumter and Pickens), designed for coastal defense, encouraged Anderson to move federal troops from Fort Moultrie to the more easily defended Fort Sumter in Charleston harbor, South Carolina. Likewise, Slemmer moved U.S. troops from Fort Barrancas to the more easily defended Fort Pickens in Florida. These troop movements were defensive from the Northern point of view, and acts of aggression from the Southern point of view. Similarly, an attempt to resupply Fort Sumter via the ship Star of the West was seen by secessionists as an attack on a Southern-owned fort, and from the Northern point of view as an attempt to defend U.S. property.
The tariff issue is greatly exaggerated by Lost Cause historians. The tariff had been written and approved by the South, so it was mostly Northerners (especially in Pennsylvania) who complained about the low rates; some Southerners feared that eventually the North would have enough control that it could raise the tariff at will. As for states' rights, while the right of revolution mentioned in the Declaration of Independence was based on the inalienable equal rights of man, secessionists believed in a modified version of states' rights that was safe for slavery. These issues were especially important in the lower South, where 47 percent of the population were slaves. The upper South, where 32 percent of the population were slaves, considered the Fort Sumter crisis—especially Lincoln's call for troops to march south to recapture it—a cause for secession. The northernmost border slave states, where 13 percent of the population were slaves, did not secede.
Fort Sumter
When South Carolina seceded in December 1860, Major Robert Anderson, a pro-slavery former slave-owner from Kentucky, remained loyal to the Union. He was the commanding officer of United States Army forces in Charleston, South Carolina—the last remaining important Union post in the Deep South. Acting without orders, he moved his small garrison from Fort Moultrie, which was indefensible, to the more modern, more defensible Fort Sumter in the middle of Charleston Harbor. South Carolina leaders cried betrayal, while the North celebrated with enormous excitement at this show of defiance against secessionism. In February 1861 the Confederate States of America was formed and took charge. Jefferson Davis, the Confederate President, ordered that the fort be captured. The artillery attack was commanded by Brig. Gen. P. G. T. Beauregard, who had been Anderson's student at West Point. The attack began April 12, 1861, and continued until Anderson, badly outnumbered and outgunned, surrendered the fort on April 14. The battle began the American Civil War, as an overwhelming demand for war swept both the North and the South, with only Kentucky attempting to remain neutral.
The opening of the Civil War, as well as the modern meaning of the American flag, according to Adam Goodheart (2011), was forged in December 1860, when Anderson, acting without orders, moved the American garrison from Fort Moultrie to Fort Sumter in Charleston Harbor, in defiance of the overwhelming power of the new Confederate States of America. Goodheart argues this was the opening move of the Civil War, and that the flag was used throughout the North to symbolize American nationalism and rejection of secessionism.
- Before that day, the flag had served mostly as a military ensign or a convenient marking of American territory, flown from forts, embassies, and ships, and displayed on special occasions like the Fourth of July. But in the weeks after Major Anderson's surprising stand, it became something different. Suddenly the Stars and Stripes flew – as it does today, and especially as it did after September 11 – from houses, from storefronts, from churches; above the village greens and college quads. For the first time American flags were mass-produced rather than individually stitched and even so, manufacturers could not keep up with demand. As the long winter of 1861 turned into spring, that old flag meant something new. The abstraction of the Union cause was transfigured into a physical thing: strips of cloth that millions of people would fight for, and many thousands die for.
Onset of the Civil War and the question of compromise
Abraham Lincoln's rejection of the Crittenden Compromise, the failure to secure the ratification of the Corwin Amendment in 1861, and the inability of the Washington Peace Conference of 1861 to provide an effective alternative to Crittenden and Corwin came together to prevent a compromise that is still debated by Civil War historians. Even as the war was going on, William Seward and James Buchanan were outlining a debate over the question of inevitability that would continue among historians. Two competing explanations of the sectional tensions inflaming the nation emerged even before the war. Buchanan believed the sectional hostility to be the accidental, unnecessary work of self-interested or fanatical agitators. He also singled out the "fanaticism" of the Republican Party. Seward, on the other hand, believed there to be an irrepressible conflict between opposing and enduring forces.
The irrepressible conflict argument was the first to dominate historical discussion. In the first decades after the fighting, histories of the Civil War generally reflected the views of Northerners who had participated in the conflict. The war appeared to be a stark moral conflict in which the South was to blame, a conflict that arose as a result of the designs of slave power. Henry Wilson's History of the Rise and Fall of the Slave Power in America (1872–1877) is the foremost representative of this moral interpretation, which argued that Northerners had fought to preserve the Union against the aggressive designs of "slave power". Later, in his seven-volume History of the United States from the Compromise of 1850 to the Civil War (1893–1900), James Ford Rhodes identified slavery as the central—and virtually only—cause of the Civil War. The North and South had reached positions on the issue of slavery that were both irreconcilable and unalterable. The conflict had become inevitable. But the idea that the war was avoidable did not gain ground among historians until the 1920s, when the "revisionists" began to offer new accounts of the prologue to the conflict.
Revisionist historians, such as James G. Randall and Avery Craven, saw in the social and economic systems of the South no differences so fundamental as to require a war. Randall blamed the ineptitude of a "blundering generation" of leaders. He also saw slavery as essentially a benign institution, crumbling in the presence of 19th century tendencies. Craven, the other leading revisionist, placed more emphasis on the issue of slavery than Randall but argued roughly the same points. In The Coming of the Civil War (1942), Craven argued that slave laborers were not much worse off than Northern workers, that the institution was already on the road to ultimate extinction, and that the war could have been averted by skillful and responsible leaders in the tradition of Congressional statesmen Henry Clay and Daniel Webster. Two of the most important figures in U.S. politics in the first half of the 19th century, Clay and Webster, arguably in contrast to the 1850s generation of leaders, shared a predisposition to compromises marked by a passionate patriotic devotion to the Union. But it is possible that the politicians of the 1850s were not inept. More recent studies have kept elements of the revisionist interpretation alive, emphasizing the role of political agitation (the efforts of Democratic politicians of the South and Republican politicians in the North to keep the sectional conflict at the center of the political debate). David Herbert Donald argued in 1960 that the politicians of the 1850s were not unusually inept but that they were operating in a society in which traditional restraints were being eroded in the face of the rapid extension of democracy. The stability of the two-party system kept the union together, but would collapse in the 1850s, thus reinforcing, rather than suppressing, sectional conflict. Reinforcing this interpretation, political sociologists have pointed out that the stable functioning of a political democracy requires a setting in which parties represent broad coalitions of varying interests, and that peaceful resolution of social conflicts takes place most easily when the major parties share fundamental values. Before the 1850s, the second American two-party system (competition between the Democrats and the Whigs) conformed to this pattern, largely because sectional ideologies and issues were kept out of politics to maintain cross-regional networks of political alliances. However, in the 1840s and 1850s, ideology made its way into the heart of the political system despite the best efforts of the conservative Whig Party and the Democratic Party to keep it out.
Contemporaneous explanations
"The new [Confederate] Constitution has put at rest forever all the agitating questions relating to our peculiar institutions—African slavery as it exists among us—the proper status of the negro in our form of civilization. This was the immediate cause of the late rupture and present revolution. . . . (Jefferson's) ideas, however, were fundamentally wrong. They rested upon the assumption of the equality of races. This was an error....
Our new government is founded upon exactly the opposite idea; its foundations are laid, its cornerstone rests, upon the great truth that the negro is not equal to the white man; that slavery – subordination to the superior race – is his natural and normal condition."
In July 1863, as decisive campaigns were fought at Gettysburg and Vicksburg, Republican senator Charles Sumner re-dedicated his speech The Barbarism of Slavery and said that desire to preserve slavery was the sole cause of the war:
"[T]here are two apparent rudiments to this war. One is Slavery and the other is State Rights. But the latter is only a cover for the former. If Slavery were out of the way there would be no trouble from State Rights. The war, then, is for Slavery, and nothing else. It is an insane attempt to vindicate by arms the lordship which had been already asserted in debate. With mad-cap audacity it seeks to install this Barbarism as the truest Civilization. Slavery is declared to be the "corner-stone" of the new edifice."
Lincoln's war goals were reactions to the war, as opposed to causes. Abraham Lincoln explained the nationalist goal as the preservation of the Union on August 22, 1862, one month before his preliminary Emancipation Proclamation:
"I would save the Union. I would save it the shortest way under the Constitution. The sooner the national authority can be restored; the nearer the Union will be "the Union as it was." ... My paramount object in this struggle is to save the Union, and is not either to save or to destroy slavery. If I could save the Union without freeing any slave I would do it, and if I could save it by freeing all the slaves I would do it; and if I could save it by freeing some and leaving others alone I would also do that.... I have here stated my purpose according to my view of official duty; and I intend no modification of my oft-expressed personal wish that all men everywhere could be free."
On March 4, 1865, Lincoln said in his Second Inaugural Address that slavery was the cause of the War:
"One-eighth of the whole population were colored slaves, not distributed generally over the Union, but localized in the southern part of it. These slaves constituted a peculiar and powerful interest. All knew that this interest was somehow the cause of the war. To strengthen, perpetuate, and extend this interest was the object for which the insurgents would rend the Union even by war, while the Government claimed no right to do more than to restrict the territorial enlargement of it."
See also
- American Civil War - Compensated Emancipation - Conclusion of the American Civil War - Issues of the American Civil War - Slavery in the United States - Timeline of events leading to the American Civil War - Elizabeth R. Varon, Bruce Levine, Marc Egnal, and Michael Holt at a plenary session of the Organization of American Historians, March 17, 2011, reported by David A. Walsh "Highlights from the 2011 Annual Meeting of the Organization of American Historians in Houston, Texas" HNN online - David Potter, The Impending Crisis, pages 42–50 - The Mason-Dixon Line and the Ohio River were key boundaries. - Fehrenbacher pp.15–17. Fehrenbacher wrote, "As a racial caste system, slavery was the most distinctive element in the southern social order. The slave production of staple crops dominated southern agriculture and eminently suited the development of a national market economy." - Fehrenbacher pp. 16–18 - Goldstone p. 13 - McDougall p. 318 - Forbes p. 4 - Mason pp.
3–4 - Freehling p.144 - Freehling p. 149. In the House the votes for the Tallmadge amendments in the North were 86–10 and 80-14 in favor, while in the South the vote to oppose was 66–1 and 64-2. - Missouri Compromise - Forbes pp. 6–7 - Mason p. 8 - Leah S. Glaser, "United States Expansion, 1800–1860" - Richard J. Ellis, Review of The Shaping of American Liberalism: The Debates over Ratification, Nullification, and Slavery. by David F. Ericson, William and Mary Quarterly, Vol. 51, No. 4 (1994), pp. 826–829 - John Tyler, Life Before the Presidency - Jane H. Pease, William H. Pease, "The Economics and Politics of Charleston's Nullification Crisis", Journal of Southern History, Vol. 47, No. 3 (1981), pp. 335–362 - Remini, Andrew Jackson, v2 pp. 136–137. Niven pg. 135–137. Freehling, Prelude to Civil War pg 143 - Craven pg.65. Niven pg. 135–137. Freehling, Prelude to Civil War pg 143 - Ellis, Richard E. The Union at Risk: Jacksonian Democracy, States' Rights, and the Nullification Crisis (1987), page 193; Freehling, William W. Prelude to Civil War: The Nullification Crisis in South Carolina 1816–1836. (1965), page 257 - Ellis p. 193. Ellis further notes that “Calhoun and the nullifiers were not the first southerners to link slavery with states’ rights. At various points in their careers, John Taylor, John Randolph, and Nathaniel Macon had warned that giving too much power to the federal government, especially on such an open-ended issue as internal improvement, could ultimately provide it with the power to emancipate slaves against their owners’ wishes.” - Jon Meacham (2009), American Lion: Andrew Jackson in the White House, p. 247; Correspondence of Andrew Jackson, Vol. V, p. 72. - Varon (2008) p. 109. Wilentz (2005) p. 451 - Miller (1995) pp. 144–146 - Miller (1995) pp. 209–210 - Wilentz (2005) pp. 470–472 - Miller, 112 - Miller, pp. 476, 479–481 - Huston p. 41. Huston writes, "...on at least three matters southerners were united. First, slaves were property. Second, the sanctity of southerners' property rights in slaves was beyond the questioning of anyone inside or outside of the South. Third, slavery was the only means of adjusting social relations properly between Europeans and Africans." - Brinkley, Alan (1986). American History: A Survey. New York: McGraw-Hill. p. 328. - Moore, Barrington (1966). Social Origins of Dictatorship and Democracy. New York: Beacon Press. p. 117. - North, Douglas C. (1961). The Economic Growth of the United States 1790–1860. Englewood Cliffs. p. 130. - Elizabeth Fox-Genovese and Eugene D. Genovese, Slavery in White and Black: Class and Race in the Southern Slaveholders' New World Order (2008) - James M. McPherson, "Antebellum Southern Exceptionalism: A New Look at an Old Question", Civil War History 29 (September 1983) - "Conflict and Collaboration: Yeomen, Slaveholders, and Politics in the Antebellum South", Social History 10 (October 1985): 273–98. quote at p. 297. - Thornton, Politics and Power in a Slave Society: Alabama, 1800–1860 (Louisiana State University Press, 1978) - McPherson (2007) pp.4–7. James M. McPherson wrote in referring to the Progressive historians, the Vanderbilt agrarians, and revisionists writing in the 1940s, “While one or more of these interpretations remain popular among the Sons of Confederate Veterans and other Southern heritage groups, few historians now subscribe to them.” - Craig in Woodworth, ed. The American Civil War: A Handbook of Literature and Research (1996), p.505. - Donald 2001 pp 134–38 - Huston pp. 24–25. 
Huston lists other estimates of the value of slaves; James D. B. De Bow puts it at $2 billion in 1850, while in 1858 Governor James Pettus of Mississippi estimated the value at $2.6 billion in 1858. - Huston p. 25 - Soil Exhaustion as a Factor in the Agricultural History of Virginia and Maryland, 1606–1860 - Encyclopedia of American Foreign Policy – A-D - Woodworth, ed. The American Civil War: A Handbook of Literature and Research (1996), 145 151 505 512 554 557 684; Richard Hofstadter, The Progressive Historians: Turner, Beard, Parrington (1969); for one dissenter see Marc Egnal. "The Beards Were Right: Parties in the North, 1840–1860". Civil War History 47, no. 1. (2001): 30–56. - Kenneth M. Stampp, The Imperiled Union: Essays on the Background of the Civil War (1981) p 198 - Also from Kenneth M. Stampp, The Imperiled Union p 198 Most historians... now see no compelling reason why the divergent economies of the North and South should have led to disunion and civil war; rather, they find stronger practical reasons why the sections, whose economies neatly complemented one another, should have found it advantageous to remain united. Beard oversimplified the controversies relating to federal economic policy, for neither section unanimously supported or opposed measures such as the protective tariff, appropriations for internal improvements, or the creation of a national banking system.... During the 1850s, Federal economic policy gave no substantial cause for southern disaffection, for policy was largely determined by pro-Southern Congresses and administrations. Finally, the characteristic posture of the conservative northeastern business community was far from anti-Southern. Most merchants, bankers, and manufacturers were outspoken in their hostility to antislavery agitation and eager for sectional compromise in order to maintain their profitable business connections with the South. The conclusion seems inescapable that if economic differences, real though they were, had been all that troubled relations between North and South, there would be no substantial basis for the idea of an irrepressible conflict. - James M. McPherson, Antebellum Southern Exceptionalism: A New Look at an Old Question Civil War History – Volume 50, Number 4, December 2004, page 421 - Richard Hofstadter, "The Tariff Issue on the Eve of the Civil War", The American Historical Review Vol. 44, No. 1 (1938), pp. 50–55 full text in JSTOR - John Calhoun, Slavery a Positive Good, February 6, 1837 - Noll, Mark A. (2002). America's God: From Jonathan Edwards to Abraham Lincoln. Oxford University Press. p. 640. - Noll, Mark A. (2006). The Civil War as a Theological Crisis. UNC Press. p. 216. - Noll, Mark A. (2002). The US Civil War as a Theological War: Confederate Christian Nationalism and the League of the South. Oxford University Press. p. 640. - Hull, William E. (February 2003). "Learning the Lessons of Slavery". Christian Ethics Today 9 (43). Retrieved 2007-12-19. - Methodist Episcopal Church, South - Presbyterian Church in the United States - Gaustad, Edwin S. (1982). A Documentary History of Religion in America to the Civil War. Wm. B. Eerdmans Publishing Co. pp. 491–502. - Johnson, Paul (1976). History of Christianity. Simon & Schuster. p. 438. - Noll, Mark A. (2002). America's God: From Jonathan Edwards to Abraham Lincoln. Oxford University Press. pp. 399–400. - Miller, Randall M.; Stout, Harry S.; Wilson, Charles Reagan, eds. (1998). "title=The Bible and Slavery". Religion and the American Civil War. 
Oxford University Press. p. 62. - Bestor, 1964, pp. 10–11 - McPherson, 2007, p. 14. - Stampp, pp. 190–193. - Bestor, 1964, p. 11. - Krannawitter, 2008, pp. 49–50. - McPherson, 2007, pp. 13–14. - Bestor, 1964, pp. 17–18. - Guelzo, pp. 21–22. - Bestor, 1964, p. 15. - Miller, 2008, p. 153. - McPherson, 2007, p. 3. - Bestor, 1964, p. 19. - McPherson, 2007, p. 16. - Bestor, 1964, pp. 19–20. - Bestor, 1964, p. 21 - Bestor, 1964, p. 20 - Bestor, 1964, p. 20. - Russell, 1966, p. 468-469 - Bestor, 1964, p. 23 - Russell, 1966, p. 470 - Bestor, 1964, p. 24 - Bestor, 1964, pp. 23-24 - Holt, 2004, pp. 34–35. - McPherson, 2007, p. 7. - Krannawitter, 2008, p. 232. - Bestor, 1964, pp. 24–25. - "The Amistad Case". National Portrait Gallery. Retrieved 2007-10-16. - McPherson, Battle Cry p. 8; James Brewer Stewart, Holy Warriors: The Abolitionists and American Slavery (1976); Pressly, 270ff - Wendell Phillips, "No Union With Slaveholders", January 15, 1845, in Louis Ruchames, ed. The Abolitionists (1963), p.196. - Mason I Lowance, Against Slavery: An Abolitionist Reader, (2000), page 26 - "Abolitionist William Lloyd Garrison Admits of No Compromise with the Evil of Slavery". Retrieved 2007-10-16. - Alexander Stephen's Cornerstone Speech, Savannah; Georgia, March 21, 1861 - Stampp, The Causes of the Civil War, page 59 - Schlessinger quotes from an essay “The State Rights Fetish” excerpted in Stampp p. 70 - Schlessinger in Stampp pp. 68–69 - McDonald p. 143 - Kenneth M. Stampp, The Causes of the Civil War, p. 14 - Nevins, Ordeal of the Union: Fruits of Manifest Destiny 1847–1852, p. 155 - Donald, Baker, and Holt, p.117. - When arguing for the equality of states, Jefferson Davis said, "Who has been in advance of him in the fiery charge on the rights of the States, and in assuming to the Federal Government the power to crush and to coerce them? Even to-day he has repeated his doctrines. He tells us this is a Government which we will learn is not merely a Government of the States, but a Government of each individual of the people of the United States". – Jefferson Davis' reply in the Senate to William H. Seward, Senate Chamber, U.S. Capitol, February 29, 1860, From The Papers of Jefferson Davis, Volume 6, pp. 277–84. - When arguing against equality of individuals, Davis said, "We recognize the fact of the inferiority stamped upon that race of men by the Creator, and from the cradle to the grave, our Government, as a civil institution, marks that inferiority". – Jefferson Davis' reply in the Senate to William H. Seward, Senate Chamber, U.S. Capitol, February 29, 1860, – From The Papers of Jefferson Davis, Volume 6, pp. 277–84. Transcribed from the Congressional Globe, 36th Congress, 1st Session, pp. 916–18. - Jefferson Davis' Second Inaugural Address, Virginia Capitol, Richmond, February 22, 1862, Transcribed from Dunbar Rowland, ed., Jefferson Davis, Constitutionalist, Volume 5, pp. 198–203. Summarized in The Papers of Jefferson Davis, Volume 8, p. 55. - Lawrence Keitt, Congressman from South Carolina, in a speech to the House on January 25, 1860: Congressional Globe. - Stampp, The Causes of the Civil War, pages 63–65 - William C. Davis, Look Away, pages 97–98 - David Potter, The Impending Crisis, page 275 - First Lincoln Douglas Debate at Ottawa, Illinois August 21, 1858 - Bertram Wyatt-Brown, Southern Honor: Ethics and Behavior in the Old South (1982) pp 22–23, 363 - Christopher J. Olsen (2002). Political Culture and Secession in Mississippi: Masculinity, Honor, and the Antiparty Tradition, 1830–1860. 
Oxford University Press. p. 237. footnote 33 - Lacy Ford, ed. (2011). A Companion to the Civil War and Reconstruction. Wiley. p. 28. - Michael William Pfau, "Time, Tropes, and Textuality: Reading Republicanism in Charles Sumner's 'Crime Against Kansas'", Rhetoric & Public Affairs vol 6 #3 (2003) 385–413, quote on p. 393 online in Project MUSE - In modern terms Sumner accused Butler of being a "pimp who attempted to introduce the whore, slavery, into Kansas" says Judith N. McArthur; Orville Vernon Burton (1996). "A Gentleman and an Officer": A Military and Social History of James B. Griffin's Civil War. Oxford U.P. p. 40. - Williamjames Hoffer, The Caning of Charles Sumner: Honor, Idealism, and the Origins of the Civil War (2010) p. 62 - William E. Gienapp, "The Crime Against Sumner: The Caning of Charles Sumner and the Rise of the Republican Party," Civil War History (1979) 25#3 pp. 218-245 doi:10.1353/cwh.1979.0005 - Donald, David; Randal, J.G. (1961). The Civil War and Reconstruction. Boston: D.C. Health and Company. p. 79. - Allan, Nevins (1947). Ordeal of the Union (vol. 3) III. New York: Charles Scribner's Sons. p. 218. - Moore, Barrington, p.122. - William W, Freehling, The Road to Disunion: Secessionists Triumphant 1854–1861, pages 271–341 - Roy Nichols, The Disruption of American Democracy: A History of the Political Crisis That Led Up To The Civil War (1949) - Seymour Martin Lipset, Political Man: The Social Bases of Politics (Doubleday, 1960) p. 349. - Maury Klein, Days of Defiance: Sumter, Secession, and the Coming of the Civil War (1999) - David M. Potter, The Impending Crisis, pages 14–150 - William W. Freehling, The Road to Disunion, Secessionists Triumphant: 1854–1861, pages 345–516 - Richard Hofstadter, "The Tariff Issue on the Eve of the Civil War", American Historical Review Vol. 44, No. 1 (October 1938), pp. 50–55 in JSTOR - Daniel Crofts, Reluctant Confederates: Upper South Unionists in the Secession Crisis (1989 - Adam Goodheart, 1861: The Civil War Awakening (2011) ch 2–5 - Adam Goodheart, "Prologue", in 1861: The Civil War Awakening (2011) - Letter to Horace Greeley, August 22, 1862 - Craven, Avery. The Coming of the Civil War (1942) ISBN 0-226-11894-0 - Donald, David Herbert, Baker, Jean Harvey, and Holt, Michael F. The Civil War and Reconstruction. (2001) - Ellis, Richard E. The Union at Risk: Jacksonian Democracy, States' Rights and the Nullification Crisis. (1987) - Fehrenbacher, Don E. The Slaveholding Republic: An Account of the United States Government's Relations to Slavery. (2001) ISBN 1-195-14177-6 - Forbes, Robert Pierce. The Missouri Compromise and ItAftermath: Slavery and the Meaning of America. (2007) ISBN 978-0-8078-3105-2 - Freehling, William W. Prelude to Civil War: The Nullification Crisis in South Carolina 1816–1836. (1965) ISBN 0-19-507681-8 - Freehling, William W. The Road to Disunion: Secessionists at Bay 1776–1854. (1990) ISBN 0-19-505814-3 - Freehling, William W. and Craig M. Simpson, eds. Secession Debated: Georgia's Showdown in 1860 (1992), speeches - Hesseltine; William B. ed. The Tragic Conflict: The Civil War and Reconstruction (1962), primary documents - Huston, James L. Calculating the Value of the Union: Slavery, Property Rights, and the Economic Origins of the Civil War. (2003) ISBN 0-8078-2804-1 - Mason, Matthew. Slavery and Politics in the Early American Republic. (2006) ISBN 13:978-0-8078-3049-9 - McDonald, Forrest. States' Rights and the Union: Imperium in Imperio, 1776–1876. (2000) - McPherson, James M. 
This Mighty Scourge: Perspectives on the Civil War. (2007) - Miller, William Lee. Arguing About Slavery: John Quincy Adams and the Great Battle in the United States Congress. (1995) ISBN 0-394-56922-9 - Niven, John. John C. Calhoun and the Price of Union (1988) ISBN 0-8071-1451-0 - Perman, Michael, ed. Major Problems in Civil War & Reconstruction (2nd ed. 1998) primary and secondary sources. - Remini, Robert V. Andrew Jackson and the Course of American Freedom, 1822–1832,v2 (1981) ISBN 0-06-014844-6 - Stampp, Kenneth, ed. The Causes of the Civil War (3rd ed 1992), primary and secondary sources. - Varon, Elizabeth R. Disunion: The Coming of the American Civil War, 1789–1859. (2008) ISBN 978-0-8078-3232-5 - Wakelyn; Jon L. ed. Southern Pamphlets on Secession, November 1860 – April 1861 (1996) - Wilentz, Sean. The Rise of American Democracy: Jefferson to Lincoln. (2005) ISBN 0-393-05820-4 Further reading - Ayers, Edward L. What Caused the Civil War? Reflections on the South and Southern History (2005). 222 pp. - Beale, Howard K., "What Historians Have Said About the Causes of the Civil War", Social Science Research Bulletin 54, 1946. - Boritt, Gabor S. ed. Why the Civil War Came (1996) - Childers, Christopher. "Interpreting Popular Sovereignty: A Historiographical Essay", Civil War History Volume 57, Number 1, March 2011 pp. 48–70 in Project MUSE - Crofts Daniel. Reluctant Confederates: Upper South Unionists in the Secession Crisis (1989), pp 353–82 and 457-80 - Etcheson, Nicole. "The Origins of the Civil War", History Compass 2005 #3 (North America) - Foner, Eric. "The Causes of the American Civil War: Recent Interpretations and New Directions". In Beyond the Civil War Synthesis: Political Essays of the Civil War Era, edited by Robert P. Swierenga, 1975. - Kornblith, Gary J., "Rethinking the Coming of the Civil War: A Counterfactual Exercise". Journal of American History 90.1 (2003): 80 pars. detailed historiography; online version - Pressly, Thomas. Americans Interpret Their Civil War (1966), sorts historians into schools of interpretation - SenGupta, Gunja. “Bleeding Kansas: A Review Essay”. Kansas History 24 (Winter 2001/2002): 318–341. - Tulloch, Hugh. The Debate On the American Civil War Era (Issues in Historiography) (2000) - Woodworth, Steven E. ed. The American Civil War: A Handbook of Literature and Research (1996), 750 pages of historiography; see part IV on Causation. "Needless war" school - Craven, Avery, The Repressible Conflict, 1830–61 (1939) - The Coming of the Civil War (1942) - "The Coming of the War Between the States", Journal of Southern History 2 (August 1936): 30–63; in JSTOR - Donald, David. "An Excess of Democracy: The Civil War and the Social Process" in David Donald, Lincoln Reconsidered: Essays on the Civil War Era, 2d ed. (New York: Alfred A. Knopf, 1966), 209–35. - Holt, Michael F. The Political Crisis of the 1850s. (1978) emphasis on political parties and voters - Randall, James G. "The Blundering Generation", Mississippi Valley Historical Review 27 (June 1940): 3–28 in JSTOR - James G. Randall. The Civil War and Reconstruction. (1937), survey and statement of "needless war" interpretation - Pressly, Thomas J. "The Repressible Conflict", chapter 7 of Americans Interpret Their Civil War (Princeton: Princeton University Press, 1954). - Ramsdell, Charles W. 
"The Natural Limits of Slavery Expansion", Mississippi Valley Historical Review, 16 (September 1929), 151–71, in JSTOR; says slavery had almost reached its outer limits of growth by 1860, so war was unnecessary to stop further growth. online version without footnotes Economic causation and modernization - Beard, Charles, and Mary Beard. The Rise of American Civilization. Two volumes. (1927), says slavery was minor factor - Luraghi, Raimondo, "The Civil War and the Modernization of American Society: Social Structure and Industrial Revolution in the Old South Before and During the War", Civil War History XVIII (September 1972). in JSTOR - McPherson, James M. Ordeal by Fire: the Civil War and Reconstruction. (1982), uses modernization interpretation. - Moore, Barrington. Social Origins of Dictatorship and Democracy. (1966). modernization interpretation - Thornton, Mark; Ekelund, Robert B. Tariffs, Blockades, and Inflation: The Economics of the Civil War. (2004), stresses fear of future protective tariffs Nationalism and culture - Crofts Daniel. Reluctant Confederates: Upper South Unionists in the Secession Crisis (1989) - Current, Richard. Lincoln and the First Shot (1963) - Nevins, Allan, author of most detailed history - Ordeal of the Union 2 vols. (1947) covers 1850–57. - The Emergence of Lincoln, 2 vols. (1950) covers 1857–61; does not take strong position on causation - Olsen, Christopher J. Political Culture and Secession in Mississippi: Masculinity, Honor, and the Antiparty Tradition, 1830–1860" (2000), cultural interpretation - Potter, David The Impending Crisis 1848–1861. (1976), Pulitzer Prize-winning history emphasizing rise of Southern nationalism - Potter, David M. Lincoln and His Party in the Secession Crisis (1942). - Miller, Randall M., Harry S. Stout, and Charles Reagan Wilson, eds. Religion and the American Civil War (1998), essays Slavery as cause - Ashworth, John - Slavery, Capitalism, and Politics in the Antebellum Republic. (1995) - "Free labor, wage labor, and the slave power: republicanism and the Republican party in the 1850s", in Melvyn Stokes and Stephen Conway (eds), The Market Revolution in America: Social, Political and Religious Expressions, 1800–1880, pp. 128–46. (1996) - Donald, David et al. The Civil War and Reconstruction (latest edition 2001); 700-page survey - Fellman, Michael et al. This Terrible War: The Civil War and its Aftermath (2003), 400-page survey - Foner, Eric - Free Soil, Free Labor, Free Men: the Ideology of the Republican Party before the Civil War. (1970, 1995) stress on ideology - Politics and Ideology in the Age of the Civil War. New York: Oxford University Press. (1981) - Freehling, William W. The Road to Disunion: Secessionists at Bay, 1776–1854 1991., emphasis on slavery - Gienapp William E. The Origins of the Republican Party, 1852–1856 (1987) - Manning, Chandra. What This Cruel War Was Over: Soldiers, Slavery, and the Civil War. New York: Vintage Books (2007). - McPherson, James M. Battle Cry of Freedom: The Civil War Era. (1988), major overview, neoabolitionist emphasis on slavery - Morrison, Michael. Slavery and the American West: The Eclipse of Manifest Destiny and the Coming of the Civil War (1997) - Ralph E. Morrow. "The Proslavery Argument Revisited", The Mississippi Valley Historical Review, Vol. 48, No. 1. (June 1961), pp. 79–94. in JSTOR - Rhodes, James Ford History of the United States from the Compromise of 1850 to the McKinley-Bryan Campaign of 1896 Volume: 1. (1920), highly detailed narrative 1850–56. 
vol 2 1856–60; emphasis on slavery - Schlesinger, Arthur Jr. "The Causes of the Civil War" (1949) reprinted in his The Politics of Hope (1963); reintroduced new emphasis on slavery - Stampp, Kenneth M. America in 1857: A Nation on the Brink (1990) - Stampp, Kenneth M. And the War Came: The North and the Secession Crisis, 1860–1861 (1950). - Civil War and Reconstruction: Jensen's Guide to WWW Resources - Report of the Brown University Steering Committee on Slavery and Justice - State by state popular vote for president in 1860 election - Tulane course – article on 1860 election - Onuf, Peter. "Making Two Nations: The Origins of the Civil War" 2003 speech - The Gilder Lehrman Institute of American History - CivilWar.com Many source materials, including states' secession declarations. - Causes of the Civil War Collection of primary documents - Declarations of Causes of Seceding States - Alexander H. Stephens' Cornerstone Address - An entry from Alexander Stephens' diary, dated 1866, reflecting on the origins of the Civil War. - The Arguments of the Constitutional Unionists in 1850–51 - Shmoop US History: Causes of the Civil War – study guide, dates, trivia, multimedia, teachers' guide - Booknotes interview with Stephen B. Oates on The Approaching Fury: Voices of the Storm, 1820–1861, April 27, 1997.
http://en.wikipedia.org/wiki/Origins_of_the_American_Civil_War
The Roman Empire is the term conventionally used to describe the Roman state in the centuries following its reorganization under the leadership of Caesar Augustus. Although Rome possessed a collection of tribute-states for centuries before the autocracy of Augustus, the pre-Augustan state is conventionally described as the Roman Republic. The difference between the Roman Empire and the Roman Republic lies primarily in the governing bodies and their relationship to each other. For many years historians made a distinction between the Principate, the period from Augustus until the Crisis of the Third Century, and the Dominate, the period from Diocletian until the end of the Empire in the West. According to this theory, during the Principate (from the Latin word princeps, meaning "the first", the only title Augustus would permit himself) the realities of dictatorship were concealed behind Republican forms; while during the Dominate (from the word dominus, meaning "Master") imperial power showed its naked face, with golden crowns and ornate imperial ritual. We now know that the situation was far more nuanced: certain historical forms continued until the Byzantine period, more than one thousand years after they were created, and displays of imperial majesty were common from the earliest days of the Empire. Over the course of its history the Roman Empire controlled all of the Hellenized states that bordered the Mediterranean Sea, as well as the Celtic regions of Western Europe. The administration of the Roman Empire eventually evolved into separate Eastern and Western halves, more or less following this cultural division. They are respectively known as the Eastern Roman Empire and the Western Roman Empire. By the time that Odoacer took power in the West in 476, the Western half was clearly evolving in new directions, with the Church absorbing much of the administrative and charitable roles previously filled by the secular government. The Eastern half of the Empire, centered around Constantinople, the city of Constantine the Great, remained the heartland of the Roman state until 1453, when the Byzantine Empire fell to the Ottoman Turks. The Roman Empire's influence on government, law, and monumental architecture, as well as many other aspects of Western life, remains inescapable. Roman titles of power were adopted by successor states and other entities with imperial pretensions, including the Frankish kingdom, the Holy Roman Empire, the first and second Bulgarian empires (see List of Bulgarian monarchs), the Russian/Kiev dynasties (see czars), and the German Empire (see Kaiser). See Roman culture.
As a matter of convenience, the Roman Empire is held to have begun with the constitutional settlement following the Battle of Actium in 31 BC. In fact the Republican institutions at Rome had been destroyed over the preceding century and Rome had been effectively under one-man rule since the time of Sulla. The reign of Augustus marks an important turning point, though. By the time of Actium, there was no one left alive who could recall functional Republican institutions or a time when there was no civil war in Rome. Forty-five years later, at Augustus's death, there would have been few living who could recall a time before Augustus himself. The average Roman had a life expectancy of only forty years. The long, peaceful and consensual reign of Augustus allowed a generation to live and die knowing no other form of rule, or indeed no other ruler.
This was critically important to creating a mindset that would allow hereditary monarchy to exist in a Rome that had killed Julius Caesar for his regal pretensions. Whether or not the people of Rome welcomed one-man rule, in the Age of Augustus, it was all they knew, and so it would remain for many centuries. Augustus's reign was notable for several long-lasting achievements that would define the Empire:
- Creation of a hereditary office, which we refer to as Emperor of Rome.
- Fixation of the pay scale and duration of Roman military service, which marked the final step in the evolution of the Roman Army from a citizen army to a professional one.
- Creation of the Praetorian Guard, which would make and unmake emperors for centuries.
- Expansion to the natural borders of the Empire. The borders reached upon Augustus's death remained the limits of Empire, with minimal exceptions, for the next four hundred years.
- Development of trade links with regions as far as India and China.
- Creation of a civil service outside of the Senatorial structure, leading to a continuous weakening of Senatorial authority.
- Enactment of the lex Julia of 18 BC and the lex Papia Poppaea of AD 9, which rewarded childbearing and penalized celibacy.
- Promulgation of the cult of the Deified Julius Caesar throughout the Empire, and the encouragement of a quasi-godlike status for himself in his own lifetime in the Hellenist East. This tradition lasted until the time of Constantine, who was made both a Roman god and "the Thirteenth Apostle" upon his death.
Cultural developments
Main article: Roman culture
The Augustan period saw a tremendous outpouring of cultural achievement in the areas of poetry, history, sculpture and architecture.
Sources
The Age of Augustus is paradoxically far more poorly documented than the Late Republican period that preceded it. While Livy wrote his magisterial history during Augustus's reign and his work covered all of Roman history through 9 BC, only epitomes survive of his coverage of the Late Republican and Augustan periods. Our important primary sources for this period include:
- the Res Gestae Divi Augusti, Augustus's highly partisan autobiography,
- the Historiae Romanae by Velleius Paterculus, a disorganized work which remains the best annals of the Augustan period, and
- the Controversiae and Suasoriae of Seneca the Elder.
Though primary accounts of this period are few, works of poetry, legislation and engineering from this period provide important insights into Roman life. Archeology, including maritime archeology, aerial surveys, epigraphic inscriptions on buildings, and Augustan coinage, has also provided valuable evidence about economic, social and military conditions. Secondary sources on the Augustan Age include Tacitus, Dio Cassius, Plutarch and Suetonius. Josephus's Jewish Antiquities is an important source for Judea in this period, which became a province during Augustus's reign.
Julio-Claudian dynasty: Augustus' heirs
Augustus, leaving no sons, was succeeded by his stepson Tiberius, the son of his wife Livia from her first marriage. Augustus was a scion of the gens Julia (the Julian family), one of the most ancient patrician clans of Rome, while Tiberius was a scion of the gens Claudia, only slightly less ancient than the Julians.
Their three immediate successors were all descended both from the gens Claudia, through Tiberius' brother Nero Claudius Drusus, and from gens Julia, either through Julia Caesaris, Augustus' daughter from his first marriage (Caligula and Nero), or through Augustus' sister Octavia (Claudius). Historians thus refer to their dynasty as "Julio-Claudian". The early years of Tiberius' reign were peaceful and relatively benign. Tiberius secured the power of Rome and enriched her treasury. However, Tiberius' reign soon became characterized by paranoia and slander. In AD19, he was popularly blamed for the death of his nephew, the popular Germanicus. In AD 23 his own son Drusus died. More and more, Tiberius retreated into himself. He began a series of treason trials and executions. He left power in the hands of the commander of the guard, Aelius Sejanus. Tiberius himself retired to live at his villa on the island of Capri in AD 26, leaving administration in the hands of Sejanus, who carried on the persecutions with relish. Sejanus also began to consolidate his own power; in AD 31 he was named co-consul with Tiberius and married Livilla, the emperor's niece. At this point he was hoist by his own petard; the Emperor's paranoia, which he had so ably exploited for his own gain, was turned against him. Sejanus was put to death, along with many of his cronies, the same year. The persecutions continued apace until Tiberius's death in AD 37. At the time of Tiberius's death most of the people who might have succeeded him had been brutally murdered. The logical successor (and Tiberius's own choice) was his grandnephew, Germanicus's son Gaius (better known as Caligula). Caligula started out well, by putting an end to the persecutions and burning his uncle's records. Unfortunately, he quickly lapsed into illness. The Caligula that emerged in late 37 may have suffered from epilepsy, and was more probably insane. He ordered his soldiers to invade Britain, but changed his mind at the last minute and had them pick sea shells on the northern end of France instead. It is believed he carried on incestuous relations with his sisters. He had ordered a statue of himself to be erected in the Temple at Jerusalem, which would have undoubtedly led to revolt had he not been dissuaded. In 41, Caligula was assassinated by the commander of the guard Cassius Chaerea. The only member left of the imperial family to take charge was another nephew of Tiberius's, Tiberius Claudius Drusus Nero Germanicus, better known as the emperor Claudius. Claudius had long been considered a weakling and a fool by the rest of his family. He was, however, neither paranoid like his uncle Tiberius, nor insane like his nephew Caligula, and was therefore able to administer the empire with reasonable ability. He improved the bureaucracy and streamlined the citizenship and senatorial rolls. He also proceeded with the conquest and colonization of Britain (in 43), and incorporated more Eastern provinces into the empire. In Italy, he constructed a winter port at Ostia, thereby providing a place for grain from other parts of the Empire to be brought in inclement weather. On the home front, Claudius was less successful. His wife Messalina cuckolded him; when he found out, he had her executed and married his niece, Agrippina the younger. She, along with several of his freedmen, held an inordinate amount of power over him, and very probably killed him in 54. Claudius was deified later that year. 
The death of Claudius paved the way for Agrippina's own son, the 16-year-old Lucius Domitius, or, as he was known by this time, Nero. Initially, Nero left the rule of Rome to his mother and his tutors, particularly Lucius Annaeus Seneca. However, as he grew older, his desire for power increased; he had his mother and tutors executed. During Nero's reign, there were a series of riots and rebellions throughout the Empire: in Britain, Armenia, Parthia, and Judaea. Nero's inability to manage the rebellions and his basic incompetence became evident quickly and in 68, even the Imperial guard renounced him. Nero is best remembered for playing his fiddle while the city of Rome burned, though this story is apocryphal, as the fiddle had yet to be invented. Nero committed suicide, and the year 69 (known as the Year of the Four Emperors) was a year of civil war, with the emperors Galba, Otho, Vitellius, and Vespasian ruling in quick succession. By the end of the year, Vespasian was able to solidify his power as emperor of Rome. The Flavians, although a relatively short lived dynasty, helped restore stability in an empire on its knees. Although there are criticism of all three, especially based on their more centralized style of rule, it was through the reforms and good rule of the three that helped create a stable empire that would last well into the 3rd Century. Vespasian was a remarkably successful Roman general who had been given rule over much of the eastern part of the Roman Empire. He had supported the imperial claims of Galba; however, on his death, Vespasian became a major contender for the throne. After the suicide of Otho, Vespasian was able to hijack Rome's winter grain supply in Egypt, placing him in a good position to defeat his remaining rival, Vitellius. On December 20, 69, some of Vespasian's partisans were able to occupy Rome. Vitellius was murdered by his own troops, and the next day, Vespasian was confirmed as Emperor by the Senate. At the age of 60 and battle hardened he was hardly a charismatic emperor, but he turned out to be an excellent ruler none the less. Although Vespasian was considered quite the autocrat by the senate, he mostly continued the weakening of that body that had been going since the reign of Tiberius. This was typified by his dating his accession to power from July 1, when his troops proclaimed him emperor, instead of December 21, when the Senate confirmed his appointment. Another example was his assumption of the censorship in 73, giving him power over who exactly made up the senate. He used that power to expel dissident senators. At the same time, he increased the number of senators from 200, at that low level due to the actions of Nero and the year of crisis that followed, to 1000, most of the new senators coming not from Rome but from Italy and the urban centers within the western provinces. Vespasian was able to liberate Rome from the financial burdens placed upon it by Nero's excesses and the civil wars. To do this, he not only increased taxes, but created new forms of taxation. Also, through his power as censor he was able to carefully examine the fiscal status of every city and province, many paying taxes based upon information and structures more than a century old. Through this sound fiscal policy, he was able to build up a surplus in the treasury and embark on public works projects. It was he who first commissioned the Roman Colosseum; he also built a forum whose centerpiece was a temple to Peace. 
In addition, he allotted sizable subsidies to the arts, creating a chair of rhetoric at Rome. Vespasian was also an effective emperor for the provinces in his decades of office, having held posts all across the empire, both east and west. In the west he showed considerable favoritism to Spain, where he granted Latin rights to over three hundred towns and cities, promoting a new era of urbanization throughout the western (i.e. formerly barbarian) provinces. Through the additions he made to the Senate he allowed greater influence of the provinces in the Senate, helping to promote unity in the empire. He also extended the borders of the empire on every front, most of which was done to help strengthen the frontier defenses, one of Vespasian's main goals. The crisis of 69 had wrought havoc on the army. One of the most marked problems had been the support lent by provincial legions to men who supposedly represented the best will of their province. This was mostly caused by the placement of native auxiliary units in the areas they were recruited in, a practice Vespasian stopped. He mixed auxiliary units with men from other areas of the empire or moved the units away from where they were recruited to help stop this. Also, to further reduce the chances of another military coup he broke up the legions and, instead of placing them in single large concentrations, dispersed them along the border. Perhaps the most important military reform he undertook was the extension of legion recruitment from exclusively Italy to Gaul and Spain, in line with the Romanization of those areas. Titus, the eldest son of Vespasian, had been groomed to rule. He had served as an effective general under his father, helping to secure the east and eventually taking over the command of Roman armies in Syria and Palestine, quelling the significant Jewish revolt going on at the time. Throughout his father's reign he had been tailored for rule, sharing the consulship for several years with his father and receiving the best tutelage. Although there was some trepidation when he took office due to his known dealings with some of the less respectable elements of Roman society, he quickly proved his merit, even recalling many exiled by his father as a show of good faith. However, his short reign was marked by disaster: in 79, Vesuvius erupted, burying Pompeii, and in 80, a fire decimated much of Rome. His generosity in rebuilding after these tragedies made him very popular. Titus was very proud of his work on the vast amphitheater begun by his father. He held the opening ceremonies in the still unfinished edifice during the year 80, celebrating with a lavish show that featured 100 gladiators and lasted 100 days. Titus died in 81, at the age of 41; it was rumored that his brother Domitian murdered him in order to become his successor, although these claims have little merit. Whatever the case, he was greatly mourned and missed. The Flavians all had rather poor relations with the senate due to their more autocratic style; however, Domitian was the only one who truly created significant problems. His continuous control as consul and censor throughout his rule (the former of which his father had shared in much the same way as his Julio-Claudian forerunners, the latter of which had been difficult even to obtain) was unheard of. In addition, he often appeared in full military regalia as an imperator, an affront to the idea of what the Principate-era emperor's power was based upon: the emperor as the princeps.
His reputation in the Senate aside, he kept the people of Rome happy through various measures, including donations to every resident of Rome, wild spectacles in the newly finished Colosseum, and continuing the public works projects of his father and brother. He also apparently had the good fiscal sense of his father, because although he spent lavishly, his successors came to power with a well-endowed treasury. However, toward the end of his reign Domitian became extremely paranoid, which probably had its initial roots in the treatment he received from his father. Although given significant responsibility, he was never trusted with anything important without supervision. This flowered into severe and perhaps pathological paranoia following the short-lived rebellion in 89 of Antonius Saturninus, a governor and commander in Germany. This paranoia led to a large number of arrests, executions, and seizures of property (which might help explain his ability to spend so lavishly). Eventually it got to the point where even his closest advisors and family members lived in fear, leading them to murder him in 96.
The Adoptive Emperors: the "Five Good Emperors" (AD 96 – 180)
Under Trajan, the Empire's borders briefly achieved their maximum extension with provinces created in Mesopotamia. From 166 AD, Roman embassies to China, first sent under the reign of Antoninus Pius and probably traveling on the southern sea route, are recorded in Chinese historical sources such as the Later Han History. The period of the "five good emperors" was brought to an end by the reign of Commodus from 180 to 192. Commodus was the son of Marcus Aurelius, making him the first direct successor in a century, breaking the scheme of adoptive successors that had turned out so well. He was co-emperor with his father from 177. When he became sole emperor upon the death of his father in 180, it was at first seen as a hopeful sign by the people of the Roman Empire. Nevertheless, as generous and magnanimous as his father was, Commodus turned out to be just the opposite. Commodus is often thought to have been insane, and he was certainly given to excess. He began his reign by making an unfavorable peace treaty with the Marcomanni, who had been at war with Marcus Aurelius. Commodus also had a passion for gladiatorial combat, which he took so far as to take to the arena himself, dressed as a gladiator. In 190, a part of the city of Rome burned, and Commodus took the opportunity to "re-found" the city of Rome in his own honor, as Colonia Commodiana. The months of the calendar were all renamed in his honor, and the senate was renamed as the Commodian Fortunate Senate. The army became known as the Commodian Army. Commodus was strangled in his sleep in 192, a day before he planned to march into the Senate dressed as a gladiator to take office as a consul. Upon his death, the Senate passed damnatio memoriae on him and restored the proper name to the city of Rome and its institutions. The popular movies The Fall of the Roman Empire (1964) and Gladiator (2000) were loosely based on the career of the emperor Commodus, although they should not be taken as accurate historical depictions of his life. Many wonder why Marcus Aurelius decided to break the successful scheme of adoptive succession. The real reasoning can be found in the line of succession before him. The other emperors did not have direct successors available, so had to adopt their successors. However, they attempted to keep it in the family, as it were.
Trajan was chosen by Nerva more likely to appease the Senate than anything else. Hadrian was a relative of Trajan, and although Antonius Pius was not related to Hadrian, the conditions of his being made heir included the adoption of Hadrian's young nephew Marcus Aurelius as heir to Pius. So, in fact, Aurelius' choice to make his son his successor was hardly out of place, and it's likely that had any of the previous emperors had available a suitable son as heir they would have taken the same course of action. It is then merely misfortune more than anything else that placed such a ill-suited man on the throne. Severan dynasty (AD 193 – 235) The Severan dynasty includes the increasingly troubled reigns of Septimius Severus (193–211), Caracalla (211–217), Macrinus (217–218), Elagabalus (218–222), and Alexander Severus (222–235). The founder of the dynasty, Lucius Septimius Severus, belonged to a leading native family of Leptis Magna in Africa who allied himself with a prominent Syrian family by his marriage to Julia Domna. Their provincial background and cosmopolitan alliance, eventually giving rise to imperial rulers of Syrian background, Elagabalus and Alexander Severus, testifies to the broad political franchise and economic development of the Roman empire that had been achieved under the Antonines. A generally successful ruler, Septimius Severus cultivated the army's support with substantial remuneration in return for total loyalty to the emperor and substituted equestrian officers for senators in key administrative positions. In this way, he successfully broadened the power base of the imperial administration throughout the empire. Abolishing the regular standing jury courts of Republican times, Septimius Severus was likewise able to transfer additional power to the executive branch of the government, of which he was decidedly the chief representative. Septimius Severus' son, Marcus Aurelius Antoninus – nicknamed Caracalla – removed all legal and political distinction between Italians and provincials, enacting the Constitutio Antoniniana in 212 which extended full Roman citizenship to all free inhabitants of the empire. Caracalla was also responsible for erecting the famous Baths of Caracalla in Rome, their design serving as an architectural model for many subsequent monumental public buildings. Increasingly unstable and autocratic, Caracalla was assassinated by the praetorian prefect Macrinus in 217, who succeeded him briefly as the first emperor not of senatorial rank. The imperial court, however, was dominated by formidable women who arranged the succession of Elagabalus in 218, and Alexander Severus, the last of the dynasty, in 222. In the last phase of the Severan principate, the power of the Senate was somewhat revived and a number of fiscal reforms were enacted. Despite early successes against the Sassanian Empire in the East, Alexander Severus' increasing inability to control the army led eventually to its mutiny and his assassination in 235. The death of Alexander Severus ushered in a subsequent period of soldier-emperors and almost a half-century of civil war and strife. Crisis of the 3rd Century (AD 235 – 284) The Crisis of the 3rd Century is a commonly applied name for the crumbling and near collapse of the Roman Empire between 235 and 284. During this period, Rome was ruled by more than 35 individuals, most of them prominent generals who assumed Imperial power over all or part of the empire, only to lose it by defeat in battle, murder, or death. 
After nearly 50 years of external invasion, internal civil wars and economic collapse, the Empire was on the verge of ending. A series of tough soldier-emperors saved it, but in the process fundamentally changed the Roman Empire. The transitions of this period mark the beginnings of Late Antiquity and the end of Classical Antiquity. Diocletian saw that the vast Roman Empire was ungovernable by a single emperor in the face of internal pressures and military threats on two fronts. He therefore split the Empire in half along a north-west axis just east of Italy, and created two equal Emperors to rule under the title of Augustus. Diocletian was Augustus of the eastern half, and gave his long time friend Maximian the title of Augustus in the western half. In 293 authority was further divided as each Augustus took a Caesar to aid him in administrative matters, and to provide a line of succession; Galerius became the junior emperor of Diocletian and Constantius Chlorus the junior emperor of Maximian. This constituted what was called in Latin a quadrumvirate and in Greek a Tetrarchy; the leadership of four. The system allowed the peaceful succession of the Augusti as the Caesar in each half rose up to replace the Augustus and proclaimed a new Caesar. On May 1, 305 Diocletian and Maximian abdicated in favor of their Caesars. Galerius named the two new Caesars: his nephew Maximinus for himself and Flavius Valerius Severus for Constantius. The Tetrarchy would effectively collapse with the death of Constantius Chlorus on July 25, 306. Constantius' troops in Eboracum immediately proclaimed his son Constantine the Great an Augustus. In August, 306, Galerius promoted Severus to the position of Augustus. A revolt in Rome supported another claimant to the same title: Maxentius, son of Maximian, who was proclaimed Augustus on October 28, 306. His election was supported by the Praetorian Guard. This left the Empire with five rulers: four Augusti (Galerius, Constantine, Severus and Maxentius) and a Caesar (Maximinus). The year 307 saw the return of Maximian to the role of Augustus alongside his son Maxentius creating a total of six rulers of the Empire. Galerius and Severus campaigned against them in Italy. Severus was killed under command of Maxentius on September 16, 307. The two Augusti of Italy also managed to ally themselves with Constantine by having Constantine marry Fausta, the daughter of Maximian and sister of Maxentius. The end of 307 saw the Empire with four Augusti (Maximian, Galerius, Constantine and Maxentius) and a sole Caesar (Maximinus). The five were briefly joined by another Augustus in 308, Domitius Alexander, vicarius of the Roman province of Africa under Maxentius, proclaimed himself Augustus. Before long he was captured by Rufus Volusianus and Zenas. Alexander ended his life in captivity in 309. The current situation of conflict between the various rivalrous Augusti was resolved in the Congress of Carnuntum with the participation of all four Augusti and the Caesar. The final decisions were taken on November 11, 308: - Galerius remained Augustus of the Eastern Roman Empire. - Maximinus remained Caesar of the Eastern Roman Empire. - Maximian was forced to abdicate. - Maxentius received official recognition as Augustus of the Western Roman Empire. - Constantine received official recognition but was demoted to Caesar of the Western Roman Empire. - Licinius replaced Maximian as Augustus of the Western Roman Empire. Problems however continued. Maximinus demanded to be promoted to Augustus. 
He proclaimed himself to be one on May 1, 310. Maximian similarly proclaimed himself an Augustus for a third and final time. He was killed by his son-in-law Constantine in July 310. The end of the year again found the Empire with four Augusti (Galerius, Maximinus, Maxentius and Licinius) and a sole Caesar (Constantine). Galerius died in May 311, leaving Maximinus sole ruler of the Eastern Roman Empire. Meanwhile Maxentius declared war on Constantine under the pretext of avenging his executed father. He was among the casualties of the Battle of the Milvian Bridge on October 28, 312.
Constantine was promoted to Augustus. This left the Empire in the hands of the three remaining Augusti: Maximinus, Constantine and Licinius. Licinius allied himself with Constantine, cementing the alliance by marrying Constantine's younger half-sister Constantia in March 313, and entered open conflict with Maximinus. In August 313 Maximinus met his death at Tarsus in Cilicia. The two remaining Augusti divided the Empire again in the pattern established by Diocletian, Constantine becoming Augustus of the Western Roman Empire and Licinius Augustus of the Eastern Roman Empire. This division lasted until 324, when a final war between the last two remaining Augusti ended with the deposition of Licinius and the elevation of Constantine to sole Emperor of the Roman Empire. Deciding that the empire needed a new capital, Constantine chose the site of Byzantium for the new city. He refounded it as Nova Roma, but it was popularly called Constantinople: Constantine's City.
The beginning of the Roman Empire as a Christian empire lies in 313 AD, with the Edict of Milan. The edict was signed under the reigns of Constantine I and Licinius and made Christianity one of the officially tolerated religions of Rome. Christianity became the single official religion of Rome under Theodosius (r. 379–395 AD). Initially the emperor had control over the church. While Christianity flourished, the Empire by no means became uniformly Christian; paganism remained significant. Theodosius ordered a massacre at Thessalonica after the city rebelled against his new Christian policies condemning homosexuality, which had been a common practice in both ancient Greece and Greece under Roman rule. Upon the emperor's return, the bishop Ambrose refused to let Theodosius enter the church until he made a public repentance. Theodosius did so, and from then on the church's powers grew. Eventually the church would gain enough power that it would outlast the empire in the west.
In popular history, the year 476 is generally accepted as the end of the Western Roman Empire. In that year, Odoacer deposed his puppet Romulus Augustus (475–476) and, for the first time, did not bother to install a successor, choosing instead to rule as a representative of the Eastern Emperor (although Julius Nepos, the emperor deposed by Romulus Augustulus, continued to rule Illyricum until his death in 480, at which point Odoacer annexed the remainder of the Western Empire to his Italian kingdom). The last Emperor who ruled from Rome, however, had been Theodosius, who removed the seat of power to Mediolanum (Milan). Edward Gibbon, in writing The History of the Decline and Fall of the Roman Empire, knew not to end his narrative at 476. The great corpse continued to twitch into the 6th century.
On the other hand, in 409, with the Emperor of the West having fled from Milan to Ravenna and all the provinces wavering in their loyalties, the Goth Alaric I, in charge at Rome, came to terms with the senate and, with their consent, set up a rival emperor, investing the prefect of the city, a Greek named Priscus Attalus, with the diadem and the purple robe. In the following year, when the Goths rampaged in the City, local power was in the hands of the Bishop of Rome. The transfer of power to Christian pope and military dux had been effected: the Western Empire was effectively dead, though no contemporary knew it. The next seven decades played out as aftermath.
Theodoric the Great, as King of the Goths, couched his legitimacy in diplomatic terms as being the representative of the Emperor of the East. Consuls were appointed regularly through his reign: a formula for the consular appointment is provided in Cassiodorus' Book VI. The post of consul was last filled in the west under Theodoric's successor, Athalaric, who died in 534. Ironically, the Gothic War in Italy, which was meant as the reconquest of a lost province for the Emperor of the East and a re-establishment of the continuity of power, actually caused more damage and cut more ties of continuity with the antique world than the attempts of Theodoric and his minister Cassiodorus to meld Roman and Gothic culture within a Roman form.
In essence, the "fall" of the Roman Empire to a contemporary depended a great deal on where one was and one's status in the world. On the great villas of the Italian Campagna, the seasons rolled on without a hitch. The local overseer may have been representing an Ostrogoth, then a Lombard duke, then a Christian bishop, but the rhythm of life and the horizons of the imagined world remained the same. Even in the decayed cities of Italy consuls were still elected. In Auvergne, at Clermont, the Gallo-Roman poet and diplomat Sidonius Apollinaris, bishop of Clermont, realized that the local "fall of Rome" came in 475, with the fall of the city to the Visigoth Euric. In the north of Gaul the Franks could not be taken for Roman, but in Hispania the last Arian Visigothic king Leovigild considered himself the heir of Rome. In Alexandria, dreams of a "Christian Empire" with genuine continuity were shattered when a rampaging mob of Christians was encouraged to sack and destroy the Serapeum in 392. Hispania Baetica was still essentially Roman when the Moors came in 711, but in the northwest, the invasion of the Suevi broke the last frail links with Roman culture in 409. In Aquitania and Provence, cities like Arles were not abandoned, but Roman culture in Britain collapsed in waves of violence after the last legions evacuated: the final legionary probably left Britain in 409. In Athens the end came for some in 529, when the Emperor Justinian closed the Neoplatonic Academy and its remaining members fled east for protection under the rule of the Sassanid king Khosrau I; for other Greeks it had come long before, in 396, when Christian monks led Alaric I to vandalize the site of the Eleusinian Mysteries.
From Roman to Byzantine in the East
Under Constantine (AD 330 – 337) and his sons (AD 337 – 361)
Constantinople would serve as the capital of Constantine the Great from May 11, 330 to his death on May 22, 337. The Empire was parted again among his three surviving sons. The Western Roman Empire was divided between the eldest son Constantine II and the youngest son Constans.
The Eastern Roman Empire, along with Constantinople, was the share of the middle son Constantius II. Constantine II was killed in conflict with his youngest brother in 340. Constans was himself killed in conflict with the army-proclaimed Augustus Magnentius on January 18, 350. Magnentius was at first opposed in the city of Rome by the self-proclaimed Augustus Nepotianus, a paternal first cousin of Constans. Nepotianus was killed alongside his mother Eutropia. His other first cousin Constantia convinced Vetranio to proclaim himself Caesar in opposition to Magnentius. Vetranio served a brief term from March 1 to December 25, 350. He was then forced to abdicate by the legitimate Augustus Constantius. The usurper Magnentius would continue to rule the Western Roman Empire until 353 while in conflict with Constantius. His eventual defeat and suicide left Constantius as sole Emperor.
Constantius' rule would, however, be opposed again in 360. He had named his paternal half-cousin and brother-in-law Julian as his Caesar of the Western Roman Empire in 355. During the following five years, Julian won a series of victories against invading Germanic tribes, including the Alamanni, which allowed him to secure the Rhine frontier. His victorious Gallic troops thus ceased campaigning. Constantius sent orders for the troops to be transferred to the east as reinforcements for his own, currently unsuccessful, campaign against Shapur II of Persia. This order led the Gallic troops to insurrection, and they proclaimed their commanding officer Julian to be an Augustus. Neither Augustus was ready to lead his troops into another Roman civil war, and Constantius' timely demise on November 3, 361 prevented this war from ever occurring.
Julian would serve as the sole Emperor for two years. He had received his baptism as a Christian years before, but apparently no longer considered himself one. His reign would see the end of the restrictions on and persecution of paganism introduced by his uncle and father-in-law Constantine the Great and his cousins and brothers-in-law Constantine II, Constans and Constantius II. He instead placed similar restrictions on, and unofficially persecuted, Christianity. His edict of tolerance in 362 ordered the reopening of pagan temples, the reinstitution of alienated temple properties and, more problematically for the Christian Church, the recall of previously exiled Christian bishops. Returning Orthodox and Arian bishops resumed their conflicts, thus further weakening the Church as a whole.
Julian himself was not a traditional pagan. His personal beliefs were largely influenced by Neoplatonism and theurgy, and he produced works of philosophy arguing his beliefs. His brief renaissance of paganism would, however, end with his death. Julian eventually resumed the war against Shapur II of Persia. He received a mortal wound in battle and died on June 26, 363, an ironic fate for someone who reportedly believed himself a reincarnation of Alexander the Great. He was considered a hero by pagan sources of his time and a villain by Christian ones. Later historians have treated him as a controversial figure.
Julian died childless and with no designated successor. The officers of his army elected the rather obscure officer Jovian as an Augustus. He is remembered for signing an unfavorable peace treaty with Persia and restoring the privileges of Christianity. He is considered a Christian himself, though little is known of his beliefs. Jovian himself died on February 17, 364. The role of choosing a new Augustus fell again to army officers.
On February 28, 364, the Pannonian officer Valentinian I was elected Augustus in Nicaea, Bithynia. However, the army had been left leaderless twice in less than a year, and the officers demanded that Valentinian choose a co-ruler. On March 28 Valentinian chose his own younger brother Valens, and the two new Augusti parted the Empire in the pattern established by Diocletian: Valentinian would administer the Western Roman Empire, while Valens took control over the Eastern Roman Empire.
Valens' election would soon be disputed. Procopius, a Cilician maternal cousin of Julian, had been considered a likely heir to his cousin but was never designated as such. He had been in hiding since the election of Jovian. In 365, while Valentinian was at Paris and then at Reims to direct the operations of his generals against the Alamanni, Procopius managed to bribe two legions assigned to Constantinople and take control of the Eastern Roman capital. He was proclaimed Augustus on September 28 and soon extended his control to both Thrace and Bithynia. War between the two rival Eastern Roman Emperors continued until Procopius was defeated. Valens had him executed on May 27, 366.
On August 4, 367, a third Augustus was proclaimed by the other two. His father Valentinian and uncle Valens chose the eight-year-old Gratian as a nominal co-ruler, obviously as a means to secure the succession. In April 375 Valentinian I led his army in a campaign against the Quadi, a Germanic tribe which had invaded his native province of Pannonia. During an audience to an embassy from the Quadi at Brigetio on the Danube (part of modern-day Komárom, Hungary), Valentinian suffered a burst blood vessel in the skull while angrily yelling at the people gathered. This injury resulted in his death on November 17, 375.
Succession did not go as planned. Gratian was then sixteen years old and arguably ready to act as Emperor, but the troops in Pannonia proclaimed his infant half-brother emperor under the title Valentinian II. Gratian acquiesced in their choice and administered the Gallic part of the Western Roman Empire. Italy, Illyria and Africa were officially administered by his brother and his stepmother Justina. However, the division was merely nominal, as the actual authority still rested with Gratian.
Battle of Adrianople (AD 378)
Meanwhile, the Eastern Roman Empire faced its own problems with Germanic tribes. The East Germanic tribe known as the Goths were forced to flee their former lands following an invasion by the Huns. Their leaders Alavivus and Fritigern led them to seek refuge from the Eastern Roman Empire. Valens indeed let them settle as foederati on the southern bank of the Danube in 376. However, the newcomers faced problems from allegedly corrupt provincial commanders and a series of hardships. Their dissatisfaction led them to revolt against their Roman hosts.
For the following two years conflicts continued. Valens personally led a campaign against them in 378, and Gratian provided his uncle with reinforcements from the Western Roman army. However, this campaign proved disastrous for the Romans. The two armies approached each other near Adrianople. Valens was apparently overconfident in the numerical superiority of his own forces over the Goths. His officers advised him to wait for the promised arrival of Gratian himself with further reinforcements, but Valens instead rushed into battle. On August 9, 378, the Battle of Adrianople resulted in the crushing defeat of the Romans and the death of Valens.
Contemporary historian Ammianus Marcellinus estimated that two thirds of the Roman army were lost in the battle; the last third managed to retreat. The battle had far-reaching consequences. Veteran soldiers and valuable administrators were among the heavy casualties, and there were few available replacements at the time, leaving the Empire with the problem of finding suitable leadership. The Roman army would also begin facing recruiting problems: in the following century much of the Roman army would consist of Germanic mercenaries.
For the moment, however, there was another concern. The death of Valens left Gratian and Valentinian II as the sole two Augusti. Gratian was now effectively responsible for the whole of the Empire. He sought, however, a replacement Augustus for the Eastern Roman Empire. His choice was Theodosius I, son of the formerly distinguished general Count Theodosius. The elder Theodosius had been executed in early 375 for unclear reasons. The younger one was named Augustus of the Eastern Roman Empire on January 19, 379. His appointment would prove a deciding moment in the division of the Empire.
Disturbed peace in the West (AD 383)
Gratian governed the Western Roman Empire with energy and success for some years, but he gradually sank into indolence. He is considered to have become a figurehead while the Frankish general Merobaudes and bishop Ambrose of Milan jointly acted as the power behind the throne. Gratian lost favor with factions of the Roman Senate by prohibiting traditional paganism at Rome and relinquishing his title of Pontifex Maximus. The senior Augustus also became unpopular with his own Roman troops because of his close association with so-called barbarians: he reportedly recruited Alans to his personal service and adopted the guise of a Scythian warrior for public appearances.
Meanwhile, Gratian, Valentinian II and Theodosius were joined by a fourth Augustus. Theodosius proclaimed his oldest son Arcadius an Augustus in January 383 in an obvious attempt to secure the succession. The boy was still only five or six years old and held no actual authority. Nevertheless, he was recognized as a co-ruler by all three Augusti.
The increasing unpopularity of Gratian would cause the four Augusti problems later that same year. The Spanish Celt general Magnus Maximus, stationed in Roman Britain, was proclaimed Augustus by his troops in 383 and, rebelling against Gratian, invaded Gaul. Gratian fled from Lutetia (Paris) to Lugdunum (Lyon), where he was assassinated on August 25, 383, at the age of twenty-five.
Maximus was a firm believer in the Nicene Creed and introduced state persecution on charges of heresy, which brought him into conflict with Pope Siricius, who argued that the Augustus had no authority over church matters. But he was an Emperor with popular support, and his reputation survived in Romano-British tradition and gained him a place in the Mabinogion, compiled about a millennium after his death.
Following Gratian's death, Maximus had to deal with Valentinian II, then only twelve years old, as the senior Augustus. For the first few years the Alps served as the border between the respective territories of the two rival Western Roman Emperors. Maximus controlled Britain, Gaul, Hispania and Africa. He chose Augusta Treverorum (Trier) as his capital. Maximus soon entered negotiations with Valentinian II and Theodosius, attempting to gain their official recognition.
By 384, negotiations were unfruitful and Maximus tried to press the matter by settling the succession as only a legitimate Emperor could do: proclaiming his own infant son Flavius Victor an Augustus. The end of the year found the Empire with five Augusti (Valentinian II, Theodosius I, Arcadius, Magnus Maximus and Flavius Victor), the relations between them yet to be determined.
In 385 Theodosius was left a widower following the sudden death of the Augusta Flacilla. He then married Galla, sister of Valentinian II, and the marriage secured closer relations between the two legitimate Augusti. In 386 Maximus and Victor finally received official recognition from Theodosius but not from Valentinian. In 387, Maximus apparently decided to rid himself of his Italian rival. He crossed the Alps into the valley of the Po and threatened Milan. Valentinian and his mother fled to Thessaloniki, from where they sought the support of Theodosius. Theodosius indeed campaigned west in 388 and was victorious against Maximus. Maximus himself was captured and executed in Aquileia on July 28, 388. The magister militum Arbogastes was sent to Trier with orders to also kill Flavius Victor. Theodosius restored Valentinian to power and, through his influence, had him converted to Orthodox Catholicism. Theodosius continued supporting Valentinian and protecting him from a variety of usurpations.
Theodosian Dynasty (AD 392 – 395)
In 392 Valentinian was murdered in Vienne. Theodosius succeeded him, ruling the entire Roman Empire. Theodosius had two sons and a daughter, Pulcheria, by his first wife, Aelia Flacilla. His daughter and wife died in 385. By his second wife, Galla, he had a daughter, Galla Placidia, the mother of Valentinian III, who would be Emperor of the West. Upon his death in 395, Theodosius left the two halves of the Empire to his two sons Arcadius and Honorius; Arcadius became ruler in the East, with his capital in Constantinople, and Honorius became ruler in the West, with his capital in Milan. Though the Roman state would continue to have two emperors, the Eastern Romans considered themselves Roman in full. Latin was used in official writings as much as, if not more than, Greek. The two halves were nominally, culturally and historically, if not politically, the same state.
The West would continue to decline during the 5th century, while the richer East would be spared much of the destruction. The last western emperor, Romulus Augustus, was deposed in 476 by Odoacer, the half Hunnish, half Scirian chieftain of the Germanic Heruli. The Eastern Empire counter-attacked in the 6th century under the eastern emperor Justinian, taking much of the west back, but these gains were lost during subsequent reigns. Of the many accepted dates for the end of the Roman state, the latest is 610. This is when the Emperor Heraclius made sweeping reforms, forever changing the face of the empire: Greek was readopted as the language of government and Latin influence waned. By 610 the classical Roman Empire had evolved into the medieval Byzantine Empire, although it was never called this (rather, it was called Romania or Basileia Romaion), and the Byzantines continued to consider themselves Roman until their fall in the 15th century. Several states claiming to be the Roman Empire's successor arose, before as well as after the fall of Constantinople to the Ottoman Turks in 1453.
The Holy Roman Empire, an attempt to resurrect the Empire in the West, was established in 800 when Pope Leo III crowned Charlemagne as Roman Emperor on Christmas Day, though the empire and the imperial office did not become formalized for some decades. After the fall of Constantinople, the Russian Empire, as inheritor of the Byzantine Empire's Orthodox Christian tradition, counted itself as the third Rome (with Constantinople being the second). And when the Ottomans, who based their state around the Byzantine model, took Constantinople and renamed it Istanbul, Sultan Mehmed II established his capital there and assumed the title "Roman Emperor". But excluding these states claiming their heritage, the Romans lasted from the founding of Rome in 753 BC to the fall in 1461 of the Empire of Trebizond (a successor state and fragment of the Byzantine Empire which escaped destruction by the Ottomans in 1453), for a total of 2,214 years. Their impact on Western and Eastern civilizations lives on. In time most Roman achievements were duplicated by later civilizations; the technology for cement, for example, was rediscovered in 1755–1759 by John Smeaton.
Timeline of the Roman Empire
[Two timeline charts (wiki EasyTimeline markup) appeared here, plotting emperors, major events, and dynastic periods from the late Republic to AD 450; the chart markup is omitted.]
text:"reference" - Byzantine Empire - Gallic Empire - History of the Balkans - History of Europe - List of Ancient Rome-related topics - Pax romana - Roman commerce - Roman culture - Roman currency - Roman law - Roman military history - Roman place names - Roman provinces - Roman roads - Roman technology |Roman Emperors by Epoch (see also: List – Concise List – Roman Empire)| | -> (Italy:)| -> (Much later in Western Europe:) -> (Continuing in Eastern Europe:) Ancient Historians of the Empire - Livy – history is of Roman Republic, but wrote during Augustus' reign - Ammianus Marcellinus Latin Literature of the Empire - A virtual tour of Ancient Rome with pictures and virtual reality movies - Grout, James, "Encyclopaedia Romana" - J. O'Donnell, Worlds of Late Antiquity website: links, bibliographies: Austine, Boethius, Cassiodorus etc. - History Forum Simaqianstudio - Roman Life Expectancy - Portrait gallery of Roman emperors 18th & 19th century history Modern histories of the Roman Empire - J. B. Bury, A History of the Roman Empire from its Foundation to the death of Marcus Aurelius, 1913 - J. A. Crook, Law and Life of Rome, 90 BC-AD 212, 1967 - S. Dixon, The Roman Family, 1992 - Donald R. Dudley, The Civilization of Rome, 2nd ed., 1985 - A.H.M. Jones, The Later Roman Empire, 284–602, 1964 - A. Lintott, Imperium Romanum: Politics and administration, 1993 - R. Macmullen, Roman Social Relations, 50 BC to AD 284, 1974 - M.I. Rostovtzeff, Economic History of the Roman Empire 2nd ed., 1957 - R. Syme, The Roman Revolution, 1939 - C. Wells, The Roman Empire, 2nd ed., 1992
http://enc.slider.com/Enc/Roman_Empire
Indri lemur in Madagascar. (Photo by R. Butler)
DEFORESTATION AND EXTINCTION
By Rhett Butler | Last updated July 22, 2012
The greatest loss with the longest-lasting effects from the ongoing destruction of wilderness will be the mass extinction of species that provide Earth with biodiversity. Although great extinctions have occurred in the past, none has occurred as rapidly or has been so much the result of the actions of a single species. The extinction rate of today may be 1,000 to 10,000 times the biological normal, or background, extinction rate of 1–10 species extinctions per year.
So far there is little evidence for the massive species extinctions predicted by the species-area curve. However, many biologists believe that species extinction, like global warming, has a time lag, and the loss of forest species due to forest clearing in the past may not be apparent yet today. Ward (1997) uses the term "extinction debt" to describe such extinction of species and populations long after the habitat disturbance that caused it:
Decades or centuries after a habitat perturbation, extinction related to the perturbation may still be taking place. This is perhaps the least understood and most insidious aspect of habitat destruction. We can clear-cut a forest and then point out that the attendant extinctions are low, when in reality a larger number of extinctions will take place in the future. We will have produced an extinction debt that has to be paid... We might curtail our hunting practices when some given population falls to very low numbers and think that we have succeeded in "saving" the species in question, when in reality we have produced an extinction debt that ultimately must be paid in full... Extinction debts are bad debts, and when they are eventually paid, the world is a poorer place.
For example, the disappearance of crucial pollinators will not cause the immediate extinction of tree species with life cycles measured in centuries. Similarly, a study of West African primates found an extinction debt of over 30 percent of the total primate fauna as a result of historic deforestation. This suggests that protection of remaining forests in these areas might not be enough to prevent extinctions caused by past habitat loss.
While we may be able to predict the effects of the loss of some species, we know too little about the vast majority of species to make reasonable projections. The unanticipated loss of unknown species will have a magnified effect over time. The process of extinction is enormously complex, resulting from perhaps hundreds or even thousands of factors, many of which scientists (let alone lay people) fail to grasp.
The extinction of small populations, either endangered or isolated from the larger gene pool by fragmentation or natural barriers like water or mountain ranges, is the best modeled and understood form of extinction. Since the standard was set by MacArthur and Wilson in The Theory of Island Biogeography (1967), much work has been done modeling the effects of population size and land area on the survival of species. The number of individuals in a given population is always fluctuating due to numerous influences, from extrinsic changes in the surrounding environment to intrinsic forces within a species' own genes. This population fluctuation is a particular problem for populations in isolated forest fragments and for species that are critically endangered throughout their range.
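The species-area relationship mentioned above is usually written S = c * A^z, where S is the number of species, A is the habitat area, and z is an empirically fitted exponent (often around 0.25). A minimal sketch of how such a curve is used to project eventual extinctions from habitat loss follows; the exponent and the habitat-loss fractions are illustrative assumptions, not figures from this article.

# Rough illustration of extinction projections from the species-area curve,
# S = c * A**z.  The exponent z and the habitat-loss fractions below are
# assumed for illustration only.
def fraction_of_species_surviving(area_remaining_fraction, z=0.25):
    return area_remaining_fraction ** z

for habitat_lost in (0.10, 0.50, 0.90):     # hypothetical fractions of forest cleared
    remaining = 1.0 - habitat_lost
    eventually_lost = 1.0 - fraction_of_species_surviving(remaining)
    print(f"{habitat_lost:.0%} habitat loss -> roughly {eventually_lost:.0%} of species committed to extinction")

As the discussion of extinction debt suggests, these are eventual losses: species "committed" to extinction by habitat loss may take decades or centuries to actually disappear.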
When a population falls below a certain number, known as the minimum viable population (MVP), it is unlikely to recover. Thus the minimum viable population is often considered the extinction threshold for a population or species. There are three common forces that can drive a species with a population under the MVP to extinction: demographic stochasticity, environmental stochasticity, and reduced genetic diversity.
Demographic stochasticity involves the birth and death rates of the individuals within a species. As the population size decreases, random quirks in mating, reproduction, and survival of young can have a significant outcome for a species. This is especially true in species with low birth rates (i.e. some primates, birds of prey, elephants), since their populations take a longer time to recover. Social dysfunction also plays an important role in a population's survival or demise. Once a population's size falls below a critical number, the social structure of a species may no longer function. For example, many gregarious species live in herds or packs which enable the species to defend themselves from predators, find food, or choose mates. In these species, once the population is too small to sustain an effective herd or pack, the population may crash. Among species that are widely dispersed, like large cats, finding a mate may be impossible once the population density falls below a certain point. Many insect species use chemical odors, or pheromones, to communicate and attract mates. As population density falls, there is less probability that an individual's chemical message will reach a potential mate, and reproductive rates may decrease. Similarly, as plant species become rarer and more widely scattered, the distance between plants increases and pollination becomes less likely.
Environmental stochasticity is caused by randomly occurring changes in weather and food supply, and by natural disasters like fire, flood, and drought. In populations confined to a small area, a single drought, bad winter, or fire can eliminate all individuals.
Reduced genetic diversity is a substantial obstacle blocking the recovery of small populations. Small populations have a smaller genetic base than larger populations. Without the influx of individuals from other populations, a population's genome stagnates and loses the genetic variability to adapt to changing conditions. Small populations are also prone to genetic drift, where rare traits have a high probability of being lost with each successive generation.
The smaller the population, the more vulnerable it is to demographic stochasticity, environmental stochasticity, and reduced genetic diversity. These factors, often working in concert, tend to further reduce population size and drive the species toward extinction. This trend is known as the extinction vortex. Some mathematical ecologists have suggested that population fluctuations may be governed by properties of chaos, making the behavior of the system (the fluctuation of a species' population size) nearly impossible to predict due to the complex dynamics within a given ecosystem.
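One way to see how demographic stochasticity alone can doom small populations is a simple branching-process simulation in which every individual reproduces by pure chance and the expected population size never changes. The sketch below is illustrative only; the offspring rule, horizon, trial count, and starting sizes are all assumptions, not parameters from this article.

# A minimal sketch of demographic stochasticity: each individual independently
# leaves either 0 or 2 descendants with equal probability, so the *expected*
# population size is constant, yet chance alone drives small populations to
# extinction far more often than large ones.  All parameters are illustrative.
import random

def extinction_probability(start_size, generations=50, trials=500, seed=42):
    random.seed(seed)
    extinct = 0
    for _ in range(trials):
        n = start_size
        for _ in range(generations):
            if n == 0:
                break
            # each survivor contributes 0 or 2 offspring (mean 1, pure chance)
            n = sum(2 for _ in range(n) if random.random() < 0.5)
        extinct += (n == 0)
    return extinct / trials

for n0 in (5, 20, 100):
    print(f"starting population {n0:>3}: "
          f"~{extinction_probability(n0):.0%} chance of extinction within 50 generations")

Real minimum-viable-population estimates layer environmental stochasticity, catastrophes, and genetic erosion on top of this demographic noise, which is why published MVP figures are typically in the hundreds or thousands of individuals rather than the dozens.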
EXTINCTION ESTIMATES MADE IN THE 1990s
Estimate and method of estimation (the original table also tabulated percent global loss against assumed totals of 10 million and 30 million species):
- 0.2–0.3% global loss annually, based on a tropical deforestation rate of 1% annually
- 2–13% loss between 1990 and 2015, using the species-area curve and increasing deforestation rates
- Loss of half the species in the area likely to be deforested by 2015
- Fitting exponential extinction functions based on IUCN Red Data Books
Tropical species are not only threatened directly by deforestation, but also by global climate change. Even if species survive in protected reserves, they may perish as a result of rising ocean levels and climatic changes. Many tropical species are used to constant, year-round conditions of temperature and humidity. They are not adapted to climate change even if it is as small as 1.8°F (1°C). Changes in seasonal length, precipitation, and the intensity and frequency of extreme events that could occur should the Earth warm may strongly impact biodiversity in seasonal tropical forests and cloud forests. Studies show that unusual weather conditions, such as those under El Niño and La Niña, can cause population fluctuations of many forest animals. Should the frequency and intensity of such extreme events reach the level where whole populations are unable to recover to their normal level between events, we could see localized extinctions and serious changes in the ecosystem. Climate change could especially impact some sensitive ecosystems like cloud forests, which would be drastically affected by any lifting of the cloud cap.
One often-overlooked consequence of increased temperatures is the spread of disease among wild animals. For example, there is a good chance that avian malaria and bird pox will be spread to Hawaiian upland forests by mosquitoes currently limited to elevations below 4,800 feet (1,500 m) due to temperature constraints. The spread of these diseases to upland forest would probably mean the extinction of several endangered bird species.
Many forest communities have survived global climate change in the past by "migrating" north or southward. However, today, because of fragmentation and human development, there are few corridors of wild territory for migration. Highways, parking lots, plantations, housing developments, and farms impede the slow but steady movement necessary for many communities to survive changing climate conditions. Unable to escape the changes, many species within these communities will have to cope or face extinction.
One of the contributing factors to the worldwide decline in amphibian populations may be the gradual climate change over the past 100 years, which, when coupled with the increase in UV-B radiation, may have weakened their defense to a previously harmless fungal infection. This fungus has been detected on dead or dying frogs in locations around the world.
Global climate change may have had an impact on the extinction of North American megafauna at the end of the ice age some 10,000 years ago. One of the leading theories for the demise of these mammals (which included such wild beasts as giant sloths, mammoths, sabertooth cats, and oversized horses and rhinos) is that habitat fragmentation, caused by global climate change, split species into small populations, making them more vulnerable to extinction. As the last glacial interval came to a close and the great ice sheets receded, an additional factor came into play: the presence of hungry human hunters.
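The overkill models cited in the next paragraph rest on simple geometric-decline arithmetic: a small but constant annual harvest, sustained year after year, shrinks a population exponentially. The sketch below works through that arithmetic, treating the 2 percent figure quoted from those models as a net annual decline (harvest in excess of reproduction); the starting herd size and the viability threshold are illustrative assumptions.

# Geometric decline under a constant 2 percent net annual kill rate (the
# figure quoted from the overkill models discussed below).  The starting
# herd size and the "effectively extinct" threshold are assumptions.
import math

initial_population = 100_000        # hypothetical mammoth population
annual_kill_rate = 0.02             # 2% net loss every year
threshold = 100                     # below this, treat the species as doomed

years = math.log(threshold / initial_population) / math.log(1 - annual_kill_rate)
print(f"Population falls below {threshold} after about {years:.0f} years")
# -> roughly 340 years, i.e. "three or four centuries down the road"

The point of the exercise is the compounding: a removal rate that looks trivially small in any single year is enough to eliminate a long-lived, slow-breeding species within a few centuries.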
Models (the Mosimann and Martin model of 1975, amended by Whittington and Dyke in 1989) suggest that by merely killing off 2 percent of the mammoth population every year, year after year, the entire species would be doomed to eventual extinction some three or four centuries down the road. These natural (climate change) and unnatural (human) influences working in concert surely condemned to extinction some of the most magnificent creatures ever seen by man. Today we are facing a similar situation, only this time we may be responsible for both factors: the global climate change and the overexploitation.
Extinction of a large number of species is highly likely because of the intricate relationships between species. David Quammen (1981) explains:
The educated guess is that each species of plant supports ten to thirty species of dependent animal. Eliminate just one species of insect and you may have destroyed the sole specific pollinator for a flowering plant; when that plant consequently vanishes, so may another twenty-nine species of insects that rely on it for food; each of those twenty-nine species might be an important parasite upon still another species of insect, a pest, which when left uncontrolled by parasitism will destroy further whole populations of trees, which themselves had been important because . . .
The complexity of the rainforest makes it impossible to anticipate when and what species will disappear. Besides losing unique species that have lived on the planet for longer than we have and have every right to exist as we do, we are losing an incredible pool of genetic diversity which we could harness to help our own kind. As each species is lost, a unique combination of genes, which has been produced over the course of millions of years, is lost and will not be replaced during our time. We head toward a future impoverished of the magnificent beasts that we remember learning about as children: ferocious tigers, armored rhinos, brilliant macaws, colorful frogs and toads. As these species vanish from the globe, the world is truly a poorer place.
Estimates of species loss each year range greatly, as shown by the table above.
- Why is there a "lag time" for species extinction?
- Why do small populations have a lower probability of survival?
- How might climate change impact global biodiversity?
- Why are frogs dying around the world?
Other pages in this section:
- Could the Tasmanian tiger be hiding out in New Guinea? (05/20/2013) Many people still believe the Tasmanian tiger (Thylacinus cynocephalus) survives in the wilds of Tasmania, even though the species was declared extinct over eighty years ago. Sightings and reports of the elusive carnivorous marsupial, which was the top predator on the island, pop up almost as frequently as those of Bigfoot in North America, but to date no definitive evidence has emerged of its survival. Yet a noted cryptozoologist (one who searches for hidden animals), Dr. Karl Shuker, wrote recently that tiger hunters should perhaps turn their attention to a different island: New Guinea.
- Aquarium launches desperate search to save a species down to 3 individuals (05/10/2013) Aquarists at ZSL London Zoo have launched a worldwide appeal to find a female mate for a fish species that is believed to have gone extinct in the wild.
- The Hawaiian silversword: another warning on climate change (05/06/2013) The Hawaiian silversword (Argyroxiphium sandwicense), a beautiful, spiny plant from the volcanic Hawaiian highlands, may not survive the ravages of climate change, according to a new study in Global Change Biology. An unmistakable plant, the silversword has long, sword-shaped leaves covered in silver hair and beautiful flowering stalks that may tower to a height of three meters.
- 13-year search for Taiwan's top predator comes up empty-handed (05/01/2013) After 13 years of searching for the Formosan clouded leopard (Neofelis nebulosa brachyura), once-hopeful scientists say they believe the cat is likely extinct. For more than a decade scientists set up over 1,500 camera traps and scent traps in the mountains of Taiwan where they believed the cat may still be hiding out, only to find nothing.
- Rhinos now extinct in Mozambique's Limpopo National Park (04/25/2013) Poachers have likely killed off the last rhinos in Mozambique's Limpopo National Park, according to a park official.
http://rainforests.mongabay.com/0908.htm
California is a state located in the western United States, bordering the Pacific Ocean. The most populous and third-largest state in the U.S., with a population roughly the size of Canada's and one of the biggest economies in the world, California is more like a country than a state. California is both physically and demographically diverse. The state's official nickname is "The Golden State", which may refer either to the discovery of gold in California in 1848 and the subsequent gold rush, or to the golden brown color of much of the state during the summer. California's U.S. postal abbreviation is CA, and its Associated Press abbreviation is Calif.
Southern California is highly populated, while the larger northern California is less densely populated. The vast majority of the population lives within 50 miles (80 km) of the Pacific Ocean. California dominates American culture and economy, contributing significant advances in technology and legal reform, in addition to paying significantly more to the federal system than it receives in benefits.
The entire region originally known as California was composed of the Mexican peninsula now known as Baja California and the land in the current states of California, Nevada, Utah, and parts of Arizona and Wyoming, known as Alta California. In these early times, the boundaries of the Sea of Cortez and the Pacific coast were only partially explored, and California was shown on early maps as an island. The name comes from Las sergas de Esplandián (The Adventures of Esplandián), a 16th-century novel by Garci Rodríguez de Montalvo, in which there is an island paradise called California. (For further discussion, see: Origin of the name California.)
The first European to explore parts of the coast was Juan Rodríguez Cabrillo in 1542. The first to explore the entire coast and claim possession of it was Francis Drake in 1579. Beginning in the late 1700s, Spanish missionaries set up tiny settlements on enormous grants of land in the vast territory north of Baja California. Upon Mexican independence from Spain, the chain of missions became the property of the Mexican government, and they were quickly dissolved and abandoned.
In 1846, at the outset of the Mexican-American War, a California Republic was founded, and the Bear Flag, featuring a golden bear and a star, was flown. The Republic came to a sudden end when Commodore John D. Sloat of the United States Navy sailed into San Francisco Bay and claimed California for the United States. Following the Mexican-American War, the region was divided between Mexico and the United States. The Mexican portion, Baja (lower) California, was later divided into the states of Baja California and Baja California Sur. The western part of the U.S. portion, Alta (upper) California, was to become the state of California.
In 1848, the Spanish-speaking population of distant upper California numbered around 4,000. But after gold was discovered, the population burgeoned with Americans and a few Europeans in the great California gold rush. In 1850, the state was admitted to the Union. During the American Civil War, popular support was divided 70% for the South and 30% for the North, and although California officially entered on the side of the North, many troops went east to fight with the Confederacy. The connection of the far Pacific West to the eastern population centers came in 1869 with the completion of the first transcontinental railroad.
Out West, residents were discovering that California was extremely well suited to fruit cultivation and agriculture in general. Citrus, oranges in particular, were widely grown, and the foundation was laid for the state's prodigious agricultural production of today. In the period from 1900 to 1965, the population grew from fewer than one million to become the most populous state in the Union, sending the most electors to the Electoral College to elect the President. From 1965 to the present, the population has completely changed and has become one of the most diverse in the world. The state is liberal-leaning, technologically and culturally savvy, and a world center of engineering businesses, the film and television industry and, as mentioned above, American agricultural production.
Law and government
The Governor of California and the other state constitutional officers serve four-year terms and may be reelected only once. The California State Legislature consists of a 40-member Senate and an 80-member Assembly. Senators serve four-year terms and Assembly members two. The terms of the Senators are staggered so that half the membership is elected every two years. The Senators representing the odd-numbered districts are elected in years evenly divisible by four, i.e., presidential election years. The Senators from the even-numbered districts are elected in the intervening even-numbered years, in the gubernatorial election cycle. For the 2003–2004 session, there are 48 Democrats and 32 Republicans in the Assembly. In the Senate, there are 25 Democrats and 15 Republicans.
The current Governor is the Republican Arnold Schwarzenegger, whose current term lasts through January 2007. Schwarzenegger was only the second person in the history of the United States to be put into office by the recall of a sitting Governor (the first was the 1921 recall of North Dakota Governor Lynn J. Frazier). Schwarzenegger replaced Governor Gray Davis (1999–2003), who was removed from office by the October 2003 California recall election.
The state's capital is Sacramento. In California's early history, the capital was located in Monterey (1775–1849), San Jose (1849–1851), Vallejo (1852–1853), Benicia (1853–1854), and San Francisco (1862). The capital moved to Sacramento temporarily in 1852 when construction on a State House could not be completed in time in Vallejo. The capital moved to Sacramento for good on February 25, 1854, except for a four-month temporary move in 1862 to San Francisco due to severe flooding in Sacramento.
California's giant judiciary is supervised by the seven Justices of the Supreme Court of California. California judges are appointed by the Governor but must be regularly reconfirmed by the electorate.
At the national level, California is represented by two senators and 53 representatives. It has 55 electoral votes in the U.S. Electoral College. California has the most Congressmen and Presidential Electors of any state. The two U.S. Senators from California are Democrats Dianne Feinstein and Barbara Boxer. 33 Democrats and 20 Republicans represent the state in the U.S. House of Representatives.
California borders the Pacific Ocean, Oregon, Nevada, Arizona, and the Mexican state of Baja California. The state has striking natural features, including an expansive central valley, high mountains, and hot, dry deserts. With an area of 410,000 km², it is the third-largest state in the U.S.
Most major cities cling to the cool, pleasant seacoast along the Pacific, notably San Francisco, San Jose, Los Angeles, Santa Ana/Orange County, and San Diego. However, the capital, Sacramento, is in the Central Valley.
California has extremely varied geography. Down the center of the state lies the Central Valley, a huge, fertile valley bounded by the coastal mountain ranges in the west, the granite Sierra Nevada to the east, the volcanic Cascade Range in the north and the Tehachapi Mountains in the south. Mountain-fed rivers, dams, and canals provide water to irrigate the Central Valley. With dredging, several of these rivers have become sufficiently large and deep that several inland cities, notably Stockton, are seaports.
In the center and east of the state are the Sierra Nevada, containing the highest peak in the continental U.S., Mount Whitney, at 14,505 feet (4,421 m). Also located in the Sierra are the world-famous Yosemite National Park and a deep freshwater lake, Lake Tahoe. To the east of the Sierra are Owens Valley and Mono Lake, an essential seabird habitat. In the south lie the Transverse Ranges and a large salt lake, the Salton Sea. The south-central desert is called the Mojave. To the northeast of the Mojave lies Death Valley, which contains the lowest, hottest point in North America.
California is famous for its earthquakes, due partly to the presence of the San Andreas Fault. While more powerful earthquakes in the United States have occurred in Alaska and along the Mississippi River, California earthquakes are notable for their frequency and their location in highly populated areas. Popular legend has it that, eventually, a huge earthquake will result in the splitting of coastal California from the continent, either to sink into the ocean or to form a new landmass. The fact that this scenario is completely implausible from a geologic standpoint does not lessen its acceptance in public conventional wisdom, or its exploitation by the producers of science fiction and fantasy media. Notable movies depicting the possible destruction of much of California by an earthquake include Earthquake, A View to a Kill, Escape from L.A., and Superman.
Different regions of California have very different climates, depending on their latitude, elevation, and proximity to the coast. Most of the state has a Mediterranean climate, with rainy winters and dry summers. The influence of the ocean generally moderates temperature extremes, creating cooler summers and warmer winters, and the cold oceanic California Current offshore often creates summer fog near the coast. As one moves away from the coast, the climate becomes more continental, with hotter summers and colder winters. Westerly winds from the ocean also bring moisture, and the northern parts of the state generally receive higher rainfall than the south. California's mountain ranges influence the climate as well; moisture-laden air from the west cools as it ascends the mountains, dropping moisture, and some of the rainiest parts of the state are west-facing mountain slopes.
Northwestern California has a temperate climate with rainfall of 15–40 inches (38–102 cm) per year. The Central Valley has a Mediterranean climate, but with greater temperature extremes than the coastal areas; parts of the valley are often filled with thick fog, similar to that found in the coastal valleys. The high mountains, including the Sierra Nevada, have a mountain climate with snow in winter and moderate heat in summer.
On the east side of the mountains is a drier "rain shadow". California's desert climate regions lie east of the high Sierra Nevada and southern California's Transverse Ranges and Peninsular Ranges. The low deserts east of the southern California mountains, including the Imperial and Coachella valleys and the lower Colorado River, are part of the Sonoran Desert, with hot summers and mild winters; the higher elevation deserts of eastern California, including the Mojave Desert, Owens Valley, and the Modoc Plateau, are part of the Great Basin region, with hot summers and cold winters. Ecologically, California is one of the richest and most diverse parts of the world, and includes some of the most endangered ecological communities. California's diverse geography, geology, soils and climate have generated a tremendous diversity of plant and animal life. The state of California is part of the Nearctic ecozone, spans a number of terrestrial ecoregions, and is perhaps the most ecologically diverse state in the United States. California has a high percentage of endemic species. California endemics include relict species that have died out elsewhere, such as the redwoods and the Catalina Ironwood (Lyonothamnus floribundus). Many other endemics originated through differentiation or adaptive radiation, whereby multiple species develop from a common ancestor to take advantage of diverse ecological conditions. California's great abundance of species of California lilac (Ceanothus) is an example of adaptive radiation. Many California endemics have become endangered, as urbanization, logging, overgrazing, and the introduction of exotic species have encroached on their habitat. California is responsible for 14% of the United States' gross domestic product; at almost $2.0 trillion USD, its economy is greater than that of every other individual U.S. state and, by purchasing power parity, every country in the world save for the other 49 United States combined, China, Japan, Germany, and the United Kingdom. If California were considered an independent, self-sufficient economy, it would rank sixth in the world, ahead of France. The predominant industry, more than twice as large as the next largest, is agriculture (including fruit, vegetables, dairy, and wine). This is followed by aerospace; entertainment, primarily television by dollar volume, although many movies are still made in California; and light manufacturing, including computer hardware and software, and the mining of borax. Per capita income varies widely by geographic region and profession. The Central Valley has the most extreme contrasts of income, with migrant farm workers making less than minimum wage while farmers frequently manage multimillion-dollar farms. Most farm managers are highly educated, often with at least master's degrees. While some coastal cities include some of the wealthiest per-capita areas in the U.S., notably San Francisco and Marin County, the non-agricultural central counties have some of the highest poverty rates in the U.S. The high-technology sectors in Northern California, specifically Silicon Valley in Santa Clara and San Mateo counties, are currently emerging from the economic downturn caused by the dot-com bust, which caused the loss of over 250,000 jobs in Northern California alone. Recent (Spring 2005) economic data indicate that economic growth has resumed in California, although still slightly below the national annualized forecast of 3.9%.
See also: California unemployment statistics
With a population of 35,484,453 as of 2003 (according to Census Bureau estimates), California is the most populous state in the U.S. and contains 12% of the total U.S. population. According to the census, California lacks a majority ethnic group. It is the third minority-majority state, after Hawaii and New Mexico. Non-Hispanic Whites are still the largest group, but are no longer a majority of the population due to high levels of immigration in recent years. Hispanics make up almost one-third of the population; in order, other groups are Asian Americans, African Americans, and Native Americans. Because of high levels of immigration from Latin America, especially Mexico, and higher birth rates among the Hispanic population, Hispanics are predicted to become a majority around 2040.
Racial breakdown of the population of California:
49.8% of the population is male, and 50.2% is female. The religious affiliations of the people of California are:
- Protestant – 74%
- Roman Catholic – 20%
- Other Religions (Judaism, Buddhism, Islam) – 4%
- Non-Religious – 2%
Important cities and towns
The state of California has many cities, and the majority of them are within one of the large metropolitan areas below.
- Main articles: List of cities in California, List of cities in California (by population), List of urbanized areas in California (by population)
- Population greater than 10,000,000 (urbanized area)
- Population greater than 1,000,000 (urbanized area)
- Population greater than 500,000 (urbanized area)
- Important suburbs (within or near the above urbanized areas)
- Anaheim (Orange County)
- Berkeley (San Francisco Bay Area)
- Burbank (Greater Los Angeles)
- Chula Vista (San Diego Area)
- Concord (San Francisco Bay Area)
- Fremont (San Francisco Bay Area)
- Glendale (Greater Los Angeles)
- Huntington Beach (Orange County)
- Irvine (Orange County)
- Newport Beach (Orange County)
- Ontario (Inland Empire)
- Palo Alto (Silicon Valley)
- Pasadena (Greater Los Angeles)
- Santa Ana (Orange County)
- Santa Clara (Silicon Valley)
- Santa Clarita (Greater Los Angeles)
- Simi Valley (Greater Los Angeles)
- Sunnyvale (Silicon Valley)
- Temecula (equidistant between Inland Empire and San Diego Area)
- Thousand Oaks (Greater Los Angeles)
- Torrance (Greater Los Angeles)
- Ventura (Greater Los Angeles)
- Walnut Creek (San Francisco Bay Area)
25 wealthiest places in California
Thanks to the state's powerful economy, certain California cities are among the wealthiest on the planet, as evidenced by large numbers of extravagant mansions, sports cars, and beautiful people.
The following list is ranked by per capita income:
1. Belvedere, California $113,595
2. Rancho Santa Fe, California $113,132
3. Atherton, California $112,408
4. Rolling Hills, California $111,031
5. Woodside, California $104,667
6. Portola Valley, California $99,621
7. Newport Coast, California $98,770
8. Hillsborough, California $98,643
9. Diablo, California $95,419
10. Fairbanks Ranch, California $94,150
11. Hidden Hills, California $94,096
12. Los Altos Hills, California $92,840
13. Tiburon, California $85,966
14. Sausalito, California $81,040
15. Monte Sereno, California $76,577
16. Indian Wells, California $76,187
17. Malibu, California $74,336
18. Del Monte Forest, California $70,609
19. Piedmont, California $70,539
20. Montecito, California $70,077
21. Palos Verdes Estates, California $69,040
22. Emerald Lake Hills, California $68,966
23. Loyola, California $68,730
24. Blackhawk-Camino Tassajara, California $66,972
25. Los Altos, California $66,776
See complete list of California places
California's educational system is supported by a unique constitutional amendment that requires 40% of state revenues to be spent on education. The preeminent state university is the 9-campus University of California, which employs more Nobel Prize winners than any other institution in the world. The eight general campuses are in Berkeley, Los Angeles, Davis, Santa Cruz, Santa Barbara, Irvine, Riverside, and San Diego. A ninth campus, in San Francisco, teaches only health-sciences students. A tenth campus, in Merced, is scheduled to open in 2005. The UC system is intended to accept students from the top 12.5% of college-bound students, and provides most graduate studies and research. The University of California also administers federal laboratories for the U.S. Department of Energy: Lawrence Livermore National Laboratory, Lawrence Berkeley National Laboratory, and Los Alamos National Laboratory. The California State University system provides education for teachers, the trades, agriculture and industry. With over 400,000 students, the CSU system is the largest university system in the United States. It is intended to accept most college-bound high-school students, while carrying out some research, especially in applied sciences. Lower-division course credits are frequently transferable to the University of California. The California community college system provides vocational education, remedial education, and continuing education programs. It awards certificates and associate degrees. It also provides lower-division general-education courses whose credit units are transferable to the CSU and UC systems. It is composed of 109 colleges organized into 72 districts. The system serves a student population of over 2.9 million. Preeminent private institutions include Stanford University, the University of Southern California (USC), and the California Institute of Technology (Caltech), which administers the Jet Propulsion Laboratory for NASA. California has hundreds of private colleges and universities, including many religious and special-purpose institutions. This leads to many unique entertainment and educational opportunities for residents. For example, Southern California, with one of the highest densities of post-secondary institutions in the world, has a very large base of classically trained vocalists who compete in large choir festivals. Near Los Angeles, there are numerous art and film institutes, including the prestigious Academy of Motion Picture Arts and Sciences and the California Institute of the Arts (CalArts).
Secondary education consists of high schools that teach elective courses in trades, languages and liberal arts, with tracks for gifted, college-bound and industrial arts students. They accept students from roughly age 14 to 18, with mandatory education ceasing at age 16. In many districts, junior high schools or middle schools teach electives with a strong skills-based curriculum for ages 11 to 13. Elementary schools teach pure skills, history and social studies, with optional half-day kindergartens beginning at age 5. Mandatory full-time instruction begins at age 6. The primary schools are of varying effectiveness. The quality of the local schools depends strongly on the local tax base and the size of the local administration. In some regions, administrative costs divert a significant amount of educational monies from instructional purposes. In poor regions, literacy rates may fall below 70%. One thing all the schools have in common is a state mandate to teach fourth-grade students about the history of California, including the role of the early missions; most schools implement this by requiring students to complete a project in multiple media. California's vast terrain is connected by an extensive system of freeways, expressways, and highways, all maintained by Caltrans and patrolled by the California Highway Patrol. Most Californians rely on the roads for their commutes, errands, and vacations, which is why California's cities have a reputation equalled in the U.S. only by New York City for severe traffic congestion. As for air travel, San Francisco International Airport and Los Angeles International Airport are major hubs for trans-Pacific and transcontinental traffic. There are about a dozen important commercial airports and many more general aviation airports throughout the state's 58 counties. California also has several excellent seaports. The giant seaport complex formed by the Port of Los Angeles and the Port of Long Beach in Southern California is responsible for handling about a fourth of all container cargo traffic in the United States. The Port of Oakland handles most of the ocean containers passing through Northern California. Intercity rail travel is provided by Amtrak. San Francisco and Los Angeles both have rapid rail/subway networks, in addition to light rail. San Jose and Sacramento have only light rail. Metrolink commuter rail serves much of Southern California, and Caltrain commuter rail connects San Jose to San Francisco. The Altamont Commuter Express (ACE) connects Tracy, Livermore and other edge cities with Silicon Valley. San Diego has Trolley light rail and Coaster commuter rail services. Nearly all counties operate bus lines, and many cities operate their own bus and light rail lines as well. Both Greyhound and Amtrak provide intercity bus service. The rapidly growing population of the state is straining all of its transportation networks. A regularly recurring issue in California politics is whether the state should continue to aggressively expand its freeway network or concentrate on improving mass transit networks in urban areas.
- List of professional sports teams in California
- List of California counties
- List of California state prisons
- List of California-related topics
- List of cities in California
- List of cities in California (by population)
- USS California
- Protected areas of California
- A linguistic introduction to California English
- Cuisine of California
- Origin of the name California
- State of California Official Website
- US Census Bureau
- California Genealogy Data Library
- Counting California
The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License.
http://www.all-science-fair-projects.com/science_fair_projects_encyclopedia/California
A budget (from the Old French bougette, a small purse) is generally a list of all planned expenses and revenues. It is a plan for saving and spending. A budget is an important concept in microeconomics, which uses a budget line to illustrate the trade-offs between two or more goods. In other terms, a budget is an organizational plan stated in monetary terms. In summary, the purpose of budgeting is to:
- Provide a forecast of revenues and expenditures, i.e., construct a model of how the business might perform financially if certain strategies, events and plans are carried out.
- Enable the actual financial operation of the business to be measured against the forecast.
Types of budget
The following are various types of budget. The sales budget is an estimate of future sales, often broken down into both units and dollars. It is used to create company sales goals. Product-oriented companies create a production budget, which estimates the number of units that must be manufactured to meet the sales goals. The production budget also estimates the various costs involved with manufacturing those units, including labor and material. Cash flow/cash budget: the cash flow budget is a prediction of future cash receipts and expenditures for a particular time period. It usually covers a period in the short-term future. The cash flow budget helps the business determine when income will be sufficient to cover expenses and when the company will need to seek outside financing. The marketing budget is an estimate of the funds needed for promotion, advertising, and public relations in order to market the product or service. The project budget is a prediction of the costs associated with a particular company project. These costs include labor, materials, and other related expenses. The project budget is often broken down into specific tasks, with task budgets assigned to each. The revenue budget consists of the revenue receipts of government and the expenditure met from these revenues. Tax revenues are made up of taxes and other duties that the government levies. The expenditure budget is the budget type that covers planned spending items. A budget report provides the basis for controlling (monitoring and revising) the activities of an organization by comparing actual performance (actual sales or costs) with budgeted performance (budgeted sales or costs). A budget report has columns for budgeted and actual amounts; the difference between the two is the variance.
- Prepare a Revenue Budget. The revenue budget is the planned revenue from various sources, such as sales of goods, sale of intangibles, etc. Be realistic when estimating revenue. Set up two revenue budgets in the initial stage of planning the budget. Revenue Budget A is the known budget: it includes all revenue that each department knows will exist. Budget B can be called a "Contingency budget", which should address both the receipt and the non-receipt of funds from planned sources. The contingency budget is important for facing any unplanned event in the course of running the business.
- Prepare an Expenditure Budget. Prepare an expenditure budget after calculating the revenue budget. The expenditure budget must always include a "contingency" item for emergency or unexpected expenses. Also prepare both a "Firm budget" and a "Contingency budget". The "Contingency budget" must be able to answer questions such as "What if we incur additional expenses?", "What if an emergency occurs?", "What if . . . ?"
.?” Additionally break down the expenditures into fixed and variable expenditures. A fixed expenditure does not change (for example, rent). A variable expenditure varies and not fixed. - Prepare an Overall Budget. Be sure that all the concerned departments review the initial budget. Upon their approval, perpare overall budget. The overall budget must include all the departmental budgets. Various programs and activities must be controlled within the allotted budget limits if time and money and must be integrated so as to achieve the mission of the organization. - Prepare a Budget Report. A budget report should compare the actual to the budgeted amounts. Prepare and analyze your budget report systematically so that the budget could be controlled effectively. While preparing the budget report, ensure all the information is posted for the correct periods and to the correct account. Incorrect period postings will cause a distorted picture. - Never submit an unaudited budget report. Budget must be as accurate as possible and inaccurate information will prove to be a disaster. Once the budget report is prepared, analyze as to why the variances from the planned budget have occurred and suggest appropriate actions that need to be carried out. Keep in mind that the budget is a planning tool, and the budget report is the controlling tool.
http://www.myhealthnwellness.com/index.php/more/ipbd/techniques-of-report-writing/269-budget-report
Welcome to HIST 1302 Online United States History, 1877- Part II: War, Depression and War, 1914-1945
The Great Depression and New Deal, 1929-1940s
A 1928 campaign truck. Hoover won easily, taking 58 percent of the vote. (Herbert Hoover Presidential Library-Museum)
Background and Causes of the Great Depression
The 1920s "boom" enriched only a fraction of the American people. Earnings for farmers and industrial workers stagnated or fell. While this represented lower production costs for companies, it also precluded growth in consumer demand. Thus, by the mid 1920s the ability of most Americans to purchase new automobiles, new houses and other durable goods was beginning to weaken. This weakening demand was masked, however, by the "great bull market" in stocks on the New York Stock Exchange. The ever-growing price for stocks was, in part, the result of greater wealth concentration within the investor class. Eventually the Wall Street stock exchange began to take on a dangerous aura of invincibility, leading investors to ignore less optimistic indicators in the economy. Over-investment and speculating (gambling) in stocks further inflated their prices, contributing to the illusion of a robust economy. The crucial point came in the 1920s when banks began to loan money to stock-buyers since stocks were the hottest commodity in the marketplace. Banks allowed Wall Street investors to use the stocks themselves as collateral. If the stocks dropped in value, and investors could not repay the banks, the banks would be left holding near-worthless collateral. Banks would then go broke, pulling productive businesses down with them as they called in loans and foreclosed mortgages in a desperate attempt to stay afloat. But that doomsday scenario was laughed off by analysts and politicians who argued the U.S. stock market had entered a "New Era" where stock values and prices would always go up. That, of course, did not happen. Stock prices were seriously over-priced (when measured in the actual productivity of the companies they represented) making a market "correction" inevitable. In October 1929 the New York Stock Exchange's house of cards collapsed in the greatest market crash seen up to that time. Students are often surprised to learn that the stock market crash itself did not cause the rest of the economy to collapse. But, because American banks had loaned so heavily for stock purchases, falling stock prices began endangering local banks whose stock-buying borrowers began defaulting on their loans.
Bank Failures Turn the Stock Market Crisis into an Economy-wide Crisis
Banks are the pumping stations or hearts of the capitalist organism. Not only do banks circulate money, they create new money through the making of loans. Bank-created credit represents the most elastic element in the supply of money. As hundreds then thousands of banks failed between 1929 and 1933, the economy's credit (and, thus, money) supply began to dry up. Also, as banks went down, they often took local businesses with them as they called in business loans in a desperate effort to stay afloat. All of this rippled outward in ever-widening circles of bankruptcies, job lay-offs and curtailed consumption.
The Depression's impact on the economy (1929 vs. 1933):
- Banks in operation: 25,568 in 1929; 14,771 in 1933
- Prime interest rate: 5.03% in 1929; 0.63% in 1933
- Volume of stocks sold (NYSE): 1.1 billion in 1929; 0.65 billion in 1933
- Privately earned income: $45.5 billion in 1929; $23.9 billion in 1933
- Personal and corporate savings: $15.3 billion in 1929; $2.3 billion in 1933
Source: Historical Statistics of the United States, pp. 235, 263, 1001, and 1007.
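To make the scale of this contraction concrete, the short Python sketch below computes the 1929-to-1933 percentage change for each indicator, using the figures exactly as quoted in the list above.

```python
# Percentage change from 1929 to 1933 for the indicators quoted above.
# Values are copied from the course's table (Historical Statistics of the United States).
indicators = {
    "Banks in operation": (25568, 14771),
    "Prime interest rate (%)": (5.03, 0.63),
    "Volume of stocks sold, NYSE (billions)": (1.1, 0.65),
    "Privately earned income ($B)": (45.5, 23.9),
    "Personal and corporate savings ($B)": (15.3, 2.3),
}

for name, (v_1929, v_1933) in indicators.items():
    change = (v_1933 - v_1929) / v_1929 * 100
    print(f"{name}: {change:+.1f}%")
```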
During the worst years of the Depression, 1933-34, the overall jobless rate was twenty-five percent, with another twenty-five percent of breadwinners having their wages and hours cut. Effectively, then, almost one out of every two U.S. households directly experienced unemployment or underemployment. For workers' families already facing hard times, the Depression's unemployment woes wreaked unprecedented, catastrophic havoc. Scholars tend to view the Depression and New Deal differently depending on their own ideological perspective. Conservative historians place a high value on the ideal of laissez-faire. Thus, the Depression was simply a painful but necessary market correction which would have corrected itself if left alone. To conservatives, small government means maximum freedom, and the New Deal means the beginnings of an irresponsible and/or over-regulatory welfare state. For liberal historians the Depression represents the failure of laissez-faire, but not capitalism itself. Liberals value capitalism and democracy, asserting that democratic governments must be responsive to the social needs of the people. For many liberals the New Deal represents another American Revolution, leading to the empowerment of previously powerless and oppressed groups and laying the foundation for a humane welfare state. To leftists the Depression represents the failure of market capitalism to protect the interests of the majority. The New Deal was simply laissez-faire capitalism's replacement with corporate statism (a more systematic partnership between corporations and the government). Rather than empowering the masses, for leftist scholars the New Deal represents capitalism's resilience and continued power.
Different Ways of Learning About the Depression
American workers' declining ability to provide for their families can be seen, in part, in the following consumption statistics. Compare this method of learning about the Depression to the effect of viewing Dorothea Lange's photographs from the era.
The Depression's impact on people: consumer spending (in billions) on selected items, 1929 vs. 1933:
- Food: $19.5 in 1929; $11.5 in 1933
- Housing: $11.5 in 1929; $7.5 in 1933
- Clothing: $11.2 in 1929; $5.4 in 1933
- Automobiles: $2.6 in 1929; $0.8 in 1933
- Medical care: $2.9 in 1929; $1.9 in 1933
- Philanthropy: $1.2 in 1929; $0.8 in 1933
- Value of shares on the NYSE: $89.0 in 1929; $19.0 in 1933
Source: Historical Statistics of the United States, p. 319.
The Government's Response: Hoover
President Herbert Hoover resisted calls for government intervention on behalf of individuals. He reiterated his belief that if left alone the economy would right itself and argued that direct government assistance to individuals would weaken the moral fiber of the American people. Hoover further believed that during hard times the government should adopt austerity measures, that is, cut spending even further. Forced by Congress to intervene, Hoover did so reluctantly, concerned about both unbalancing the federal budget and, even more importantly, violating his laissez-faire principles. Hoover's efforts consisted of spending to stabilize the business community, believing that returning prosperity would eventually "trickle down" to the poor majority. The poor majority proved unwilling to wait. Branded by his many detractors as cold and uncaring, Hoover was easily defeated in the presidential election of 1932 by Democrat Franklin D. Roosevelt.
The Government's Response: Roosevelt
Projecting a vigorously robust image, candidate FDR campaigns here for the equine vote in New York, 1932.
(FDR Library-Photo Credit)
Roosevelt remained vague on the campaign trail, promising only that under his presidency government would act decisively to end the Depression. Once in office, FDR said yes to almost every plan put forward by advisors, and the Congress said yes to almost every program proposed by the president. In the frantically paced first few months of his administration, Congress passed scores of new pieces of legislation at the president's request. Historians tend to categorize these efforts as measures for "relief" (short-term programs designed to alleviate immediate suffering), "recovery" (long-term programs to strengthen the economy back to its pre-crash level), or "reform" (permanent structures meant to prevent future depressions). Another way of understanding FDR's Depression-fighting efforts is to analyze the politics of the New Deal. Generally speaking, the overall aim of the New Deal was essentially conservative: the New Deal sought to save capitalism and the fundamental institutions of American society from the disaster of the Great Depression. Within that framework, however, significant differences between New Deal programs existed. The "first" New Deal (1933-35) tended toward a continuation of "trickle down" policies, albeit better funded and executed more creatively. Even in the early first New Deal, exceptional programs pointed toward the "second" New Deal's tendency toward "Keynesian" economic policies of revitalizing a mass-consumption-based economy by revitalizing the masses' ability to consume. English economist John Maynard Keynes sought both to explain why depressions occurred and to show what might be done to prevent them. Simply put, he thought government should use its massive financial power (taxing and spending) as a sort of ballast to stabilize the economy. Depressions, then, should be attacked with increased government spending at the bottom of the income pyramid. This position is the opposite of "trickle down." Keynesian economists call this "counter-cyclical demand management," believing that the government's massive financial impact can be used as a counterweight to current market forces. For a more detailed explanation of Keynesian theory, visit this well-written British site. Yet, as your text points out, Roosevelt never fully subscribed to Keynes' counterintuitive argument that governments should spend more during hard times. Roosevelt was a faint-hearted Keynesian, at best.
The "Two" New Deals, 1933-1940s
The "First" New Deal (1933-35) aimed at restoring the economy from the top down
The Agricultural Adjustment Act (AAA), passed in 1933, accepted the long-held premise that low farm prices resulted from overproduction. Thus, the government sought to stimulate increased farm prices by paying farmers to produce less. While the original AAA was declared unconstitutional by the Supreme Court, a new act correcting for the Court's concerns was passed in 1935. Critics pointed out the irony of reducing food production in a society where children already went hungry. Of course, those children's hunger did not represent demand in the marketplace. Indeed there were agricultural surpluses; as usual, the problem of the American farm was demand and distribution, not supply. "Acreage allotment" (the backbone of the crop reduction program) helped the largest and best-capitalized farmers.
It did little for smaller farmers and led to the eviction and homelessness of tenants and sharecroppers whose landlords hardly needed their services under a system that paid them to grow less. Further, it failed to address the fundamental problem of the Depression: weak consumer demand due to falling wages and unemployment. In the long run the effect of the AAA was beneficial to moderate-to-large operators. The 1933 National Industrial Recovery Act (NIRA) set up the New Deal's fundamental strategy of centralized planning as a means of combating the Depression. Industrial sectors were encouraged to avoid "cutthroat competition" (selling below cost to attract dwindling customers and drive weaker competitors out of business), which may have been good for individual businesses in the short run but resulted in increased unemployment and an even smaller customer pool in the long run. The government temporarily suspended enforcement of anti-monopoly laws and sponsored what amounted to price-fixing as an emergency measure. Similar efforts were made to stabilize wages within industries as well. Again, the basic problem left unanswered was overall weak consumer demand. The NIRA did address this in a limited way with the Public Works Administration, which funded various public employment schemes; however, the number of jobs created by the PWA was minuscule compared to the number of jobless workers. The "First" New Deal's Tennessee Valley Authority (TVA) reflected the future liberal methods of the "Second" New Deal. The TVA (1933) provided millions of dollars to transform the economies of seven depressed, rural Southern states along the Tennessee River. The program included dam-building, electric power generation, and flood and erosion control. It provided relatively high-wage jobs in construction in a region the president called "the nation's number one economic problem." Critics saw creeping socialism in this venture; liberals saw it as a successful example of government solving social and economic problems.
The Politics of Right and Left push and pull FDR toward the Left
The right wing of American politics convinced Roosevelt he had nothing to lose on that end of the spectrum. Chief among his critics on the right was the Liberty League, a speaker's bureau funded by the Du Pont family and other business interests. The League leadership sought to forge a partnership between the segregationist governor of Georgia, Eugene Talmadge, and other conservative leaders to create a grassroots opposition to the New Deal. Liberty League speakers toured the country accusing Roosevelt of instituting creeping socialism.
Father Charles E. Coughlin (Library of Congress)
Right-wing radio personality Father Charles Coughlin denounced recipients of government assistance and claimed the New Deal led the country toward a Communist dictatorship. He suggested Nazi Germany would prove to be America's correct model and blamed the Depression on a Jewish conspiracy. At the height of his popularity millions of Americans listened to his radio sermons each week. The Liberty League convinced Roosevelt that he had lost any hope of support from the business right, and Coughlin's popularity convinced him that people must be suffering indeed to listen to such rhetoric. In a sense, both the Liberty League and Coughlin (for different reasons) pushed FDR further to the left. Roosevelt was pulled toward the left by both the traditional Left (the Socialist Party of America) and the unconventional left (Dr. Francis Townsend and Sen. Huey P.
Long of Louisiana). In 1932 the Socialists' presidential candidate Norman Thomas had tripled his 1928 showing as hard times rejuvenated the Socialist critique of the system. Nobody thought Thomas posed an electoral threat to FDR; the president was sensitive, however, to the Socialists' rising popularity.
Townsend Club Meeting, Union, New Jersey (Photo courtesy of Tim Nixon, Union, New Jersey)
Dr. Francis Townsend, a California physician, argued in favor of a federally funded old-age pension as a means of ending the Depression. He argued that turning the nation's elderly population into robust consumers would solve the underlying problem of weak demand. Dr. Townsend's clubs began springing up across the country as his message of care for the elderly meshed with people's desire for a rapid end to the Depression. Note the "Townsend Clubs of America" banner in the photograph.
Huey P. Long
The colorful senator from Louisiana, Huey P. Long, joined Roosevelt's critics on the left with his "Share Our Wealth" plan. Long proposed a guaranteed household income for each American family, paid for by high taxes on the wealthiest Americans. Long's rising popularity (before his assassination in 1935) further demonstrated to FDR the discontent of the people. Convinced that Americans were suffering more than he had realized, and believing he had already forfeited the support of the business right, FDR headed left in the "second" New Deal.
The "second" New Deal (1935-40s) aimed at restoring the economy from the bottom up
The "second" New Deal attempted to end the Depression by spending at the bottom of the economy, where government funds attempted to turn non-consumers into consumers again. Many of the programs lasted only until World War II, while others became permanent fixtures in American life. Here are three to illustrate the central thrust of the second New Deal. The Works Progress Administration was a huge federal jobs program that sought to hire unemployed breadwinners for the purpose of strengthening their families' well-being as well as boosting consumer demand. The jobs varied but consisted mainly of constructing public roads, buildings and parks. Over the course of its life (1935-43), over eight million Americans worked on WPA projects. This was "counter-cyclical demand management" on a huge scale. Responding in part to "Townsendites," the 1935 Social Security Act set up a modest worker-funded but federally guaranteed pension system. Though not on the princely scale Townsend had advocated, Social Security did act as a safety net for older workers, promoted increased consumer demand, and earned a place as a fixture on the American political and social landscape. Finally, another significant component of the "second" New Deal was the National Labor Relations Act of 1935. Usually called the Wagner Act after its sponsor, Senator Robert Wagner of New York, this law attempted to prevent employers' use of intimidation and coercion in breaking up unions. It set up the National Labor Relations Board to guarantee the right of collective bargaining for American workers. The results were immediately discernible: the formation of the Congress of Industrial Organizations, whose auto worker and coal miner units soon saw their wages increase significantly. Again, higher wages among the masses of the working class are an example of the "second" New Deal's attempt to restore the economy from the bottom up.
Assessing the legacy
World War II ended both the temporary New Deal programs and the Depression they were attempting to cure. Keep in mind that many facets of the New Deal--Social Security, the Federal Deposit Insurance Corporation and the Securities and Exchange Commission, to name only three--have remained features of American life from the 1930s until the present. War ended the Depression simply because of increased government spending, an intensified version of what Roosevelt was already doing with the WPA and similar programs. Responding to the external threats posed by the Axis Powers (Germany, Japan and Italy), Roosevelt and the Congress threw fiscal caution to the wind and spent what was necessary to win the war. In so doing, they also achieved pre-Depression levels of employment and prosperity. What then is the legacy of the New Deal as a whole? Would it have ended the Depression? The best answer is that it went a long way toward alleviating the worst suffering of the Depression while still being captive to the conventional thinking (political, fiscal, racial) of the day. Whether it could have ended the Depression on its own cannot be answered from the historical record, because World War II interrupted the process. What are the other long-term consequences of the Depression and New Deal? The rise of the "Roosevelt Coalition" of farmers, union members, working-class people, Northern blacks and liberals made the Democratic Party the nation's dominant party for almost sixty years. Further, the political consensus that developed after World War II held that never again should the government allow another depression to take hold. Thus, there followed an unprecedented level of federal economic intervention. This huge expansion in the role, size and power of government in American social and economic life is aptly summed up in Republican President Richard Nixon's famous 1971 remark, "We're all Keynesians now."
http://iws.collin.edu/kwilkison/Online1302home/20th%20Century/DepressionNewDeal.html
The Great Depression
America's future appeared to shine brightly for most Americans when Herbert Hoover was inaugurated president in 1929. His personal qualifications and penchant for efficient planning made Hoover appear to be the right man to head the executive branch. However, the seeds of a great depression had been planted in an era of prosperity that was unevenly distributed. In particular, the depression had already sprouted on the American farm and in certain industries. The Hoover term was just months old when the nation sustained the most ruinous business collapse in its history. The stock market crashed in the fall of 1929. On just one day, October 29, frantic traders sold off 16,400,000 shares of stock. At year's end, the government determined that investors in the market had lost some $40 billion. Previous to the 1929 collapse, business had begun to falter. Following the crash, the United States continued to decline steadily into the most profound depression of its history. Banks failed, and millions of citizens suddenly had no savings. Factories locked their gates, shops were shuttered forever, and most remaining businesses struggled to survive. Local governments faced great difficulty with collecting taxes to keep services going. Hoover's administration made a bad mistake when Congress, caving in to special interests, passed the Smoot-Hawley Tariff Act in 1930. The measure hiked up tariffs to prohibitively high levels. The president signed the bill into law over the objections of more than 1,000 economists. Every major trading nation protested against the law and many immediately retaliated by raising their tariffs. The impacts on international trade were catastrophic. This and other effects caused international trade to grind nearly to a standstill; the depression spread worldwide. Meanwhile, the president and business leaders tried to convince the citizenry that recovery from the Great Depression was imminent, but the nation's economic health steadily worsened. In spite of widespread hardship, Hoover maintained that federal relief was not necessary. Farm prices dropped to record lows and bitter farmers tried to ward off foreclosure with pitchforks. By the dawn of the next decade, 4,340,000 Americans were out of work. More than eight million were on the street a year later. Laid-off workers agitated for drastic government remedies. More than 32,000 other businesses went bankrupt and at least 5,000 banks failed. Wretched men, including veterans, looked for work, hawked apples on sidewalks, dined in soup kitchens, passed the time in shantytowns dubbed "Hoovervilles," and some moved between them in railroad boxcars. It was a desperate time for families: starvation stalked the land, and a great drought ruined numerous farms, forcing mass migration. The Hoover administration attempted to respond by launching a road-, public-building-, and airport-construction program, and by increasing the country's credit facilities, including strengthening the banking system. Most significantly, the administration established the Reconstruction Finance Corporation (RFC) with $2 billion to shore up overwhelmed banks, railroads, factories, and farmers. The actions taken signified, for the first time, the U.S. government's willingness to assume responsibility for rescuing the economy by overt intervention in business affairs. Nevertheless, the Great Depression persisted throughout the nation.
Unemployment relief remained largely a local and private matter.
A thirst for change
The electorate clamored for changes. The Republicans renominated Hoover, probably feeling that they had no better choice than their deeply unpopular leader. The Democrats nominated Franklin D. Roosevelt. His energetic, confident campaign rhetoric promoted something specifically for "the forgotten man" — a "New Deal." Roosevelt went on to a decisive victory. At his inauguration in March 1933, Roosevelt declared in his lilting style, "Let me assert my firm belief that the only thing we have to fear is fear itself — needless, unreasoning, unjustified terror which paralyzes needed efforts to convert retreat into advance." The nation needed immediate relief from the Great Depression, recovery from economic collapse, and reform to avoid future depressions, so relief, recovery and reform became Roosevelt's goals when he took the helm. At his side stood a Democratic Congress, prepared to enact the measures he proposed. Congress passed a historic series of significant bills, most of which had originated in the White House, in just shy of a whirlwind 100 days. Congress also enacted several important relief and reform measures in the summer of 1935 — sometimes called the Second Hundred Days. Progress was made on the labor front. Relief, recovery and reform also affected social welfare. The U.S. government could reach out in the widest way to alleviate human misery from the Great Depression — such was the assumption implicit in the New Deal. Beginning in 1935, Congress enacted Social Security laws (and later amendments) that provided pensions to the aged, benefit payments to dependent mothers, crippled children and blind people, and unemployment insurance. To fund all the new legislation, government spending rose. Spending in 1916 was $697 million; in 1936 it was $9 billion. The government modified taxes to tap most heavily the wealthy, who could take it in stride most easily. The rich, conservatives, numerous businessmen — and those who were all three — vigorously opposed the New Deal. But the election of 1936 triggered a nationwide endorsement of FDR, who carried every state except Vermont and Maine. Clamoring for their perceived share of the world's pie, Germany, Italy and Japan marched onto the world stage. Germany came under the sway of Adolf Hitler and his National Socialist Party. Italy embraced Benito Mussolini's brand of fascism, and military rulers gripped Japan. Those leaders were warlike dictators committed to forging vast empires by armed might. When Japan invaded Manchuria in 1931 and China in 1937, free peoples recoiled. Italy goose-stepped into Ethiopia in 1935. The Third Reich reoccupied the Rhineland in 1936 and absorbed Austria in 1938. Isolationists believed the nation could remain aloof between the oceans, and most Americans went about their business trying to disregard the specter rising over the horizon. However, the president and his secretary of state, Cordell Hull, had no patience with isolationism and repeatedly warned the nation that when one country was threatened by an imperial bully, all countries were threatened. In the fall of 1937, Roosevelt called for action to isolate the aggressive powers, but his words fell on a hard-of-hearing Congress, and most of the public had their minds elsewhere. In fact, Congress enacted several neutrality measures between 1935 and 1939 that prevented the nation from giving financial credit to, or trading with, any nation engaged in armed conflict.
However, their effect was to invite aggression, because if Hitler struck at France or Great Britain, they could not hope for the United States to furnish them with arms or money. To prevent the United States from entering the war in western Europe, which broke out when Hitler's divisions invaded Poland in September 1939, isolationists established the "America First" Committee in 1940. However, administration leaders continued to condemn Germany and the other dictatorships. They strove to win Latin America's and Canada's friendship, and commenced to bolster the armed services. Meanwhile, more Americans arrived at the notion that the United States might be next to fall under Germany's sway if western Europe fell.
War transforms the nation's economy
The United States had not fully put the economic woes of the Great Depression behind it by the time Japanese air and sea forces punched their fist through America's back door at Pearl Harbor in December 1941. Even near the end of the Great Depression, unemployment remained high. The 1940 census records still showed 11.1 percent of U.S. heads of household unemployed. However, a deep, latent productive capacity existed within American industry. In anger, the nation swiftly changed gears from a peacetime to a wartime footing that mobilized the populace and numerous industrial sectors. In January 1942, the president called for unheard-of production goals. In that year alone, he wanted 60,000 warplanes, 45,000 tanks, 20,000 antiaircraft guns and 18 million tons of merchant shipping. Labor, farms, mines, factories, trading houses, investment firms, communications — even cultural and educational institutions — were enlisted into the war effort. The nation accumulated big money and generated huge new industries to mass-produce planes, ships, armored vehicles and numerous other items. Major population shifts occurred as people headed to new jobs. The draft helped bring the armed forces of the United States to more than 15 million members. Approximately 65 million men and women were in uniform or worked in war-related jobs by the end of 1943. Massive unemployment became a thing of the past, and the Great Depression was swallowed up in the worldwide effort to defeat the Axis powers of Japan, Germany and Italy.
Selected quote regarding the Great Depression, by Frances Perkins: "But with the slow menace of a glacier, depression came on. No one had any measure of its progress; no one had any plan for stopping it. Everyone tried to get out of the way."
http://www.u-s-history.com/pages/h1569.html
The aborigines of Paraguay were Native Americans of various tribes collectively known as Guaraní because of their common language. They were numerous when the country was visited, probably about 1525, by the Portuguese explorer Alejo García. During the next few years the Italian navigator Sebastian Cabot, then in the service of Spain, partly explored the rivers of the country. On August 15, 1537, Spanish adventurers seeking gold established a fort on the Paraguay River, calling it Nuestra Señora de la Asunción (Our Lady of the Assumption), because that day was the feast day honoring the Assumption of the Virgin Mary. Colonial Paraguay and the territory of present-day Argentina were ruled jointly until 1620, when they became separate dependencies of the viceroyalty of Peru. Beginning about 1609, the Jesuits, working under great hardship, established many missions called reducciones, which were settlements of Native American converts, whom the missionaries educated. The communal life on these settlements was similar to the original life of the Native Americans. Granted almost complete freedom from civil and ecclesiastical local authorities, the Jesuits, through the missions, became the strongest power in the colony. In 1750 King Ferdinand VI of Spain, by the Treaty of Madrid, ceded Paraguayan territory, including seven reducciones, to Portugal, and the Jesuits incited a Guaraní revolt against the transfer. In 1767 the missionaries were expelled from Spanish America, including Paraguay; soon thereafter, the missions were deserted. In 1776 Spain created the viceroyalty of La Plata, which comprised present-day Argentina, Paraguay, Uruguay, and Bolivia. Paraguay became an unimportant border dependency of Buenos Aires, the capital of the viceroyalty, and sank gradually into relative insignificance until the early 19th century. In 1810 Argentina proclaimed its independence of Spain, but Paraguay refused to join it and instead proclaimed its own independence on May 14, 1811. Three years later José Gaspar Rodríguez Francia made himself dictator and ruled absolutely until his death in 1840. Fearing that Paraguay might fall prey to stronger Argentina, Francia dictated a policy of national isolation. In the administrative reorganization following the dictator's death, his nephew Carlos Antonio López became the leading political figure. In 1844 López became president and dictator. He reversed the isolationist policy, encouraged commerce, instituted many reforms, and began building a railroad. Under his rule the population of Paraguay rose to more than 1 million. At his death in 1862 López was succeeded by his son, Francisco Solano López. In 1865, looking to build an empire, he led the nation into a war against an alliance of Argentina, Brazil, and Uruguay. The war devastated Paraguay, and when the death of López in 1870 ended the conflict, more than half of the population had been killed, the economy had been destroyed, and agricultural activity was at a standstill. Territorial losses exceeded 142,500 sq km (55,000 sq mi). The country was occupied by a Brazilian army until 1876, and the peace treaties imposed heavy indemnities on the country. In 1878 President Rutherford B. Hayes of the United States was arbiter in the settlement of boundaries between Argentina and Paraguay. Paraguayan history after the war was largely an effort to reconstruct the country. Immigration was encouraged, and Paraguay established subsidized agricultural colonies.
The unsettling effects of the war, however, were apparent for many decades, particularly from 1870 to 1912, when no president was able to serve out a full term. Subsequently, periods of political stability alternated with periods of ferment and revolt. The administration (1912-1916) of Eduardo Schaerer was relatively enlightened. The country remained neutral and prosperous during World War I (1914-1918), and the administrations of Manuel Gondra (1920-1921), Eusebio Ayala (1921-1923), and Eligio Ayala (1923-1928) were on the whole periods of peace and progress. The border with Bolivia in the Gran Chaco, which had never been formally drawn, was the scene of numerous incidents between 1929 and 1932. In the latter year a full-scale war broke out when the area was invaded by Bolivia. An armistice was declared in 1935. In the final settlement, made by an arbitration commission in 1938, Paraguay was given about three-fourths of the disputed area. See also: Chaco War. After the war, the government was reorganized to permit widespread economic and social reforms. By a new constitution adopted in 1940, the state was given the power to regulate economic activities and the government was highly centralized. Paraguay declared war on Germany and Japan on February 7, 1945. The country subsequently became a charter member of the United Nations.
Morínigo and Chávez
In 1940 General Higinio Morínigo had made himself president, and he ruled as a dictator for the next eight years. A coup d'état deposed him in 1948. In September 1949, Federico Chávez, an army-backed leader of a faction of the dominant Colorado Party, was elected president without opposition. He imposed a dictatorship much like that of Morínigo. In March 1951 the Chávez regime devalued the currency in an attempt to check inflation and the loss of gold reserves. The economic crisis was aggravated in 1952, when Argentina, itself the victim of depressed economic conditions, abrogated a barter agreement with Paraguay. During the year legislation granted various benefits to workers. In general elections held on February 15, 1953, President Chávez was reelected, again without opposition. He imposed wage and price controls in June 1953 to check inflation. On May 5, 1954, his government was overthrown by an army-police junta.
The Stroessner Regime
The electorate on July 11 of that year endorsed General Alfredo Stroessner, commander in chief of the army and head of the Colorado Party; he was the only candidate. Attempts by leftist forces to seize power were put down in 1956 and 1957. A plebiscite in 1958 confirmed President Stroessner for another five-year term. In elections for a new congress in 1960, all 60 seats were won by the president's supporters in the Colorado Party. Diplomatic relations with Cuba were severed in December. Paraguay was among the states that favored collective action by the Organization of American States against the Cuban regime, but such measures were not approved by the two-thirds majority required. In 1963 Stroessner was reelected president, running against the first opposition candidate in a Paraguayan presidential election in 30 years. He enjoyed some popularity in the mid-1960s, partly because of continued economic progress, but many Paraguayans had also fled into exile from his dictatorship. Stroessner continued in power in 1968 after having had the constitution altered the previous year to permit his reelection. He was again reelected in 1973, 1978, and 1983.
A significant step was taken by the Stroessner regime in the late 1960s with the establishment of close economic relations with neighboring countries. In May 1968 the La Plata Basin Pact was signed by the foreign ministers of Argentina, Bolivia, Brazil, Paraguay, and Uruguay. This agreement, calling for joint development of the La Plata River Basin, was expected to stimulate the economy of the entire region and would be of special importance to Paraguay, the least developed nation in the area. In the 1970s and early 1980s Paraguay was relatively calm. Itaipu, the largest hydroelectric dam in the world, was built on the Alto Paraná River in a joint venture with Brazil. Inflation was controlled, but declining markets for Paraguayan exports led to rising unemployment and a worsening of the nation's trade position. The mid-1980s brought limited political liberalization, including, in 1987, the lifting of the state of siege in Asunción. Reelected to his eighth term in 1988, Stroessner was ousted in a military coup in February 1989. General Andrés Rodríguez, the leader of the coup that had removed Stroessner from office, won election to the presidency as head of the Colorado Party, succeeding Stroessner. In office, he inaugurated a program of privatizing state-owned enterprises, but the economy remained relatively stagnant, and his party lost some support. The Colorado nominee in the May 1993 presidential elections, Juan Carlos Wasmosy, won the office with only a plurality of the votes cast. Under Wasmosy, Paraguay joined Argentina, Brazil, and Uruguay in creating the Southern Cone Common Market (Spanish acronym MERCOSUR) in 1995. This trade association promised to lower tariffs and increase trade, sparking concerns that lower tariffs and economic integration would harm small Paraguayan businesses. In 1996 Wasmosy, backed by many, mandated that the commander of the country's army, General Lino Cesar Oviedo, step down from his office. Oviedo agreed to resign only on the condition that he be named defense minister, thereby placing him in charge of Paraguay's military policy and finances. Fearing another coup, Wasmosy agreed, but when many citizens protested, he reversed his decision. Oviedo then announced his intention to run for the presidency in 1998.
http://www.cartage.org.lb/en/themes/GeogHist/histories/history/hiscountries/P/paraguay.html
The opening words of the Declaration of Independence announce the "self-evident" truths "that all men are created equal, that they are endowed by their Creator with certain inalienable Rights, that among these are Life, Liberty and the pursuit of Happiness." The Declaration lists further truths: That to secure these rights, Governments are instituted among Men deriving their just powers from the consent of the governed. That whenever any Form of Government becomes destructive of these ends, it is the Right of the People to alter or to abolish it, and to institute new Government, laying its foundation on such principles, and organizing its powers in such form, as to them shall seem most likely to effect their Safety and Happiness. These ideas were not new in 1776. The great English philosopher John Locke had made similar claims almost a century before. What was unique about the American situation at the time of the Revolution was the readiness and ability of Americans to translate these philosophical truths into practical action. The history of Massachusetts provides a good example of how seriously Americans took the claim that governments were their creations, and that the authority of such governments was derived from the consent of the governed. After the Declaration of Independence was signed, the Massachusetts House of Representatives adopted a resolution announcing its interest in drafting a new state constitution to replace the colonial frame of government, and asked that town meetings be called so that citizens could consider whether such a course of action would be acceptable. Concord's meeting resolved that a sitting legislature was "by no means a Body proper to form & Establish a Constitution." Governments were not to be created by governments, but by the people. Concord resolved that "it appears to this Town highly necessary & Expedient that a Convention, or Congress be immediately Chosen, to form & establish a Constitution, by the Inhabitants of the Respective Towns in this State,..." The town meeting of Boston asserted that in making such an important public decision pains must be taken to consult not only the legislature of Massachusetts but all the people in order to "collect the wisest Sentiments" on the subject of a new constitution. Attleborough's town meeting objected to granting the government the right to draft a new form of government because "the right of the Inhabitants of the Said State to negative the Said form, or any Article in it when drawn is expressly acknowledged...." Undeterred, the state government of Massachusetts drafted the Constitution of 1778. The proposed frame of government was sent to town meetings where approval by two-thirds of votes cast was necessary for adoption. A gathering of citizens in Essex County found much in the proposed constitution that was objectionable. In particular, the assembled citizens asserted [t]hat a bill of rights, clearly ascertaining and defining the rights of conscience, and that security of person and property, which every member of the State hath a right to expect from the supreme power hereof, ought to be settled and established, previous to any ratification of any constitution for the state. (Pole 1970, 446) Other meetings found other aspects of the proposed constitution that were worrisome or objectionable, and it failed to gain the necessary votes for adoption. In June of 1779 the state government of Massachusetts responded to this defeat by calling for a constitutional convention. 
This convention, whose delegates were chosen specifically for the task of framing a new state government, drafted a new constitution, the first part of which was a declaration of the Rights of the Inhabitants of the Commonwealth of Massachusetts. Once again, the town meetings debated, voted, and eventually approved the new constitution. The voters of Massachusetts, acting twice through a popular vote on a proposed course of public action, had exercised their fundamental right to create a government that satisfied them. The initiative allows popular initiation of constitutional amendments and laws; a referendum gives voters an opportunity to express their approval or disapproval of acts taken by their governments. In essence, a referendum allows citizens to approve or repeal an adopted state statute (statutory referendum) or approve or reject a legislatively approved change in their state's constitution (constitutional referendum). Referenda in various states may be called by the state legislature or by popular petition, or may simply be required before undertaking certain measures. In every state except Delaware proposed amendments to state constitutions that have been approved by the state legislature must be submitted to a constitutional referendum of the people. In sixteen states citizens may use the constitutional initiative process to propose and adopt amendments to their state constitutions. In over twenty states citizens may use the statutory initiative to bring proposed statutes to popular vote. Some states require statutory referenda in certain cases and allow the state legislature to call for a referendum on legislation it has approved. In addition, thousands of local referenda are held each year. Given the role that initiatives and referenda have played in state and local governance, it is remarkable that there has never been a national initiative or referendum. The reason for this is simple: the Constitution of the United States does not provide for direct citizen initiation of, or direct popular vote on, either statutes or constitutional amendments. Americans do not make national law directly through their votes. Instead they choose representatives who determine national policy. A discussion of why there is no provision in the Constitution for national processes of initiative and referendum and whether there should be such a provision leads inevitably to a discussion of democracy and representation. Does the United States have a representative system of government only because it is impossible for the people to gather to conduct public business (an obstacle that initiatives and referenda seek to eliminate)? Or do we expect that our representatives and our representative form of government will produce better and wiser policies than the people themselves could produce? Are the people of the United States sufficiently well-informed to make wise decisions about public policy issues? Do they have enough regard for the rights and interests of those with minority points of view to avoid damaging those rights and interests? Should the people have the power, ultimately, to make policy directly when they are dissatisfied with the actions, or inaction, of their elected officials? That men were capable of invading the rights of their fellow citizens was an axiom of Anglo-American political thought in the eighteenth century. It was widely assumed that the human appetite for power threatened all political systems, however noble their origins and intentions, with degeneration into tyranny. 
The men who gathered in Philadelphia were history-minded, and their study of ancient governments and of contemporary European systems taught them that monarchies tended to degenerate into tyrannies of one over all others, that aristocracies degenerated into oligarchies in which a few oppressed the many, and that democracies degenerated into mob rule and anarchy. Most Americans had no intention of fastening a monarchy on themselves after struggling so hard to cast one off. There existed no formal aristocracy in the United States and Americans were not interested in seeing one created. But the only remaining alternative--a strongly democratic government--was not one that the framers of the Constitution were anxious to establish. Between 1776 and 1787, a number of states had experimented with systems of government in which popularly elected legislatures had dominated weak state executives and judiciaries. The results were alarming to many of the framers. The democratic nature of various new state constitutions had brought a new kind of man--often a small farmer or person of modest economic background--into government. Many of these men found themselves crushed by debt and taxes in the years following the American War of Independence. To counter this burden, many states issued paper money. In Rhode Island this was accomplished by the adoption of a state law giving landowners loans of new paper money, with their land as security. When this paper money was made legal tender for payment of debts the usual relationship between creditors and debtors was reversed: debtors pursued creditors who wished to avoid being paid with what they regarded as worthless money. In response to this evasion the legislature of Rhode Island made it an offense to refuse paper money and allowed debtors to come to court, declare their debts, and pay them with paper money. Creditors were then informed that the debt had been discharged. Andrew McLaughlin wryly notes that "... seven states entered on the difficult task of legislating their people into financial blessedness by the simple means of making money..." (McLaughlin 1962, 106-107). Paper money was issued in Massachusetts as well, but continuing economic distress led to calls for yet more relief. When none was forthcoming, a group of armed men attempted to close the courts and disrupt the processes by which mortgage foreclosures and debt collections were carried out. A ragtag army of these hard-pressed and angry men gathered in Worcester, Massachusetts, in 1786, hoping to generate enough pressure on public authorities to receive some relief. Under the "command" of Daniel Shays, they moved against the federal arsenal in Springfield. However, they were met by a force of 4,400 men gathered under the authority of the state of Massachusetts. These troops scattered Shays' men and ended what is now known as Shays' Rebellion. The framers of the Constitution shared this commitment to government by representation, but disagreed on how representatives were to be chosen and to whom they should be accountable. Should a theory of actual representation be translated into a system in which representatives were popularly elected and charged with the task of pursuing the interests of their constituents? Or should a theory of virtual representation be translated into a system in which representatives, insulated from public opinion and popular pressure, would seek to identify and pursue a broader "public good"? Madison's Notes record the belief of Roger Sherman that "the people... 
immediately should have as little to do as may be about the Government. They [lack] information and are constantly liable to be misled." Elbridge Gerry of Massachusetts, no doubt with the memory of Shays' Rebellion still fresh in his mind, joined Sherman by stating that "the evils we experience flow from an excess of democracy. The people do not [lack] virtue, but are the dupes of pretended patriots." To these doubts about the character and capability of the people were added the concerns of those, like Charles Pinckney of South Carolina, who wished to maximize the influence of state governments in the work of the national government. Pinckney proposed that each state's legislature choose its representatives to the House of Representatives. Otherwise, Pinckney worried, the state governments will "lose their agency" and "S. Carolina & other States would have but a small share of the benefits of Govt." John Dickinson of Delaware shared these concerns, comparing "the proposed National System to the Solar system, in which the States were the planets, and ought to be left to move freely in their proper orbits." James Wilson replied that while he was not in favor of extinguishing these planets, "neither did he on the other hand, believe that they would warm or enlighten the Sun." Wilson added that selection of members of the House of Representatives by state legislatures was undesirable because state legislatures have "an official sentiment" opposed to the aims and sentiments of a national government "and perhaps to that of people themselves." Wilson was joined in this view by Alexander Hamilton and other nationalists. But unlike Hamilton, Wilson opposed state legislative selection of representatives not only because he wanted to avoid undue state influence in the national government but also because he wanted to maximize the influence in the Congress of the people. He urged the popular election of members of both the House of Representatives and the Senate. Wilson invoked the theory of actual representation in arguing that "representation is made necessary only because it is impossible for the people to act collectively." If all citizens could not be gathered to make decisions, then a representative government should possess "not only 1st the force, but secondly the mind or sense of the people at large. The Legislature ought to be the most exact transcript of the whole society." The Convention eventually settled on a plan to have members of the House popularly elected and members of the Senate chosen by state legislatures. As was so often the case during that summer in Philadelphia, opposing views found a compromise. The national legislature was thus subject to both popular influence (in the House) and state influence (in the Senate). This system of election reflected the belief that the people must have some direct influence on government while still allowing for reservations about the people's ability to properly participate in their own governance. In addition to mixing state and popular influence in the national legislature, the Constitution provided for actual representation in the House and virtual representation in the Senate. But had the Constitution really provided for the actual representation of popular opinion? Was the democratically elected branch of Congress to be easily subject to public opinion? 
...governments are too unstable; that the public good is disregarded in the conflicts of rival parties; and that measures are too often decided not according to the rules of justice, and the rights of the minor party; but by the superior force of an interested and over-bearing majority. (Federalist No. 10) A prime advantage of the proposed constitution's representative form of government, Madison asserted, was that a great number of citizens from a large extent of territory would be brought together in an extended republic. This republic would contain a variety of groups and interests. The principle of majority rule in the legislature, however, would curb the pursuit of narrow interests by minority factions. And the multiplicity and geographic distance of interest groups from one another would discourage the effective operation of a majority faction that might threaten the rights of the numerical minority. The Constitution's plan for representation had, Madison wrote, an additional advantage: because members of the House of Representatives would be chosen in large districts with large constituencies, ...it will be more difficult for unworthy candidates to practise with success the vicious arts, by which elections are too often carried; and the suffrages of the people being more free, will be more likely to centre on men who possess the most attractive merit, and the most diffusive and established characters. (Federalist No. 10) Madison believed that the popular election of members of the House of representatives would bring to office men of considerable prominence. These men would "refine" the views of the public and pursue policies that showed a proper regard for the public good and a proper disdain for the political projects of special interests. To all of these checks against "factious" influence in the House, the framers of the constitution added additional checks against the threat of popular actions in the national government. A Senate, not chosen by popular election, had to approve any legislation passed by the House before it could go to the president; the president, also not chosen by popular election, had the power to veto it; and the Supreme Court, chosen by the president with the consent of the Senate, would judge the constitutionality of legislation. Not surprisingly, some of the most cogent complaints raised against the proposed Constitution focused on the alleged absence of democratic processes and on its overall aristocratic tendency." Melancton Smith of New York, speaking to the New York ratifying convention about the Constitution's system of representation, expressed this view: The idea that naturally suggests itself to our minds, when we speak of representatives, is, that they resemble those they represent. They should be a true picture of the people, possess a knowledge of their circumstances and their wants, sympathize in all their distresses, and be disposed to seek their true interests. The knowledge necessary for the representative of a free people not only comprehends extensive political and commercial information, such as is acquired by men of refined education, who have leisure to attain to high degrees of improvement, but it should also comprehend that kind of acquaintance with the common concerns and occupations of the people, which men of the middling class of life are, in general, more competent to than those of a superior class. (Kenyon 1966, 382) Smith wanted representatives to be attentive to the special concerns of their constituents. 
He also believed that a large number of representatives should be chosen in smaller districts: "...the number of representatives should be so large, as that, while it embraces the men of the first class, it should admit those of the middling class of life." Many opponents of the Constitution concluded, like Smith, that a small number of representatives serving large constituencies would prevent the common American from being elected to the House of Representatives. This would preclude a desirable resemblance between the representative and the represented. The likely effect of this system, said opponents of the Constitution, would be diminished power of the people within the only part of government they directly selected. This "aristocratic" bias in the House was intolerable given what opponents of the Constitution regarded as the flatly undemocratic character of the remainder of the government. Opponents of the Constitution argued for the establishment of an advisory council to limit the power of the president, shorter terms and/or rotation of office for both senators and the president, a provision allowing states to recall senators, and a bill of rights to safeguard the rights of the individual. The Constitution was eventually ratified as written, though only after its supporters agreed to propose amendments forming a bill of rights once the government was in operation. As parties became more prominent in nominating presidential candidates they transformed the operation of the electoral college. Candidates for selection to the electoral college were frequently pledged to a particular party and its presidential nominee. By 1832 every state but South Carolina had shifted the selection of these electoral college electors from state legislatures to voters. The popular election of pledged electors retained the form of the electoral college's mediation between voters and presidential candidates, but the discretion of electors was substantially reduced in favor of greater public influence in the selection of a president. The newspapers are largely subsidized or muzzled, public opinion silenced, business prostrated, homes covered with mortgages, labor impoverished, and the land concentrating in the hands of capitalists ...The fruits of the toil of millions are boldly stolen to build up colossal fortunes for a few, unprecedented in the history of mankind; and the possessors of these, in turn, despise the Republic and endanger liberty. From the same prolific womb of governmental injustice we breed the two great classes--tramps and millionaires. (Levy 1982, 293) These economic wrongs would not be righted by the Democratic or Republican parties, the Populists argued, because both were servants of business interests: We have witnessed for more than a quarter of a century the struggles of the two great political parties for power and plunder, while grievous wrongs have been inflicted upon suffering people. We charge that the controlling influences dominating both these parties have permitted the existing dreadful conditions to develop without serious effort to prevent or restrain them....They propose to sacrifice our homes, lives, and children on the altar of mammon; to destroy the multitude in order to secure corruption funds from the millionaires. (Levy 1982, 293) The Populists claimed that the answer to this political problem was more democracy. 
They favored the secret ballot, which would shield voters from intimidation; direct popular election of senators; and the use of the initiative and referendum. While the economic analyses and rhetoric of the Populists received a cold reception from most Americans, their complaints about the domination of political processes by special interests and corrupt parties found a large and responsive audience. The Progressive Movement, which succeeded the Populists, pursued a number of the Populists' reforms. Its greatest achievement at the national level was the adoption of the Seventeenth Amendment, which provided for popular election of senators. Progressives successfully pressed for use of secret ballots, regulation of political parties, and use of nonpartisan elections at the local level. They also led the fight for direct democracy in the states, urging state adoption of the initiative and referendum processes. The Progressive Movement was extremely diverse, claiming both Democratic and Republican adherents. But the Progressive Movement's greatest concern was over the growth of large and powerful organizations--corporations, organized labor, and party political machines--and their influence on American society and politics. Progressives feared such organizations undermined the role of the "unorganized individual" in American life and American politics. Against the power of these groups, which wielded money, votes, and patronage to pursue special political aims, the Progressives hoped to muster the power of the votes of the average citizen. Between 1898 and 1914 the push for direct democracy won the amendment of state constitutions to provide for initiatives and referenda in South Dakota, Utah, Oregon, Montana, Oklahoma, Maine, Missouri, Arkansas, Colorado, Arizona, California, Idaho, Nebraska, Nevada, Ohio, and Washington. The pioneering spirit of many relatively new states was reflected in their early adoption of the initiative and referendum. Some Americans believe that amending the Constitution to allow direct popular votes on statutory and constitutional issues is the logical and desirable next step in the ongoing process by which our national government has been democratized. Others think such amendments would be a fundamental and ill-advised departure from the principle and practice of a democratically elected but nevertheless representative government. The push for direct democracy moved from the state level to the national level in 1907, when Rep. Elmer Fulton of Oklahoma introduced House Joint Resolution 44. This resolution, which was eventually unsuccessful, would have amended the Constitution to provide for national initiatives on both proposed statutes and constitutional amendments. In the 1916 presidential campaign the supporters of President Woodrow Wilson advised voters that, while war raged in Europe: You are working, not fighting! Alive and happy, not cannon fodder! Wilson and peace with honor? or Hughes with Roosevelt and war. Many Americans were not convinced that this would long remain the case because of loan arrangements that led to the shipment of U.S. munitions to Britain. Such notable reformers as William Jennings Bryan, Robert LaFollette, and Jane Addams supported a proposal that required a national referendum on any declaration of war. After Wilson's reelection this proposal was the subject of congressional hearings in February, 1917. By March of 1917 American merchant seamen were arming themselves against anticipated attacks by German submarines. 
On April 6, 1917, German attacks on American shipping prompted a U.S. declaration of war. Disillusioned by America's involvement in World War I and chagrined by the harsh peace that followed, Americans turned again to the war referendum proposal in the late 1930s as war menaced Europe once more. Rep. Louis Ludlow's "Ludlow Amendment" calling for a war referendum went further in the legislative process than any national referendum proposal before or since. The proposed amendment came to the floor of the House in December, 1937. President Franklin Roosevelt lobbied for the defeat of the proposal. A House vote of 188-209 fell well short of the two-thirds vote needed for further advancement. The most recent significant proposals for a national initiative were advanced in 1977. These proposals were in response to a perceived loss of effective popular contact with and control over national policymakers. An unpopular war in Vietnam, the resignation of a president in disgrace, and scandals in Congress all contributed to a low level of public confidence in national leadership. In December, 1977, the Subcommittee on the Constitution of the Senate Committee on the Judiciary held hearings on two national initiative proposals. In introducing Senate Joint Resolution 67, 95th Congress, 1st session, sponsored by Sen. James Abourezk and Sen. Mark Hatfield, Senator Abourezk stated that: [t]he last few years have seen a growing dissatisfaction, and in many cases a serious distrust, of Government by the very people who are its source of power and who elect its leaders. People stay home on election day not because they are lazy or do not care but because they have decided that meaningful communication with th eir leaders is no longer possible or effective. Echoing an earlier Progressive theme, Abourezk continued: ...much of the alienation and helplessness that citizens experience can be mitigated if avenues for constructive participation exist. The initiative procedure is one means to provide direct citizen access to our governmental decision-making process through a legal and democratic method. (Hearings on Voter Initiative Constitutional Amendment) Among the key features of the Abourezk-Hatfield proposal were the following: Also introduced during the first session of the Ninety-fifth Congress was House Joint Resolution 544, whose prime sponsor was Rep. Guy Vander Jagt. There are two key differences between this proposal and the Abourezk-Hatfield proposal. First, a successful initiative would require not a simple majority of all voters but a majority of votes cast in each of three-fourths (thirty-eight) of the states. Second, a three-fourths vote of both houses of Congress rather than a two-thirds vote would be necessary for Congress to reverse the action of the people. The same majority of votes in three-fourths of the states would be needed for the rep eal of a law or a provision of a law. The amendments proposed in 1977 did not involve proposals for national constitutional initiative or referendum processes. A constitutional initiative process would allow voters to propose amendments and put them to a national popular vote by gathering a specified number of signatures on a petition. 
Such a process would supplement, or could conceivably replace, the current amending process, which requires both houses of Congress to pass a proposed amendment by a two-thirds vote before sending the proposal to the states where approval by three-fourths of the states (expressed either by state legislature or special convention) is required for ratification. A constitutional referendum process would require a popular vote on constitutional amendments approved by Congress. Another option not discussed during the 1977 hearings is the statutory referendum, a process that could work in several ways. Congress and the president could be required to submit certain types of legislation to a popular vote before it could become law. Or Congress and the president may be given the option of referring some legislation to a popular vote. Finally, a process could be established that would employ a popular petition to require that a statute approved by Congress and the president be submitted to a popular vote. Obviously, some of these alternatives are more sweeping than others and hence would be more controversial. Statutory initiatives and referenda would be subject to repeal by Congress and invalidation by the Supreme Court. Constitutional changes enacted by popular vote would become part of the nation's fundamental charter and could only be changed or removed by subsequent amendment. Debate about all proposals for direct democracy, though, would focus on arguments about the comparative merits of democratic and representative forms of government. The truth about the likely impact of national initiative and referendum processes is less tidy than one would gather from the claims of their advocates or detractors. It is true, for instance, that the referendum process has been used in ways that damaged the interests of minority groups. But the same results have been produced by representative governments at both the state and national levels. It is true that the initiative/referendum process could heighten citizen awareness of and interest in political questions. It is also true that this heightened awareness could bring increased emotion and acrimony to political life. Questions about the capabilities of citizen voters do not admit of pat answers. It is undeniable that many citizens lack information about public issues and do not participate widely in the nation's political life. The question is whether widespread citizen apathy and lack of information is a normal state of affairs or whether it is a product of a representative system that leaves us to decide who decides on policy issues instead of deciding for ourselves. Advocates of the initiative say representative government is "government by elites": the representatives and the "interests" who lobby them. But any national initiative would be dominated by an intense, unelected minority using direct mail, television commercials and other techniques of mass persuasion. (George Will, Washington Post, 28 July 1977) Will believes people should not govern or decide issues, but are supposed "to decide who will decide." He states that public policy "is best given shape by representative institutions, which, unlike 'the people,' are deliberative bodies." Those of us who favor the initiative process believe that an educated and well-informed public, operating in an atmosphere of unrestrained First Amendment rights, is fully capable of acting as a deliberative body. 
Contrary to Will's suggestion that it would undermine our representative form of government, the initiative process would provide a much-needed complement to the system. To an electorate frustrated by a Congress unwilling to act, it provides another democratic means to bring about change. To the federal government itself, the initiative process would provide another check in our system of checks and balances, a dilution of centralized power. (James Abourezk, Washington Post, 10 August 1977) The educational value and politicization potential from a national initiative could be substantial. Thousands of people would be involved in any national initiative campaign on one side or the other. Furthermore, the public's attention would be focused on debate and discussion of the merits or demerits of public policy issues rather than just on style, looks, image, and other similar aspects of many modern campaigns. Furthermore, the debate would be in public and in the open....The initiative process will provide one more way for the vox populi to speak and, more importantly, it will permit them to act rather than simply react to actions taken by others. (Larry Berg, Testimony on Voter Initiative Constitutional Amendment, 13 and 14 December 1977) The political arena which [the initiative process] creates will be preempted by groups that have money, that have organization, that have political skill, and that have power. (Peter Bachrach, Testimony on Voter Initiative Constitutional Amendment, 13 and 14 December 1977) In the end, in fact, the real issue... is whether or not America believes in democracy, and believes it can afford the risks that go with democratic life. All of the objections to it are so many different ways of saying "the people are not to be trusted"--a skepticism which, it is perfectly true, can be traced back to the "realism" and cynical elitism of a significant group of constitutional fathers....If Americans sometimes seem unfit to legislate, it may be because they have for so long been passive observers of government. The remedy is not to continue to exclude them from governing, but to provide practical and active forms of civic education that will make them more fit than they were. Initiative and referendum processes are ideal instruments of civic education .... (Benjamin Barber, Testimony on Voter Initiative Constitutional Amendment, 13 and 14 December 1977)
http://evanravitz.com/direct.htm
Few could have imagined the impact of Columbus' discovery of a spice so pungent that it rivaled the better known black pepper from the East Indies. Nonetheless, some 500 years later, on the quincentennial anniversary of the discovery of the New World, chili peppers (Capsicum) have come to dominate the world hot spice trade and are grown everywhere in the tropics as well as in many temperate regions of the globe. Not only have hot peppers come to command the world's spice trade but a genetic recessive non-pungent form has become an important "green" vegetable crop on a global scale especially in temperate regions. The New World genus Capsicum is a member of the Solanaceae, a large tropical family. Various authors ascribe some 25 species to the genus but this is only an estimate with anticipated new species to be discovered and named as exploration of the New World tropics expands. Exploration and plant collecting throughout the New World have given us a general but false impression of speciation in the genus. Humans unconsciously selected several taxa and in moving them toward domestication selected for the same morphological shapes, size, and colors in at least three distinct species. Without the advantage of genetic insight these early collectors and taxonomists named these many size, shape, and color forms as distinct taxa giving us a plethora of plant names that have only recently been sorted out reducing a long list of synonymy to four domesticated species. The early explorations in Latin America were designed to sample the flora of a particular region. Thus, any collection of Capsicum was a matter of chance and usually yielded a very limited sample of peppers from that area. Only with the advent of collecting trips designed to investigate a particular taxon did the range of variation within a species begin to be understood. One needs only to borrow specimens from the international network of herbaria to appreciate what a limited sample exists for most taxa, particularly for collections made prior to 1950. The domesticate Capsicum pubescens, for example, that is widespread in the mid-elevation Andes from Colombia to Bolivia, is barely represented in the herbarium collections of the world. Most herbarium collections of Capsicum, with the exception of Capsicum annuum holdings, are woefully inadequate. Furthermore, besides Capsicum annuum, very little attention has been paid to the many cultivars of each of the domesticated species. Often material is unusable because it was collected only in fruit neglecting the most important and critical characters associated with floral anatomy and morphology. With the advent of germplasm collecting programs during the past three decades, and concomitant improvement in herbarium collections we have come to better understand the nature of variation in the genus Capsicum. The increasing number of Capsicum herbarium specimens permits renewed interest and debate on the proper species classification. One of the more perplexing questions regarding the taxonomy of Capsicum is defining the genus (Eshbaugh 1977, 1980b; Hunziker 1979). The taxonomy of the genus Capsicum is confounded within certain species complexes, e.g. C. baccatum sensu lato. Major taxonomic difficulties below the species level in other taxa, e.g. C. annuum, also exist. Armando T. Hunziker (unpublished) is currently working on a revision of the genus. 
What taxa are ultimately included in Capsicum may indeed change if the concept of the genus is broadened to include taxa with non-pungent fruits but with other common morphological and anatomical traits such as the nature of the anther, the structure of nectaries, and the presence of giant cells on the inner surface of the fruit (Pickersgill 1984). Capsicum, as presently perceived, includes at least 25 species, four of which have been domesticated (Table 1). An understanding of each of these domesticates is instructive when trying to appreciate their origin and evolution. The data from plant breeding and cytogenetics confirm that the domesticated species belong to three distinct and separate genetic lineages. Earlier studies suggested two distinct lineages based upon white and purple flowered groupings (Ballard et al. 1970) but an evaluation of more recent data argues for the recognition of three distinct genetic lineages. Although the barriers between these gene pools may be broken down this rarely, if ever, occurs in nature. Capsicum pubescens forms a distinct genetic lineage. This pepper, first described by Ruiz and Pavon (1794) never received wide attention from taxonomists until recently (Eshbaugh 1979, 1982). Morphologically, it is unlike any other domesticated pepper having large purple or white flowers infused with purple and fruits with brown/black seeds. Genetically, it belongs to a tightly knit group of wild taxa including C. eximium (Bolivia and northern Argentina), C. cardenasii (Bolivia), and C. tovarii (Peru). Capsicum pubescens is unique among the domesticates as a mid-elevation Andean species. Capsicum pubescens is still primarily cultivated in South America although small amounts are grown in Guatemala and southern Mexico, especially Chiapas. This species remains virtually unknown to the rest of the world. A small export market seems to have reached southern California. Two of the major difficulties in transferring this species to other regions include (1) its growth requirements for a cool, freeze free environment and long growing season and (2) the fleshy nature of the fruit that leads to rapid deterioration and spoilage. Capsicum baccatum var. pendulum represents another discrete domesticated genetic line. Eshbaugh (1968, 1970) notes that this distinct South American species is characterized by cream colored flowers with gold/green corolla markings. Typically, fruits are elongated with cream colored seeds. The wild gene pool, tightly linked to the domesticate, is designated C. baccatum var. baccatum and is most common in Bolivia with outlier populations in Peru (rare) and Paraguay, northern Argentina, and southern Brazil. This lowland to mid-elevation species is widespread throughout South America particularly adjacent to the Andes. Known as aji, it is popular not only as a hot spice but for the subtle bouquet and distinct flavors of its many cultivars. This pepper is little known outside South America, although it has reached Latin America (Mexico), the Old World (India), and the United States (Hawaii). It is a mystery as to why it has not become much more wide spread, although the dominance of the Capsicum annuum lineage throughout the world at an early date may be responsible. Pickersgill (1988) has stated that "the status of Capsicum annuum, C. chinense, and C. frutescens as distinct species could legitimately be questioned." 
Several authors have previously raised this issue culminating in the observation that "at a more primitive level one cannot distinguish between the three species. On the one hand we treat the three domesticated taxa as separate while the corresponding wild forms intergrade to such an extent that it is impractical if not impossible to give them distinct taxonomic names" (Eshbaugh et al. 1983). McLeod et al. (1979, 1983) have argued that isoenzyme data make it impossible to distinguish between these three taxa. From an extensive isoenzyme study of these three taxa and several other species, Loaiza-Figueroa et al. (1989) argue that "thus far, this substitution of alleles constitutes a good argument against the proposal that these species form an allozymically indistinguishable association of a single polytypic species" as advanced by Mcleod et al. (1982, 1983) and Eshbaugh et al. (1983) in their published studies. Nonetheless, Pickersgill (1984) has pointed out that "each domesticate intergrades with morphologically wild accessions by way of partially improved semidomesticates. Any subdivision of the wild complex into three taxa, each ancestral to one of the domesticates, becomes decidedly arbitrary, although clusters corresponding to wild C. annuum, C. chinense, and wild C. frutescens can be detected." Clearly, Loaiza-Figueroa et al. missed the point of these earlier papers which argue for the complexity of the problem noting that the real difficulty comes as one approaches the more primitive forms of these taxa. Furthermore, the Loaiza-Figueroa et al. (1989) dendrogram (p. 183) suggests that the number of C. chinense and C. frutescens taxa included in their study is insufficient to reach any definitive conclusion regarding the status of these three taxa. There is a very close relationship of these three taxa based on crossing data from several studies (Smith and Heiser 1957; Pickersgill 1980). Stuessy (1990) has observed that "the ability to cross does not just deal with a primitive genetic background; it deals with the degree of genetic compatibility developed in a particular evolutionary line." As Stuessy (1990) has inferred there can be no stronger argument for relationship than the data obtained from plant breeding. Regardless of one's viewpoint, it is clear that the C. annuum--C. chinense--C. frutescens complex has been and continues to be a most difficult taxonomic morass. Some preliminary information from the studies of Gounaris et al. (1986) and Mitchell et al. (1989) suggest that molecular data may be useful in resolving this and other taxonomic questions. For the present, I have chosen to recognize the Capsicum annuum complex and the Capsicum chinense complex as two distinct domesticated species. Where C. frutescens fits into this scenario remains to be resolved. William G. D' Arcy, A.T. Hunziker, and others may solve the problem by merging the three taxa under a single taxonomic entity. Taxonomists and formal taxonomy are having a very difficult time coping with what is a complex and dynamic evolutionary process. The problem is heightened by the economic importance of Capsicum and the requirement that not only the domesticated species be named properly but that the several cultivars receive taxonomic recognition. Capsicum annuum is the best known domesticated species in the world. Since the time of Columbus, it has spread to every part of the globe. The non-pungent form, bell pepper, is widely used as a green vegetable. 
Another non-pungent form, "pimento," is also present throughout much of the globe. The hot spicy forms of this species have come to dominate the spicy foods within Latin America and the rest of the world. Capsicum annuum probably became the dominant pepper globally in part because it was the first pepper discovered by Columbus and other New World explorers (Andrews in press). This taxon was the first Capsicum species taken to Europe and quickly spread to other regions. Capsicum chinense was also discovered at an early date and spread globally but to a lesser extent than C. annuum. The more limited global expansion of this species is most probably related to its later discovery in South America and the competitive edge enjoyed by C. annuum which was firmly established in the Old World before C. chinense was introduced there. A discussion of the geography of Capsicum touches on two questions. The first relates to the origin of the genus Capsicum and the second to the origin of the domesticated taxa. The area of origin of Capsicum cannot be resolved until we understand the nature of the genus. If we accept the genus as currently circumscribed and limited to pungent taxa, then a clear center of diversity is to be found ranging from southern Brazil to Bolivia (McLeod et al. 1982; Eshbaugh et al. 1983; Pickersgill 1984). However, if the genus is reconstituted to include other non-pungent taxa, another center of diversity may be recognized in Central America and southern Mexico. Ultimately, our definition of the genus Capsicum and what species it includes will determine our view of its center of origins and whether the genus is monophyletic or polyphyletic. The emerging molecular studies of J.D. Palmer and R.G. Olmstead should give us a better sense of where Capsicum belongs within the framework of the Solanaceae. Determining the place of origin of the genus and each of the domesticated species is at best a problematic exercise. In 1983, I stated that "it appears that the domesticated peppers had their center of origin in south-central Bolivia with subsequent migration and differentiation into the Andes and Amazonia." This is a condensation of a highly speculative hypothesis (McLeod et al. 1982). From that hypothesis Pickersgill (1989) later suggested that I (Eshbaugh 1983) argued that all the domesticated taxa arose in Bolivia. Without question, I could have stated this idea more clearly. We (McLeod et al. 1982) have speculatively hypothesized that Bolivia is a nuclear center of the genus Capsicum and that the origin of the domesticated taxa can ultimately be traced back to this area. That does not imply that each of the domesticated species arose in Bolivia. Clearly, evidence supports a Mexican origin of domesticated C. annuum while the other domesticated species arose in South America. Nonetheless, the ancestry of the domesticates can be traced to South America. While McLeod et al. (1982) have hypothesized a Bolivian center of origin for Capsicum there is no evidence for a polyphyletic origin of the genus as now understood. Evidence suggests that C. annuum originally occurred in northern Latin America and C. chinense in tropical northern Amazonia (Pickersgill 1971). Capsicum pubescens and C. baccatum appear to be more prevalent in lower South America. Thus, at the time of discovery, the former two species were exploited while the later two species awaited a later discovery and remain largely unexploited outside South America today. 
In considering the question of origin of each particular domesticated species two issues must be addressed. First, what wild progenitor is the most likely ancestor of each domesticated species and second, where is the most probable site of domestication? Capsicum pubescens ranges throughout mid-Andean South America. An analysis of fruit size of this domesticate indicates that fruits of a statistically smaller size occur in Bolivia, while fruits from accessions outside Bolivia on the average are somewhat larger, suggesting that Bolivian material approaches a more primitive size. Eshbaugh (1979, 1982) has argued that the origin of this domesticate can be found in the "ulupicas," C. eximium and C. cardenasii. Clearly, these two taxa are genetically closely related to each other and C. pubescens. Natural hybrids between these taxa have been reported and evaluated (Eshbaugh 1979, 1982). Furthermore, the two species that show the highest isoenzyme correlation with C. pubescens, C. eximium and C. cardenasii, occur primarily in Bolivia (Eshbaugh 1982; McLeod et al. 1983; Jensen et al. 1979). All three of these taxa form a closely knit breeding unit with the two wild taxa hybridizing to give fertile progeny with viable pollen above the ninety percent level. Crosses between the wild taxa C. eximium and C. cardenasii and the domesticate C. pubescens most often show hybrid pollen viability greater than 55%. These factors lead to the conclusion that domesticated C. pubescens originated in Bolivia and that C. eximium--C. cardenasii is the probable ancestral gene pool. This does not prove that these two taxa are the ancestors of C. pubescens but of the extant pepper taxa they represent the most logical choice. One perplexing question remains to be investigated and that is the origin of the brown/black seed coat in domesticated C. pubescens, a color unknown in any of the other pepper species. Capsicum baccatum var. pendulum is widespread throughout lowland tropical regions in South America. It ranges from coastal Peru to coastal Brazil. The wild form, recognized as C. baccatum var. baccatum, has a much more localized distribution but still ranges from Peru to Brazil. These two taxa have identical flavonoid (Ballard et al. 1970; Eshbaugh 1975) and isoenzyme profiles (McLeod et al. 1979, 1983; Jensen et al. 1979) and are morphologically indistinguishable except for the overall associated size differences found in the various organ systems of the domesticated taxon (Eshbaugh 1970). The wild form of Capsicum baccatum exhibits a high crossability index with domesticated C. baccatum var. pendulum with the progeny typically exhibiting pollen viability in excess of 55 percent (Eshbaugh 1970). The greatest center of diversity of wild C. baccatum var. baccatum is in Bolivia leading to the conclusion that this is the center of origin for this domesticate. Can we ever unscramble questions about the origin and evolution of the C. annuum--C. chinense--C. frutescens species complex? Pickersgill (1989) states that there is an "overwhelming likelihood of at least two independent domestications of the chile peppers of this complex." She also notes that one "may ... argue about whether wild forms of this complex should really be assigned to different species, and indeed whether domesticated C. annuum and domesticated C. chinense are really conspecific." I would agree that the evolutionary lineage of C. annuum--C. chinense--C. 
frutescens complex is intimately linked but I would further emphasize that when, where, and how they diverged is obscured in antiquity and that the extant wild forms of these three taxa are so similar as to make them very difficult to separate. One might well ask whether, at a minimum, C. chinense and C. frutescens are conspecific or grades within the same species. In contrast, a reasonably clear picture emerges on origin and progenitor of C. annuum. Capsicum annuum has its center of diversity in Mexico and northern Central America with a local, and more recent distribution in parts of South America. The wild bird pepper, Capsicum annuum var. aviculare, ranges from northern South America (Colombia) into the southern United States and Caribbean. Crossing studies indicate that the wild bird pepper is genetically the most closely related taxon to domesticated C. annuum (Emboden 1961; Smith and Heiser 1957; Pickersgill 1971). Pickersgill (1971), using karyotype analysis, suggests that the origin of domesticated C. annuum is to be found in southern Mexico. Pickersgill et al. (1979) also provided a detailed phenetic analysis of the C. annuum--C. chinense--C. frutescens complex and the difficulty of separating these taxa at the most primitive level is apparent. Capsicum chinense remains the least understood of the four domesticated taxa with respect to center of origin and probable progenitor. If one maps the range of forms in C. chinense, it is clear that amazonian South America is the center of diversity of this species. Furthermore, C. chinense does occur sporadically throughout the Caribbean. It is likely that C. chinense spread into the Caribbean at a later date since the diversity of taxa is more limited in that region than in amazonian South America. In considering the progenitor of C. chinense, one is bewildered by the evidence. It has been suggested that C. frutescens, in its primitive form, may be the ancestor of C. chinense (Eshbaugh et al. 1983). However, one needs to ask whether C. frutescens is merely a weedy offshoot of C. chinense or C. annuum. It is clear that the three species, C. annuum, C. frutescens, and C. chinense, hybridize with each other. They form a morphological continuum especially at a primitive level (McLeod et al. 1979). Genetic evidence from isoenzymes also confirms the close relationship of these three taxa (McLeod et al. 1983; Jensen et al. 1979). The spread of domesticated peppers throughout the world during the 500 years since discovery is truly a phenomenon. Two of the domesticated species, C. annuum var. annuum and C. chinense have been widely utilized on a global scale. Both C. baccatum var. pendulum and C. pubescens have been extensively exploited in South America but remain largely confined to that market. Given both the unique qualities and flavors of these later two species they each represent a potential source for future development. Of special interest to those working with peppers is the use and exploitation of the wild species. Wherever wild taxa of Capsicum occur, humans use them for their hot properties. In a few cases, exploitation of wild species has reached a commercial level. Capsicum praetermissum is collected and sold commercially in parts of Brazil (reported by correspondents). Capsicum chacoense and C. eximium are collected and bottled and marketed throughout southern Bolivia (pers. observ.). Fresh C. cardenasii is harvested and transported to the La Paz, Bolivia market for sale (pers. observ.). 
In Mexico and the southwestern United States wild C. annuum var. aviculare, the chiltepin, has been locally used for many years (Nabhan et al. 1989). More recently, a commercial market has developed for chiltepin. A large amount of this wild species is now harvested and sold to the gourmet food market. Nabhan et al. (1989) indicate that "currently chiltepin is almost completely wild harvested." They note that "as much as 12 tons of chiltepines may be harvested from a single Sonoran municipio in a good year, but total harvest may vary from perhaps 8 to as high as 50 tons." While the quantity of C. eximium, ulupica, being harvested in southern Bolivia is unknown, there is an extensive commercial trade in bottled whole peppers. Bolivians have not attempted to commercially plant wild plants of C. eximium, but Nabhan et al. (1989) indicate that incipient cultivation of C. annuum var. aviculare was initiated with extensive planting of the chiltepin by Sonoran farmers in the 1980s. The manipulation of these two wild species in each setting has led to some significant changes for the wild species. In both the case of C. annuum var. aviculare and C. eximium, larger fruit size has been selected for in the incipient area of cultivation and manipulation. Sonoran farmers are selecting for larger fruit size in the wild chiltepin. In C. eximium, there is a statistically significant larger fruit form of this ulupica in the zone of exploitation when compared to regions where the fruit is not widely collected (Eshbaugh 1979, 1982). In both cases we are witnessing incipient or semi-domestication of the wild species. Apparently, a market exists for the exploitation of peppers for the medicinal properties of capsaicin and several companies are pursuing such investigations. Two of the more interesting products to come to market in the last five years are the prescription drug Zostrix (Genderm registered trade mark), an analgesic cream, containing 0.025% capsaicin that is used topically to treat shingles and to provide enhanced pain relief for arthritis patients and Axsain (GalenPharma registered trade mark) that contains 0.075% capsaicin and is used for relief of neuralgias, diabetic neuropathy, and postsurgical pain. Both products are believed to work by action on a pain transmitting compound called substance P. Several pepper species, because of their unique fruit shapes and bright fruit colors, have been widely used as ornamentals. The presence of capsaicin, however is a potential hazard. In August, 1980, an expert consultative group, under the auspices of the IBPGR (International Board for Plant Genetic Resources), met at CATIE (Centro Agronomico Tropical de Investigacion y Ensenanza) in Turrialba, Costa Rica, to discuss the status of Capsicum germplasm collections and to map a strategy for future collecting and management of these resources (Genetic Resources of Capsicum 1983). The discussions led to a plan to systematically collect Capsicum throughout New World paying particular attention to the wild species most closely related to the domesticated taxa. The efforts of the past decade have resulted in a significant accumulation of pepper germplasm (seeds) that is now stored in various collections. Eshbaugh (1980a, 1981, 1988) has detailed the history of Capsicum germplasm collecting prior to 1980 and discussed the collecting efforts of peppers in Bolivia. Capsicum germplasm collections are now maintained in a number of facilities in the United States, as well as Mexico, Costa Rica, Bolivia, and Brazil.
http://ushotstuff.com/history.htm
Bank Operations Teacher Resources: teacher-approved educational resource ideas and activities. Students examine the role of money in the colonial economy by participating in a trading activity. In this colonial economy lesson plan, students complete an activity to learn about colonial trade and what happens when there is a lack of money. Students research the difficulties associated with barter and read a booklet, "Benjamin Franklin and the Birth of a Paper Money Economy," to learn about Franklin's views on the role of money in the economy. Students study land banks and inflation. Students investigate the indicators the Fed uses to determine the course of monetary policy. In this monetary policy lesson plan, students define economic indicators and the conditions they reflect and explain the three functions of the Federal Reserve System. Students explain the use of monetary policy to affect the economy in this 45-page packet of activities. Students develop an understanding of monetary policy. In this monetary policy lesson, students define economic indicators and specify the economic conditions they reflect. Students explain the three functions of the system and play a card game to review vocabulary associated with economic indicators. Students read a document to become familiar with the Federal Reserve of the United States, strengthening reading comprehension skills through exposure to expository literature. They read the document and write a summary of it. High schoolers participate in a simulation game to discover the role of banks in creating checkbook money through lending practices. They play a lending and borrowing game and use a money multiplier equation to solve problems associated with cash infusion from new reserves.
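The last activity above has students "use a money multiplier equation to solve problems associated with cash infusion from new reserves." The exact worksheet formula is not given here, so the following is only a minimal sketch of the standard textbook simple money multiplier (1 divided by the required reserve ratio); the dollar amounts are invented for illustration:

```python
# Illustrative only: the simple money multiplier is 1 / (required reserve ratio).
# An infusion of new reserves can support at most new_reserves * multiplier
# of additional checkbook money, assuming banks lend out all excess reserves.

def deposit_expansion(new_reserves: float, reserve_ratio: float) -> float:
    """Maximum new checkbook money created from an infusion of reserves."""
    if not 0 < reserve_ratio <= 1:
        raise ValueError("reserve ratio must be between 0 and 1")
    money_multiplier = 1 / reserve_ratio
    return new_reserves * money_multiplier

# Example: $1,000 of new reserves with a 10% reserve requirement
print(deposit_expansion(1_000, 0.10))  # 10000.0
```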
http://www.lessonplanet.com/lesson-plans/bank-operations/2
Matter refers to the "stuff" that everything, living or nonliving, is made of. The special type of matter which cannot be broken down is called an element. Elements cannot be broken down by chemical reactions. Carbon, hydrogen, and oxygen are examples of elements. Each element has a one- or two-letter symbol which is used to identify it. Over one hundred elements have currently been discovered. Any amount of an element will exhibit that element's chemical properties. At the center of every atom is the nucleus, a dense cluster made of two kinds of particles: protons and neutrons. Protons have a positive charge. Neutrons have a neutral charge. Electrons are tiny negatively charged particles which orbit the nucleus. The positive charge of a proton is equal to the negative charge of an electron. Atoms normally have the same number of protons as electrons, so the overall charge is neutral. Atomic mass and atomic number An element's atomic number is the number of protons in an atom of that element. An element's atomic mass is the number of protons plus the number of neutrons in an atom of that element. All isotopes of an element have the same number of protons and electrons, but they have a different number of neutrons. Different isotopes of the same element have essentially the same chemical properties, but they differ in mass. Molecules and their formation Molecules are collections of atoms formed through chemical reactions. The atoms in a molecule are held together by a bond. An ionic bond occurs when one atom donates one or more electrons to another atom. As a result, one atom becomes positively charged, and the other becomes negatively charged, so they attract one another and stay together. A covalent bond occurs when two atoms share one or more electrons to become more stable. Since atoms always tend to become more stable, the bond is not broken easily. Nonpolar molecules and polar molecules In molecules where all of the electrons are shared equally, the charge is neutral everywhere in the molecule. These molecules are nonpolar. Polar molecules are those in which the electrons are not shared equally, so some areas have a slight positive charge, and some have a slight negative charge. Molecular and structural formulas Molecular and structural formulas are used to describe the composition and structure of a molecule. Molecular formulas are simply a list of the symbol of each element in the molecule followed by the number of atoms of that element in the molecule. For example, the molecular formula for glucose is C6H12O6. When writing chemical reactions, scientists indicate that more than one molecule is present by adding a number before the molecular formula. Eight molecules of glucose would be written as 8C6H12O6. Structural formulas are actual sketches of the bonds between the atoms in a molecule. In a structural formula, the atoms are represented by their symbols, and the bonds are indicated by lines between the symbols. If two molecules have an identical molecular formula, only their structural formula can be used to tell the difference. A chemical reaction is the breaking of bonds and/or the formation of new bonds between atoms. The substances which existed before a chemical reaction are called the reactants, and the substances produced by the reaction are called the products. The number of atoms of an element which existed before a chemical reaction always equals the number of atoms of that element which existed after the chemical reaction was completed.
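Because a molecular formula is just a list of element symbols followed by atom counts, and a balanced reaction conserves the number of atoms of each element, both ideas can be checked mechanically. The short sketch below is illustrative only (the parser handles simple formulas without parentheses, and the glucose-combustion check is a standard textbook example, not something from the passage above):

```python
import re
from collections import Counter

def count_atoms(formula: str, copies: int = 1) -> Counter:
    """Count atoms in a simple molecular formula such as 'C6H12O6'."""
    counts = Counter()
    for symbol, number in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        counts[symbol] += copies * (int(number) if number else 1)
    return counts

# Glucose: C6H12O6 -> 6 carbons, 12 hydrogens, 6 oxygens
print(count_atoms("C6H12O6"))

# Atoms are conserved in the reaction C6H12O6 + 6 O2 -> 6 CO2 + 6 H2O
reactants = count_atoms("C6H12O6") + count_atoms("O2", copies=6)
products = count_atoms("CO2", copies=6) + count_atoms("H2O", copies=6)
print(reactants == products)  # True
```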
Cells use various chemical reactions to break down food, store it, and use it to drive other processes. Acids, bases, and buffers Acids and bases are a way of classifying compounds based upon what happens to them when they are placed in water. When placed in water, acids release H+ ions. On the pH scale, acids have a pH less than 7. Bases release OH- ions when placed in water. Bases have a pH greater than 7 on the pH scale. Buffers can neutralize the pH of a solution by combining with either H+ ions or OH- ions. They are helpful in unicellular organisms, since many reactions can occur only at pHs which are not too acidic or basic. Organic compounds and the importance of carbon Organic molecules are those which contain carbon, oxygen, and hydrogen. Carbon is the "backbone" of these molecules because it can form four bonds with other atoms. Organic molecules have functional groups where bonding with other molecules generally occurs. Molecules with the same functional groups usually have similar chemical properties. Organic molecules are often classified based on their functional groups. Different types of organic compounds Carbohydrates are formed by joining sugar molecules. Sugars are characterized by having the same number of carbon atoms as oxygen atoms and having twice as many hydrogen atoms. Disaccharides are carbohydrates with two sugars, and polysaccharides are carbohydrates with more than two sugars. Cells use carbohydrates to store energy and as components of many cellular structures. Lipids are composed of molecules called fatty acids. Lipids do not dissolve in water, so they are said to be hydrophobic. Phospholipids consist of two nonpolar fatty acid molecules, a polar phosphate ion, and a glycerol molecule. Phospholipids are a very important part of the cell membrane because the phosphate ion is polar and the fatty acids are nonpolar. Proteins are long chains of amino acids. All amino acids have an amino group (NH2) and a carboxyl group (COOH). The variable group differs between amino acids, giving them certain properties. Amino acids bond together to form proteins through what is called a peptide bond. Enzymes and coenzymes Enzymes are catalysts: molecules which increase the rate of a chemical reaction. All enzymes are complex protein molecules folded upon themselves to form a three-dimensional shape. The area of the enzyme at which the substrate joins is called the active site. A substrate is the molecule to which an enzyme attaches. Enzymes attach only to specific substrates which fit into the enzyme's active site. The induced-fit hypothesis states that an enzyme's active site can change slightly so that a substrate which does not match perfectly can still fit. Coenzymes are organic molecules which aid in enzyme-catalyzed reactions, but they are not proteins. Often, coenzymes bond with electrons which are released from the reaction catalyzed by the enzyme. Factors which affect the efficiency of an enzyme Competitive inhibitors have a structure similar to that of the enzyme's substrate. The enzyme may bond with the competitive inhibitor instead of the substrate, so the reaction catalyzed by the enzyme will occur at a much slower rate. Noncompetitive inhibitors do not bind to the enzyme's active site and block the substrate. Instead, they react with other portions of the enzyme, changing the shape of the active site so that the substrate cannot fit. Many enzymes have an area called a regulatory site. Molecules which attach to the regulatory site are called allosteric factors.
By joining to the regulatory site, allosteric factors can change the shape of the active site, which may either help or harm the enzyme. Acids and bases release H+ and OH- ions when dissolved in water. These ions are charged, so they can stretch and pull the enzyme's three-dimensional structure. Solutions with very high or very low pHs have many ions, enough to pull the enzyme's active site completely out of shape so that it can no longer function. Certain enzymes can function best at somewhat acidic or basic pHs. At higher temperatures, molecules move around faster, so it becomes more likely that an enzyme will come in contact with its substrate. When the temperature is too high, the enzyme may be ripped apart (denatured) so that it loses all function. At very low temperatures, the enzymes and substrates move around very slowly, so they do not come in contact very often and the reaction proceeds slowly. Diffusion is the movement of molecules from an area of high concentration to an area of low concentration. A concentration gradient is a difference in concentration between two areas. Molecules move "down" a concentration gradient; that is, toward the area with a lower concentration. Osmosis is the diffusion of water. Water potential is synonymous with water concentration; areas with a high concentration of water have a high water potential. Osmotic potential is the likelihood for osmosis to occur toward a particular area. Areas with a low concentration of water have a high osmotic potential. Terms to know acid - A substance which, when dissolved in water, releases H+ ions. allosteric factor - A molecule which attaches to the regulatory site of an enzyme, causing a change in the enzyme's structure. atom - The smallest amount of an element which still exhibits the properties of that element. An atom consists of a nucleus composed of protons and neutrons with electrons orbiting around it. atomic mass - The number of protons plus the number of neutrons in an atom of a particular element. atomic number - The number of protons in an atom of a particular element. base - A substance which, when dissolved in water, releases OH- (hydroxide) ions. buffer - A compound which tends to neutralize the pH of a solution by combining with either H+ or OH- ions. carbohydrate - An organic molecule which is formed through the joining of sugar molecules. All carbohydrates have an equal number of carbon atoms and oxygen atoms, and they have twice as many hydrogen atoms. chemical reaction - When substances (the reactants) come together and react by rearranging their bonds to form new substances (the products). coenzyme - Organic molecules which are not proteins but still aid in reactions catalyzed by enzymes. covalent bond - A bond between two atoms in which the electrons are shared. diffusion - The movement of molecules from an area of high concentration to an area of low concentration. element - Matter which cannot be further broken down by chemical reactions. enzyme - Protein molecules which catalyze chemical reactions by joining to specific reactants in the reaction called substrates. ionic bond - A bond formed as a result of the transfer of electrons from one atom to another. Since the atoms either gained or lost electrons, they became ions, and the force of attraction between oppositely charged ions holds the bond together. isotope - A form of an element with a different number of neutrons in the nucleus than normal. lipid - An organic molecule which consists of fatty acids. Lipids do not dissolve in water.
matter - The "stuff" that everything is made of. molecular formula - A way of describing a molecule by writing the symbol for each element in the molecule followed by a subscript indicating the number of atoms of that element in the molecule. molecule - A combination of atoms through chemical bonds. organic molecule - Any molecule which contains atoms of carbon, hydrogen, and oxygen. Its carbon backbone allows for a complex structure, and its functional groups allow it to form bonds to other molecules. osmosis - A special name for the diffusion of water. product - Any molecule formed as a result of a chemical reaction. protein - A type of organic molecule which consists of a long chain of amino acid molecules. They are synthesized by ribosomes based on the cell's genetic code. reactant - A substance which existed before a chemical reaction. structural formula - A method of writing a molecule by actually sketching its physical structure. Atoms are represented by their symbols, and bonds between atoms are indicated by lines. substrate - The substance to which an enzyme binds in catalyzing a chemical reaction.
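The pH scale mentioned under acids and bases above is logarithmic. The passage does not give the formula, but the standard definition is pH = -log10 of the hydrogen-ion concentration in moles per liter; the concentrations below are arbitrary example values, included only to make the acidic/neutral/basic ranges concrete:

```python
import math

def pH(h_ion_concentration: float) -> float:
    """pH = -log10 of the hydrogen-ion concentration in mol/L (standard definition)."""
    return -math.log10(h_ion_concentration)

def classify(ph_value: float) -> str:
    if ph_value < 7:
        return "acidic"
    if ph_value > 7:
        return "basic"
    return "neutral"

for conc in (1e-3, 1e-7, 1e-11):  # illustrative concentrations in mol/L
    value = pH(conc)
    print(f"[H+] = {conc:.0e} mol/L -> pH {value:.1f} ({classify(value)})")
```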
http://library.thinkquest.org/27819/ch2_rvw.html
The transition elements are the elements that make up Groups 3 through 12 of the periodic table. These elements, all of which are metals, include some of the best-known names on the periodic table—iron, gold, silver, copper, mercury, zinc, nickel, chromium, and platinum among them. A number of other transition elements are probably somewhat less familiar, although they have vital industrial applications. These elements include titanium, vanadium, manganese, zirconium, molybdenum, palladium, and tungsten. One member of the transition family deserves special mention. Technetium (element #43) is one of only two "light" elements that does not occur in nature. It was originally produced synthetically in 1937 among the products of a cyclotron reaction. The discoverers of technetium were Italian physicists Carlo Perrier and Emilio Segré (1905–1989). The transition elements share many physical properties in common. With the notable exception of mercury, the only liquid metal, they all have relatively high melting and boiling points. They also have a shiny, lustrous, metallic appearance that may range from silver to gold to white to gray. In addition, the transition metals share some chemical properties. For example, they tend to form complexes, compounds in which a group of atoms cluster around a single metal atom. Ordinary copper sulfate, for example, normally occurs in a configuration that includes four water molecules surrounding a single copper ion. Transition element complexes have many medical and industrial applications. Another common property of the transition elements is their tendency to form colored compounds. Some of the most striking and beautiful chemical compounds known are those that include transition metals. Copper compounds tend to be blue or green; chromium compounds are yellow, orange, or green; nickel compounds are blue, green, or yellow; and manganese compounds are purple, black, or green. Words to Know Amalgam: An alloy that contains mercury. Basic oxygen process (BOP): A method for making steel in which a mixture of pig iron, scrap iron, and scrap steel is melted in a large steel container and a blast of pure oxygen is blown through the container. Bessemer convertor: A device for converting pig iron to steel in which a blast of hot air is blown through molten pig iron. Blast furnace: A structure in which a metallic ore (often, iron ore) is reduced to the elemental state. Cast iron: A term used to describe various forms of iron that also contain anywhere from 0.5 to 4.2 percent carbon and 0.2 to 3.5 percent silicon. Complex: A chemical compound in which a single metal atom is surrounded by two or more groups of atoms. Ductile: Capable of being drawn or stretched into a thin wire. Electrolytic cell: A system in which electrical energy is used to bring about chemical changes. Electrolytic copper: A very pure form of copper. Malleable: Capable of being rolled or hammered into thin sheets. Open hearth process: A method for making steel in which a blast of hot air or oxygen is blown across the surface of a molten mixture of pig iron, hematite, scrap iron, and limestone in a large brick container. Patina: A corrosion-resistant film that often develops on copper surfaces. Pig iron: A form of iron consisting of approximately 90 percent pure iron and the remaining 10 percent of various impurities. Slag: A by-product of the reactions by which iron is produced, consisting primarily of calcium silicate. 
The discussion that follows focuses on only three of the transition elements: iron, copper, and mercury. These three elements are among the best known and most widely used of all chemical elements. Iron is the fourth most abundant element in Earth's crust, following oxygen, silicon, and aluminum. In addition, Earth's core is believed to consist largely of iron. The element rarely occurs in an uncombined form but is usually found as a mineral such as hematite (iron[III] oxide), magnetite (lodestone, a mixture of iron[II] and iron[III] oxides), limonite (hydrated iron[III] oxide), pyrite (iron sulfide), and siderite (iron[II] carbonate). Properties. Iron is a silver-white or gray metal with a melting point of 2,795°F (1,535°C) and a boiling point of 4,982°F (2,750°C). Its chemical symbol, Fe, is taken from the Latin name for iron, ferrum. It is both malleable and ductile. Malleability is a property common to most metals, meaning that a substance can be hammered into thin sheets. Many metals are also ductile, meaning that they can be drawn into a fine wire. In a pure form, iron is relatively soft and slightly magnetic. When hardened, it becomes much more magnetic. Iron is the most widely used of all metals. Prior to its use, however, it must be treated in some way to improve its properties, or it must be combined with one or more other elements (in this case, another metal) to form an alloy. By far the most popular alloy of iron is steel. One of the most common forms of iron is pig iron, produced by smelting iron ore with coke (nearly pure carbon) and limestone in a blast furnace. (Smelting is the process of obtaining a pure metal from its ore.) Pig iron is approximately 90 percent pure iron and is used primarily in the production of cast iron and steel. Cast iron is a term used to describe various forms of iron that also contain carbon and silicon ranging in concentrations from 0.5 to 4.2 percent of the former and 0.2 to 3.5 percent of the latter. Cast iron has a vast array of uses in products ranging from thin rings to massive turbine bodies. Wrought iron contains small amounts of a number of other elements, including carbon, silicon, phosphorus, sulfur, chromium, nickel, cobalt, copper, and molybdenum. Wrought iron can be fabricated into a number of forms and is widely used because of its resistance to corrosion. How iron is obtained. Iron is one of the handful of elements that was known to ancient civilizations. Originally it was prepared by heating a naturally occurring ore of iron with charcoal in a very hot flame. The charcoal was obtained by heating wood in the absence of air. There is some evidence that this method of preparation was known as early as 3000 B.C. But the secret of ore smelting was carefully guarded within the Hittite civilization of the Near East for almost 2,000 years. Then, when that civilization fell in about 1200 B.C. , the process of iron ore smelting spread throughout eastern and southern Europe. Iron-smiths were soon making ornamental objects, simple tools, and weapons from iron. So dramatic was the impact of this new technology on human societies that the period following 1200 B.C. is generally known as the Iron Age. A major change in the technique for producing iron from its ores occurred around 1709. As trees (and therefore the charcoal made from them) grew increasingly scarce in Great Britain, English inventor Abraham Darby (c. 1678–1717) discovered a method for making coke from soft coal. 
Since coal was abundant in the British Isles, Darby's technique provided for a consistent and dependable method of converting iron ores to the pure metal. The modern production of iron involves heating iron ore with coke and limestone in a blast furnace, where temperatures range from 400°F (200°C) at the top of the furnace to 3,600°F (2,000°C) at the bottom. Some blast furnaces are as tall as 15-story buildings and can produce 2,400 tons (2,180 metric tons) of iron per day. Inside a blast furnace, a number of chemical reactions occur. One of these involves the reaction of coke (nearly pure carbon) with oxygen to form carbon monoxide. This carbon monoxide then reacts with iron ore to form pure iron and carbon dioxide. Limestone is added to the reaction mixture to remove impurities in the iron ore. The product of this reaction, known as slag, consists primarily of calcium silicate. The iron formed in a blast furnace exists in a molten form (called pig iron) that can be drawn off at the bottom of the furnace. The slag also is molten but less dense than the iron. It is drawn off from taps just above the outlet from which the molten iron is removed. Early efforts to use pig iron for commercial and industrial applications were not very successful. The material proved to be quite brittle, and objects made from it tended to break easily. Cannons made of pig iron, for example, were likely to blow apart when they fired a shell. By 1760, inventors had begun to find ways of toughening pig iron. These methods involved remelting the pig iron and then burning off the carbon that remained mixed with the product. The most successful early device for accomplishing this step was the Bessemer converter, named after its English inventor, Henry Bessemer (1813–1898). In the Bessemer converter, a blast of hot air is blown through molten pig iron. The process results in the formation of stronger forms of iron: cast and wrought iron. More importantly, when additional elements such as manganese and chromium are added to the converter, a new product—steel—is formed. Later inventions improved on the production of steel by the Bessemer converter. In the open hearth process, for example, a mix of molten pig iron, hematite, scrap iron, and limestone is placed into a large brick container. A blast of hot air or oxygen is then blown across the surface of the molten mixture. Chemical reactions within the molten mixture result in the formation of either pure iron or, with the addition of alloying metals such as manganese or chromium, a high grade of steel. An even more recent variation on the Bessemer converter concept is the basic oxygen process (BOP). In the BOP, a mixture of pig iron, scrap iron, and scrap steel is melted in a large steel container and a blast of pure oxygen is blown through the container. The introduction of alloying metals makes possible the production of various types of steel with many different properties. Uses of iron. Alloyed with other metals, iron is the most widely used of all metallic elements. The way in which it is alloyed determines the uses to which the final product is put. Steel, for example, is a general term used to describe iron alloyed with carbon and, in some cases, with other elements. The American Iron and Steel Institute recognizes 27 standard types of steel. Three of these are designated as carbon steels that may contain, in addition to carbon, small amounts of phosphorus and/or sulfur. 
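The blast-furnace chemistry just described can be summarized with the standard balanced equations. These textbook forms are consistent with the description above (coke burning to carbon monoxide, the monoxide reducing the ore, and limestone removing silicate impurities as slag), but they are not quoted from the article itself:

```
2 C + O2      -> 2 CO            (coke burns to carbon monoxide)
Fe2O3 + 3 CO  -> 2 Fe + 3 CO2    (carbon monoxide reduces hematite to iron)
CaCO3         -> CaO + CO2       (limestone decomposes in the heat)
CaO + SiO2    -> CaSiO3          (lime combines with sandy impurities to form slag)
```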
Another 20 types of steel are made of iron alloyed with one or more of the following elements: chromium, manganese, molybdenum, nickel, silicon, and vanadium. Finally, four types of stainless and heat-resisting steels contain some combination of chromium, nickel, and manganese alloyed with iron. Steel is widely used in many types of construction. It has at least six times the strength of concrete, another traditional building material, and about three times the strength of special forms of high-strength concrete. A combination of these two materials—called reinforced concrete—is one of the strongest of all building materials available to architects. The strength of steel has made possible some remarkable feats of construction, including very tall buildings (skyscrapers) and bridges with very wide spans. It also has been used in the manufacture of automobile bodies, ship hulls, and heavy machinery and machine parts. Metallurgists (specialists in the science and technology of metals) have invented special iron alloys to meet very specific needs. Alloys of cobalt and iron (both magnetic materials themselves) can be used in the manufacture of very powerful permanent magnets. Steels that contain the element niobium (originally called columbium) have unusually great strength and have been used, among other places, in the construction of nuclear reactors. Tungsten steels also are very strong and have been used in the production of high-speed metal cutting tools and drills. The alloying of aluminum with iron produces a material that can be used in AC (alternating current) magnetic circuits since it can gain and lose magnetism very quickly. Metallic iron has other applications as well. Its natural magnetic properties make it suitable for both permanent magnets and electromagnets. It also is used in the production of various types of dyes, including blueprint paper and certain inks, and in the manufacture of abrasives. Biochemical applications. Iron is essential to the survival of all vertebrates. Hemoglobin, the molecule in blood that transports oxygen from the lungs to an organism's cells, contains a single iron atom buried deep within its complex structure. When humans do not take in sufficient amounts of iron in their daily diets, they may develop a disorder known as anemia. Anemia is characterized by a loss of skin color, a weakness and tendency to faint, palpitation of the heart, and a general sense of exhaustion. Iron also is important to the good health of plants. It is found in a group of compounds known as porphyrins (pronounced POUR-fuhrinz) that play an important role in the growth and development of plant cells. Plants that lack iron have a tendency to lose their color, become weak, and die. Copper is one of only two metals with a distinctive color (the other being gold). Copper is often described as having a reddish-brown hue. It has a melting point of 1,985°F (1,085°C) and a boiling point 4,645°F (2,563°C). Its chemical symbol, Cu, is derived from the Latin name for the element, cuprum. Copper is one of the elements that is essential to life in tiny amounts (often referred to as trace elements), although larger amounts can be toxic. About 0.0004 percent of the weight of the human body is copper. It can be found in such foods as liver, shellfish, nuts, raisins, and dried beans. Copper also is found in an essential biochemical compound known as hemocyanin. Hemocyanin is chemically similar to the red hemoglobin found in human blood, which has an iron atom in the center of its molecule. 
By contrast, hemocyanin contains an atom of copper rather than iron in its core. Lobsters and other large crustaceans have blue blood whose color is caused by the presence of hemocyanin. History of copper. Copper was one of the first metals known to humans. One reason for this fact is that copper occurs not only as ores (compounds that must be converted to metal) but occasionally as native copper, a pure form of the element found in the ground. In prehistoric times an early human could actually find a chunk of pure copper in the earth and hammer it into a tool with a rock. Native copper was mined and used in the Tigris-Euphrates valley (modern Iraq) as long as 7,000 years ago. Copper ores have been mined for at least 5,000 years because it is fairly easy to get the copper out of the ore. For example, if an ore of copper oxide is heated in a wood fire, the carbon in the charcoal reacts with oxygen in the oxide and converts it to pure copper metal. Making pure copper. Extremely pure copper (greater than 99.95 percent purity) is generally called electrolytic copper because it is made by the process known as electrolysis. Electrolysis is a reaction by which electrical energy is used to bring about some kind of chemical change. The high purity is needed because most copper is used to make electrical equipment. Small amounts of impurities present in copper can seriously reduce its ability to conduct electricity. Even 0.05 percent of arsenic as an impurity, for example, will reduce copper's conductivity by 15 percent. Electric wires must be made of very pure copper, especially if the electricity is to be carried for many miles through high-voltage transmission lines. Uses of copper. By far the most important use of copper is in electrical wiring. It is an excellent conductor of electricity (second only to silver), it can be made extremely pure, it corrodes very slowly, and it can be formed easily into thin wires. Copper is also an important ingredient of many useful alloys. (An alloy is a mixture of one metal with another to improve on the original metal's properties.) Brass is an alloy of copper and zinc. If the brass contains mostly copper, it is a golden yellow color; if it contains mostly zinc, it is pale yellow or silvery. Brass is one of the most useful of all alloys. It can be cast or machined into everything from candlesticks to cheap, gold-imitating jewelry (but this type of jewelry often turns human skin green—the copper reacts with salts and acids in the skin to form green copper chloride and other compounds). Several other copper alloys include: bronze, which is mainly copper plus tin; German silver and sterling silver, which consist of silver plus copper; and silver tooth fillings, which contain about 12 percent copper. Probably the first alloy ever to be made and used by humans was bronze. Archaeologists broadly divide human history into three periods. The Bronze Age (c. 4000–3000 B.C. ) is the second of these periods, occurring after the Stone Age and before the Iron Age. During the Bronze Age, both bronze and pure copper were used for making tools and weapons. Because it resists corrosion and conducts heat well, copper is widely used in plumbing and heating applications. Copper pipes and tubing are used to distribute hot and cold water through houses and other buildings. Copper's superior ability to conduct heat also makes it useful in the manufacture of cooking utensils such as pots and pans. 
An even temperature across the pan bottom is important for cooking so food doesn't burn or stick to hot spots. The insides of the pans must be coated with tin, however, to keep excessive amounts of copper from seeping into the food. Copper corrodes only slowly in moist air—much more slowly than iron rusts. First, it darkens in color because of the formation of a thin layer of black copper oxide. Then, as the years go by, the copper oxide is converted into a bluish-green patina (a surface appearance that comes with age) of basic copper carbonate. The green color of the Statue of Liberty, for example, was formed in this way. Mercury, the only liquid metal, has a beautiful silvery color. Its chemical symbol, Hg, comes from the Latin name of the element, hydrargyrum, for "liquid silver." Mercury has a melting point of −38°F (−39°C) and a boiling point of 673°F (356°C). Its presence in Earth's crust is relatively low compared to other elements, equal to about 0.08 parts per million. Mercury is not considered to be rare, however, because it is found in large, highly concentrated deposits. Nearly all mercury exists in the form of a red ore called cinnabar, or mercury (II) sulfide. Sometimes shiny globules of mercury appear among outcrops of cinnabar, which is probably the reason that mercury was discovered so long ago. The metal is relatively easy to extract from the ore. In fact, the modern technique for extracting mercury is nearly identical in principle to the method used centuries ago. Cinnabar is heated in the open air. Oxygen in the air reacts with sulfur in the cinnabar, producing pure mercury metal. The mercury metal vaporizes and is allowed to condense on a cool surface, from which it can be collected. Mercury does not react readily with air, water, acids, alkalis, or most other chemicals. It has a surface tension six times greater than that of water. Surface tension refers to the tendency of a liquid to form a tough "skin" on its surface. The high surface tension of mercury explains its tendency not to "wet" surfaces with which it comes into contact. No one knows exactly when mercury was discovered, but many ancient civilizations were familiar with this element. As long ago as Roman times, people had learned to extract mercury from ore and used it to purify gold and silver. Ore containing gold or silver would be crushed and treated with mercury, which rejects impurities, to form a mercury alloy, called an amalgam. When the amalgam is heated, the mercury vaporizes, leaving pure gold or silver behind. Toxicity. Mercury and all of its compounds are extremely poisonous. The element also has no known natural function in the human body. Classified as a heavy metal, mercury is difficult for the body to eliminate. This means that even small amounts of the metal can act as a cumulative poison, collecting over a long period of time until it reaches a dangerous level. Humans can absorb mercury through any mucous membrane and through the skin. Its vapor can be inhaled, and mercury can be ingested in foods such as fish, eggs, meat, and grain. In the body, mercury affects the nervous system, liver, and kidneys. Symptoms of mercury poisoning include tremors, tunnel vision, loss of balance, slurred speech, and unpredictable emotions. (Tunnel vision is a narrowing of the visual field so that peripheral vision—the outer part of the field of vision that encompasses the far right and far left sides—is completely eliminated.)
The phrase "mad as a hatter" owes its origin to symptoms of mercury poisoning that afflicted hatmakers in the 1800s, when a mercury compound was used to prepare beaver fur and felt materials. Until recently, scientists thought that inorganic mercury was relatively harmless. As a result, industrial wastes containing mercury were routinely discharged into large bodies of water. Then, in the 1950s, more than 100 people in Japan were poisoned by fish containing mercury. Forty-three people died, dozens more were horribly crippled, and babies born after the outbreak developed irreversible damage. It was found that inorganic mercury in industrial wastes had been converted to a much more harmful organic form known as methyl mercury. As this substance works its way up the food chain, its quantities accumulate to dangerous levels in larger fish. Today, the dumping of mercury-containing wastes has been largely banned, and many of its industrial uses have been halted. Uses. Mercury is used widely in a variety of measuring instruments and devices, such as thermometers, barometers, hydrometers, and pyrometers. It also is used in electrical switches and relays, in mercury arc lamps, and for the extraction of gold and silver from amalgams. A small amount is still used in the preparation of amalgams for dental repairs. The largest single use of mercury today, however, is in electrolytic cells, in which sodium chloride is converted to metallic sodium and gaseous chlorine. The mercury is used to form an amalgam with sodium in the cells.
http://www.scienceclarified.com/Ti-Vi/Transition-Elements.html
Economics For Dummies (UK Edition) Economics is the science that studies how people and societies make decisions that allow them to get the most out of their limited resources. Because every country, every business, and every person deals with constraints and limitations, economics is literally everywhere. This Cheat Sheet gives you some of the basic essential information about economics. The Big Definitions in Economics When studying any subject, a key first step is to learn the lingo. Here are definitions for three of the most important words in economics: Economics studies how people allocate resources among alternative uses. The reason people have to make choices is scarcity, the fact that we don’t have enough resources to satisfy all our wants. Microeconomics studies the maximizing behaviour of individual people and individual firms. Economists assume that people work toward maximizing their utility, or happiness, while firms act to maximize profits. Macroeconomics studies national economies, concentrating on economic growth and how to prevent and ameliorate recessions. Macroeconomics and Government Policy Economists use gross domestic product (GDP) to keep track of how an economy is doing. GDP measures the value of all final goods and services produced in an economy in a given period of time, usually a quarter or a year. A recession occurs when GDP is decreasing. An expansion occurs when GDP is increasing. The unemployment rate measures what fraction of the labour force cannot find jobs. The unemployment rate rises during recessions and falls during expansions. Anti-recessionary economic policies come in two flavours: Monetary policy uses an increase in the money supply to lower interest rates. Lower interest rates make loans for cars, homes, and investment goods cheaper, which means consumption spending by households and investment spending by businesses increase. Fiscal policy refers to using either an increase in government purchases of goods and services or a decrease in taxes to stimulate the economy. The government purchases increase economic activity directly, while the tax reductions are designed to increase household spending by leaving households more after-tax monies to spend. Types of Industries by Economic Definition To help them to make sense of industries in which firms are interacting, economists group industries into three basic structures. These three structures are as follows: Perfect competition happens in an industry when numerous small firms compete against each other. Firms in a competitive industry produce the socially optimal output level at the minimum possible cost per unit. A monopoly is a firm that has no competitors in its industry. It reduces output to drive up prices and increase profits. By doing so, it produces less than the socially optimal output level and produces at higher costs than competitive firms. An oligopoly is an industry with only a few firms. If they collude, they reduce output and drive up profits the way a monopoly does. However, because of strong incentives to cheat on collusive agreements, oligopoly firms often end up competing against each other. What Is Market Equilibrium? Buyers and sellers interact in markets. The market equilibrium price, p*, and equilibrium quantity, q*, are determined by where the demand curve of the buyers, D, crosses the supply curve of the sellers, S. 
In the absence of externalities (costs or benefits that fall on persons not directly involved in an activity), the market equilibrium quantity, q*, is also the socially optimal output level. For each unit from 0 up to q*, the demand curve is above the supply curve, meaning that people are willing to pay more to buy those units than they cost to produce. There are gains from producing and then consuming those units. Market Failures from an Economic Perspective Several prerequisites must be fulfilled before perfect competition and free markets can work properly and generate the socially optimal output level. Several common problems include the following: Externalities caused by incomplete or nonexistent property rights: Without full and complete property rights, markets are unable to take all the costs of production into account. Asymmetric information: If a buyer or seller has private information that gives her an edge when negotiating a deal, the opposite party may be too suspicious for them to reach a mutually agreeable price. The market may collapse, with no trades being made. Public goods: Some goods have to be provided by the government or philanthropists. Private firms can’t make money producing them because there’s no way to exclude non-payers from receiving the good.
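The equilibrium described above — the point where the buyers' demand curve crosses the sellers' supply curve — can be made concrete with linear curves. This is only an illustrative sketch; the coefficients are invented and do not come from the Cheat Sheet:

```python
# Linear demand  P = a - b*Q  and linear supply  P = c + d*Q.
# Setting them equal gives the equilibrium quantity q* and price p*
# where the two curves cross.

def equilibrium(a: float, b: float, c: float, d: float) -> tuple[float, float]:
    q_star = (a - c) / (b + d)   # a - b*q = c + d*q  =>  q* = (a - c) / (b + d)
    p_star = a - b * q_star      # price read off the demand curve at q*
    return q_star, p_star

q_star, p_star = equilibrium(a=100, b=2, c=10, d=1)
print(f"q* = {q_star:.1f}, p* = {p_star:.1f}")  # q* = 30.0, p* = 40.0
```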
http://www.dummies.com/how-to/content/economics-for-dummies-cheat-sheet-uk-edition.html
Inflation is an increase in the average price level of goods and services over time. It happens when the total of all goods and services demanded exceeds production, or when there is a decrease in the amount of all goods and services supplied by producers. Let's take a look at two economic scenarios. If business is booming, unemployment is low, and wages are increasing, consumers have more disposable income available to purchase goods and services. Therefore, average prices will tend to rise due to the increase in demand for all goods and services. However, if the economy is suffering and wages remain stagnant, consumers are unable to purchase additional goods and services. As a result, producers slow down production and raise prices to cut their losses, and average prices rise due to a decrease in the supply of all goods and services. Of course, consumers are not the only market participants that affect the economy. Businesses, government agencies, and foreign markets spend billions of dollars on U.S. goods and services, and their spending can also influence supply and demand, which, in turn, can result in inflation. Inflation and Economic Policy Decisions Some inflation is normal in a healthy economy; one of the United States' economic policy goals is to maintain a 0-3% annual inflation rate. However, too much inflation, or no inflation at all, is a bad sign. Two types of federal economic policies are used to control the economy and manipulate inflation. Fiscal policy, made by Congress, uses taxation and spending to promote employment, stabilize prices, and boost economic growth; monetary policy, controlled by the Federal Reserve Bank (the Fed), manipulates the money supply and short-term interest rates to spur growth or control inflation. Congress and the Fed look at the monthly Consumer Price Index (CPI) when making policy decisions. The CPI gauges the average change in prices paid by urban consumers for a fixed market basket of goods and services over a period of time. If the CPI rises too fast, Congress or the Fed will take measures to bring the rate of inflation down. The Fed reacts more quickly, since Congress requires political debate and the passage of legislation before fiscal decisions can be carried out. Inflation and Personal Economics In sound economic times, price increases are usually accompanied by wage increases that keep pace with inflation. However, during downturns in the economy when wages remain level, the cost of living increases and purchasing power diminishes. One of your greatest challenges in saving and investing will be making your investments work harder to exceed inflation. Therefore, you should always take inflation into consideration when you save, invest, and make purchase decisions.
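The CPI-based inflation rate that Congress and the Fed watch is simply the percentage change in the index between two periods. A minimal sketch with invented index values (not actual CPI data):

```python
def inflation_rate(cpi_start: float, cpi_end: float) -> float:
    """Percentage change in the Consumer Price Index between two periods."""
    return (cpi_end - cpi_start) / cpi_start * 100

# Invented example: the CPI rises from 250.0 to 256.5 over one year
rate = inflation_rate(250.0, 256.5)
print(f"annual inflation: {rate:.1f}%")              # 2.6%

# Purchasing power of $100 after that year of inflation
print(f"real value of $100: ${100 / (1 + rate / 100):.2f}")  # about $97.47
```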
http://www.pgpalmer.com/page.jsp?pagenum=27&type=cpa&decider=pgpalmer2
Recall the basic definition for a derivative:

f'(x) = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h}

This definition gives the instantaneous slope of a line tangent to a curve. We also write the derivative as f'(x) [f prime of x]. From the above equation, with just a little algebra, you can derive the general formula for polynomials:

\frac{d}{dx} x^n = n x^{n-1}

There are a few basic rules that will allow you to apply this to a large number of functions. The product rule states that

\frac{d}{dx}\left[f(x)\,g(x)\right] = f'(x)\,g(x) + f(x)\,g'(x)

The quotient rule states that

\frac{d}{dx}\left[\frac{f(x)}{g(x)}\right] = \frac{f'(x)\,g(x) - f(x)\,g'(x)}{g(x)^2}

If you have a hard time remembering the order of f(x) and g(x) in the quotient rule, you can also treat f(x)/g(x) as the product of f(x) and 1/g(x). This has the form

\frac{d}{dx}\left[f(x)\,g(x)^{-1}\right] = f'(x)\,g(x)^{-1} - f(x)\,g(x)^{-2}\,g'(x)

which is completely equivalent to the quotient rule. Note that we used the polynomial rule here since 1/g(x) = g(x)^{-1}. In general, if you are given a function in the denominator, just write it as a negative exponent first. This will make taking the derivative much easier. For example,

\frac{d}{dx}\left[\frac{1}{f(x)}\right] = \frac{d}{dx}\,f(x)^{-1} = (-1)\,f(x)^{-2}\,f'(x)

where we treated the function 1/f(x) as f(x)^{-1} and therefore n = -1 and n f(x)^{n-1} = (-1) f(x)^{-2}. In this example, we have used the chain rule. The chain rule applies when one function is "buried" inside another, e.g. g(f(x)). First, take the derivative with respect to g(x) treating the whole of f(x) as the variable, then take the derivative with respect to f(x). For the Gaussian function e^{-ax^2},

\frac{d}{dx}\,e^{-ax^2} = e^{-ax^2} \cdot \frac{d}{dx}(-ax^2) = -2ax\,e^{-ax^2}

In this example, we take the derivative of a Gaussian function with respect to x. Note that here g(f(x)) = e^{f(x)} and f(x) = -ax^2. The derivative of an exponential is the exponential itself times the derivative of the exponent.

Functions of more than one variable: If we have a function of more than one variable, f(x,y), we can take the derivative with respect to either one. These are called partial derivatives with respect to x or y (or whatever the variable is). The partial derivative with respect to x is

\left(\frac{\partial f}{\partial x}\right)_y

The partial derivative with respect to y is

\left(\frac{\partial f}{\partial y}\right)_x

The total derivative is

df = \left(\frac{\partial f}{\partial x}\right)_y dx + \left(\frac{\partial f}{\partial y}\right)_x dy

We say the total derivative is an exact differential if the second cross derivatives are equal:

\frac{\partial^2 f}{\partial y\,\partial x} = \frac{\partial^2 f}{\partial x\,\partial y}

If these cross derivatives are not equal, the total derivative is not an exact differential. In physical chemistry this is important because state functions are exact differentials and path functions are inexact differentials. A state function has the same magnitude regardless of the path taken: the integral of df between two states is the same for any path, which is equivalent to

\oint df = 0

If the total derivative is not exact, then

\oint df \neq 0

For example, in thermodynamics we show that the internal energy is a state function, but the work and the heat are path functions.
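The rules above can be checked symbolically. The sketch below uses SymPy (not part of the original page) to verify the Gaussian-derivative example, the quotient rule, and the exactness test for cross partial derivatives; the function x**2 * y + sin(x*y) is an arbitrary example chosen for illustration:

```python
import sympy as sp

x, y, a = sp.symbols("x y a")

# Chain rule on the Gaussian: d/dx exp(-a*x**2) = -2*a*x*exp(-a*x**2)
gaussian = sp.exp(-a * x**2)
print(sp.diff(gaussian, x))                              # -2*a*x*exp(-a*x**2)

# Quotient rule check on f/g = x**2 / sin(x)
f, g = x**2, sp.sin(x)
quotient_rule = (sp.diff(f, x) * g - f * sp.diff(g, x)) / g**2
print(sp.simplify(sp.diff(f / g, x) - quotient_rule))    # 0

# Exact-differential test: the mixed second partials of f(x, y) must agree
func = x**2 * y + sp.sin(x * y)
print(sp.diff(func, x, y) - sp.diff(func, y, x))         # 0
```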
http://chemwiki.ucdavis.edu/Wikitexts/Misc/VV%3A_Mathematical_Concepts/Differentiation
Meningitis is inflammation of the meninges that results in swelling of brain tissue and sometimes spinal tissue (spinal meningitis). Swelling inhibits the flow of blood and oxygen to brain tissue. The characteristic symptoms of meningitis are stiff neck, severe headache, and fever. The meninges are three ultrathin membranes that surround and protect the brain and a portion of the spinal cord: the outer membrane (dura mater), middle membrane (arachnoid), and inner membrane (pia mater). Meningitis is either infectious (contagious) or noninfectious. Infectious meningitis is classified as viral, bacterial, fungal, or parasitic, depending on the type of organism causing the infection. Viral meningitis, also called aseptic meningitis, is the most common type. It is rarely fatal and usually resolves with treatment. Meningitis develops in fewer than 1 in 1000 people who are infected with one of the viruses associated with the condition. Bacterial meningitis is often severe and is considered a potential medical emergency. If left untreated, bacterial meningitis may be fatal or cause serious long-term complications. Because bacterial meningitis can progress rapidly, it is important to identify the bacteria and begin antibiotic treatment as soon as possible. Bacterial infection in the ears, mouth, or sinuses can spread directly to the brain and spinal cord. Some types of bacteria are transmitted from person to person through secretions from the mouth and nose. Fungal meningitis develops in patients with conditions that compromise the effectiveness of their immune systems (e.g., HIV/AIDS, lupus, diabetes). Fungal meningitis occurs in 10% of patients with AIDS. Cryptococcus neoformans and Candida albicans are commonly involved in fungal meningitis. Parasitic meningitis is more common in underdeveloped countries and usually is caused by parasites found in contaminated water, food, and soil. Noninfectious meningitis may develop as a complication of another illness (e.g., mumps, tuberculosis, syphilis). A break in the skin and/or bones in the face or skull (caused by birth defect, brain surgery, head injury) can allow bacteria to enter the body. Rarely, meningitis can be caused by exposure to certain medications. Incidence and Prevalence Most (approx. 70%) cases of meningitis occur in children under the age of 5 and people over the age of 60. In the United States, bacterial meningitis affects about 3 in 100,000 people each year, and viral meningitis affects about 10 in 100,000. Hib vaccine has reduced U.S. incidence of bacterial meningitis caused by Haemophilus influenzae type b by approximately 90%. The disease is more prevalent in people between the ages of 15 and 24 who have not been vaccinated. Worldwide, bacterial resistance to penicillin and other antibiotics and the lack of access to vaccines account for rising rates of bacterial meningitis. The primary risk factor for meningitis is a suppressed immune system. Not receiving the mumps, Haemophilus influenzae type b, and pneumococcal (children aged 2 and younger) vaccines increases the risk for meningitis. Age is also a risk factor for meningitis. It is more common in people younger than 5 years old and those older than 60. People between the ages of 15 and 24 who live in boarding schools and college dormitories are also at increased risk. Living and working with large groups of people (e.g., military bases, child care facilities) increases the risk for infectious meningitis.
People who work with domestic animals (e.g., dairy farmers, ranchers) and pregnant women are at increased risk for meningitis associated with listeriosis (disease transmitted from animals to humans via soil). Listeriosis can be transmitted from mother to fetus through the placenta, causing spontaneous abortion. The disease is usually fatal in newborns. Head injuries and brain surgery also put patients at risk for meningitis. Viruses and bacteria that spread to or directly infect the central nervous system cause most cases of infectious meningitis. About 90% of cases of viral meningitis are caused by one of the enteroviruses (e.g., coxsackievirus, echovirus, poliovirus). Mumps, herpesvirus, and arboviruses (transmitted by insect bites) also may cause viral meningitis. About 30% of mumps cases in people not vaccinated for the disease develop meningitis. Common causes of bacterial meningitis include Streptococcus pneumoniae, Neisseria meningitidis, Staphylococcus aureus, Escherichia coli, and Staphylococcus epidermidis. Prior to the 1990s, Haemophilus influenzae type b was the primary cause, but widespread vaccination (Hib vaccine) has greatly reduced the incidence of this infection. Candida albicans, Cryptococcus neoformans, and Histoplasma are often involved in cases of fungal meningitis. Noninfectious meningitis has several possible causes. Signs and Symptoms Symptoms of bacterial meningitis are usually acute, developing within a few hours and lasting 2 to 3 weeks. It is important to seek immediate medical attention when symptoms occur, because acute bacterial meningitis can be fatal within hours. Viral meningitis may develop suddenly or within days or weeks, depending on the virus and the overall health of the patient. Characteristic symptoms of both viral and bacterial meningitis are stiff neck, headache, and fever. Symptoms may develop over the course of a few hours (acute bacterial meningitis) or a few days. Some patients experience cough, runny nose, and congestion prior to developing other symptoms. Other signs and symptoms of meningitis may also occur, and symptoms of meningitis in infants may be difficult to detect. Complications can develop during the course of meningitis; prompt medical treatment decreases the risk for brain damage and long-term complications. Severe bacterial meningitis also may cause the head and heels to bend backward and the body to bow forward (called opisthotonos), coma, and death. Newborns and young children may develop heart, liver, or intestinal problems, or malformed limbs. A diagnosis of meningitis depends primarily on a thorough physical examination and cerebrospinal fluid (CSF) analysis. In the physical examination, stiff neck, severe headache, and fever indicate meningitis. It may be extremely painful to move the neck forward. The neck may be so stiff that attempting to move it causes the entire body to move. Other signs the physician may look for include swelling in the eyes, which indicates elevated intracranial pressure, and skin rash. Computed tomography (CT scan) or magnetic resonance imaging (MRI scan) of the brain may be used to evaluate possible swelling (edema) and bleeding (hemorrhage) and to rule out other neurological disorders. Laboratory tests that may be performed include complete blood count (CBC), blood culture, and spinal tap. CBC will show elevated levels of white blood cells if there is an active infection in the body.
Blood is cultured to identify bacteria in the blood. Spinal tap, or lumbar puncture, is essential in diagnosing and selecting appropriate treatment for meningitis. About 2 tablespoons of cerebrospinal fluid is drawn through a needle inserted between two lumbar vertebrae. Lab analysis looks for elevated levels of white blood cells and blood. The fluid also is cultured to identify the organism causing meningitis. Treatment is determined by the type of meningitis and the organism causing the disease. Viral meningitis usually requires only symptom relief (palliative care). Palliative care may include bed rest, increased fluid intake to prevent dehydration, and analgesics (e.g., aspirin, acetaminophen) to reduce fever and relieve body aches. Meningitis caused by herpesvirus can be treated using antiviral medication such as acyclovir (Zovirax®) or ribavirin (Virazole®). Side effects of these medications include nausea, vomiting, and headache. Suspected bacterial meningitis requires prompt intravenous (IV) antibiotic treatment in the hospital to prevent serious complications and neurological damage. If symptoms are severe, IV treatment may be initiated before the lumbar puncture is performed. Severely ill patients are treated immediately with a combination of antibiotics. Penicillin combined with a cephalosporin (e.g., ceftriaxone [Rocephin®], cefotaxime [Claforan®]) is commonly used. Because some bacteria are resistant to these drugs, vancomycin, with or without rifampin, ampicillin, and gentamicin may be added to cover resistant pneumococcal strains of bacteria and Listeria monocytogenes. Side effects include abdominal pain, nausea, vomiting, and diarrhea. Once the CSF culture has revealed the disease-causing organism (pathogen), antibiotic treatment is adjusted accordingly. Amphotericin B and fluconazole (Diflucan®) are effective against most disease-causing fungi and are the drugs of choice for treatment of fungal meningitis. They may be administered singly or as combined therapy. Both drugs are well tolerated in most patients. Possible side effects of fluconazole include nausea and vomiting, diarrhea, headache, skin rash, and abdominal pain. Intravenously administered amphotericin B may produce the same side effects, as well as shaking chills and fever, slowed heart rate, low blood pressure (hypotension), body ache, and weight loss. Parasitic meningitis usually is treated with a benzimidazole derivative or other antihelminthic agent. Complications that develop also must be treated. Corticosteroids (e.g., dexamethasone) may be administered to reduce the risk for hearing loss. Increased intracranial pressure may be reduced with diuretics (e.g., mannitol) and a surgically placed shunt that drains excess fluid. Bacterial meningitis is fatal in as many as 25% of cases. Patients with meningitis caused by Streptococcus pneumoniae and patients younger than 2 years old or over the age of 60 have a poor prognosis. Prompt medical treatment (i.e., antibiotics) reduces the risk for dying from bacterial meningitis to less than 15%. Viral meningitis usually resolves in 7-10 days and is fatal in fewer than 1% of cases. Immunization is the most effective way to prevent meningitis. Medications such as rifampin (Rifadin®), ceftriaxone (Rocephin®), and ciprofloxacin (Cipro®) may be used to prevent the development of bacterial meningitis in people exposed to the disease.
http://www.kmcpa.com/neurology/education/meningitis.php
Disunion follows the Civil War as it unfolded. Disunion follows the Civil War as it unfolded. Like President Lincoln, Congress took office not knowing if the United States would endure. When it convened, about a third of the seats in both chambers were vacant, as newly declared Confederates had emptied Washington. Fittingly, members served under an unfinished Capitol dome – construction on the cast-iron edifice began in 1855 and would not be completed until 1863 – at once a symbol of republican government striving to rise up just as fierce fighting mere miles south on the battlefields of Virginia, and elsewhere, sought to tear it down. Yet, despite the mortal threat that hung over the nation throughout the two-year session, the new Congress was able to pass laws of incredible breadth and significance for both the immediate stability and future growth of the United States. Congress’s work in these early years of the Civil War helped lay the track not simply for the Union’s victory, but the groundwork for the nation’s educational, socio-economic and physical expansion. The 37th Congress, in the words of the historian Leonard Curry, set the “blueprint for modern America.” First came the Revenue Acts of 1861 (and later 1862) which created the first federal income tax, to help fund the Union war effort. While the acts would be repealed after the war, their impact on the future economic direction of the nation is clear: with their precedent, income taxes would serve as future keystones of the nation’s economy, as would the National Banking Act passed near the conclusion of the session, which established a single national currency. In 1862, Congress ended slavery in the District of Columbia, a critical forerunner to the Emancipation Proclamation and, eventually, abolition. Soon thereafter it created the Department of Agriculture, a guiding engine for the nation’s agricultural expansion during the post-Civil War era — a boom that the same Congress facilitated with the Homestead Act which enticed over a million Americans westward on the promise of earning 160 acres of land to call their own. Also helping spur that drive was the Pacific Railway Act of 1862, which began the construction of the first transcontinental railroad from Omaha and San Francisco, culminating with the famous linking of the Central and Union Pacific lines at Promontory, Utah seven years later. The 37th Congress’s contribution to education was also estimable. A week before the brutal Seven Days Battles raged outside of Richmond, it passed the Morrill Land-Grant Colleges Act, which set aside over 15 million acres for the founding of agricultural and mechanics schools. The landmark act led to the founding of institutions including Cornell, Berkeley and the University of Wisconsin, and established the backbone of the finest public university system in the world. And in one of its final legislative moves in 1863, it passed the False Claims Act to combat abuses by federal contractors. Better known today as “Lincoln’s Law,” the act remains arguably the most effective anti-fraud statute ever passed, having recovered over $20 billion from war profiteers and deterred exponentially more since 1986 alone. How did such a visionary slate of legislation come to pass? To be sure, Congress remained focused throughout the session on supporting the war effort, and assorted measures like the Revenue Acts were enacted with that overarching goal in mind. 
The quick answer is that Congress was able to move so adroitly because Republicans held the White House and possessed huge majorities in both the House and Senate. Founded in the mid-1850s, the party catapulted to immediate success with the dissolution of the Whig Party and the collapse of the Know Nothing movement, doing well in the 1856 elections and capturing the House of Representatives just two years later. In 1860, Republicans won nearly three-fifths majorities in each chamber, paving the way for a unified government. Prior to the Republicans’ birth and quick rise, Democratic congresses had been stymieing economic growth for years. Looking back in 1863, Maine Republican Senator and future Treasury Secretary William Fessenden captured his party’s ambitious outlook in the 37th, noting, “I cannot say that the wiser course was not to make the most of our time, for no one knows how soon this country may again fall into a democratic slough.” From the beginning, the Republicans had a clear idea about what they wanted. As the historian Eric Foner has observed about the young party, “their outlook was grounded in … its emphasis on social mobility and economic growth, it reflected an adaption of that ethic to the dynamic, expansive, capitalist society of the ante-Bellum North.” Indeed, the party’s core principle held that any American could advance himself in society and achieve economic independence, and that the future lay with industrial growth and westward expansion, away from the decayed, slave-based southern model. The role of government, then, was to make this happen. These were old Whig philosophies Lincoln himself had subscribed to from the very start of his political career in his almost monastic pursuit of internal improvements. At times the anti-slavery element of the party platform could overwhelm the economic-development element, as the era in Congress is remembered mostly for colorful figures like Thaddeus Stevens, Benjamin Wade and Charles Sumner, who were dedicated first and foremost to the slave issue. Similarly, many Democrats who had switched sides to the Republican Party in the 1850s, including Salmon Chase, Gideon Welles and Francis P. Blair, did so over slavery and did not agree with many of their new party’s core economic views. Ultimately, however, the Radical Republicans’ focus on slavery, and former Democrats’ misgivings, did not derail the enactment of the party’s economic agenda. While many early state Republican platforms ignored economic issues to avoid divisions, this had already begun to change near the end of the 1850s, as many converted Jacksonian Democrats began to tone down their anti-government sentiments in the wake of the Panic of 1857. Not that infighting was lacking on Capitol Hill. Partisanship during the war was perhaps even more toxic than ever; Republicans created the Committee on the Conduct of the War largely to disgrace Democratic generals in the aftermath of the disastrous Battle of Ball’s Bluff (where Republican Senator Edward Baker was killed). But in the end, progress would define the Congress’s work and would build the nation as the old-line Whiggish vision of economic growth and opportunity won out. It’s understandable why the drama of the Civil War should overshadow the grinding legislative activities of Capitol Hill and the 37th Congress. But for anyone who has ever gone to a public university, spent a dollar bill or ridden a passenger train cross-country, it’s a legislative session that deserves to be remembered. 
Mark Greenbaum is a writer and attorney in Washington.
http://opinionator.blogs.nytimes.com/2011/08/05/the-do-everything-congress/
One of the most pronounced effects of climate change has been melting of masses of ice around the world. Glaciers and ice sheets are large, slow-moving assemblages of ice that cover about 10% of the world’s land area and exist on every continent except Australia. They are the world’s largest reservoir of fresh water, holding approximately 75% (1). Over the past century, most of the world’s mountain glaciers and the ice sheets in both Greenland and Antarctica have lost mass. Retreat of this ice occurs when the mass balance (the difference between accumulation of ice in the winter versus ablation or melting in the summer) is negative such that more ice melts each year than is replaced (2). By affecting the temperature and precipitation of a particular area, both of which are key factors in the ability of a glacier to replenish its volume of ice, climate change affects the mass balance of glaciers and ice sheets. When the temperature exceeds a particular level or warm temperatures last for a long enough period, and/or there is insufficient precipitation, glaciers and ice sheets will lose mass. One of the best-documented examples of glacial retreat has been on Mount Kilimanjaro in Africa. It is the tallest peak on the continent, and so, despite being located in the tropics, it is high enough so that glacial ice has been present for at least many centuries. However, over the past century, the volume of Mount Kilimanjaro’s glacial ice has decreased by about 80% (3). If this rate of loss continues, its glaciers will likely disappear within the next decade (4). Similar glacial meltbacks are occurring in Alaska, the Himalayas, and the Andes. Image from global-greenhouse-warming.com When researching glacial melting, scientists must consider not only how much ice is being lost, but also how quickly. Recent studies show that the movement of ice towards the ocean from both of the major ice sheets has increased significantly. As the speed increases, the ice streams flow more rapidly into the ocean, too quickly to be replenished by snowfall near their heads. The speed of movement of some of the ice streams draining the Greenland Ice Sheet, for example, has doubled in just a few years (5). Using various methods to estimate how much ice is being lost (such as creating a ‘before and after’ image of the ice sheet to estimate the change in shape and therefore volume, or using satellites to ‘weigh’ the ice sheet by computing its gravitational pull), scientists have discovered that the mass balance of the Greenland Ice Sheet has become negative in the past few years. Estimates put the net loss of ice at anywhere between 82 and 224 cubic kilometers per year (5). Image from UNEP In Antarctica, recent estimates show a sharp contrast between what is occurring in the East and West Antarctic Ice Sheets. The acceleration of ice loss from the West Antarctic Ice Sheet has doubled in recent years, which is similar to what has happened in Greenland. In West Antarctica, as well as in Greenland, the main reason for this increase is the quickening pace at which glacial streams are flowing into the ocean. Scientists estimate the loss of ice from the West Antarctic ice sheet to be from 47 to 148 cubic kilometers per year. On the other hand, recent measurements indicate that the East Antarctic ice sheet (which is much larger than the West) is gaining mass because of increased precipitation. 
However, it must be noted that this gain in mass by the East Antarctic ice sheet is nowhere near equal to the loss from the West Antarctic ice sheet (5). Therefore, the mass balance of the entire Antarctic Ice Sheet is negative. The melting back of the glaciers and ice sheets has two major impacts. First, areas that rely on the runoff from the melting of mountain glaciers are very likely to experience severe water shortages as the glaciers disappear. Less runoff will lead to a reduced capability to irrigate crops as freshwater dams and reservoirs more frequently go dry. Water shortages could be especially severe in parts of South America and Central Asia, where summertime runoff from the Andes and the Himalayas, respectively, is crucial for fresh water supplies (6). Also, in areas of North America and Europe, glacial runoff is used to power hydroelectric plants, sustain fish runs and irrigate crops as well as to supply the needs of large metropolitan areas. As the volume of runoff decreases, then the energy, urban, and agricultural infrastructures of such locations are likely to be stressed (7). In addition, the melting of glaciers and ice sheets adds water to the oceans, contributing to sea level rise, as explained in the next subsection. Most of the world’s coastal cities were established during the last few millennia, a period when global sea level has been near constant. Since the mid-19th century, sea level has been rising, likely primarily as a result of human-induced climate change. During the 20th century, sea level rose about 15-20 centimeters (roughly 1.5 to 2.0 mm/year), with the rate at the end of the century greater than over the early part of the century (8, 9). Satellite measurements taken over the past decade, however, indicate that the rate of increase has jumped to about 3.1 mm/year, which is significantly higher than the average rate for the 20th century (10). Projections suggest that the rate of sea level rise is likely to increase during the 21st century, although there is considerable controversy about the likely size of the increase. As explained in the next section, this controversy arises mainly due to uncertainties about the contributions to expect from the three main processes responsible for sea level rise: thermal expansion, the melting of glaciers and ice caps, and the loss of ice from the Greenland and West Antarctic ice sheets (11). Image from NASA Causes of sea level rise Before describing the major factors contributing to climate change, it should be understood that the melting back of sea ice (e.g., in the Arctic and the floating ice shelves) will not directly contribute to sea level rise because this ice is already floating on the ocean (and so already displacing its mass of water). However, the melting back of this ice can lead to indirect contributions on sea level. For example, the melting back of sea ice leads to a reduction in albedo (surface reflectivity) and allows for greater absorption of solar radiation. More solar radiation being absorbed will accelerate warming, thus increasing the melting back of snow and ice on land. In addition, ongoing break up of the floating ice shelves will allow a faster flow of ice on land into the oceans, thereby providing an additional contribution to sea level rise. There are three major processes by which human-induced climate change directly affects sea level. First, like air and other fluids, water expands as its temperature increases (i.e., its density goes down as temperature rises). 
As climate change increases ocean temperatures, initially at the surface and over centuries at depth, the water will expand, contributing to sea level rise due to thermal expansion. Thermal expansion is likely to have contributed to about 2.5 cm of sea level rise during the second half of the 20th century (11), with the rate of rise due to this term having increased to about 3 times this rate during the early 21st century. Because this contribution to sea level rise depends mainly on the temperature of the ocean, projecting the increase in ocean temperatures provides an estimate of future growth. Over the 21st century, the IPCC’s Fourth Assessment projected that thermal expansion will lead to sea level rise of about 17-28 cm (plus or minus about 50%). That this estimate is less than would occur from a linear extrapolation of the rate during the first decade of the 21st century when all model projections indicate ongoing ocean warming has led to concerns that the IPCC estimate may be too low. A second, and less certain, contributor to sea level rise is the melting of glaciers and ice caps. IPCC’s Fourth Assessment estimated that, during the second half of the 20th century, melting of mountain glaciers and ice caps led to about a 2.5 cm rise in sea level. This is a higher amount than was caused by the loss of ice from the Greenland and Antarctic ice sheets, which added about 1 cm to the sea level. For the 21st century, IPCC’s Fourth Assessment projected that melting of glaciers and ice caps will contribute roughly 10-12 cm to sea level rise, with an uncertainty of roughly a third. This would represent a melting of roughly a quarter of the total amount of ice tied up in mountain glaciers and small ice caps. The third process that can cause sea level to rise is the loss of ice mass from Greenland and Antarctica. Were all the ice on Greenland to melt, a process that would likely take many centuries to millennia, sea level would go up by roughly 7 meters. The West Antarctic ice sheet holds about 5 m of sea level equivalent and is particularly vulnerable as much of it is grounded below sea level; the East Antarctic ice sheet, which is less vulnerable, holds about 55 m of sea level equivalent. The models used to estimate potential changes in ice mass are, so far, only capable of estimating the changes in mass due to surface processes leading to evaporation/sublimation and snowfall and conversion to ice. In summarizing the results of model simulations for the 21st century, IPCC reported that the central estimates projected that Greenland would induce about a 2 cm rise in sea level whereas Antarctica would, because of increased snow accumulation, induce about a 2 cm fall in sea level. That there are likely to be problems with these estimates, however, has become clear with recent satellite observations, which indicate that both Greenland and Antarctica are currently losing ice mass, and we are only in the first decade of a century that is projected to become much warmer over its course. Image from wildwildweather.com The Sea Level Rise Debate Because the model simulations underestimate the sea level rise observed during the 20th century, significant debate has developed within the scientific community about IPCC’s projections of sea level rise for the 21st century. 
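To put the rates quoted above side by side, here is a minimal back-of-the-envelope sketch in Python. The 1.75 and 3.1 mm/year figures come from the text; the accelerating scenario (a rate rising linearly to 10 mm/year by 2100) is purely an illustrative assumption, chosen only to show how sensitive century totals are to acceleration.

```python
# Back-of-the-envelope cumulative sea level rise from the rates quoted above.
# The accelerating scenario is an illustrative assumption, not an IPCC figure.

def cumulative_rise_mm(start_rate, end_rate, years):
    """Total rise (mm) if the annual rate changes linearly from start to end."""
    return 0.5 * (start_rate + end_rate) * years

CENTURY = 100

# 20th-century average of roughly 1.5-2.0 mm/year, held for a century:
print("20th century at ~1.75 mm/yr:", 1.75 * CENTURY, "mm")              # 175 mm

# Recent satellite-era rate (~3.1 mm/yr) naively held constant for 100 years:
print("constant 3.1 mm/yr:", 3.1 * CENTURY, "mm")                        # 310 mm

# Illustrative acceleration from 3.1 to 10 mm/yr over the century:
print("3.1 -> 10 mm/yr:", cumulative_rise_mm(3.1, 10.0, CENTURY), "mm")  # 655 mm
```

Even this crude arithmetic shows why a constant-rate extrapolation and an accelerating scenario diverge by tens of centimeters over a century, which is the scale of disagreement behind the debate over the projections.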
The accuracy of the projections has been questioned for a variety of reasons, particularly relating to limitations of the model representations of the ice sheets, which do not account for the increase in ice sheet movement (i.e., dynamics) that occurs as ice sheets warm, mainly because the physics are not well understood. There are also problems projecting how rapidly and how much global temperature will increase during the 21st century, in part due to the range of possible emissions. Because rising temperatures play a key role in all three of the terms that contribute to sea level rise, uncertainties in projections of global warming lead to uncertainties in projections of sea level rise (9). Regarding thermal expansion, there remain questions about the amount of heat that has been taken up by the oceans. Part of the problem results from the various types of instruments that have been used over time to measure ocean temperatures- different instruments create different results. At present, simulations of 20th century heat uptake by the oceans and of the amount of sea level rise do not fully match, making it more difficult to project the amount of thermal expansion that can be expected in the 21st century. Second, the uncertainties in the increase in temperature affect the ability to project the rate of melting of mountain glaciers and ice caps. Observations of the retreat of glaciers have been, in a number of situations, more rapid than models have simulated. Whether this is a result of inadequacies in the modeling or a possible increase in the rate of melting prompted by deposition of soot, or both or possibly other factors, is not yet clear. Third, and most important, are uncertainties relating to the potential loss of ice from the Greenland and West Antarctic ice sheets. The dynamics of ice sheet movement are not well understood—some ice streams are moving very rapidly, suggesting the potential for contributions to sea level rise of order 10 mm/year or even larger, a rate that is far larger than any of the other terms. There seems even the possibility of a collapse of one or both ice sheets, especially if there is rapid loss of buttressing ice shelves that would reduce the resistance to ice stream flows (9). Capturing these processes accurately in climate models is extremely difficult, while omitting the process that is likely the most important contributor to sea level rise presents quite a quandary—the result being that IPCC’s projections of sea level rise during the 21st century and beyond may be significantly too low. Impacts of sea level rise While there are obviously many challenges to projecting future sea level rise, even a seemingly small increase in sea level can have a dramatic impact on many coastal environments. Over 600 million people live in coastal areas that are less than 10 meters above sea level, and two-thirds of the world’s cities that have populations over five million are located in these at-risk areas (12). With sea level projected to rise at an accelerated rate for at least several centuries, very large numbers of people in vulnerable locations are going to be forced to relocate. If relocation is delayed or populations do not evacuate during times when the areas are inundated by storm surges, very large numbers of environmental refugees are likely to result. According to the IPCC, even the best-case scenarios indicate that a rising sea level would have a wide range of impacts on coastal environments and infrastructure. 
Effects are likely to include coastal erosion, wetland and coastal plain flooding, salinization of aquifers and soils, and a loss of habitats for fish, birds, and other wildlife and plants (11). The Environmental Protection Agency estimates that 26,000 square kilometers of land would be lost should sea level rise by 0.66 meters, while the IPCC notes that as much as 33% of coastal land and wetland habitats are likely to be lost in the next hundred years if the level of the ocean continues to rise at its present rate. Even more land would be lost if the increase is significantly greater, and this is quite possible (11). As a result, very large numbers of wetland and swamp species are likely at serious risk. In addition, species that rely upon the existence of sea ice to survive are likely to be especially impacted as the retreat accelerates, posing the threat of extinction for polar bears, seals, and some breeds of penguins (13). Unfortunately, many of the nations that are most vulnerable to sea level rise do not have the resources to prepare for it. Low-lying coastal regions in developing countries such as Bangladesh, Vietnam, India, and China have especially large populations living in at-risk coastal areas such as deltas, where river systems enter the ocean. Both large island nations such as the Philippines and Indonesia and small ones such as Tuvalu and Vanuatu are at severe risk because they do not have enough land at higher elevations to support displaced coastal populations. Another possibility for some island nations is the danger of losing their fresh-water supplies as sea level rise pushes saltwater into their aquifers. For these reasons, those living on several small island nations (including the Maldives in the Indian Ocean and the Marshall Islands in the Pacific) could be forced to evacuate over the 21st century (11). Image from globalwarmingart.com Each year the oceans absorb the equivalent of about a third of human emissions of carbon dioxide (CO2), transferring most of it to the deep ocean (13). Over the past 200 years, the increasing CO2 emissions from fossil fuel combustion have led to an exponential increase in the net amount of CO2 being dissolved in the ocean. Dissolved CO2 creates carbonic acid, which reduces the ocean pH level, making it more acidic (15). Acidity is measured using the pH scale, where items are given a numerical value between 0 and 14. A value of seven is neutral, with higher values being described as basic and lowers values as acidic. Historically, ocean pH has averaged around 8.17, meaning that ocean waters are slightly basic. But with the rising CO2 concentration causing acidification, today the pH levels are around 8.09, edging the waters closer to neutral (16). Geological evidence and model reconstructions indicate that, over the past 300 million years, the average pH of the ocean has not varied by more than 0.6 from its present value (14). Thus, the marine ecosystems present today have evolved in a relatively stable pH environment. With the rising CO2 concentration over the last 200 years, ocean pH has been steadily decreasing. While the acidification of the oceans is not yet itself worrisome except in polar regions, the rate at which the pH is dropping is becoming alarming. This is because the rate of change is so much higher than the natural weathering processes that have, in the past, buffered changes in ocean pH. 
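Because the pH scale is logarithmic, small-looking shifts hide large changes in acidity. The short sketch below, assuming only the average pH values quoted above (8.17 historically, about 8.09 today), converts those figures into relative hydrogen ion concentrations.

```python
def h_concentration(ph):
    """Hydrogen ion concentration (mol/L) implied by a pH value: [H+] = 10**(-pH)."""
    return 10 ** (-ph)

historical, current = 8.17, 8.09          # average ocean pH values quoted above

rel_increase = h_concentration(current) / h_concentration(historical) - 1
print(f"increase in [H+] so far: {rel_increase:.0%}")        # roughly +20%

# A further 0.5-unit drop in pH would mean:
print(f"a 0.5 pH drop multiplies [H+] by {10 ** 0.5:.1f}")   # about 3.2x
```

So the shift from 8.17 to 8.09 already represents roughly a 20% rise in hydrogen ion concentration, and a further 0.5-unit drop would roughly triple it.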
If the CO2 concentration continues to rise and the pH level continues to fall at current rates, the ocean pH could drop by as much as 0.5 during the 22nd century (14). Such a drastic change would very likely have a substantial adverse impact on ocean life. Possible impacts/ Preventative Measures The most direct impacts of ocean acidification will be on marine ecosystems. A decrease in ocean pH would affect marine life by lowering the amount of calcium carbonate (the substance created when CO2 is initially dissolved) in the water. Calcium carbonate is the substance used by many marine organisms (including coral, shellfish, crustaceans, and mollusks) to build their shells (17). If the pH drops by the expected 0.5 during this century, the resulting effect would be a 60% drop in available calcium carbonate (17). Such a decrease would put the productivity and even the survival of thousands of marine species at risk. To prevent the rapid acidification of the ocean and hold the pH level within an acceptable range for marine life, the atmospheric CO2 concentration needs to be kept below no more than about 450 parts per million (ppm). With the current concentration at roughly 387 ppm, the concentration seems likely to be near 500 ppm by mid-century without sharp reductions in emissions. To keep the decrease in pH to less than 0.2 pH, which could help to protect critical marine ecosystems, will require keeping the CO2 concentration below about 450 ppm (18). Another impact of glacial retreat is the possible effect fresh melt water will have on the thermohaline circulation. Driven by density gradients in ocean waters, the thermohaline (or deep ocean overturning) circulation is made up of the global flow of ocean currents. As ocean waters move around, different water masses are formed as evaporation removes fresh water and precipitation and river runoff add fresh water, each changing ocean salinity and therefore the density of the waters. Surface currents, which are largely driven by wind patterns, take the water masses to areas where they are warmed by high solar radiation (leading to lower density) or cooled in higher latitudes (leading to higher density). When surface water density becomes greater than for waters below, downwelling currents carry the denser surface waters down and push less dense, nutrient rich waters toward the surface, where winds bring them all the way to the surface and create areas rich with marine life. Thus, the density gradients created by temperature (cold water is more dense than water that is warm) and salinity (salt water is more dense than freshwater) are critical to both how ocean waters move and where there are nutrients that promote significant marine life (19). Because both temperature and salinity are influenced by changes in the climate, there are concerns about the ways in which the thermohaline circulation might be affected. The influences can operate in various ways. First, ocean circulation could be influenced by changes in runoff from glaciers and ice sheets. As glaciers melt and release fresh water into the ocean, the influx dilutes saltier waters, likely reducing the rate of bottom water formation because relatively fresh water will not be able to sink (even at higher latitudes where it becomes cold and dense), thus affecting deep ocean currents (20). 
With the rate at which glaciers are melting and the amount of freshwater that might be introduced into the ocean changing, it is thus quite possible that the intensity of the thermohaline circulation could be reduced. Climate change will not only affect salinity levels, but will also affect ocean temperatures and circulation patterns. First, as ocean temperatures increase, thermal expansion will cause the density to decrease and so increase the volume of ocean waters, raising sea level. Because surface currents are driven by the winds, warm surface waters moved by the winds are generally replaced by the colder waters underneath, with the upwelling bringing up nutrient-rich colder waters that promote flourishing marine life (19). As ocean surface waters warm and become less likely to sink, a smaller amount of cold water is brought up to the surface, impacting circulation patterns and marine life. In addition, warmer temperatures will lead to more evaporation. When the water evaporates, the salt stays behind. An increase in salinity changes the density of the water, and therefore affects circulations patterns (21). Given the interactions of these processes, there are increasing concerns that climate change will reduce the overall intensity of the thermohaline (deep-ocean) circulation. Should the increase in freshwater or the increasing ocean temperatures drastically alter density levels, the path of the thermohaline circulation could be altered or even significantly disrupted. Because the circulation plays a key role in ocean temperature patterns around the globe, weather patterns are also likely to be disrupted. Image from UCAR Changes such as these could be quite important for northern European countries. The Gulf Stream carries warm water from the tropics to the North Atlantic, and the heat it gives off to the atmosphere contributes to the mild temperatures in the region, even though Europe is located at a relatively high latitude. With sufficient cooling, the water sinks near Greenland and further north, pulling more warm waters northward from the tropics. If ocean warming slows the thermohaline circulation, less warm water would be transported north and Europe would likely experience less warming or even a cooling (21). Such a cooling event may have occurred during the Younger Dryas about 12,000 years ago when meltwater release from rapid deglaciation of North America freshened the North Atlantic, likely shutting off the deep ocean circulation (22) and disrupting weather and ocean circulation patterns (23). Within a decade of the shutdown of the thermohaline circulation, global climate patterns were altered significantly and European and North American temperatures dropped by as much as 15ºC. Such a rapid and dramatic shift in climate has not happened since, but with melting of Greenland beginning, there is an increasing risk of a similarly sudden shift in the future (24). As CO2 emissions and climate change continue, risks to the health of the ocean will become a more prominent concern. With accelerated melting back of glaciers and ice sheets and the subsequent rise in sea level, with further decreases in oceanic pH, and with deceleration of the thermohaline circulation, there are many ways in which the delicate balance of ocean dynamics and ecosystems are being put at risk. These factors, combined with the uncertainty in predicting exactly how these impacts will interact, are causing changes in the ocean: an increasingly problematic issue for future generations. 
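A simple way to see why both freshening and warming weaken the overturning described above is a linearized equation of state for seawater, in which density falls with temperature and rises with salinity. The sketch below uses rough, textbook-style coefficients; the specific numbers are assumptions for illustration, not values from the article.

```python
# A minimal sketch of why temperature and salinity control sinking, using a
# linearized equation of state. Coefficients are rough illustrative values,
# not a full seawater equation of state.

RHO0, T0, S0 = 1027.0, 10.0, 35.0   # reference density (kg/m^3), temperature (C), salinity (psu)
ALPHA = 2.0e-4                       # thermal expansion coefficient (1/C), approximate
BETA = 7.6e-4                        # haline contraction coefficient (1/psu), approximate

def density(temp_c, salinity_psu):
    """Linearized seawater density: warmer -> lighter, saltier -> denser."""
    return RHO0 * (1 - ALPHA * (temp_c - T0) + BETA * (salinity_psu - S0))

cold_salty = density(2.0, 35.0)     # high-latitude surface water in winter
cold_fresh = density(2.0, 34.0)     # the same water diluted by glacial meltwater
warm_salty = density(6.0, 35.0)     # the same water after modest warming

print(f"cold, salty  : {cold_salty:.2f} kg/m^3")
print(f"cold, fresher: {cold_fresh:.2f} kg/m^3")
print(f"warmer, salty: {warm_salty:.2f} kg/m^3")
# Both freshening and warming lower the density, leaving the surface water
# less able to sink and so weakening the overturning circulation.
```

Both perturbations leave the surface water lighter, which is the sense in which meltwater and warming make it harder for high-latitude water to sink.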
Figure 1: Causes of sea level rise from climate change. (2002). In UNEP/GRID-Arendal Maps and Graphics Library. Retrieved March 26, 2008, from http://maps.grida.no/go/graphic/causes-of-sea-level-rise-from-climate-change.
Nicholls, R.J., P.P. Wong, V.R. Burkett, J.O. Codignotto, J.E. Hay, R.F. McLean, S. Ragoonaden and C.D. Woodroffe, 2007: Coastal systems and low-lying areas. In Climate Change 2007: Impacts, Adaptation and Vulnerability. Contribution of Working Group II to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change, M.L. Parry, O.F. Canziani, J.P. Palutikof, P.J. van der Linden and C.E. Hanson, Eds., Cambridge University Press, Cambridge, UK, 315-356. http://www.ipcc-wg2.org/index.html
EPA. "Coastal Zones and Sea Level Rise." Updated 08 February 2008.
Union of Concerned Scientists. "Highlights from the First Section of the IPCC Fourth Assessment Report."
http://climate.org/topics/sea-level/index.html
3. Ricardo is going to be discussing only the latter category--goods that can be produced, and whose exchange value thus depends on cost of production. B. Distinction between labor to produce and labor to earn the money to purchase. 6. Suppose the labor cost of producing corn doubles: 7. Ricardo is sneaking up on his rent theory--note "land last taken into production." The fact that the amount of corn produced by a given amount of labor varies with the fertility of the marginal land is going to be key to his analysis. C. Which standard of value is right? B. In a more advanced society, tracing out the labor used to produce a final good is more complicated, but the logic of the problem is still the same. C. Going back to the primitive society, Ricardo sketches an equilibrium price argument for why relative prices must be equal to relative amounts of embodied labor. C. If all capital were circulating, and all had the same production period, then exchangable value would be proportional to labor input. D. But if that is not the case, then: 4. Example using machinery 5. So far, this gives us different ratios of exchangable value to embedded labor, depending on the capital labor ratio. 7. If wages go up, labor intensive goods rise relative to capital intensive goods. But... 9. But changes in the inputs needed to produce outputs can result in much larger changes in relative prices. C. Consider a machine that lasts only a year, substitutes for 100 men, and costs £5000. VII. Changes in price due to changes in the value of money C. More precisely, suppose three qualities, yielding, with the same inputs of labor and capital, net produce (after supporting the laborers) of 100, 90 and 80 quarters of corn. D. But there is an intensive margin as well--before the really bad land is cultivated, it becomes worth applying more labor and capital to the good land to get additional output. E. The increase of rent is a symptom of wealth, not a cause. H. A reduction in national capital, and consequent reduction in population, would push the margin towards more fertile lands, reducing rent. B. Improvement such as better plows give us the same output from the land at lower cost in labor (and capital) C. Note that we are measuring rent as proportion of output, not as exchangable value. B. With mines of varying fertility C. If the labor cost of producing gold from marginal mines was always the same, gold would be as nearly invariable a measure of value as is possible. Ricardo will write as if this were true--i.e. use "gold" to mean an imaginary standard having this characteristic. II. And refers the reader back to Smith. B. The natural price of goods other than raw produce and labor tends to fall over time C. The market price of labor, as of other goods, can deviate from its natural price, but tends to conform. D. Market rate could be above natural rate in an improving society for an indefinite period of time E. In talking of capital accumulating, one should distinguish between II. The natural price of labour, measured in food etc., is not fixed D. In new settlements, this happens--until they get pushed onto worse land. F. The friends of humanity cannot but wish ... that workers should have luxurious tastes. F. All of these calculations are done in terms of Ricardo's imaginary gold of fixed value, but the argument does not depend on that. G. Digression on the poor laws, citing Malthus: 6. 
If everyone could live comfortably on welfare, the principle of gravitation is not more certain than the tendency of such laws to convert wealth into misery. 5. Note that all of this increase in wages is with technology fixed--output per worker is the same, save that we are moving to worse marginal land, so output of corn per worker on marginal land is less, but ... B. So the farmer has an interest in keeping rent low C. Effect on other goods: D. The same thing would happen if there was an increase in the price of other goods consumed by the laborer--profits would fall. E. All of this describes equilibrium. F. Very long term story: 9. So the shift in value is away from profit and into labor and rent--the latter get a larger fraction of the total, and eventually the former starts to fall absolutely as well as relatively. B. It has been argued by high authority (Smith?) that when capital is shifted into foreign commerce, the result will be to raise the rate of profit in general. But ... C. The effect of trade is to get us more usefulness for the same value. D. Profits depend only on wages (measured in value terms!), Prices are independent of wages (rise in wage compensated by drop in profits) but depend on productivity. D. Gold and silver distribute themselves among countries in such a way as to make profitable in money terms those trades that would be profitable in barter terms. B. Contrast that to the modern sense. C. We are interested in two different questions, neither of which is Ricardo's. E. Second question is incidence--when you impose a tax, who (or what factor) really pays it? F. Part of the reason for the difference is that Ricardo is much less interested than we are in the effect of incentives. C. So wages will go up. B. During the interval between imposing the tax and adjusting the population, workers would suffer. 3. If the reason is increased demand, that is a consequence of high wages--which resulted in the population increase that is increasing the demand. 6. Defense of Ricardo's claim that wages would rise immediately when you tax corn. C. Raising wages and thus lowering profits discourages accumulation. D. Because the price of all commodities that contain raw produce has been raised, England will be unable to compete in foreign trade. II. This assumes we are really taxing economic rent, and not also taxing the profit on the landlowner's investment in buildings etc. C. The only difference between a tithe and a tax on raw produce is that the tithe is a fixed percent rather than a fixed amount of money per quarter of corn D. Just like the tax on raw produce, it changes the corn rent but not the money rent. E. But it does encourage the import of corn, since we buy it with outputs that do not pay such a tax. F. More generally, taxing one industry and not another results in an inefficient pattern of imports and exports. IV. Such taxes discourage cultivation--but for the same reason as all taxes. C. The total value of all money would stay about the same, independent of its quantity, because D. On the gold used for money, although a large tax was received, nobody would pay it: E. Quang Ng article: Diamonds are a Government's Best Friend 2. This is Ricardo's argument, except that he is applying it to gold used for money, where there is a much clearer reason why usefulness depends on value not weight. 2. Leaves all prices (in money) the same if money is taxed too (i.e. tax on the profit from mining gold) 3. 
Ricardo now drops his implicit assumption that all goods are produced with the same labor/capital ratio, and notes that a tax on profits will raise the price of capital intensive goods relative to labor intensive goods. E. Does a change in the money supply change relative prices? F. Suppose we tax profits of everyone except the farmer (money untaxed) G. Suppose we tax everyone's profits. H. Stock holder has a fixed income, so it matters to him whether you do or don't tax money. I. In fact, with real money, imported from abroad ... II. Buchanan attacks Smith's claim that a higher price of provisions leads to higher wages. Ricardo responds: H. Suppose the tax were simply handed over to employers. I. But taxes are wastefully spent, so tend ultimately to diminish the capital stock and lower wages. K. Landlords as such do not pay the tax, but as consumers of labor (hiring servants etc) they do. N. Ricardo argues that excess burden is impossible, but ... remember he is calculating in value not usefulness. O. Ricardo points out the argument for taxing the export of goods where you have a national but not individual monoposony. II. The economics of debt finance B. Suppose the government had collected the whole £20 million in taxes the year it was spent. C. When you get the same effect, but with the government doing the borrowing, the logic of the situation is the same. D. Debt finance of war is a bad idea, because B. Raw produce is not at a monopoly price C. It would be at a monopoly price if the land were so thoroughly cultivated that no more could be produced--output on both the extensive and intensive margin <=cost of production. D. Argument with Mr. Buchanan (and Smith) on whether raw produce is at a monopoly price: E. Inconsistency between the position Smith and Buchanan take here and their position on the subject of a tax on malt. 6. Say argues that agricultural output is unaffected by tax, so market price is unaffected, so the landlord must pay the tax out of rent. B. Under current application of the law, the farmer is paying a much larger fraction of his profits than the manufacturer, since C. In an advancing society (where more land is being taken into production, so the cost of making bad land productive is part of the marginal cost of producing food) poor rates are largely a tax on corn, D. But in a stationary or retrograde society B. Riches (aka Smith's use value, roughly our utility) are increased by technological improvements. D. Smith's labor commanded definition of value doesn't make sense for riches, since increased productivity gives us more riches without more command over labor. F. Long discussion on Say's inconsistent use of the terms. B. But Smith attributes the fall of profits to the accumulation of capital and the resulting competition. C. As Say has shown (still called "Say's Law") production creates its own demand, D. Smith thinks that merchants invest in foreign trade because there is nothing left to do with their capital at home, but ... E. What does real world evidence tell us? 4. In a footnote, Ricardo points out that Say is missing the logic of risk premiums, in assuming that because the riskiness of a government loan (in a country whose government cannot be trusted to repay them) requires the government to pay a higher interest rate, that will force up the general interest rate--presumably for safer investments. B. Someone (not named) argues that a bounty will increase the money price of corn C. 
Ricardo replies that in the short run, there is a scarcity of corn, which will not be solved by higher wages, D. Ricardo disagrees with Smith's analysis of bounties on corn exports (while referring to his "justly celebrated work.") E. Smith's error is believing that all prices rise or fall in proportion to the price of corn. F. Smith argued that bounties and tariffs really benefitted manufacturers, but only appeared to benefit landowners. 4. What is wrong with bounties and tariffs is that they cause capital to be misallocated among nations G. The bad effects of trade restrictions 3. This point is important, since landowners argue, on the authority of Smith, that they should have import duties on corn, to balance the effect on them of the import duties on manufactured goods. 4. The right solution is not to balance one bad law by adding another, but to repeal the first. B. Contrast the effect of a fall in the price of corn due to a bounty with the effect of a fall in the value of corn due to producing it with less labor. C. Suppose we reverse the bounty--tax corn and subsidize manufactured goods. D. So far we have assumed no foreign trade. B. Somebody invents a new machine which can do the same work, in either his farm or his manufacture, with much less labor (Ricardo omits this step, but there must be some reason why the capitalist changes from his old pattern of production) F. Ricardo's conclusion G. Note that Ricardo does not (here) note the feedback from wages to capital/labor ratio B. Similarly, a country engaged in war with lots of people employed as soldiers and sailors increases the demand for labor, since ...
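To make the differential-rent example from earlier in these notes concrete (three grades of land yielding 100, 90, and 80 quarters of corn for the same doses of labor and capital), here is a minimal Python sketch. It only illustrates the logic of the notes; the code and its wording are not Ricardo's.

```python
# Differential rent, using the three land grades from the notes: equal doses of
# labor and capital yield 100, 90 and 80 quarters of corn on successive grades.
# Rent on any plot is its surplus over the marginal (worst cultivated) grade.

grades = [100, 90, 80]   # net produce in quarters, best land first

def rents(cultivated):
    """Corn rent earned by each cultivated grade."""
    marginal = min(cultivated)
    return [produce - marginal for produce in cultivated]

for n in range(1, len(grades) + 1):
    in_use = grades[:n]
    print(in_use, "->", rents(in_use))

# [100] -> [0]
# [100, 90] -> [10, 0]
# [100, 90, 80] -> [20, 10, 0]
# As cultivation is pushed onto worse land, rent on the better land rises,
# which is why "the increase of rent is a symptom of wealth, not a cause."
```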
http://daviddfriedman.com/Academic/Course_Pages/History_of_Thought_98/Ricardo_notes.html
Economics in Six Minutes Economics is the science of utility, which includes people's preferences and the satisfaction and importance they subjectively derive from goods. Desires are unlimited, but people get less extra value from more and more units of the same good. by Fred E. Foldvary, Senior Editor Demand is a list of prices and the quantities bought at those prices. The law of demand is that at lower prices, people usually buy greater quantities and never fewer quantities. The law of supply is that, holding production methods constant, greater quantities are produced and provided with higher prices. The law of diminishing returns says that adding a variable input to a fixed input eventually yields ever less output per extra input unit. Where supply intersects demand is where market prices and quantities are determined. Price controls above this equilibrium such as minimum wages create a surplus, and prices below it such as rent control create a shortage. Without price controls a surplus drives the price down, and a shortage drives the price up, like an invisible hand directing prices to equilibrium. Eliminating restrictions and taxes on labor creates full employment. Firms maximize profits at the quantity where the marginal (extra) revenue equals the marginal cost. In a very competitive market, economic profits, above normal costs, lead more firms to enter the industry, increasing supply and decreasing price until the profits are just normal. Losses lead to fewer firms and a shift to less supply until profits are normal. The factors, categories of inputs and resources, are land, labor, and capital goods, yielding land rent, wages, and capital-goods rentals. Entrepreneurs organize the factors and drive the economy to better directions with better products and marketing, earning their wages in the form of economic profits. Other labor earns its marginal product, what it contributes to output. Land varies in quality, and the production in the better land relative to that of the least productive marginal land yields a rent to the more productive land. Speculative holdings reduce the margin of production, hiking up rent and pushing down wages. Civic services such as parks, streets, and security increase the demand for land, raising the rent. If these are paid for by taxes on labor and capital goods, the users pay both the tax and the extra rent. When rent is used to pay for the public goods, the landowners get neither subsidized nor penalized, since they pay back to the provider the rent generated by the works. Paying the rent to the community and charging market prices for utilities also eliminates urban sprawl by making the best use of urban land. Taxes on labor and goods must be added to the costs, raising prices and reducing quantities, placing an excess burden on the economy beyond the actual tax. Land is fixed in supply and has no cost of production, so taxing the rent does not shift the supply or reduce the rent. Taxing the rent keeps wages high and eliminates poverty both by letting workers keep their full product and by making the most productive use of resources. Folks tend to prefer goods today rather than in the uncertain future. This time preference and difference in present versus future prices gives future goods a discount and present-day goods a premium, the difference creating the natural interest rate. Market interest rates then make savings equal to investments as we get more investment with lower rates. 
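The equilibrium and price-control claims above are easy to check with a toy market. The linear demand and supply schedules below are made-up numbers chosen only so the arithmetic is visible; nothing in the sketch comes from the article itself.

```python
# A toy market illustrating equilibrium, a price floor, and a price ceiling.
# All numbers are invented for illustration.

def demand(price):     # quantity buyers want at each price
    return max(0.0, 100 - 2 * price)

def supply(price):     # quantity producers offer at each price
    return max(0.0, 4 * price - 20)

# Equilibrium: where supply meets demand (100 - 2p = 4p - 20  ->  p = 20, q = 60).
eq_price = 20
print("equilibrium:", eq_price, demand(eq_price), supply(eq_price))

# A floor above equilibrium (think of a minimum wage of 25 in this toy market):
floor = 25
print("surplus at floor:", supply(floor) - demand(floor))          # 80 - 50 = 30 unsold

# A ceiling below equilibrium (think of rent control at 15):
ceiling = 15
print("shortage at ceiling:", demand(ceiling) - supply(ceiling))   # 70 - 40 = 30 short
```

Read the minimum-wage and rent-control lines as instances of the general rule: a floor above equilibrium leaves sellers with unsold quantity, while a ceiling below it leaves buyers short.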
Money is a medium of exchange and can either be based on a commodity such as gold or be fiat, based on nothing but laws and custom like today. If the growth of money is greater than the growth of goods, this is monetary inflation that leads to a continuous increase in the level of prices, or price inflation. Free-market banking with money based on a commodity leads to a flexible supply of money and purchasing media without inflation. Business cycles are caused by speculative real-estate buying and building, fueled by excessive money growth. Depressions can be avoided by using the rent for public revenue and with free-market banking, avoiding the financial and real causes for cycles. Pollution is caused by making the public rather than polluters pay the social cost. Charging polluters will make them avoid pollution or pass the cost to consumers, reducing quantities and pollution. Likewise, cars and parking should be charged during the most congested times. Eliminating restrictions on private transit and using rent for more public transit eliminates traffic congestion. Trade is mutually beneficial. Even countries with higher costs benefit from free trade by concentrating on their comparative advantage, what they are most productive in. Global free trade with a common environmental policy leads to universal prosperity. Public choice is the branch of economics that studies the decisions of voters and government officials. Having concentrated benefits while spreading the cost thinly among consumers and taxpayers leads to seeking privileges, subsidies, special protections, and other transfers. Mass democracy and the need for expensive media campaigns lead to this transfer seeking. Switching to small-group voting with bottom-up multi-level governance, along with constitutional constraints, minimizes this corruption. The French Physiocrats of the 1700s such as Quesnay advocated a single tax on rent and also free trade. Adam Smith in the late 1700s said a market turns self-interest into public benefits, but benevolent giving in addition to that is virtuous. David Ricardo came up with the margin of production and comparative advantage. Karl Marx thought labor creates all value and that workers are exploited when they don't get the whole value, but the Austrian economist Carl Menger said no, values are subjective. American economist Henry George said the surplus is rent, so tax that, and have free trade. Austrian economist Ludwig von Mises said pure socialism would be hopelessly inefficient, and government intervention makes the economy worse. Friedrich Hayek said so too, because knowledge is decentralized, so just let the spontaneous market order work. John Maynard Keynes in Great Britain thought government should make and spend money during depressions, but New Classical economists point out that when people expect inflation, government stimulus just raises prices. Milton Friedman in the USA said don't try to manipulate the money, and let folks choose for themselves. The bottom line to all this is that economic freedom leads to the most prosperity. Don't restrict labor and capital other than to prevent coercive harm to others. Don't tax labor or enterprise. Get public revenues from rent and pollution fees. Let the market handle the money and banking. True free trade and enterprise are good; decentralized and market-based governance works best. As Henry George said, economics and ethics are one. The environment and the economy are one. Good governance and economics are one. 
Share rent, charge for damage, don't steal wages. That's economics in six minutes, and the path to prosperity. -- Fred Foldvary Copyright 2000 by Fred E. Foldvary. All rights reserved. No part of this material may be reproduced or transmitted in any form or by any means, electronic or mechanical, which includes but is not limited to facsimile transmission, photocopying, recording, rekeying, or using any information storage or retrieval system, without giving full credit to Fred Foldvary and The Progress Report.
http://www.progress.org/archive/fold144.htm
A page from the "Causes of Color" exhibit... What causes the colors of metals like gold? The lure of gold has been the downfall of many, from those worshipping the biblical golden calf to those unsuccessfully staking their claims during the 19th century gold rushes. Nevertheless, this lustrous metal continues to connote the pinnacle of achievement, as evidenced by Nobel Prize medals, Olympic medals, and Academy Award statuettes. Would our fascination with gold be lessened if we knew that its shiny allure was the result of excited electrons? Silver, iron, platinum, gold, and copper are all metals, which generally are malleable and ductile, conduct electricity and heat, and have a metallic luster. Some of their properties can be attributed to the way electrons are arranged in the material. The bonding of metals When two atoms combine, different types of bonding can occur: covalent, ionic, and metallic. Silver, iron, platinum, gold, and copper all form metallic bonds. Unlike covalent bonding, metallic bonding is non-directional. The strong bond consists of positively charged metal atoms in fixed positions, surrounded by delocalized electrons. These delocalized electrons are often referred to as "a sea of electrons," and can help explain why copper and gold are yellow and orange, while most other metals are silver. The color of metals can be explained by band theory, which assumes that overlapping energy levels form bands. The mobility of electrons exposed to an electric field depends on the width of the energy bands, and their proximity to other electrons. In metallic substances, empty bands can overlap with bands containing electrons. The electrons of a particular atom are able to move to what would normally be a higher-level state, with little or no additional energy. The outer electrons are said to be "free," and ready to move in the presence of an electric field. Some substances do not experience band overlap, no matter how many atoms are in close proximity. For these substances, a large gap remains between the highest band containing electrons (the valence band) and the next band, which is empty (the conduction band). As a result, valence electrons are bound to a particular atom and cannot become mobile without a significant amount of energy being made available. These substances are electrical insulators. Semiconductors are similar, except that the gap is smaller, falling between these two extremes. The highest energy level occupied by electrons is called the Fermi energy, Fermi level, or Fermi surface. If the efficiency of absorption and re-emission is approximately equal at all optical energies, then all the different colors in white light will be reflected equally well. This leads to the silver color of polished iron and silver surfaces. The efficiency of this emission process depends on selection rules. However, even when the energy supplied is sufficient, and an energy level transition is permitted by the selection rules, this transition may not yield appreciable absorption. This can happen because the energy level accommodates a small number of electrons. For most metals, a single continuous band extends through to high energies. Inside this band, each energy level accommodates only so many electrons (we call this the density of states). The available electrons fill the band structure to the level of the Fermi surface and the density of states varies as energy increases (the shape is based on which energy levels broaden to form the various parts of the band). 
If the efficiency decreases with increasing energy, as is the case for gold and copper, the reduced reflectivity at the blue end of the spectrum produces yellow and reddish colors. Silver, gold and copper have similar electron configurations, but we perceive them as having quite distinct colors. Electrons absorb energy from incident light, and are excited from lower energy levels to higher, vacant energy levels. The excited electrons can then return to the lower energies and emit the difference of energy as a photon. If an energy level (like the 3d band) holds many more electrons (than other energy levels) then the excitation of electrons from this highly occupied level to above the Fermi level will become quite important. Gold fulfills all the requirements for an intense absorption of light with energy of 2.3 eV (from the 3d band to above the Fermi level). The color we see is yellow, as the corresponding wavelengths are re-emitted. Copper has a strong absorption at a slightly lower energy, with orange being most strongly absorbed and re-emitted. In silver, the absorption peak lies in the ultraviolet region, at about 4 eV. As a result, silver maintains high reflectivity evenly across the visible spectrum, and we see it as a pure white. The lower energies (which in this case contain energies corresponding to the entire visible spectrum of color) are equally absorbed and re-emitted. Silver and aluminum powders appear black because the white light that has been re-emitted is absorbed by nearby grains of powder and no light reaches the eye. Transmitted color of gold Gold is so malleable that it can be beaten into gold leaf less than 100 nm thick, revealing a bluish-green color when light is transmitted through it. Gold reflects yellow and red, but not blue or blue-green. The direct transmission of light through a metal in the absence of reflection is observed only in rare instances. Colored gold alloys When two metals are dissolved in each other (as is the case with alloys), the color is often a mixture of the two. For example, copper dissolved in gold changes the color from a yellow-gold to a red-gold. Silver dissolved in gold creates a green-gold color. White gold contains palladium and silver. The color of gold jewelry can be attributed to the addition of different amounts of several metals (such as copper, silver, zinc, and so on). Some of these color changes can be explained by shifts in the energy levels relative to the Fermi level. Some alloys form intermetallics, where strong covalent bonds replace metallic bonding. Bonding is localized, so there is no sea of electrons. When indium or gallium is added to gold, a blue color can result. The cause of color in these intermetallics is different than that of yellow gold. Many metals create the illusion of being colored. The color can be attributed to a very thin surface coating, such as a paint or dye, or thin oxide layers can create interference colors (see butterflies) similar to those in oil or soap bubbles. The color of nanoparticles The color known as "Purple of Cassius" in glass and glass enamel is created by incorporating a colloidal suspension of gold nanoparticles, a technology in use since ancient times. Colloidal silver is yellow, and alloys of gold and silver create shades of purple-red and pink. Nanoshells are a recent product from the field of nanotechnology. A dielectric core is coated with metal, and a plasmon resonance mechanism creates color, the wavelength depending on the ratio of coating thickness to core size. 
For gold, a purple color gives way to greens and blues as the coating shell is made thinner. In the future, jewelry applications may include other precious metals, such as platinum.
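For readers who want to connect the photon energies quoted above to colors, the following is a minimal Python sketch (not part of the original exhibit) that applies the standard relation between photon energy and wavelength, lambda = h*c / E. The 2.3 eV (gold) and roughly 4 eV (silver) absorption energies are taken from the text above; everything else is just unit conversion.

def ev_to_nm(energy_ev):
    """Convert a photon energy in electronvolts to a wavelength in nanometres."""
    HC_EV_NM = 1239.84  # Planck constant times the speed of light, in eV*nm
    return HC_EV_NM / energy_ev

for metal, energy_ev in [("gold", 2.3), ("silver", 4.0)]:
    print(f"{metal}: {energy_ev} eV corresponds to about {ev_to_nm(energy_ev):.0f} nm")

# Gold's absorption near 539 nm removes blue-green light, so the reflected balance
# of the spectrum looks yellow; silver's absorption near 310 nm lies in the
# ultraviolet, so the whole visible range is reflected almost evenly.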
http://www.webexhibits.org/causesofcolor/9.html
13
82
An excise or excise tax (sometimes called a duty of excise or a special tax) may be defined broadly as an inland tax on the production or sale of a good, or narrowly as a tax on a good produced within the country. Excises are distinguished from customs duties, which are taxes on importation. Excises, whether broadly defined or narrowly defined, are inland taxes, whereas customs duties are border taxes. An excise is an indirect tax, meaning that the producer or seller who pays the tax to the government is expected to try to recover the tax by raising the price paid by the buyer (that is, to shift or pass on the tax). Excises are typically imposed in addition to another indirect tax such as a sales tax or VAT. In common terminology (but not necessarily in law) an excise is distinguished from a sales tax or VAT in three ways: (i) an excise typically applies to a narrower range of products; (ii) an excise is typically heavier, accounting for higher fractions (sometimes half or more) of the retail prices of the targeted products; and (iii) an excise is typically specific (so much per unit of measure; e.g. so many cents per gallon), whereas a sales tax or VAT is ad valorem, i.e. proportional to value (a percentage of the price in the case of a sales tax, or of value added in the case of a VAT).

The term is notable for vagueness of definition. According to the New Oxford English Dictionary (Revised 2nd Ed., 2005), an excise is "a tax levied on certain goods and commodities produced or sold within a country and on licenses granted for certain activities" (emphasis added). The formula "produced or sold" is applicable to both domestic and foreign products. But the word "certain" is not further explained in the definition — or even in the etymology, according to which the word excise is derived from the Dutch accijns, which is presumed to come from the Latin accensare, meaning simply "to tax". It would be impossible to give a general formula predicting which goods are subject to excise. Lists of such goods are readily provided by governments, and from each list one may be able to infer the motives for grouping such goods together; however, no explicit formula appears to be provided by any one government. For example:
- In the United Kingdom, HM Revenue and Customs lists "alcohol, environmental taxes, gambling, holdings & movements, hydrocarbon oil, money laundering, refunds of duty, revenue trader's records, tobacco duty, and visiting forces" as being subject to excise, but offers no explanation of what these items have in common, apart from being excisable. Some of the listed items are not even goods, but rather services.
- The Australian Taxation Office describes an excise as "a tax levied on certain types of goods produced or manufactured in Australia. These... include alcohol, tobacco and petroleum and alternative fuels" (emphasis added). What the Office calls an "excise" on locally produced goods is typically matched by what it calls a "customs duty" on comparable imported goods. But there is no general formula indicating which goods are subject to the duties in question.

In Australia, where the Constitution stipulates that only the Federal Parliament may impose duties of excise, the meaning of "excise" is not merely academic, but has been the subject of numerous court cases.
Notwithstanding the terminology preferred by the Taxation Office, the High Court of Australia has repeatedly held that a tax can be an "excise" regardless of whether the taxed goods are of domestic or foreign origin; most recently, in Ha v New South Wales (1997), the majority of the Court endorsed the view that an excise is "an inland tax on a step in production, manufacture, sale or distribution of goods", and took a wide view of the kind of "step" which, if subject to a tax, would make the tax an excise. The fact that an "excise" need not discriminate between local and imported goods seems to imply that duties levied by Australian States on sales of livestock and registrations of new vehicles are duties of excise, while the wide view of the taxable "step" seems to imply that even the payroll taxes levied by the States are excises. Whatever the merits or demerits of such arguments, it is clear that the constitutional meaning of "excise" is wider than the everyday meaning.

In defence of excises on strong drink, Adam Smith wrote: "It has for some time past been the policy of Great Britain to discourage the consumption of spirituous liquors, on account of their supposed tendency to ruin the health and to corrupt the morals of the common people." Samuel Johnson was less flattering in his 1755 dictionary: "EXCI'SE. n.s. ... A hateful tax levied upon commodities, and adjudged not by the common judges of property, but wretches hired by those to whom excise is paid."

Deducing from the types of goods, services and areas listed as excisable by many governments, and considering the thinkers' comments, a logical conclusion might be that excise duty was originally invented for some or all of the following reasons:
- to protect people –
  - from harming their health by abusing substances such as tobacco and alcohol, thus making excise a kind of sumptuary tax
  - from harming themselves and others indirectly and morally by engaging in activities such as gambling and prostitution (see below) (including solicitation and pimping), thus making it a type of vice tax or sin tax
  - from harming those around them and the general environment, both from overuse of the above-mentioned substances and by curbing activities contributing to pollution (hence the tax on hydrocarbon oil and other environmental taxes, as in the UK), or from harming the natural environment (hence the tax on hunting), thus also making excise a kind of Pigovian tax
- to provide monies needed –
  - for the extra healthcare and other public expenditures which will be needed as a direct or indirect result of excisable activities, such as lung cancer from smoking or road accidents resulting from drunk driving
  - for defense, including taxation directly levied on other countries' militaries and/or governments, such as the UK's taxation on "visiting forces"
  - This latter area can go wrong if unwisely implemented: a demonstrative situation arose in 2006 around central London's congestion charge (which, although not strictly branded as an excise tax, is a sort of environmental tax, as part of its aim is to reduce pollution in busy central London), where several foreign embassies got into a heated exchange with the Greater London Authority for refusing to pay the charge, arguing that, as diplomatic entities, they were exempt from paying it.
- to punish –
  - many US states impose taxes on drugs, and the UK government imposes excise on money laundering and on "visiting forces" (which can, from a legal standpoint, also be interpreted as "invading forces").
These are included in the statute books not because the government expects smugglers, launderers and invaders to pay for the right to conduct their harmful and illegal activities, but so that greater punishments and reparations/war reparations - based mainly around tax evasion - can be imposed in the case that the perpetrator is caught and tried. Aside from the extra revenue, this of course can also act as a deterrent.

Targets of taxation

As already mentioned, many US states tax drugs, partly in order to be able to impose heavier punishments, as the Kansas Department of Revenue states on its website:
- "The fact that dealing marijuana and controlled substances is illegal does not exempt it from taxation. Therefore drug dealers are required by law to purchase drug tax stamps."
There are at least two major criticisms of such legislation, however - see below.

Gambling licences are subject to excise in many countries; however, gambling itself was for a time also subject to taxation, in the form of stamp duty, whereby a revenue stamp had to be placed on the ace of spades in every pack of cards to demonstrate that the duty had been paid (hence the elaborate designs that evolved on this card in many packs as a result). Since stamp duty was originally only meant to be applied to documents (and cards were categorized as such), the fact that dice were also subject to stamp duty (and were in fact the only non-paper item listed under the 1765 Stamp Act) suggests that its application to cards and dice can be viewed as a type of excise duty on gambling.
- "5.5 Implementation of an excise tax on prostitution, the brothel is taxed and passed it on." (Canada);
- "An excise tax is hereby imposed on each patron who uses the prostitution services of a prostitute in the amount of $5 for each calendar day or portion thereof that the patron uses the prostitution services of that prostitute." (Nevada)
The reasons given by Canadian MPs entering the bill covered many of the above-mentioned areas, including extra funding for police protection and better healthcare for the prostitutes - however, so did many of the counterarguments.

Salt, paper, and windows

Excise (often under different names, especially before the 15th century, usually consisting of several separate laws, each referring to the individual item being taxed) has been known to be applied to substances which would in today's world seem rather unusual, such as salt, paper, and coffee. In fact, salt was taxed as early as the second century, and as late as the twentieth. Many different reasons have been given for the taxation of such substances, but they have usually - if not explicitly - revolved around the scarcity and high value of the substance, with governments clearly feeling entitled to a share of the profits traders make on these expensive items. Such would have been the justification for the salt tax, paper excise, and even advertisement duty.

Examples by country

An excise is "a tax upon manufacture, sale or for a business license or charter," according to Law.com's Legal Dictionary, and is to be distinguished from a tax on real property, income or estates. In the United States, the term "excise" means: (A) any tax other than a property tax or capitation (i.e., an indirect tax, or excise, in the constitutional law sense), or (B) a tax that is simply called an excise in the language of the statute imposing that tax (an excise in the statutory law sense, sometimes called a "miscellaneous excise").
An excise under definition (A) is not necessarily the same as an excise under definition (B), but the reverse is false. Example: The Whiskey Tax that resulted in the Whiskey Rebellion which started in 1792.

Her Majesty's Customs and Excise (HMCE) was, until April 2005, a department of the British Government in the UK. It was responsible for the collection of Value Added Tax (VAT), Customs Duties, Excise Duties, and other indirect taxes such as Air Passenger Duty, Climate Change Levy, Insurance Premium Tax, Landfill Tax and Aggregates Levy. It was also responsible for managing the import and export of goods and services into the UK. HMCE was merged with the Inland Revenue (which was responsible for the administration and collection of direct taxes) to form a new department, HM Revenue and Customs, with effect from 18 April 2005. The tax was first implemented in the UK under this name in the mid-17th century.

In India, an excise tax is levied on goods manufactured within the country. Formerly called the Central Excise Duty, this tax is now known as the Central Value Added Tax (CENVAT). Manufacturers may offset duty paid on materials used in the manufacturing process by using that duty as a credit against excise tax through a process known as Central Value Added Tax Credit (CENVAT Credit). The offsetting process was formerly known as Modified Value Added Tax (MODVAT).

Machinery of implementation

In many countries, excise duty is applied by the affixation of revenue stamps to the products being sold. In the case of tobacco or alcohol, for example, the producer buys a certain bulk amount of excise stamps from the government and is then obliged to affix one to every packet of cigarettes or bottle of spirits produced.

Critics of excise tax - such as Samuel Johnson, above - have interpreted and described excise duty as simply a government's way of levying further and unnecessary taxation on the population. The presence of "refunds of duty" under the UK's list of excisable activities has been used to support this argument, as it results in taxation being implemented on persons even where they would normally be exempt from paying other types of taxes – hence why they are getting the refund in the first place. Furthermore, excise is often somewhat similar to other taxes and sometimes doubles up with them, as in the above example, or as in the case of customs duties: since the two taxes largely apply to the same types of goods, people are forced to pay tax twice over on the same items (except in the case of duty-free) - once through excise upon purchase and a second time through customs duties upon transportation. (A justification for this is that the country the items are being entered into is applying the customs partly for the same reasons as the original excise was charged, as it is the country of import which will suffer the ill environmental, health and social effects of, say, the cigarettes and alcohol being brought in; thus customs has many of the same pros and cons as excise.)

There are at least two major criticisms of excise legislation on drugs:
- One is that, while it acts as a deterrent, it has also been argued that the state in question is able to gain revenues, while the legislation protects the anonymity of the dealers - as in the example of Kansas.
- The other criticism is that legal drugs (i.e. medicines and pharmaceuticals) are also subject to taxation in some countries, notably India.
This has raised controversy about the fact that this tax leads to hugely inflated prices of ordinary and even potentially lifesaving medication.

References:
- Sullivan, Arthur; Steven M. Sheffrin (2003). Economics: Principles in Action. Upper Saddle River, NJ: Pearson Prentice Hall. p. 118. ISBN 0-13-063085-3. http://www.pearsonschool.com/index.cfm?locator=PSZ3R9&PMDbSiteId=2781&PMDbSolutionId=6724&PMDbCategoryId=&PMDbProgramId=12881&level=4
- HM Revenue & Customs, Excise Duty - Index (see list along left-hand side of page). Retrieved July 2009.
- Australian Taxation Office, Businesses - Excise. Retrieved July 2009.
- Patricia Sampathy, "Section 90 of the Constitution and Victorian Stamp Duty on Dealings in Goods", Journal of Australian Taxation, Vol. 4, no. 1 (2001), pp. 133–155.
- The implication is noted, albeit with disapproval, by Sir Harry Gibbs in "'A Hateful Tax'? Section 90 of the Constitution", Proceedings of the Fifth Conference of The Samuel Griffith Society (Sydney, Mar. 31 to Apr. 2, 1995), ch. 6.
- Adam Smith, The Wealth of Nations (1776), Bk. V, Ch. 2, Art. IV. Retrieved Nov. 3, 2009.
- Samuel Johnson, A Dictionary of the English Language, Ninth Ed. (London, 1805), Vol. 2. Retrieved Nov. 3, 2009.
- Kansas Department of Revenue, Tax Types: Drug Tax Stamp. Retrieved July 2009.
- Stamp Act History Project, "Stamp Act, 1765". http://www.stamp-act-history.com/category/stamp-act/. Retrieved July 2009.
- http://clubs.myams.org/qmp/parties/bills/lib-prostitution.pdf [broken link]
- "Prostitution tax an option for Nevada Legislature", by Geoff Dornan, North Lake Tahoe Bonanza, 23 Mar 2009. Retrieved July 2009.
- "Canada Preparing to Legalize Prostitution?". http://www.lifesitenews.com/ldn/2005/feb/05022509.html. LifeSiteNews.com, 25 Feb 2005. Retrieved July 2009.
- "P. Duk. Inv. 314: Agathis, Strategos and Hipparches of the Arsinoite Nome". http://www.uni-koeln.de/phil-fak/ifa/zpe/downloads/1997/118pdf/118251.pdf. By J.D. Sosin & J.F. Oates, University of Cologne, 1997. Mention of salt tax in early 3rd century papyrus (pp. 6-7). Retrieved July 2009.
- "The Salt March To Dandi". http://www.english.emory.edu/Bahri/Dandi.html. By Scott Graham, Emory University, 1998. Discussion of salt excise in 1930s India. Retrieved July 2009.
- Routledge Library of British Political History – Labour and Radical Politics 1762-1937, p. 327.
http://dictionnaire.sensagent.com/Excise/en-en/
13
20
Insulation refers to an energy-saving measure that provides resistance to heat flow. Naturally, heat flows from a warmer to a cooler space. By insulating a house, one can reduce the heat loss in buildings in cold weather or climate, and reduce the heat surplus in warmer weather or climate. Insulating a house has several benefits such as energy savings, cost savings and increased comfort. Barriers to undertaking energy-saving measures include split incentives, relatively high investment costs, and the time and effort required to realise the energy savings. There are several types of insulation against heat loss in cold climates, each with its own technical characteristics and financial costs and benefits. Insulation measures are generally among the most cost-effective energy-saving measures.

By insulating a house, one can reduce the heat loss in buildings in cold weather or climate, and reduce a heat surplus in warmer weather or climate. Thus, insulation limits the need for heating or cooling the house. Heat losses or heat surpluses arise because of differences between the indoor and outdoor air temperature. Naturally, heat flows from a warmer to a cooler space, and the temperatures will converge to an equilibrium temperature, a physical phenomenon based on mechanisms like transmission (the heat flow through materials) and ventilation (heat flow by air). Insulation aims at reducing the speed of this convergence of temperature in order to decrease the need for heating or cooling. This technology description focuses on insulation against heat loss, but includes some references on insulation for cooling. Several types of insulation measures exist. Insulation measures for residential buildings are described below.

Wall, roof and attic, floor and soil insulation

Wall, roof and floor insulation may be done by fixing insulation material to the wall, roof or floor, either on the inside or outside, e.g. by using insulation plates. Different materials for walls, roofs and floors require different types of insulation measures. Buildings may for example have cavity walls consisting of two 'skins' separated by a hollow space. This space already provides some insulation but can be filled up with additional insulation material, e.g. foam, to further improve the insulation effect. Roof insulation for flat roofs differs from insulation for steeper roofs. Floors are usually made of wood or concrete, each requiring specific insulation measures. Another option to reduce heat losses to the ground is soil insulation, for example by placing insulation material on the soil in a so-called "crawl space" (a very low basement). The age of a building is an important factor determining the type of insulation and the way in which it is installed, e.g. whether insulation is put on the outside or inside of the construction.

Window and door insulation

Windows and exterior doors have a large impact on the heating and cooling requirements of a building. New materials, coatings, and designs have led to significantly improved energy efficiency of new, high-performing windows and doors. New high-quality windows may be up to six times more energy efficient than lower-quality, older windows (Pew Centre, 2009). Some of the latest developments concerning improved windows include multiple glazing, the use of two or more panes of glass or other films for insulation, and low-emissivity coatings reducing the flow of infrared energy from the building to the environment (Pew Centre, 2009).
Attention needs to be paid not only to the window itself, but also to the window frame, which can significantly impact a window's insulation level. Another insulation measure that reduces the amount of heat loss is sealing cracks in the 'shell' of the building. Cracks cause infiltration of cold air from outside or leakage of warm air to the outside. Strips or other material can be used to seal cracks in moving parts, such as windows and doors, and in places where different construction parts are attached to each other.

Increasing insulation is technically feasible for almost all buildings, although it is most efficient to add insulation during the construction phase. Because of the diversity of insulation measures, a suitable option is generally available for almost every building, since most buildings have room for improvement with respect to insulation. Next to technical requirements, human preferences regarding comfort and aesthetics also play a role; for windows, for example, better insulation comes with lower insolation, i.e. less light. In practice, the suitability of insulation measures depends largely on the current technical state of a dwelling. Specifically, the insulation already in place limits additional insulation. This is due to the physical space left for insulation and the suitability of the existing construction (e.g. availability of a cavity wall or sufficient cavity width, enough frame space to install better insulated but usually thicker windows, enough crawl space under the floor), but also because the law of diminishing returns applies: every additional layer of insulation yields less energy savings than the previous one.

The level of insulation that can be achieved by different insulation materials, i.e. the insulation value, is typically expressed as the R-value. The R-value indicates the insulation material's resistance to heat flow. The higher the R-value, the better the insulation of a wall, roof or floor. For windows the U-value is used, which is essentially the reciprocal of the R-value. In contrast to the R-value, the lower the U-value, the better the insulation of the window. Table 1 presents typical insulation values for wall, roof, floor and window (glass and frame) insulation in Dutch buildings according to their age.

Table 1: Typical insulation values for wall, roof, floor and window (glass and frame) insulation in Dutch buildings according to their age (heat loss and typical insulation formats). Values are given as Rc (m2·K/W) or U (W/m2·K) for a given insulation material width:
- Wall cavity and other wall (inside/outside) insulation: 5 cm: Rc = 1.61; 5 cm: Rc = 1.61; 3 cm: Rc = 0.97; 5 cm: Rc = 1.47
- Built before 1975: 3 cm: Rc = 0.90; 8 cm: Rc = 2.15
- Built after 1975: 5 cm: Rc = 1.40; 10 cm: Rc = 2.65
- Glazing: double glazed: U = 2.8; HR++ glass: U = 1.2

Insulation measures against heat loss are common practice in countries with frequent cold weather, where they are applied at the construction of new buildings, but also during the renovation of buildings. Older buildings commonly have a much lower level of insulation than newer ones, which in OECD countries are typically built according to the latest energy performance regulations. A large technical potential remains to improve insulation levels of the existing building stock using mature technologies. Many insulation measures would also be cost-effective due to savings in energy costs.
In the US, for example, more than 60% of single-family residential houses are estimated to be "under-insulated", i.e. by improving the level of insulation home owners could save costs, avoid GHG emissions, and improve indoor climate (Pew Centre, 2009). Figure 1 shows the potential energy savings, costs and barriers for different types of insulation measures.

Common barriers that explain why these measures are not implemented include: high initial investment costs, lack of financing options for the up-front investments, the time and effort required to undertake renovation measures in existing buildings, relatively long payback times for some measures, lack of knowledge and awareness, and split incentives, i.e. the decision makers who can or must decide on the level of insulation in a building and pay for the higher upfront costs are not the same persons who will reap the benefits of lower energy costs for heating and/or cooling. Governments in different regions of the world have introduced measures to reduce these barriers, including mandatory energy efficiency standards, building certification, voluntary labelling, and financial incentives to stimulate investments into increased insulation and other energy saving measures in buildings. Moreover, governments, civil society and industry organisations use information campaigns to increase awareness and knowledge of energy saving options in buildings. In the EU the Energy Performance of Buildings Directive (EPBD) is the main regulatory framework to prescribe the use of energy labels for European buildings. In other regions such as the United States and some Asian countries, there is a stronger focus on a combination of mandatory regulation (such as the Energy Conservation Building Code for commercial buildings in India) and voluntary labels (e.g. the US Energy Star Qualified Homes rating system) (Levine et al., 2007).

Insulation leads to energy savings, which reduce the demand for fossil fuels and associated GHG emissions and other environmental impacts. It is estimated that improvements in the level of insulation of the existing building stock can reduce heating requirements by a factor of two to four (Levine et al., 2007). New houses built according to the latest available technology and design in various cold-climate countries use as little as 10% of the energy for heating compared with houses built according to the local national building codes (Levine et al., 2007). For countries with milder winters, where heating is still required, as is the case in many developing countries, modest levels of insulation at a reasonable cost may already reduce the heating requirements by more than half of current levels, and in addition may contribute to reducing indoor temperatures in summer (Levine et al., 2007). If there is no air conditioning, lower temperatures in summer improve indoor comfort, or, if air conditioning is used, lead to additional energy savings.

The investment costs of insulating a building and the associated savings in energy costs play an important role in decisions on the level of insulation in a building. However, homeowners are frequently not aware of the economic benefits of insulation measures. Table 2 shows average payback times for insulation measures added to existing buildings in the Netherlands.
Table 2: Estimates of average payback times for insulation measures in the Netherlands (PRC Bouwcentrum, 2010). The measures listed include wall cavity insulation and other wall insulation (inside/outside); the reported payback times range from approximately 1 year to 14 to 23 years, with the remaining measures falling in the ranges of 3 to 11 years, 4 to 9 years, and 5 to 11 years.

Investment costs and payback times for different insulation measures differ significantly. Whilst in some cases investment costs can be high and payback times longer than 8 years, other insulation measures are among the most cost-efficient options for reducing energy costs in buildings and saving GHG emissions.

[This information is kindly provided by the UNEP Risoe Centre Carbon Markets Group.]

The CDM methodology AMS-II.E.: Energy efficiency and fuel switching measures for buildings opens the possibility to include projects which improve the insulation of buildings under the CDM. As of January 2011, there are 4 projects using this methodology in the CDM pipeline, one of which has been registered and CERs have been issued.

References:
- Levine, M., D. Ürge-Vorsatz, K. Blok, L. Geng, D. Harvey, S. Lang, G. Levermore, A. Mongameli Mehlwana, S. Mirasgedis, A. Novikova, J. Rilling, H. Yoshino (2007). Residential and commercial buildings. In Climate Change 2007: Mitigation. Contribution of Working Group III to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change [B. Metz, O.R. Davidson, P.R. Bosch, R. Dave, L.A. Meyer (eds)], Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA.
- Pew Center on Global Climate Change (2009). Climate TechBook: Building Envelope. Available at http://www.pewclimate.org/technology/factsheet/BuildingEnvelope#8
- PRC Bouwcentrum (2010). Actualisation of investment costs for energy saving measures for existing dwellings, March 2010.
- WBCSD (2008). Energy efficiency in buildings – Business realities and opportunities. Available at http://www.wbcsd.org/DocRoot/JNHhGVcWoRIIP4p2NaKl/WBCSD_EEB_final.pdf
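To make the link between the R-values in Table 1, heat loss, and the payback times in Table 2 concrete, here is a minimal Python sketch. It uses the standard steady-state conduction relation Q = A x dT / R; all of the numbers (wall area, temperature difference, energy price, and investment cost) are assumed for illustration only and are not taken from the tables above.

def annual_heat_loss_kwh(area_m2, r_value, avg_delta_t_k, hours_per_year=8760):
    """Steady-state conduction loss Q = A * dT / R (watts), integrated over a year, in kWh."""
    watts = area_m2 * avg_delta_t_k / r_value
    return watts * hours_per_year / 1000.0

# Hypothetical example: a 50 m2 wall with a 10 K average indoor-outdoor difference.
loss_before = annual_heat_loss_kwh(50, 0.5, 10)   # uninsulated cavity wall, assumed Rc = 0.5 m2*K/W
loss_after = annual_heat_loss_kwh(50, 2.0, 10)    # after cavity insulation, assumed Rc = 2.0 m2*K/W
saved_kwh = loss_before - loss_after                # about 6,570 kWh per year with these inputs

energy_price = 0.10    # assumed cost per kWh of delivered heat
investment = 1500.0    # assumed cost of the insulation work
payback_years = investment / (saved_kwh * energy_price)
print(f"Energy saved: {saved_kwh:.0f} kWh/year, simple payback: {payback_years:.1f} years")

With these illustrative inputs the simple payback works out to a little over two years, which is the same order of magnitude as the shorter ranges reported in Table 2; real figures depend on climate, fuel prices, and the specific building.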
http://climatetechwiki.org/print/technology/insulation
13
38
Here are the sub-themes that will be incorporated into the students' " Plan of Action": - Right to Self-Determination - Right to Land, Territory and Natural Resources - Right to Culturally Sensitive Education - Right to the Highest Attainable Standard of Physical and Mental Health - Right to Employment - Protecting the Rights of Indigenous Children and Youth There are an estimated 370 million indigenous people in more than 70 countries worldwide representing approximately 4% of the world’s population. Indigenous peoples live in every region of the world. They live in climates ranging from Arctic cold to Amazon heat, and often claim a deep connection to their lands and natural environments. In every region of the world, many different cultural groups live together and interact, but not all of these groups are considered indigenous to their particular geographic area. In some parts of Asia and Africa the term “ethnic groups” or “ethnic minorities” is used by governments to refer to certain groups of people living within their borders who have identified themselves as “indigenous.” Although there has been considerable debate on the meaning of “indigenous peoples”, no definition has ever been adopted by any UN-system body. In absence of a consensus on the meaning of this term, Jose R. Martinez Cobo, the Special Rapporteur of the Sub-Commission on Prevention of Discrimination and Protection of Minorities, has offered a working definition of “indigenous communities, peoples and nations”. An excerpt from the working definition reads as follows: Indigenous communities, peoples and nations are those which, having a historical continuity with pre-invasion and pre-colonial societies that developed on their territories, consider themselves distinct from other sectors of the societies now prevailing on those territories, or parts of them. They form at present non-dominant sectors of society and are determined to preserve, develop and transmit to future generations their ancestral territories, and their ethnic identity, as the basis of their continued existence as peoples, in accordance with their own cultural patterns, social institutions and legal system. Among many indigenous peoples of the Americas are the Mayas of Guatemala and the Aymaras of Bolivia; the Inuit and Aleutians of the circumpolar region, the Saami of northern Europe; the Aborigines and Torres Strait Islanders of Australia; and the Maori of New Zealand. These, like most other indigenous peoples, have retained social, cultural, economic and political characteristics, which are clearly distinct from those of the other segments of the national populations. For many indigenous peoples, the natural world is a valued source of food, health, spirituality and identity. Land is both a critical resource that sustains life and a major cause of struggle and death. Despite their cultural differences, indigenous peoples around the world share common problems related to the protection of their rights as distinct peoples including their rights to culture, identity, language, employment, health, education and other issues that will be discussed during the 2007 UN Student Conference on Human Rights. Throughout human history, whenever neighbouring peoples have expanded their territories or settlers have acquired new lands by force, the cultures, livelihoods, and existence of indigenous peoples have been endangered. 
Exploration and colonization beginning in the fifteenth century not only led to rapid appropriation of indigenous peoples' lands and natural resources, but also violated the sacred character of their arts, sciences, and culture. Around the world, indigenous peoples are asserting their rights in order to retain their separate identity and cultural heritage. It is now generally admitted that policies of assimilation and integration aimed at bringing these groups fully into the mainstream of majority populations are often counter-productive. Today, interest in indigenous peoples' knowledge and cultures is stronger than ever and the exploitation of their cultures continues. Indigenous medicinal knowledge and expertise in agricultural biodiversity and environmental management are used, but the profits are rarely shared with indigenous peoples themselves. They cannot exercise their fundamental human rights as distinct nations, societies and peoples without the ability to control the knowledge they have inherited from their ancestors. Many indigenous peoples are also concerned about skeletal remains of their ancestors and sacred objects being held by museums and are exploring ways for their restitution. For indigenous peoples all over the world the protection of their cultural and intellectual property has taken on growing importance and urgency. Although some indigenous groups have been relatively successful at maintaining their culture and identity, the spiritual and cultural identity of many indigenous peoples continue to be threatened. Indigenous people are arguably among the most disadvantaged and vulnerable groups of people in the world today. The international community now recognizes that special measures are required to protect the rights of the world’s indigenous peoples. In the thirty-year history of indigenous issues at the United Nations, the establishment and protection of the rights of indigenous peoples have been recognized as an essential part of human rights and a legitimate concern of the international community. The Universal Declaration of Human Rights affirms the inherent dignity, equality, and inalienable rights of all members of the human family. The rights of all members of indigenous populations are included in this declaration. However, indigenous peoples also have rights as distinct cultural groups. The United Nations, its partners and indigenous peoples have developed a programme of work to set standards for the protection of the human rights of indigenous peoples. The Declaration on the Rights of Indigenous Peoples adopted by the General Assembly in September 2007 is an important part of that process. In addition, the UN also plays an important role in monitoring whether the human rights and fundamental freedoms of indigenous peoples are being protected. The ILO, which was established in 1919 at the end of World War I to address social peace, was concerned early on with indigenous peoples particularly in cases where they were expelled from their ancestral lands to become seasonal, migrant, bonded or home-based workers which exposed them to forms of exploitation covered by the ILO mandate. It is within this context that the ILO first began to address the situation of indigenous workers in the overseas colonies of European countries in 1921. This led to the adoption in 1930 of the ILO’s Forced Labour Convention (No.29). Following the creation of the United Nations, the ILO examination of indigenous issues widened. 
From 1952 to 1972, the ILO led an interagency development programme, which assisted over 250,000 indigenous people in the Andes. In 1957, the ILO adopted the Indigenous and Tribal Populations Convention (No. 107), the first international treaty that focused exclusively on indigenous issues. At the time it was adopted, it assumed that integration into mainstream society offered the best possible future for indigenous peoples. As years went by, public opinion changed. With growing participation of indigenous peoples during the 1960s and 1970s, these assumptions were challenged. A Committee of Experts convened in 1986 by the Governing Body of the ILO concluded that “the integrationist approach of the Convention was obsolete and that its application was detrimental in the modern world.” A second treaty, the Indigenous and Tribal Peoples Convention (No. 169), was eventually adopted in 1989 to include the fundamental concept that the traditional life of indigenous peoples should and will survive. Another fundamental change was the assertion that indigenous peoples should be closely involved in the planning and implementation of development projects that affect their lives. Convention 169 covers a wide range of issues including land rights, access to natural resources, health, education, vocational training, and conditions of employment. The ILO is currently exploring and documenting various aspects of discrimination against indigenous peoples within the labour market, within indigenous communities as well as the effects of discrimination on traditional occupations such as herding. It is important to keep in mind that although Convention 169 is recognized internationally as the leading human rights instrument on indigenous issues, it has been ratified by only 19 countries. Another ILO Convention (No. 111) -- which protects all workers, including indigenous workers, against discrimination – has been ratified by 165 countries and therefore provides an important entry point in many countries for addressing indigenous issues. Convention 169 and 111, together provide a framework for protecting indigenous rights to engage in traditional occupations. When researching which countries have ratified these Conventions it is important to ask whether there is any national legislation to enforce the terms of these treaties (see the Indigenous World 2007 report for more information.) Indigenous peoples have been working with the United Nations to name and assert their collective rights for decades. In fact, August 9, the International Day of the World’s Indigenous Peoples, commemorates the first meeting of the UN Commission on Human Rights Working Group on Indigenous Populations in 1982. In 1971, the Sub-Commission on the Prevention of Discrimination and the Protection of Minorities, which is composed of 26 independent human rights experts, appointed one of its members, Mr. Martinez Cobo, as Special Rapporteur to conduct a comprehensive study on discrimination against indigenous populations and recommend national and international measures for eliminating such discrimination. The Martinez Cobo study (1986) addressed a wide range of human rights issues affecting indigenous peoples, including health, housing and education. The study called on governments to formulate guidelines for their activities concerning indigenous peoples on the basis of respect for the ethnic identity, rights and freedoms of indigenous peoples. 
The report, now out of print, represented an important development in recognizing the human rights problems confronting indigenous peoples. The first international conference of non-governmental organizations on indigenous issues was held in Geneva in 1977. This was followed by another non-governmental conference on Indigenous Peoples and the Land, also in Geneva, in 1981. These meetings, and a special United Nations study then nearing completion, led to the establishment in 1982 of the United Nations Working Group on Indigenous Populations. In past sessions the WGIP has examined a range of indigenous issues including: health and indigenous people; environment, land and sustainable development; education and language; indigenous peoples and their relationship to land; indigenous children and youth; and indigenous peoples and their right to development among other issues. Currently, the WGIP is not meeting pending a review of its status by the new Human Rights Council in September 2007. The International Decade of the World's Indigenous People (1995-2004) was proclaimed by the General Assembly in its resolution 48/163 of 21 December 1993 with the main objective of strengthening international cooperation for the solution of problems faced by indigenous people in such areas as human rights, the environment, development, education and health. The theme for the Decade was "Indigenous people: partnership in action". In the same resolution, the General Assembly requested the Secretary-General to appoint the Assistant Secretary-General for Human Rights as the Coordinator of the Decade and established the Voluntary Fund for the Decade to assist the funding of projects and programmes which promote the goals of the International Decade of the World's Indigenous People. In its resolution 52/108, the General Assembly appointed the High Commissioner for Human Rights as Coordinator of the Decade. In 2004 the Assembly proclaimed a Second International Decade (2005-2015) by resolution 59/174. The goal of this Decade is to further strengthen international cooperation for the solution of problems faced by indigenous people in such areas as social and economic development, culture, education, health, human rights, and the environment. In April 2000, the Commission on Human Rights adopted a resolution to establish the UN Permanent Forum on Indigenous Issues (UNPFII), which was endorsed by the Economic and Social Council (ECOSOC) in resolution 2000/22 of 28 July 2000. The mandate of the Permanent Forum is to discuss indigenous issues related to culture, economic and social development, education, the environment, health and human rights. The UNPFII has played an important role in expanding the role of indigenous representatives in UN activities. The Forum is an advisory body that reports to the ECOSOC. It is comprised of 16 experts, eight of whom are proposed by indigenous peoples. The establishment of the UNPFII was one of the advances achieved during the first International Decade of the World's Indigenous People, celebrated between 1995-2004. The theme for the Decade was 'Indigenous People: Partnership in Action.' The main objective was the strengthening of international cooperation for the solution of problems faced by indigenous people in such areas as human rights, the environment, education and health. To further support the monitoring of human rights violations against indigenous peoples, a Special Rapporteur was appointed in 2001 to assess and verify complaints. 
As of April 2007, 15 active organizations of indigenous peoples have consultative status with the ECOSOC. Consultative status means that these organizations can attend and contribute to a wide range of international and intergovernmental conferences. There are also hundreds of representatives of indigenous peoples and their organizations who participate in UN meetings. Non-governmental organizations (NGOs) interested in human rights also help promote indigenous peoples' rights and actively support indigenous peoples' causes. Indigenous people are also becoming more prominent as individual players on the world stage. In 1989, Chief Ted Moses, of the Grand Council of the Crees in Canada, was the first indigenous person elected to office at a UN meeting to discuss the effects of racial discrimination on the social and economic situation of indigenous peoples. Since then, increasing numbers of indigenous people have held office at meetings related to indigenous matters.

The above six themes were chosen for students to research for the 2007 Student Conference on Human Rights:
- Right to Self-determination
- Right to Land, Territory and Natural Resources
- Right to Culturally Sensitive Education
- Right to the Highest Attainable Standard of Physical and Mental Health
- Right to Employment
- Protecting the Rights of Indigenous Children and Youth

These are just a subset of issues that relate to indigenous rights. Although there are other indigenous rights, it is important to keep in mind that all of these rights are interdependent. For example, the basic right to self-determination is closely intertwined with land rights and the right to a culturally sensitive education. When the natural resources on indigenous lands are exploited by corporations without their consent or when indigenous peoples don't have a say in developing their educational system, this violates the right to self-determination. While studying these six themes, note as many concrete examples of the connections between them as you can find. Some examples have already been included in the description of each theme. See how many more you can find in the resources available online.

1. Right to Self-determination

"Indigenous peoples have the right of self-determination. By virtue of that right they freely determine their political status and freely pursue their economic, social and cultural development." - Declaration on the Rights of Indigenous Peoples, Article 3

Self-determination refers to the fundamental right of all peoples to determine their economic, social, political, and cultural development. In order to exercise this right, people must have the opportunity to participate in making decisions that affect the quality of their life. Because indigenous peoples are among the most marginalized groups, decisions that impact their economic, social, political, and cultural development are often made without consulting them. This is a violation of their right to self-determination. When the Norwegian Government gave permission to a multi-national company to mine minerals in areas occupied by the Saami, the Saami Parliament asked the Norwegian government to stop the company from mining because they were not consulted.
When this request was not granted, the Saami Council and Saami Parliament entered into direct discussions with the company and reached an agreement that no mining would take place without the consent of the Saami Parliament.1 The experiences of the Saami, an indigenous people living in northern Europe, are typical of the threats to self-determination that many indigenous peoples around the world face. These experiences highlight the importance of protecting the right of indigenous peoples to be involved in the governance of regions where they live, and how their development can suffer if they are not included in determining their future.

One of the obstacles to achieving indigenous participation in national politics is the election system. Indigenous peoples typically cannot participate in elections for a variety of reasons. They often lack documentation required to participate in elections, live in remote villages that are difficult to get to, or the election materials that are distributed are in non-indigenous languages.2 The absence of indigenous representatives in the national government makes it difficult for their concerns to be heard.3

The right to self-determination is an established principle in international law. It is embodied in the Charter of the United Nations, the International Covenant on Civil and Political Rights and the International Covenant on Economic, Social and Cultural Rights. The right to self-determination has been recognized in other international and regional human rights instruments, endorsed by the International Court of Justice and elaborated upon by the United Nations Human Rights Committee and the Committee on the Elimination of Racial Discrimination. Despite the recognition of the basic right to self-determination in international law, there is still much work that needs to be done to get this basic right recognized at the national level. In some countries, indigenous peoples are active participants in national politics, while in other countries indigenous organizations have yet to be officially recognized. While some national constitutions (e.g., Mexico, Panama) do indeed refer to the right of self-determination of indigenous peoples, most constitutions avoid any mention of it.

Sometimes, indigenous rights are recognized in special agreements. For example, in Guatemala, more than half of the national population consists of indigenous peoples, mainly Maya. After a brutal, thirty-year civil war, these peoples' rights are now recognized in the Agreement on the Identity and Rights of Indigenous Peoples, signed in 1995 in Mexico City. This document recognizes many rights, including protection from discrimination and cultural and political rights.4

The demand for some kind of autonomy is often associated with the struggle for self-determination. In northern Canada, for example, the Inuit people have – after decades of legal struggles over ancient land rights – negotiated a political agreement with the federal government, whereby they achieved the creation, in 1999, of the self-governing territory of Nunavut.5 Panama provides another example of the headway that indigenous peoples have made in recent years. Seven indigenous peoples – the Ngöbe, Kuna, Emberá, Wounaan, Buglé, Naso and Bri Bri, who together represent 8.3% of the population of Panama – live in five legally constituted territorial units (comarcas) which make up almost 20% of the country's total land area. These comarcas are semi-autonomous regions governed by local councils and traditional governors.
In some instances where indigenous peoples have been granted autonomy, implementation has been met with varied success. A case in point is the Constitution of the Philippines. In response to strong lobbying efforts by indigenous peoples, the framers of the 1987 Constitution included several provisions that, taken together, could serve as a basic framework for recognizing and promoting indigenous peoples' rights. Laws allowing regional autonomy in two regions of the Philippines, Mindanao and the Cordillera, were passed by the Philippines' Congress, and subjected to ratification through plebiscites.6 This significant change in government policy led to the creation of the Autonomous Region in Muslim Mindanao in 1990. In contrast, the creation of an Autonomous Cordillera Region, which is home to numerous indigenous peoples in the Philippines, was rejected in two separate plebiscites.

Governments, national and international courts, intergovernmental agencies and even some corporations are gradually becoming more aware of the need to respect indigenous people's rights. This is due, in part, to the increasing number of cases being won by indigenous organizations that are taking governments and corporations to court for violating indigenous rights. For example, the High Court in Botswana ruled in 2006 that the relocation of the San hunter-gatherers from the Central Kalahari Game Reserve was unlawful. In Argentina, the Lhaka Honhat, who have been fighting for title to their traditional territory for years, will have their case brought before the Inter-American Commission on Human Rights. And indigenous communities in Russia and the Northwest Territories of Arctic Canada have been successful in getting a seat at the negotiating table and obtaining compensation or profit-sharing agreements with oil companies operating on their territories. Similar gains are being made at the national level as well. In the ongoing discussions on the future of Nepal, the issue of indigenous self-determination is high on the agenda.7

2. Right to land, territory and natural resources

"It is essential to know and understand the deeply spiritual special relationship between indigenous peoples and their land as basic to their existence as such and to all their beliefs, customs, traditions and culture...for such people, the land is not merely a possession and a means of production...Their land is not a commodity which can be acquired, but a material element to be enjoyed freely."8

The Northern Sierra Madre Natural Park covers nearly 360,000 hectares of diverse tropical rain forest, and is considered one of the most biologically rich areas in the Philippines. The majority of the population living in this rain forest consists of immigrants from other provinces who entered the area in the 1960s and 70s when the logging industry was at its height. They presently dominate the area, both in number and culture. Next to these immigrants, the indigenous Agta population are estimated to number 2,000 out of a total population of 23,000. The Agta are a forest-dwelling people whose main livelihood consists of a combination of hunting, fishing, gathering, and shifting cultivation. They are widely considered to belong to 'the poorest of the poor', suffering from a long history of oppression, discrimination and marginalization by dominant groups.
To compensate for the disadvantaged position of these and other indigenous communities in the Philippines, the Indigenous Peoples' Rights Act (IPRA) was enacted in 1997. This law grants a wide array of rights to indigenous peoples throughout the country, one of which is the right to formal recognition of ancestral lands. Around the world, indigenous peoples are fighting for recognition of their right to own, manage and develop their traditional lands.

Indigenous peoples have a special relationship with the land. It is the foundation of their spiritual well-being and cultural identity. In many cases, their traditional knowledge and oral histories are closely linked to the forests, rivers, mountains and sea that surround them and are considered sacred; for example, the Black Hills, which are sacred to the Lakota in the central United States, or the rivers sacred to the Paez in Colombia. Indigenous communities maintain historical and spiritual links with their homelands, where their culture can be preserved and flourish from generation to generation. Too often this necessary spiritual link between indigenous communities and their homelands is misunderstood by non-indigenous persons and is frequently ignored when legislation on land rights is adopted.

Land also provides the main source of livelihood for indigenous communities. The management of land typically relies on a communal decision-making process. Though individuals or families may use portions of the land, the majority of indigenous land is set aside for the benefit of the community. Indigenous peoples see a clear relationship between the loss of lands and the loss of their identity and culture. According to Erica Irene Daes, a UN Special Rapporteur in 2002, "The gradual deterioration of indigenous societies can be traced to the non-recognition of the profound relation that indigenous peoples have to their lands, territories and resources."9

While land rights are protected by legislation in some countries, powerful economic interests often succeed in turning the communal possession of land that is common in indigenous communities into private property. The global market's increasing consumption of natural resources is a big threat to the indigenous way of life. Indigenous territory is often desired by international corporations, either for its mineral wealth, oil deposits, pastures, tropical or hard-wood forests, medicinal plants, suitability for commercial plantations, hydraulic resources or even its tourist potential. Sometimes agents of these corporations use bribes or intimidation to get indigenous leaders to sign over land rights in order to gain access to natural resources on indigenous lands. Read the Indigenous World 2007 entry on Ecuador for an example of this.

Control of natural resources is therefore an important issue to indigenous peoples because many of the current conflicts over land and territory relate to the possession, control, exploitation, and use of natural resources. In many cases, State Constitutions give the State sole possession of any minerals or other resources (e.g., water) within its borders. This gives States the legal right under their own laws to displace anyone they want in order to exploit these resources. Indigenous communities have managed their environments sustainably for generations. In turn, the flora, fauna and other resources available on their lands have provided them with livelihoods and have nurtured their communities.
When natural resources (e.g., minerals or oil) found on indigenous territories are exploited without their consent, health problems caused by environmental pollution and economic hardship caused by the disruption of traditional occupations are often the result. In Nigeria, for example, the commercial exploitation of oil in the Niger Delta had severe ecological and social consequences for the Ogoni people. Oil leaking from pipelines and tanks polluted rivers, streams and fields, and killed animals and vegetation. Forests were cut down to make way for roads and pipelines, destroying the subsistence economy of the Ogoni people. Environmental pollution led to severe health problems such as tuberculosis, and respiratory and stomach diseases. The Ogoni were not consulted and did not receive any benefit from the profits made.10

According to a recent UN report, around 60 million indigenous people around the world depend almost entirely on forests for their survival. Indigenous communities continue to be expelled from their territories so that protected areas or national parks can be established. The report claims that the forced displacement of indigenous peoples from their traditional forests is a major contributor to the impoverishment of these communities. Indonesia is home to 10 per cent of the world's forest resources, which provide a livelihood for approximately 30 million indigenous people. Out of 143 million hectares of indigenous territories that are classified as State forestlands, almost 58 million are now in the hands of timber companies, with the remainder in the process of being converted into commercial plantations.11

As indigenous protest movements emerge to defend their rights, Rodolfo Stavenhagen, the UN Special Rapporteur on the situation of human rights and fundamental freedoms of indigenous peoples, has noted that the criminalization of indigenous movements is a troubling trend in recent years.12 In a number of cases, the response to indigenous protests has resulted in violence. In January 2006, for example, 14 indigenous people were killed while protesting against a large steel plant taking over their land. And in India, as reported in The Indigenous World 2007, the acquisition of indigenous lands without their consent has led to protests that are often silenced with "the indiscriminate use of firearms."

When the land rights of indigenous peoples are not protected and their access to traditional lands is reduced, indigenous peoples are sometimes forced to seek work outside their traditional communities in order to survive. This is exactly what happened to the Enxet people in Paraguay. When large-scale cattle ranching was introduced in their region, the wild animals were driven away and the Enxet hunting areas were reduced in size. This forced the Enxet to become cheap labourers for businesses or farms outside their community. Many had no choice but to take loans from moneylenders that resulted in debt bondage. In this form of labour, indigenous people are forced to work to pay off their loan.13

Convention 169 of the International Labour Organization (ILO), adopted in 1989, asks nations to respect indigenous lands and territories. Article 15 states: "The rights of the peoples concerned to the natural resources pertaining to their lands shall be specially safeguarded. These rights include the right of these peoples to participate in the use, management and conservation of these resources." As noted above, however, only 19 countries have ratified this treaty.
In recent decades, many countries have reformed their constitutional and legal systems in response to calls from indigenous movements for the legal recognition of their rights to the protection and control of their lands, territories and natural resources. Latin America has led the way, with constitutional reforms taking place in Argentina, Bolivia, Brazil, Colombia, Guatemala, Mexico, Nicaragua, Panama, Paraguay, Peru, Ecuador and Venezuela.14 However, in his March 2007 report, the UN Special Rapporteur on the Situation of Human Rights and Fundamental Freedoms of Indigenous Peoples stated that: Although in recent years many countries have adopted laws recognizing the indigenous communities' collective and inalienable right to ownership of their lands, land-titling procedures have been slow and complex and, in many cases, the titles awarded to the communities are not respected in practice.15 In order to protect indigenous lands, it is important to establish where they are located. Establishing which areas are indigenous lands can help resolve problems that arise out of conflicting land claims with other indigenous communities or outside settlers. Brazil, for example, adopted Decree No. 1775 in 1996, which sets out the administrative procedures for determining the areas which are to be considered indigenous lands. It also includes a provision for appealing decisions if there are disagreements on the area recognized as indigenous territory. Even though administrative procedures may be in place, disputes over land rights can go on for years. For example, indigenous peoples living in an area of Brazil known as Raposa do Sol have objected to the government's demarcation of their land. They claim that the government's records reduce their land by approximately 300,000 hectares, make it accessible to non-indigenous persons, and exclude more than twenty indigenous villages from the area.16 This land dispute has yet to be resolved, and the indigenous peoples living in Raposa do Sol have not yet obtained official recognition of their lands. The indigenous Aymara people in Bolivia – who make up 60 to 80 per cent of the total population – had filed land claims covering 143,000 square miles, but due to the slow, under-funded titling process, only 19,300 square miles had been granted by the end of 2006.17 Even as the land-titling issues get sorted out, privatization of indigenous lands has been increasing. In Canada, for example, only a small portion of traditional lands is recognized as indigenous reserves, leaving the remainder vulnerable to privatization. Federal and provincial governments are negotiating with the First Nations people of British Columbia on this issue.18 The Declaration on the Rights of Indigenous Peoples recognizes the vital relationship between indigenous peoples' control over the development of their lands and resources and their ability to maintain and strengthen their institutions, cultures and traditions: • The Declaration states that "indigenous peoples have the right to the lands, territories and resources which they have traditionally owned, occupied or otherwise used or acquired" as well as the right to own, use and develop them (Article 26).
• It further stresses the responsibility of States to work with indigenous peoples to establish fair and transparent processes for the adjudication of indigenous rights pertaining to such lands, territories and resources (Article 27); and to consult and obtain the free and informed consent of indigenous peoples prior to the approval of any project that impacts on their lands and resources, for example through the development, utilization or exploitation of mineral, water or other resources (Article 32). • The Declaration also states that indigenous people shall not be "forcibly removed from their lands or territories" (Article 10) and includes the right of indigenous peoples to be compensated for lands, territories and resources which "have been confiscated, taken, occupied, used or damaged without their free, prior and informed consent" (Article 28).
3. Right to Culturally Sensitive Education
(Photo: Nancy Santullo) Approximately 60% of Peru's total population is indigenous. These native communities speak more than 60 languages other than Spanish as their first language. Quechua, the language of the Inca empire and now the official second language of the nation, is taught in many primary schools and even in universities. In recent years, Peru has inaugurated a government program to promote bilingual education--Spanish plus the native language--for indigenous children. Unfortunately, these programs have not yet reached the indigenous population living in the Peruvian Amazon rain forest. Bilingual education is especially important to save the Wachipaeri language, which could soon be lost since so few adult speakers remain. Convention 169 states that indigenous peoples have the same right to benefit from the national education system as everyone else living in the country. Yet, in many countries around the world, access to basic and secondary education is significantly worse for indigenous peoples than for the rest of society. Those who do receive formal schooling often face overt racism, and the curricula used in the classroom are typically taught in non-indigenous languages and are not adapted to the indigenous way of life and culture. Mr. Ole Henrik Magga, Chairperson of the United Nations Permanent Forum on Indigenous Issues, provides an excellent description of his childhood experiences that summarizes the kind of challenges indigenous children typically face in school: On your first day you find that the teachers do not speak your language, in fact, they don't even want you to speak your language – you might be punished for doing so. The teachers don't know anything about your culture – they say "look at me when I speak to you" – but in your culture it may be disrespectful to look at adults directly. Day by day you are torn between two worlds. You look through the many textbooks and find no reflections of yourself or your family or culture. Even in the history books your people are invisible – as if they were never more than "shadow people" – or worse, if your people are mentioned they are mentioned as "obstacles to settlement" or simply as "problems" for your country to overcome.19 Given these types of experiences, it is not surprising that indigenous students are more likely to drop out of school than non-indigenous students.
In order to benefit equally, it is vital that indigenous peoples have the right to establish and control their educational systems and institutions, providing education in their own languages, in a manner appropriate to their cultural methods of teaching and learning. This is an important component of the right to self-determination. Over the last two decades, progress has been made in improving the quality of indigenous education. In Malaysia, the government has been implementing changes since the late 1990s, in collaboration with non-governmental organizations (NGOs), to meet the needs of indigenous students. As a result of a 1997 policy on indigenous languages, for example, indigenous children in the Penampang District are being taught the local Kadazan language.20 Among indigenous peoples today there is a growing awareness of the need to preserve their languages and a demand to use them in school. In some countries programmes are being developed which include teaching curricula in both national and indigenous languages. This helps prepare indigenous children to participate in national life while at the same time permitting them to learn about their own culture. However, it is important to note that indigenous children do not always get the benefits of these programmes if they live in a remote part of the country. Take, for example, a small indigenous community living in Huacaria, Peru – a remote part of the Amazon rainforest. In recent years, Peru has inaugurated a government program to promote bilingual education--Spanish plus the native language--for indigenous children. Unfortunately, these programs have not yet reached those living in Huacaria. (See the UN Cyberschoolbus slideshow on the indigenous groups living in Huacaria to learn more about these children. Pay particular attention to slides 19-22.) In addition to the importance of preserving indigenous languages, preserving indigenous knowledge and culture is equally important. Indigenous educational systems are different from other educational systems. They include learning skills such as hunting, trapping and weaving which are not generally included in non-indigenous school curricula. To ensure that indigenous education is culturally sensitive, the responsibility for designing and implementing educational programmes must be transferred to indigenous peoples themselves. In order to facilitate the transfer of responsibility for education, governments will need to provide the necessary financial assistance and resources. A special programme developed by indigenous peoples in Oaxaca, Mexico provides an excellent example of what can be done when indigenous people are empowered to develop their own educational system. With the help of linguists, indigenous people in Oaxaca were able to promote literacy in their indigenous Mixe language, which included developing a Mixe alphabet. In addition, the courses they developed incorporated the knowledge of their elders, Mixe mathematics and agriculture, as well as legal training to defend communal land ownership.21 Recognizing the centrality of language and the fact that more than 4000 of the world's remaining 6000 languages are spoken by indigenous peoples, many of which are under threat of extinction, the UN Permanent Forum on Indigenous Issues will devote discussion at its 2008 session to the issue of indigenous languages. The General Assembly has also declared 2008 the International Year of Languages.
In the area of education, the Declaration on the Rights of Indigenous Peoples states that: • Indigenous peoples have the right to revitalize, use, develop and transmit to future generations their histories, languages, oral traditions, philosophies, writing systems and literatures; the Declaration adds that States shall take effective measures to ensure this right is protected (Article 13). • Indigenous peoples have the right to establish and control their educational systems and institutions, providing education in their own languages and in a manner appropriate to their cultural methods of teaching and learning (Article 14.1). • Indigenous individuals, particularly children, have the right to all levels and forms of education of the State without discrimination, and States shall take effective measures to ensure that indigenous peoples, particularly children, have access to an education in their own culture and provided in their own language (Articles 14.1 and 14.2).
4. Right to the Highest Attainable Standard of Physical and Mental Health (including the right to traditional medicines and health practices)
In both industrialized and developing countries, the health status of indigenous peoples is generally lower than that of the overall population. They have higher infant mortality rates, lower life expectancy, and a higher incidence of disease and chronic illness than the non-indigenous population. In Tanzania, for example, nomadic herdsmen, or pastoralists, suffer from poorer health standards than the national average. Like many groups of indigenous peoples, pastoralists suffer from an above-average number of cases of malaria, pneumonia, diarrhea, tuberculosis, and infant mortality. Further compounding difficulties, pastoralists may have to travel great distances for all but the most basic of health concerns.22 This is true for many indigenous groups that live in remote areas. Communicable diseases such as tuberculosis and malaria are also on the rise in indigenous populations. In addition, viral diseases frequently explode into epidemics, particularly among groups with low levels of immunity. Health care workers frequently report high rates of cholera in indigenous populations, as well as a considerable increase in the occurrence of sexually transmitted diseases. HIV/AIDS is another serious threat to indigenous populations: infection rates among indigenous peoples can be more than double the national average in some countries. Malnutrition is also a persistent problem, along with problems resulting from deficiencies of micronutrients, especially iron, vitamin A, and iodine. Thyroid hyperplasia, obesity, and diabetes are frequently occurring conditions, particularly in the North American indigenous population.23 The health of indigenous peoples is closely related to their lands, which provide them with their traditional diet. When natural resources on indigenous lands diminish or when indigenous peoples are forced off their land, there is less traditional food available. Therefore, an important component of restoring health involves reclaiming traditional lands. In some instances, diseases are introduced by non-indigenous people who illegally enter indigenous lands. Unregistered gold miners (called garimpeiros) in Brazil, for example, brought new diseases that killed more than 21% of the Yanomami people.24 Access to clean water is yet another problem that affects indigenous populations. Water resources are often affected by development and industrialization projects.
This can be seen in the mining of metals, which frequently takes place in high mountainous areas such as the Andes. These operations pose the greatest risk to the indigenous communities who often inhabit these regions and therefore have more direct exposure to the contamination. In addition, Persistent Organic Pollutants (POPs) can have a large impact on indigenous populations. The problem has grown to the point where significant traces of substances such as DDT, as well as toxic levels of mercury, have been detected in surface water, food, and other basic nutrients necessary for survival, such as breast milk. Indigenous peoples are particularly affected by these issues not only because of the areas in which they live, but also because of their reliance on the land. (See the UN Cyberschoolbus slideshow for an example of how development has impacted water resources in remote indigenous communities living in the Amazon rain forest. Pay particular attention to slides 7-14.) The health care crisis among indigenous peoples is particularly acute for indigenous women. Like most women in the world, they have been victims of discrimination for centuries. But as indigenous women, they have been doubly discriminated against: for being indigenous people and for being women. Building on the example described above, many of the diseases that killed Yanomami women were spread through forced prostitution. The health care crisis for indigenous women is not limited to developing countries. In the United States, for example, indigenous women have the least access to maternal healthcare.25 Maternal health is a serious issue, as indigenous women are often pregnant at an early age; may not have access to a hospital or midwife; and suffer from other health conditions, such as vitamin and iron deficiencies, that complicate care during pregnancy. (Photo: Glenn Shepard Jr.) Indigenous women have specialized knowledge about herbs used in regulating fertility, easing the pains of childbirth, and treating the illnesses of children. Here, a Matsigenka woman shows a plant used to bathe newborn babies to ensure they grow up strong and healthy. Care during childbirth and subsequent care and feeding of indigenous infants are strongly influenced by indigenous culture. Therefore, even when maternal healthcare is available, there are often differences between the medical services provided in the hospital and the home care that is administered by family members and traditional midwives. To address this issue, the UN Permanent Forum on Indigenous Issues has recommended that all UN entities, regional health organizations and governments incorporate a cultural perspective into health services--including emergency obstetric care, voluntary family planning, and skilled assistance at birth--that aim to provide quality health care for indigenous women. In addition, it recommends that the roles of traditional midwives be expanded so that they may assist indigenous women during their pregnancy and act as cultural intermediaries between the health care systems and the indigenous communities' cultural practices. Although traditional medicines do not have remedies for new illnesses brought in or caused by outside factors (e.g., contamination as a result of mining, cancer, AIDS, or radioactive pollution), traditional healing practices are known to be effective for the management of many afflictions. Traditional healing practices focus on more than the physical and mental well-being of a person or helping a patient become free from disease.
When treating patients, traditional healers seek to restore a state of balance between body, mind and spirit, which involves being in harmony with nature (see the UN Cyberschoolbus slideshow for an example of how traditional and non-traditional approaches to illness co-exist side by side; pay particular attention to slides 30-32). The cultural diversity of indigenous peoples makes it nearly impossible to adopt one approach or develop a universal health care model. An appropriate health care system that works for one group may fail somewhere else due to cultural differences. Therefore, it is important to focus on local solutions to the indigenous health crisis, rather than look for one solution or programme. In order to improve the quality of health care for indigenous peoples, governments need to allocate more resources. A few Ministries of Health have established ad hoc groups or offices to oversee the health of indigenous communities in their country, usually as part of projects in partnership with nongovernmental organizations (NGOs) and private foundations that are actively committed to working with indigenous communities to improve living conditions in marginalized urban and rural areas. However, these initiatives are generally of limited coverage and duration. In addition, most countries do not have adequate financing to support the development of specific programmes to support traditional medicine, nor are they researching or developing alternative models of care for indigenous populations. With respect to health, the Declaration on the Rights of Indigenous Peoples states that: • Indigenous peoples have the right to their traditional medicines and to maintain their health practices, including the conservation of their vital medicinal plants, animals and minerals. Indigenous individuals also have the right to access, without any discrimination, all social and health services (Article 24.1). • Indigenous individuals have an equal right to the enjoyment of the highest attainable standard of physical and mental health. States shall take the necessary steps with a view to achieving progressively the full realization of this right (Article 24.2).
5. Right to Employment
Today, indigenous people make up less than 5 per cent of the world's population, but comprise 15 per cent of the world's poor. That disparity is even greater in Asia, where indigenous peoples make up as much as 40 per cent of the extreme poor.26 Indigenous peoples are vulnerable to a range of factors that affect their right to employment. They tend to lack access to education, to live on lands that are vulnerable to natural disasters, and to have poor access, if any, to health services. While the majority of indigenous peoples live in rural areas, the prospect of better opportunities is contributing to their migration to cities. Life in the cities, however, is often not any better. Indigenous peoples receive lower wages or find no work at all because they lack the necessary skills and education. They live in poor settlements outside the support of their traditional community, making it difficult to maintain their language, identity and culture. In the northern Philippine city of Baguio, it is estimated that 65 per cent of indigenous migrants suffer from extreme poverty.
And in Tanzania, 90 per cent of Maasai men who have migrated to the city of Dar es Salaam end up working as security guards, earning around $40 per month, and are often only able to afford to live in slums on the outskirts of the city.27 Overall, indigenous people living in cities have been found to drop out of school to seek employment earlier than their non-indigenous counterparts. This leads to a pattern of working in poorly paid, low-skilled jobs, with 50 per cent of the indigenous population earning an income of between $150 and $300 per month.28 Illiteracy rates among the urban indigenous population are four times higher than among non-indigenous urban dwellers.29 In addition to these challenges, indigenous peoples in urban areas are more likely to experience discrimination. The Declaration on the Rights of Indigenous Peoples reaffirms that indigenous individuals have the right not to be subject to discriminatory conditions of labour, employment or salary (Article 17) and that they have the right to the improvement of their economic and social conditions, including in the areas of employment and education (Article 21). Monitoring the working conditions of indigenous peoples is an important component in protecting their rights. In Brazil, for example, Mobile Inspection Teams have been established with the objective of investigating a greater number of complaints about degrading forms of work. As noted above, indigenous people are sometimes forced into debt bondage when they borrow money and cannot pay it back. Another concern was raised at last year's High-Level Dialogue on Migration and Development, where it was pointed out that indigenous people were among the groups most vulnerable to the illegal trafficking of migrants.30
6. Protecting the Rights of Indigenous Children and Youth
The Permanent Forum on Indigenous Issues (UNPFII) believes the issues of indigenous children and young people are so important that it decided to make indigenous children and youth a focus of its work for years to come. It chose this theme in order to focus attention on the survival of indigenous peoples. Unless indigenous children are appropriately educated in their indigenous languages, cultures and values as the basis of their learning, indigenous peoples and their unique cultures will not survive. In the 2003 report on the second session of the Permanent Forum on Indigenous Issues, the Forum indicated that it was deeply concerned about the problems and discrimination faced by indigenous children and youth in the areas of education, health, culture, extreme poverty, mortality, incarceration, employment and other relevant areas.31 With regard to the trend toward urban migration among indigenous peoples, the Forum noted that there is a massive exodus of indigenous youth to cities around the world, where they are faced with discrimination, socio-economic hardship, weakened family networks and drug abuse. Many of the indigenous people migrating today are youth. Indigenous youth migrate for a variety of reasons, including:
• Increased need for labor in industries such as tourism and entertainment;
• Desire for higher education;
• Traditional means of survival are being threatened; and/or
• Desire to be free of the social and religious control of their community.
Each of these reasons is exhibited in the migration of the eight Hill Tribes of Thailand.
These tribes have experienced increased youth migration since 1990.32 Finally, the report highlights the Forum's deep concern about the harmful and widespread impact of armed conflict on indigenous children. Indigenous children are particularly vulnerable during times of armed conflict because many are not registered at birth. The right to be registered at birth and the right to a name and identity are recognized by the Convention on the Rights of the Child. UNICEF notes that birth registration is especially important during times of conflict to protect children from being illegally moved out of the country or enlisted for military service. Some of the indigenous peoples who have suffered violence and conflict include the Maya and Miskito of Central America, the H'mong in South-East Asia, the East Timorese, the Embera and Huaorani in South America, and the Twa in East Africa.33 The Convention on the Rights of the Child (CRC) was adopted because it was felt that children need special protection. The CRC sets out the rights of all children, but indigenous children are especially vulnerable. The Forum, therefore, has requested that the Committee on the Rights of the Child (which monitors implementation of the CRC) pay special attention to issues related to safeguarding the integrity of indigenous families. According to recent reports by Minority Rights Group, a non-governmental organization (NGO), the experience of indigenous children in conflicts in Iraq;34 Jammu and Kashmir, Punjab and Nagaland, India;35 Somalia and Guatemala36 has included physical injury and death; torture and rape; the witnessing of atrocities; separation from parents and community; lost access to health care, education and housing; eviction and forced displacement; the destruction of villages and farms; and neglect during humanitarian relief and reconstruction programmes. In addition, indigenous children may be forcibly recruited into armed groups, either to fight or to provide support, and indigenous girls are at risk of being forced to provide sex to soldiers. In Colombia, for example, non-governmental combatant groups forcibly recruit indigenous children, often to serve as guides in remote areas. Indigenous girls have been the victims of sexual assault and violence by military and non-military groups alike.37 The Declaration on the Rights of Indigenous Peoples recognizes the special vulnerability of indigenous youth (Article 22) and encourages special measures to ensure that the economic and social needs of indigenous youth are met (Article 21).
http://cyberschoolbus.un.org/student/2007/theme.asp
Commercial restrictions through tariffs have been an integral part of American history. The federal government has used forms of commercial restriction as a source of revenue and to protect American industry and labor. Before the Civil War, the federal government obtained close to ninety percent of its revenue from tariffs, and because of this, the government avoided income taxation. Americans have used protective and revenue tariffs. Protective tariffs help new American industries compete with established foreign industries. Proponents of protective tariffs claim that all segments of America benefit from tariffs. Foes of protective tariffs argue that protective tariffs help a few interests at the expense of many. By barring foreign goods from American markets, American manufacturers can charge whatever price they want for their goods and force American consumers to pay exorbitant prices. Conversely, revenue tariffs are used to provide the government with revenue and offer only incidental protection to industries. The Articles of Confederation did not grant Congress the power to enact tariffs. In the spring of 1781, with the outcome of the Revolution undecided and the nation heavily in debt, Congress asked the thirteen states for the power to levy an impost. North Carolina was among the first states to grant Congress the power because Tar Heels preferred an impost to a tax on land. Furthermore, North Carolina had few ports from which it could collect duties on imports. Despite the efforts of its supporters, the impost never became law. Some scholars have suggested that the Founding Fathers replaced the Articles with the Constitution of 1787 because of the commercial flaws of the Articles. In the new federal constitution, Congress obtained the power "to lay and collect taxes, duties, imposts, and excises." One of the first pieces of legislation that the first Congress passed was the tariff of 1789. Most imported goods received a duty of only five percent ad valorem (in proportion to the value) under this tariff. The Republican Party of Thomas Jefferson showed little interest in the tariff. During the War of 1812, the British navy prevented goods from coming to American shores. As a result, Americans manufactured their own products. To protect infant manufacturers, Congress passed the nation's first protective tariff: the tariff of 1816. Average duties stood at around twenty-five percent ad valorem. Every North Carolina Congressman voted against this measure. Congress attempted to raise tariff levels with the Baldwin Tariff of 1820 but failed by a single vote in the Senate. Lemuel Sawyer was the only Tar Heel to support this tariff in Congress. In 1824, Speaker of the House Henry Clay argued that a protective tariff would increase the national wealth. It would also create a home market where agricultural prices and wages would increase. Representative P. P. Barbour first argued that a protective tariff was unconstitutional. The bill passed by only five votes in the House and four in the Senate. The distribution of the vote revealed that the tariff had become a sectional issue. Every North Carolina Congressman opposed the tariff in 1824. Importers now paid duties of about thirty-five percent ad valorem. Some manufacturing interests claimed that the tariff of 1824 did not offer them enough protection. They successfully passed the tariff of 1828, which southerners branded as the "tariff of abominations." Once again, every North Carolina Congressman disapproved of the tariff bill.
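Since most of the rates in this entry are quoted ad valorem, a small worked example may help make the arithmetic concrete. The Python sketch below is purely illustrative and is not part of the original encyclopedia entry; the shipment value is made up, and the rates are the approximate figures already cited above.

```python
# Illustrative only: an ad valorem duty is assessed as a percentage of the
# declared value of the imported goods, unlike a specific duty, which is a
# fixed charge per unit.

def ad_valorem_duty(declared_value: float, rate: float) -> float:
    """Duty owed on goods worth declared_value at the given rate (0.05 = 5%)."""
    return declared_value * rate

# Hypothetical $1,000 shipment at rates mentioned in this entry (approximate).
for rate, label in [(0.05, "tariff of 1789 (~5% ad valorem)"),
                    (0.25, "tariff of 1816 (~25% ad valorem)"),
                    (0.35, "tariff of 1824 (~35% ad valorem)")]:
    print(f"{label}: duty on a $1,000 shipment = ${ad_valorem_duty(1000.0, rate):,.2f}")
```

Revenue and protective tariffs use the same arithmetic; what differs is whether the rate is set primarily to raise money or to price foreign goods out of the market.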
Average import rates now stood close to fifty percent. Opponents of the tariff in South Carolina nullified this tariff and the subsequent tariff of 1832, which lowered average duties to about thirty-three percent ad valorem. President Andrew Jackson equated nullification with treason and talked of hanging his own vice president, John C. Calhoun, whom he held responsible for the crisis over the tariff. At the end of 1832, Calhoun resigned the vice presidency and returned to Washington, D.C., as a senator. There, he helped pass a compromise tariff that gradually lowered duties over the span of ten years, with the sharpest cuts coming after 1840. Every North Carolina Congressman endorsed the compromise tariff. Few in North Carolina supported the doctrine of nullification, but most agreed with South Carolina on the unconstitutionality of a protective tariff. The state legislature concurred with Jackson and called nullification a "revolutionary" and "subversive" doctrine. Some of the state's leading proponents of states' rights, such as Willie P. Mangum, however, broke with Jackson over his handling of the crisis. The accord of 1833 lasted until 1842. President John Tyler vetoed several tariff bills, so protectionists called for his impeachment and tried to change the rules so that only a simple majority would be required to override a presidential veto. With a nearly bankrupt treasury, Tyler finally approved the tariff of 1842, which restored many of the levels of the tariff of 1832. Every North Carolinian in Congress, regardless of party, opposed this tariff. James K. Polk, who had been born in the Old North State and graduated from the University of North Carolina, entered the Executive Mansion with a commitment to lowering the tariff. The Walker tariff slashed duties to about twenty percent ad valorem. The vote on the Walker tariff divided North Carolina Congressmen along partisan lines. The four Whigs opposed the measure while the six Democrats supported it. Advocates of high tariffs claimed that the Walker tariff would ruin the country, but its low duties on iron actually allowed for the railroad boom of the 1850s. Congress then lowered most of the duties of the Walker tariff with the tariff of 1857. The Panic of 1857 resurrected the tariff debate. The Republicans needed an issue other than the opposition to the extension of slavery, and Republican leaders seized on the protective tariff. Justin S. Morrill proposed a tariff bill in 1860, which passed the House but stalled in the Senate. After the first Southern states seceded, little opposition to the tariff remained in the Senate and the Morrill tariff passed with ease. North Carolina Congressmen opposed the measure at every step. During the Civil War, no session of Congress passed without alterations to the tariff schedules. Few items remained duty free by the end of the war. The tariffs helped keep the federal government solvent and allowed it to pay for a costly war. Congress increased the levels of protection after the war. The issue continued to polarize political parties. Republicans sponsored a high protective tariff and Democrats advocated free trade principles. These Democrats believed that there should be no barriers to trade. North Carolina Populists criticized the "money power" of corporations, trusts, railroads, banks, and protective tariffs. In 1887, President Grover Cleveland addressed their concerns when he devoted his entire annual message to Congress to reforming the tariff.
The presidential election of 1888 became a referendum on the tariff, and the Republican candidate, Benjamin Harrison, won more electoral votes than Cleveland but lost the popular vote to the incumbent. Cleveland won North Carolina by over thirteen thousand votes. In 1890, Republicans passed the McKinley tariff on a strict party-line vote. At the time, this tariff became the highest in the nation's history. American voters took their wrath out on the Republicans, who lost the presidency and both houses of Congress in 1892 over the tariff. Democrats then tried to lower duties, but the Wilson-Gorman tariff of 1894 made few reductions to the McKinley tariff. The Dingley Tariff of 1897 restored levels close to those of the McKinley tariff. At the start of the twentieth century, Republicans second-guessed the usefulness of high tariffs. Reformers in the GOP argued that high tariffs aided trusts. In one of his first actions as president, William H. Taft called a special session of Congress to lower the tariff. This action pleased farmers and reformers in the Old North State. Congress responded with the Payne-Aldrich Tariff of 1909. This bill lowered duties on about thirty percent of enumerated goods but raised duties on about ten percent of goods. About sixty percent of the goods, then, were unaffected. The Underwood-Simmons Tariff of 1913 marked the most significant lowering of tariff duties since 1857. Average duties stood at around twenty-five percent. To offset the loss of revenue from tariff duties, Congress passed and the states ratified the Sixteenth Amendment, which granted Congress the power to collect an income tax. After World War I, Congress raised tariff rates once again through the Fordney-McCumber tariff of 1922. To cope with the Great Depression, Republicans in Congress passed the Smoot-Hawley Tariff of 1930. This became the highest tariff in the nation's history and also one of the biggest blunders in American history, since it inaugurated a series of trade wars in the middle of a severe economic crisis. The New Deal proposals of Cordell Hull that eventually won the support of Franklin D. Roosevelt chipped away at high tariffs. The policy of reciprocity allowed the president to raise or lower tariff levels with other nations depending on the restrictions those nations imposed on American exports. In 1934, Congress gave the president this power with the Trade Agreements Act. The destruction of authoritarian regimes and the rise of American influence abroad negated the need for protective tariffs. The United States participated in the General Agreement on Tariffs and Trade (GATT) until the World Trade Organization replaced it in 1995. Congress approved the North American Free Trade Agreement (NAFTA) in November 1993, and this removed most obstacles to trade between the United States, Canada, and Mexico. In 2005, Congress approved the US-Central American Free Trade Agreement (CAFTA). This agreement removed most trade restrictions between the United States and several nations in Central America. Foes of these free trade alignments contend that they threaten national sovereignties. The process of globalization (some contend) enriches members of multinational corporations and hurts common people. In contemporary America, free trade has become the norm, as neither of the two principal political parties supports protectionism.
By William K. Bolt, University of Tennessee
James Turner opposed the tariff of 1816, the nation's first protective tariff. Image courtesy of the North Carolina Office of Archives and History, Raleigh, NC.
Senator James Iredell Jr. opposed the "tariff of abominations" in 1828. Image courtesy of the North Carolina Office of Archives and History, Raleigh, NC.
Nathaniel Macon opposed the 1828 tariff, known by many southerners as the "tariff of abominations." Image courtesy of the North Carolina Office of Archives and History, Raleigh, NC.
Senator Willie P. Mangum disagreed with the way President Andrew Jackson handled the nullification crisis in the 1830s. Image courtesy of the North Carolina Office of Archives and History, Raleigh, NC.
President James K. Polk, a native Tar Heel, entered the White House with a commitment to lower the tariff of 1842. Image courtesy of the North Carolina Office of Archives and History, Raleigh, NC.
Senator Zebulon B. Vance, along with every Congressman from North Carolina, opposed the Morrill tariff of 1860. Image courtesy of the North Carolina Collection, University of North Carolina at Chapel Hill Libraries.
http://northcarolinahistory.org/encyclopedia/81/entry
This week we did a number of experiments with oil, water, food coloring and various props to explore the properties of surfaces. Physical properties like surface tension and solubility are related to the strength of intermolecular forces -- the attractive forces between molecules.
Surface Tension Experiments
These came from the website of the Chicago Section of the American Chemical Society.
Materials: 3 bowls or containers with water, a piece of string, a paper clip
1. Sprinkle pepper on the surface of cold clean water in a shallow dish. Allow the particles to spread out and cover the surface.
2. Put your finger in the bowl.
3. Put a drop of liquid soap on your finger. Put your finger in the bowl again.
What should happen: Pepper should rush away from your finger in a star pattern.
What did happen: Pepper rushed away from my finger in a circle -- still impressive.
1. Float a small loop of string in the middle of the surface of the water.
2. Put a drop of liquid soap inside the loop.
What should happen: The surface tension inside the loop of string should be weakened by the soap, but the surface tension outside the loop should pull the string outward.
What did happen: The string sank before we could try step 2.
1. Gently lay the paper clip and other small objects flat on the surface of the water so that they float.
2. If they don't, place a paper towel on the surface of the water, place the objects on the paper, and then remove the paper.
3. Now put a drop of liquid soap on the water surface.
What should happen: As soon as the tension is broken by the soap, these items should sink to the bottom. This one worked as planned!
Materials: 2 clear glasses or plastic cups
1. Pour about an inch of water into the cup.
2. Add food coloring to the water.
3. Pour about an inch of glycerin into the second cup.
4. Gently add the colored water.
5. Add oil until you get three layers.
6. Stir. Allow to settle. The water will mix with the glycerin, but the oil will separate back out.
7. Add a layer of liquid soap.
8. Stir gently. The oil will mix with the glycerin.
What's Happening: Different liquids have different densities, and according to their densities, the liquids will settle in a certain order when mixed. Oil is less dense than water and therefore will settle on top of water.
(Sorry that it's sideways. When I figure out how to fix it, I will repost it!)
Materials: tall narrow jar
1. Fill the cylinder with water.
2. Add the food coloring. Do not let the water become too dark.
3. Slowly pour oil into the cylinder. It should make a thick layer on top of the water.
4. Slowly sprinkle the salt into the cylinder on top of the oil.
The salt coats the oil and causes it to fall to the bottom of the graduated cylinder in globs. The oil will gradually return to the top of the graduated cylinder. Vegetable oil is less dense than water. When the salt is added, it sticks to the oil and drags it down. Once at the bottom, the water dissolves the salt and the oil floats back up. The oil doesn't dissolve in the water because of a difference in polarity. Water and salt are both polar. Oil is non-polar. Only polar substances will dissolve polar substances. A non-polar substance will not dissolve in a polar substance. This is the rule of "like dissolves like."
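Since the layer order in these columns is set by density alone, you can predict it before pouring. Here is a minimal Python sketch (not part of the original activity) that sorts the liquids by density; the density figures are approximate handbook values assumed for illustration, not measurements from our kitchen.

```python
# Predict the resting order of layers in the density-column experiment.
# Density values are approximate and assumed for illustration only (g/mL).
liquids = {
    "glycerin": 1.26,
    "colored water": 1.00,
    "vegetable oil": 0.92,
}

# Denser liquids sink below less dense ones, so sorting from highest to lowest
# density lists the layers from the bottom of the cup to the top.
for position, (name, density) in enumerate(
        sorted(liquids.items(), key=lambda item: item[1], reverse=True), start=1):
    print(f"Layer {position} from the bottom: {name} (~{density} g/mL)")
```

Run as written, it lists glycerin on the bottom, colored water in the middle, and oil on top, which matches what the cups show before stirring.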
http://homechemistry.blogspot.com/2008/05/surfaces-and-density.html
Open educational resources
Open Educational Resources (OER) are freely accessible, usually openly licensed documents and media that are useful for teaching, learning, assessment and research purposes. Although some people consider the use of an open format to be an essential characteristic of OER, this is not a universally acknowledged requirement.
Defining the Scope and Nature of Open Educational Resources
There are numerous working definitions for the idea of open educational resources (OER). Often cited is the William and Flora Hewlett Foundation, which defines OER as: "teaching, learning, and research resources that reside in the public domain or have been released under an intellectual property license that permits their free use and re-purposing by others. Open educational resources include full courses, course materials, modules, textbooks, streaming videos, tests, software, and any other tools, materials, or techniques used to support access to knowledge". The Organization for Economic Co-operation and Development (OECD) defines OER as: "digitised materials offered freely and openly for educators, students, and self-learners to use and reuse for teaching, learning, and research. OER includes learning content, software tools to develop, use, and distribute content, and implementation resources such as open licences". (Notably, this is the definition cited by Wikipedia's sister project, Wikiversity.) By way of comparison, the Commonwealth of Learning "has adopted the widest definition of Open Educational Resources (OER) as 'materials offered freely and openly to use and adapt for teaching, learning, development and research'". The WikiEducator project suggests that OER refers "to educational resources (lesson plans, quizzes, syllabi, instructional modules, simulations, etc.) that are freely available for use, reuse, adaptation, and sharing". Given the diversity of users, creators and sponsors of open educational resources, it is not surprising to find a variety of use cases and requirements. For this reason, it may be as helpful to consider the differences between descriptions of open educational resources as it is to consider the descriptions themselves. One of several tensions in reaching a consensus description of OER (as found in the above definitions) is whether there should be explicit emphasis placed on specific technologies. For example, a video can be openly licensed and freely used without being a streaming video. A book can be openly licensed and freely used without being an electronic document. This technologically driven tension is deeply bound up with the discourse of open-source licensing. For more, see Licensing and Types of OER later in this article. There is also a tension between entities which find value in quantifying usage of OER and those which see such metrics as irrelevant to free and open resources. Those requiring metrics associated with OER are often those with economic investment in the technologies needed to access or provide electronic OER, those with economic interests potentially threatened by OER, or those requiring justification for the costs of implementing and maintaining the infrastructure or access to the freely available OER. While a semantic distinction can be made delineating the technologies used to access and host learning content from the content itself, these technologies are generally accepted as part of the collective of open educational resources.
Since OER are intended to be available for a variety of educational purposes, organizations using OER presently neither award degrees nor provide academic or administrative support to students seeking college credits towards a diploma from a degree-granting accredited institution. In open education, there is an emerging effort by some accredited institutions to offer free certifications, or achievement badges, to document and acknowledge the accomplishments of participants. The term learning object was coined in 1994 by Wayne Hodgins and quickly gained currency among educators and instructional designers, popularizing the idea that digital materials can be designed to allow easy reuse in a wide range of teaching and learning situations. The OER movement originated from developments in open and distance learning (ODL) and in the wider context of a culture of open knowledge, open source, free sharing and peer collaboration, which emerged in the late 20th century. OER and Free/Libre Open Source Software (FLOSS), for instance, have many aspects in common, a connection first established in 1998 by David Wiley, who introduced the concept of open content by analogy with open source. The MIT OpenCourseWare project is credited for having sparked a global Open Educational Resources Movement after announcing in 2001 that it was going to put MIT's entire course catalog online and launching this project in 2002. In a first manifestation of this movement, MIT entered a partnership with Utah State University, where assistant professor of instructional technology David Wiley set up a distributed peer support network for the OCW's content through voluntary, self-organizing communities of interest. In 2005, OECD's Centre for Educational Research and Innovation (CERI) launched a 20-month study to analyse and map the scale and scope of initiatives regarding "open educational resources" in terms of their purpose, content, and funding. The report "Giving Knowledge for Free: The Emergence of Open Educational Resources", published in May 2007, is the main output of the project, which involved a number of expert meetings in 2006. In September 2007, the Open Society Institute and the Shuttleworth Foundation convened a meeting in Cape Town to which thirty leading proponents of open education were invited to collaborate on the text of a manifesto. The Cape Town Open Education Declaration was released on 22 January 2008, urging governments and publishers to make publicly funded educational materials available at no charge via the internet.
Licensing and Types of OER
Open educational resources often involve issues relating to intellectual property rights. Traditional educational materials, such as textbooks, are protected under conventional copyright terms. However, alternative and more flexible licensing options have become available as a result of the work of Creative Commons, an organisation that provides ready-made licensing agreements that are less restrictive than the "all rights reserved" terms of standard international copyright. These new options have become a "critical infrastructure service for the OER movement." Another license, typically used by developers of OER software, is the GNU General Public License from the FOSS community. Open licensing allows uses of the materials that would not be easily permitted under copyright alone. There is ongoing discussion in the OER community regarding an implicit reliance on explicit licensing.
For example, knowledge found in the public domain may or may not be considered a legitimate open educational resource, depending on whether the absence of an open license prevents it from meeting differing criteria of openness. Related to the discussion on licensing is the discussion on reuse, which a license may or may not clearly address. Types of open educational resources include: full courses, course materials, modules, learning objects, open textbooks, openly licensed (often streamed) videos, tests, software, and other tools, materials, or techniques used to support access to knowledge. OER may be freely and openly available static resources, dynamic resources which change over time as knowledge seekers interact with and update them (such as this Wikipedia article), or a course or module combining these resources.
OER policy
Open educational resources policies are principles or tenets adopted by governing bodies in support of the use of open content and practices in educational institutions. Such policies are emerging increasingly at the country, state/province and more local levels. Some major OER programs include:
- OER Africa, an initiative established by the South African Institute for Distance Education (Saide) to play a leading role in driving the development and use of OER across all education sectors on the African continent.
- Wikiwijs (the Netherlands), a program intended to promote the use of open educational resources (OER) in the Dutch education sector;
- The Open educational resources programme (phases one and two) (United Kingdom), funded by HEFCE, the UK Higher Education Academy and JISC, which has supported pilot projects and activities around the open release of learning resources, for free use and repurposing worldwide.
Institutional Support
A large part of the early work on open educational resources was funded by universities and foundations such as the William and Flora Hewlett Foundation, which was the main financial supporter of open educational resources in the early years and has spent more than $110 million in the 2002 to 2010 period, of which more than $14 million went to MIT. The Shuttleworth Foundation, which focuses on projects concerning collaborative content creation, has contributed as well. With the British government contributing £5.7m, institutional support has also been provided by the UK funding bodies JISC and HEFCE. UNESCO is taking a leading role in "making countries aware of the potential of OER." The organisation has instigated debate on how to apply OERs in practice and chaired lively discussions on this matter through its International Institute of Educational Planning (IIEP). Believing that OERs can widen access to quality education, particularly when shared by many countries and higher education institutions, UNESCO also champions OERs as a means of promoting access, equity and quality in the spirit of the Universal Declaration of Human Rights. Recently, the 2012 Paris OER Declaration was approved during the 2012 OER World Congress held at UNESCO headquarters. A parallel initiative, Connexions, came out of Rice University starting in 1999. In contrast to the OCW projects, content licenses are required to be open under a Creative Commons Attribution (CC BY) license. The hallmark of Connexions is the use of a custom XML format, CNXML, designed to aid and enable mixing and reuse of the content. Other initiatives derived from MIT OpenCourseWare are China Open Resources for Education and OpenCourseWare in Japan.
The OpenCourseWare Consortium, founded in 2005 to extend the reach and impact of open course materials and foster new open course materials, counted more than 200 member institutions from around the world in 2009. In 2003, ownership of the Wikipedia and Wiktionary projects was transferred to the Wikimedia Foundation, a non-profit charitable organization whose goal is to collect and develop free educational content and to disseminate it effectively and globally. Wikipedia has ranked among the ten most-visited websites worldwide since 2007. OER Commons was spearheaded in 2007 by ISKME, a nonprofit education research institute dedicated to innovation in open education content and practices, as a way to aggregate, share, and promote open educational resources to educators, administrators, parents, and students. OER Commons also provides educators with tools to align OER to the Common Core State Standards; to evaluate the quality of OER against the OER Rubrics developed by Achieve; and to contribute and share OER with other teachers and learners worldwide. To further promote the sharing of these resources among educators, in 2008 ISKME launched the OER Commons Teacher Training Initiative, which focuses on advancing Open Educational Practices and on building opportunities for systemic change in teaching and learning. One of the first OER resources for K-20 education is Curriki. A nonprofit organization, Curriki provides an Internet site for open source curriculum (OSC) development, with the aim of providing universal access to free curricula and instructional materials for students up to the age of 18 (K-12). By applying the open source process to education, Curriki empowers educational professionals to become an active community in the creation of good curricula. Kim Jones serves as Curriki's Executive Director. In August 2006, WikiEducator was launched to provide a venue for planning education projects built on OER, creating and promoting open education resources (OERs), and networking towards funding proposals. WikiEducator's Learning4Content project builds skills in the use of MediaWiki and related free software technologies for mass collaboration in the authoring of free content, and it claims to be the world's largest wiki training project for education. By 30 June 2009, the project had facilitated 86 workshops, training 3,001 educators from 113 different countries. Peer production has also been utilized in producing collaborative open education resources (OERs). Writing Commons, an international open textbook spearheaded by Joe Moxley at the University of South Florida, has evolved from a print textbook into a crowd-sourced resource for college writers around the world. Massive open online course (MOOC) platforms have also generated interest in building online eBooks. The Cultivating Change Community (CCMOOC) at the University of Minnesota is one such project, founded entirely on a grassroots model to generate content. In 10 weeks, 150 authors contributed more than 50 chapters to the CCMOOC eBook and companion site. Another project is the Free Education Initiative from the Saylor Foundation, which is currently more than 80% of the way towards its initial goal of providing 241 college-level courses across 13 subject areas. The Saylor Foundation makes use of university and college faculty members and subject experts to assist in this process, as well as to provide peer review of each course to ensure its quality.
The foundation also supports the creation of new openly licensed materials where they are not already available, including through its Open Textbook Challenge. In 2006, the African Virtual University (AVU) released 73 modules of its Teacher Education Programs as open educational resources to make the courses freely available for all. In 2010, the AVU developed the OER Repository, which has contributed to increasing the number of Africans who use, contextualize, share and disseminate existing as well as future academic content. The online portal http://oer.avu.org serves as a platform where the 219 modules of mathematics, physics, chemistry, biology, ICT in education, and teacher education professional courses are published. The modules are available in three different languages – English, French, and Portuguese – making the AVU the leading African institution in providing and using open educational resources.
International programs
- Europe: the Learning Resource Exchange for schools (LRE) is a service launched by European Schoolnet in 2004 enabling educators to find multilingual open educational resources from many different countries and providers. Currently, more than 200,000 learning resources are searchable in one portal based on language, subject, resource type and age range.
- India: the National Council of Educational Research and Training (NCERT) has digitized all its textbooks from 1st standard to 12th standard. The textbooks are available online for free. The Central Institute of Educational Technology (CIET), a constituent unit of NCERT, has digitized more than a thousand audio and video programmes. All the educational AV material developed by CIET is presently available at the Sakshat Portal, an initiative of the Ministry of Human Resource Development. In addition, NROER (National Repository of Open Educational Resources) houses a variety of e-content.
- US: Washington State's Open Course Library Project is a collection of expertly developed educational materials – including textbooks, syllabi, course activities, readings, and assessments – for 81 high-enrolling college courses. Forty-two courses have been completed so far, providing faculty with a high-quality option that will cost students no more than $30 per course.
- Bangladesh is the first country to digitize a complete set of textbooks for grades 1-12. Distribution is free to all.
- Uruguay sought up to 1,000 digital learning resources in a Request For Proposals (RFP) in June 2011.
- South Korea has announced a plan to digitize all of its textbooks and to provide all students with computers and digitized textbooks.
- The California Learning Resources Network Free Digital Textbook Initiative at high school level, initiated by former Gov. Arnold Schwarzenegger.
- The Shuttleworth Foundation's Free High School Science Texts for South Africa.
- Saudi Arabia ran a comprehensive project in 2008 to digitize and improve the math and science textbooks in all K-12 grades.
- Saudi Arabia started a project in 2011 to digitize all textbooks other than math and science.
With the advent of growing international awareness and implementation of open educational resources, a global OER logo was adopted for use in multiple languages by UNESCO. The design of the Global OER logo creates a common global visual idea, representing "subtle and explicit representations of the subjects and goals of OER". Its full explanation and recommendations for use are available from UNESCO.
Critical discourse about OER as a movement
External discourse
The OER movement has been accused of insularity and a failure to connect with the larger world: "OERs will not be able to help countries reach their educational goals unless awareness of their power and potential can rapidly be expanded beyond the communities of interest that they have already attracted." More fundamentally, doubts have been cast on the altruistic motives typically claimed for OER. The very project has been accused of imperialism, in that the creation and dissemination of knowledge according to the economic, political and cultural preferences of highly developed countries, for the use of less developed countries, is alleged to be a self-serving imposition.
Internal discourse
Within the open educational resources movement, OER is an essentially contested and active concept. One example of this can be found in the conceptions of gratis versus libre knowledge in the discourse about massive open online courses, which may offer free courses but charge for end-of-course awards or course-verification certificates from commercial entities. A second example of essentially contested ideas in OER can be found in the usage of different OER logos, which can be interpreted as indicating more or less allegiance to the notion of OER as a global movement.
See also
- Distance education
- Free education
- Free High School Science Texts
- George Siemens
- IMS Global
- Internet Archive
- Khan Academy
- Libre knowledge
- Massive open online courses (MOOCs)
- MIT OpenCourseWare
- Open access
- Open content
- Open Library
- Open source curriculum
- Project Gutenberg
- Question and Test Interoperability specification
- Stephen Downes
- Virginia Open Education Foundation
- Wikipedia itself!
- Writing Commons
"Higher Education Reimagined With Online Courseware". New York Times (New York). Retrieved 2010-12-19. - Johnstone, Sally M. (2005). "Open Educational Resources Serve the World". Educause Quarterly 28 (3). Retrieved 2010-11-01. - Wiley, David (2006-02-06), Expert Meeting on Open Educational Resources, Centre for Educational Research and Innovation, retrieved 2010-12-03 - "FOSS solutions for OER - summary report". Unesco. 2009-05-28. Retrieved 2011-02-20. - Hylén, Jan (2007). Giving Knowledge for Free: The Emergence of Open Educational Resources. Paris, France: OECD Publishing. doi:10.1787/9789264032125-en. Retrieved 2010-12-03. - Grossman, Lev (1998-07-18). "New Free License to Cover Content Online". Netly News. Archived from the original on 2000-06-19. Retrieved 2010-12-27. - Wiley, David (1998). "Open Content". OpenContent.org. Retrieved 2010-01-12. - Guttenplan, D. D. (2010-11-01). "For Exposure, Universities Put Courses on the Web". New York Times (New York). Retrieved 2010-12-19. - Ticoll, David (2003-09-04). "MIT initiative could revolutionize learning". The Globe and Mail (Toronto). Archived from the original on 2003-09-20. Retrieved 2010-12-20. - "Open Educational Resources". CERI. Retrieved 2011-01-02. - Giving Knowledge for Free: The Emergence of Open Educational Resources. Paris, France: OECD Publishing. 2007. doi:10.1787/9789264032125-en. Retrieved 2010-12-03. - Deacon, Andrew; Catherine Wynsculley (2009). "Educators and the Cape Town Open Learning Declaration: Rhetorically reducing distance". International Journal of Education and Development using ICT 5 (5). Retrieved 2010-12-27. - "The Cape Town Open Education Declaration". Cape Town Declaration. 2007. Retrieved 2010-12-27. - Atkins, Daniel E.; John Seely Brown, Allen L. Hammond (2007-02). "A Review of the Open Educational Resources (OER) Movement: Achievements, Challenges, and New Opportunities". Menlo Park, CA: The William and Flora Hewlett Foundation. p. 13. Retrieved 2010-12-03. - Hylén, Jan (2007). Giving Knowledge for Free: The Emergence of Open Educational Resources. Paris, France: OECD Publishing. p. 30. doi:10.1787/9789264032125-en. Retrieved 2010-12-03. - Giving Knowledge for Free: The Emergence of Open Educational Resources. Centre for Educational Research and Innovation (CERI), OECD. 2007. Retrieved 24 April 2013. - "Introducing OER Africa". South African Institute for Distance Education. - "Trend Report: Open Educational Resources 2013". SURF. Open Educational Resources Special Interest Group (SIG OER). March 2013. - "Open educational resources programme - phase 1". - "Open educational resources programme - phase 2". - "OER Policy Registry". Creative Commons. Retrieved 24 April 2013. - Swain, Harriet (2009-11-10). "Any student, any subject, anywhere". The Guardian (London). Retrieved 2010-12-19. - "Open educational resources programme - phase 2". JISC. 2010. Retrieved 2010-12-03. - "Open educational resources programme - phase 1". JISC. 2009. Retrieved 2010-12-03. - "Initiative Background". Taking OER beyond the OER Community. 2009. Retrieved 2011-01-01. - Communiqué: The New Dynamics of Higher Education and Research for Societal Change and Development, UNESCO World Conference on Higher Education, 2009 - "UNESCO Paris OER Declaration 2012". 2012. Retrieved 2012-06-27. - Attwood, Rebecca (2009-09-24). "Get it out in the open". Times Higher Education (London). Retrieved 2010-12-18. - "What is WikiEducator? (October 2006)". COL. Retrieved 2010-12-21. - "The Purpose of Learning for Content - outcomes and results". 
Wikieducator. 2010-02-10. Retrieved 2010-12-28. - "About.""Writing Commons". CC BY-NC-ND 3.0. Retrieved 11 February 2013. - Anders, Abram (November 9, 2012). "Experimenting with MOOCs: Network-based Communities of Practice.". Great Plains Alliance for Computers and Writing Conference. Mankato, MN. Text "https://cultivatingchange.wp.d.umn.edu/community/ccmooc-experimenting-with-moocs-at-gpacw/ " ignored (help); - "About.""Cultivating Change Community". CC BY-NC 3.0. Retrieved 11 February 2013. - OER ECONOMICS "MU OER PORTAL". Wikieducator. - Thibault, Joseph. "241 OER Courses with Assessments in Moodle: How Saylor.org has created one of the largest Free and Open Course Initiatives on the web". Moodlenews.com. Retrieved 30 January 2012. - "Saylor Foundation to Launch Multi-Million Dollar Open Textbook Challenge! | College Open Textbooks Blog". Collegeopentextbooks.org. 2011-08-09. Retrieved 2011-10-21. - PM opens e-content repository - http://ceibal.org.uy/index.php?option=com_content&view=article&id=486:licitacion-publica-internacional-no-01522011-seleccion-de-proveedor-para-la-adquisicion-de-plataforma-educativa-on-line-yo-recursos-educativos-digitales-para-educacion-primaria-y-media-uruguaya&catid=51:convocatorias-vigentes&Itemid=82 (PDF in Spanish) - Mello, Jonathas. "Global OER Logo". UNESCO. United Nations Educational, Scientific and Cultural Organization. Retrieved 16 April 2013. - Mulder, Jorrit (2008). Knowledge Dissemination in Sub-Saharan Africa: What Role for Open Educational Resources (OER)?. Amsterdam: University of Amsterdam. p. 14. - "UNESCO and COL promote wider use of OERs". International Council for Open and Distance Education. 2010-06-24. Retrieved 2011-01-01. - Mulder, Jorrit (2008). Knowledge Dissemination in Sub-Saharan Africa: What Role for Open Educational Resources (OER)?. Amsterdam: University of Amsterdam. pp. 58–67. Retrieved 2011-01-01. - Scanlon, Eileen (February/March 2012). "Digital futures: Changes in scholarship, open educational resources and the inevitability of interdisciplinarity". Arts and Humanities in Higher Education 11: 177–184. doi:10.1177/1474022211429279. - "OER: Articles, Books, Presentations and Seminars". EduCause.edu. Educause. Retrieved 23 April 2013. - Rivard, Ry. "Coursera begins to make money". InsideHigherEd.com. Inside Higher Ed. Retrieved 25 April 2013. - Carey, Kevin. "The Brave New World of College Branding". Chronicle.com. The Chronicle of Higher Education. Retrieved 25 April 2013. - Inamorato, Andreia. "George Siemens' interview on MOOCs and Open Education". YouTube.com. Open Content Online blog. Retrieved 13 May 2013. - "Open Library". Open Library. One web page for every book. Internet Archive. Retrieved 3 April 2013. - Downes, Stephen (2011). Free Learning: Essays on Open Educational Resources and Copyright. National Research Council Canada. - Downes, Stephen. "The Role of Open Educational Resources in Personal Learning". YouTube.com. Universitat Oberta de Catalunya (UoC). Retrieved 13 May 2013.
http://en.wikipedia.org/wiki/Open_educational_resources
Students will work in pairs to examine and discuss the primary source probate record inventory of goods belonging to Sarah Green (d. 1757) to improve comprehension of life in Colonial Virginia. After sharing with others, students will answer questions predicting what they think Sarah Green’s life was like. Students will listen to (or read a transcript from) an interview with a colonial historian and compare their predictions with the historian’s conclusions. By the eighteenth century, the colony of Virginia had grown into a society with distinctions between race, class, and gender. Like most southern colonies, Virginia had a slave-based, planter-dominated society. Even though only a relative few, about five percent, of southern white landowners were planters (with 20-plus slaves), they were the role models for other aspiring white men. Influenced by English law, men were the sole possessors of the family’s wealth, but women could be the beneficiaries of shared wealth upon the death of their husbands. Legal documents, such as tax records or probate inventories, often provide our only information about the status of a woman and the lifestyles of ordinary people during the colonial and early national periods. Such listings of household possessions, from a time when household goods were not widely mass produced, can illuminate a fair amount about a person’s or a family's routines, rituals, and social relations, as well as about a region's economy and its connections to larger markets. The students will: VS.1 The student will develop skills for historical and geographical analysis including the ability to VS.4 The student will demonstrate knowledge of life in the Virginia colony by (Downloads are in .pdf format) Hook (8-10 minutes) (Option 1): Explain to students that today the lesson involves the things we can learn about people based on their possessions—the things they own—“What People’s Stuff Can Tell You” (this can be made into a sign or written as the day’s objective). Ask students to “brainstorm.” Give students two minutes to write down everything they own—their stuff. At the end of two minutes ask them to partner with someone (Clock Buddies, elbow partners, across the table, etc.) and share their lists. As they look at the other person’s list they should think about what the list tells them about the person; what questions they would have based on the list; and how the knowledge they have about the person already helps them understand the list. Give students two to three minutes to study their partner’s list. At the end of the time, discuss what they learned, questions, and background. (Option 2): If available, bring in several items on the inventory list that would not be familiar to all students: pestle and mortar, Dutch oven, damask napkins, dog irons (andirons), etc. Pass them around, asking students to think about what the objects might have been used for 250 years ago. Share ideas in an open discussion. Explain to students that today the lesson involves the things we can learn about people based on their possessions—the things they own—“What People’s Stuff Can Tell You” (this can be made into a sign or written as the day’s objective). Ask what the items you passed around might tell them about the person who owned them. 1. Inventory Analysis (10-12 minutes) 2. Group Discussion (12-15 minutes) Draw three columns on the board (or chart packs or SmartBoard): Headings: Notice, Questions, Historical Background.
Ask students to share, and write on the board: (Break here for block classes) 3. Extended Activity (15-25 minutes: complete as homework for regular period class) Explain that students are now going to “do history” (working on their own). Historians take primary source information like this and draw conclusions about people or a time in history. Hook (3-5 minutes) (Option 1): Remind students that the lesson the day before, “What People’s Stuff Can Tell You,” started with students making lists of “stuff” they own and then sharing with partners. Review some of the things partners learned by looking at another student’s list. (Call on some who did not share on Day One.) (Option 2): Display a written list of “Stuff I Own.” It can be your stuff, your spouse’s stuff, or a list from someone in the class. Brainstorm what the list tells you about the person. Refer to the Day One Activity Questions; explain that students can use the information to draw conclusions or inferences. Give the day’s objective: to compare their conclusions about Sarah Green with those of a professional historian. 1. Student Discussion: Student conclusions about Sarah Green (8-12 minutes) 2. Historian's Conclusions (35-40 minutes) Summarize the interview with Dr. Rosemarie Zagarri of George Mason University on Sarah Green's probate record. Explain that she is a historian focusing on the colonial period of American history. (Break here for block classes) (DAY THREE FOR REGULAR PERIOD CLASSES — review to this point) 3. Compare and Contrast (10-15 minutes) 4. Extension (10-15 minutes) (Option 1): Distribute the floor plan of Gunston Hall and explain, reviewing the importance of George Mason. Use the large entry as the “hall.” Students label where they would find some items from Sarah Green’s inventory if she lived there. (Option 2): Project the Virtual Tour of the Gunston Hall Plantation from the Gunston Hall Plantation website: http://www.gunstonhall.org/mansion/virtual_tour.html 5. Closing (3-5 minutes): Either in small groups or as a class, have each student share, based on the lessons, “What People’s Stuff Can Tell You.” Depending on time, work on the “Conclusions” column of the chart can be finished at home after Day One. The “floor plan” extension option can be completed at home, also. Assessment should be ongoing based on participation and progress on the worksheet. The Venn diagram “compare/contrast” activity is the culminating assessment. Special needs students should work with supportive partners, be given assistance as needed with the worksheet, and be allowed to draw items in the final Venn diagram assessment. G&T students should be directed to the Gunston Hall website for exploration. They could complete a second worksheet using another of the Virginia probate inventories on the website. Options can be given for the Venn diagram assessment. Resources: Barbara Clark Smith, "Analyzing an 1804 Inventory," History Matters: The U.S. Survey Course on the Web, http://historymatters.gmu.edu/mse/sia/inventory.htm, February 2002. This is an excellent example of how to use an inventory as a teaching tool. Gunston Hall Plantation Probate Inventory Databases, Green59, http://chnm.gmu.edu/probateinventory/pdfs/green59.pdf. This is the actual copy of the inventory. Gunston Hall Plantation House Tour, http://www.gunstonhall.org/mansion/virtual_tour.html. This on-line tour of George Mason’s home is an excellent extension for the lesson or can be incorporated by showing the names of rooms. Rosemarie Zagarri, Sarah Green Probate Transcript.
This discussion with George Mason University's expert on the colonial period answers questions and provides insight into the Green inventory.
http://chnm.gmu.edu/probateinventory/lessons/lesson1.php
Is Superman Really All That Super? Critically Exploring Superheroes
Grades: 3–5 | Lesson Plan Type: Standard Lesson | Estimated Time: Four 60-minute sessions | Long Beach, California
- Access prior knowledge about character development and traits and practice applying that knowledge by defining character traits in superheroes
- Practice both comparing and using a graphic organizer by looking at and diagramming the character traits of superheroes in popular culture texts and books
- Engage in critical analysis by exploring what these character traits mean from multiple perspectives
Session 1
1. Review the concept of character traits using several examples of characters from a book you have read together recently. You may use some of the following questions to guide this review; make sure you ask students to provide evidence from the text to support their answers:
2. Ask students to define the term superhero and to explain what kinds of traits these characters usually have. List the adjectives students use to describe superheroes on a sheet of chart paper. Remind students that these traits do not all have to be positive. For example, Superman can't withstand kryptonite; other superheroes may have personality quirks, just like real people. Keep this list posted in your classroom during Sessions 2 and 3.
3. Have students share examples of superheroes they like or are familiar with. Write students' examples on a sheet of chart paper that you post in your classroom, being sure to note where the superheroes come from (e.g., are they characters in a video game? a movie? a comic book? all three?). If students choose a superhero from a children's book, list these on a separate sheet of paper.
4. Using the list you have just created, introduce the concept of popular culture texts to students. Explain that a text is not always a book; it can be something that they read, watch, listen to, or play. It can be a book, a comic book, a movie, a TV show, or song lyrics. Video games can also be considered texts because players read the directions given on the screen and watch the images while playing. A popular culture text is a text that many people currently like and enjoy reading, watching, or playing.
Homework (Due at the beginning of Session 2): Students should choose two or three of their favorite superheroes from popular culture texts. If they have game cards or comic books about these superheroes, they can bring them in if your school policy allows students to do so.
Note: In between Sessions 1 and 2, you may choose to do some research on the characters that your students mention, especially if you are unfamiliar with them. You will most likely find websites for many of them that describe the characters and the context that they operate in.
Session 2
1. Distribute the My Favorite Superheroes handout. Ask students to list their favorite superheroes and then use one or two adjectives to describe the traits of each superhero.
You may choose to allow students to see the list of adjectives you created during Session 1, or if you prefer, you can remind students of the types of words it contained, but cover it up and expect them to draw on their own memories of the words when completing this activity.
2. Have students work in groups of five or six to share their list of superheroes and character traits. Each group should identify which two of their superheroes are the most unusual. They can use the following questions to guide their discussion:
3. Gather students back together and ask each group to share the two superheroes they have selected. List these characters on the Our Favorite Superheroes transparency and ask students to help you fill in the additional information about each one.
4. Talk about the superheroes you have listed, using the following questions to guide your discussion:
Session 3
1. Ask students if they can think of any examples of superheroes from books. (If students listed some during Session 1, you can post this list and add to it.)
2. Share the five books you have collected (see Preparation, Step 3). For each book, read the title, show the front and back cover, and ask students to predict what the book might be about. Then page through each book, show the illustrations, and provide an overview of the plot.
3. Ask students to get into their groups from Session 2 and give each group a copy of the Traits of Superheroes From Books chart. Ask one student from each group to read the book aloud while the rest of the group listens and takes notes. Tell students that while they are listening, they should decide which character or characters in the story are superheroes and list some of their character traits. After the read-aloud, students should share their notes. One student from the group should record the group findings on the Traits of Superheroes From Books chart.
4. Gather students together and ask each group to share its superheroes and character traits. Write down the list on the Traits of Superheroes From Books transparency.
5. Talk about the superheroes you have listed. You may use some of the following questions to guide the discussion:
Session 4
Note: If necessary, this session will take place in the computer lab.
1. Using an LCD projector, if you have one available, model how to use the Interactive Venn Diagram tool to compare one of the characters from the Our Favorite Superheroes transparency and the Traits of Superheroes From Books transparency. Use some of the following questions to guide the discussion:
2. Have students get into their groups from Sessions 2 and 3 and then split into pairs or groups of three. Each pair should choose one of the superheroes they picked during Session 2 and compare it with the superhero in the book they read during Session 3.
3. Each small group of students should complete the interactive Venn diagram for their two superheroes. They should print their diagrams when they are finished.
4. Students should share their diagrams with the entire class. Post the diagrams around the room as each pair or group finishes presenting it. Have students read the posted diagrams and take notes about what they observe.
5. Have students share what they noticed about the Venn diagrams, listing their observations on a piece of chart paper or the board. After about five or ten minutes, introduce the concept of perspective, saying something like: We all understand things differently because we all have our own perspectives, or points of view.
For example, here in the United States, wolves are often portrayed as evil, like the Big Bad Wolf. That perspective comes partly from our history: During the American westward expansion, pioneers were rightly afraid of wolves. However, in some cultures, wolves are seen as strong and mystical creatures. These cultures have a different perspective of wolves. Your perspective comes from your history, your past experiences, your own beliefs and thoughts.
6. After you have discussed perspective with students for a few minutes, ask them to think about how it relates to superheroes. How does our point of view, or perspective, influence our choice of favorite superheroes?
7. Distribute the Guiding Questions for Exploring Superheroes handout and explain that each question is intended to guide students to look at the character traits of superheroes in a more critical way. For example, explain that the question, "Who do you think would like this superhero?" helps examine the superhero from the perspective of the audience (e.g., people who are interested in strength, action, and saving the world will like Superman). The question "Who would not like this superhero?" helps us see that some groups bring a different perspective (e.g., Superman is less likely to appeal to those who do not like violence or to girls who don't identify with this male character).
8. After you have explained what each question means, students should get into their larger groups and choose two superheroes from their My Favorite Superheroes handouts and another two from the Traits of Superheroes From Books handout. Students in the group should choose two perspectives from the Guiding Questions for Exploring Superheroes handout to explore the four chosen superheroes. In helping students choose their perspectives, you might say something like: Superhero X is a tall man with blond hair and blue eyes. He is very strong and very fast and uses his strength and speed to save the day. Now let's say this story was set in Japan and was created by a Japanese writer with a Japanese perspective. Would X still be a superhero? Would he look the same? Talk the same? Act the same?
9. As a group, students should fill out the first column of the Exploring Superheroes Chart. Students will then determine which group members will explore the first perspective selected and which group members will explore the second perspective selected. Group members will work independently to complete their columns of the chart. Once they are completed, students exploring the same perspective should share their results and then finally the entire group should share their charts. The group should complete a final chart to share with the class.
10. After the groups share their charts, encourage discussions of the different points of view. For example, one group might have focused on Spider-Man, saying that he is strong and powerful, while another group may have said that he is witty and smart. Discussion of such differences will help students understand how perspective or point of view can change how people view the same person or thing.
- Ask students to create a story about one of the superheroes from their My Favorite Superheroes handout or Traits of Superheroes From Books handout. This story should be written with a consideration of multiple perspectives. After selecting a character and considering the different perspectives, ask each student to choose one perspective from which to write the story.
For example, if a student's superhero was male, he or she may consider writing the character as female; or if the hero was from the present, he or she may consider setting the story a hundred years ago. - Have students create a picture of a superhero who would appear in a children's book, a comic book, a graphic novel, an anime (Japanese animation), or a video game. Students should focus on the visual characteristics of the superhero. Students can then share their superhero pictures in a group. - Observe students’ participation in group and whole-class activities throughout this lesson. Listen to their comments and responses shared in the discussions. - Read the completed My Favorite Superheroes and Traits of Superheroes From Books handouts to see if students are able to identify character traits of superheroes portrayed in media texts and in children’s books. - Read the completed Guiding Questions for Exploring Superheroes handout and Exploring Superheroes Chart to see if students are able to examine their superheroes from multiple perspectives.
http://www.readwritethink.org/classroom-resources/lesson-plans/superman-really-that-super-990.html?tab=4
Children with hearing loss
Colds, infections, allergies and flu can temporarily reduce hearing. However, some children have permanent hearing loss. This may be due to serious illness (such as meningitis), genetic problems (such as Usher's syndrome) or untreated ear problems. In many cases, the cause of a hearing loss is not known. There is a range of professionals who can help children with hearing loss develop their skills and talents. These include your local family doctor, paediatrician, ear nose and throat specialist, audiologist, speech pathologist and teachers. Staff at local community health centres and early intervention services for children with additional needs can also provide help. Sometimes, special teaching methods may be needed to develop communication skills. There may be local centres and services especially for children who are deaf or hearing impaired. Consult your doctor or an audiologist if there appears to be a sudden change in your child's hearing. Some conditions require prompt treatment to prevent permanent damage. What is hearing loss (impairment)? Having a hearing loss (impairment) means that a child has lost some hearing in one or both ears. Hearing impairments are described according to how much hearing has been lost. Loss is usually explained as mild, moderate, moderate to severe, severe or profound. - Mild hearing loss (impairment): The child can hear normal conversation but may not hear whispers or soft sounds. - Moderate hearing loss (impairment): The child does not hear normal speech. However, he will hear if a person speaks in a loud voice. A moderate hearing impairment will affect a child's language and/or speech development because not all words and sounds are heard clearly. - Moderate to severe hearing loss (impairment): Speech must be very loud to be heard. Even when speech is loud, not all words and sounds will be heard clearly. Speech and language development will be affected and the child will benefit from specialised professional help. - Severe hearing loss (impairment): The child will not hear normal conversation and will only be able to pick out a few loud sounds and words. Speech and language development will be affected and specialised professional help will be needed. - Profound hearing loss (impairment): No sounds can be heard without the help of a hearing aid. In some cases, a cochlear implant will be used to increase the amount of sound a child can hear. Speech and language development will be affected and professional assistance will be needed. Deafness is another name for profound hearing loss. However, people who call themselves 'Deaf' are usually identifying themselves as members of the Deaf Community. This means that they use Auslan (Australian Sign Language) as their first language. Not all people with a severe or profound hearing loss use Auslan. Types of hearing loss (impairment): Professionals usually talk about three different types of hearing loss. Conductive hearing loss happens when there is some block to the transfer of sound from the outer ear to the inner ear (cochlea). In some types of conductive loss, hearing levels may change gradually over time or they may change from day to day.
Middle ear infections cause conductive hearing loss. Sensorineural hearing loss happens when there is damage to the inner ear (cochlea) or to the auditory (hearing) nerve. This type of hearing impairment may affect: - how loud the sound seems - how clear the sound seems. Combined conductive and sensorineural hearing loss (sometimes called mixed loss) happens when sound is not transferred from the outer to the inner ear (cochlea) and there is also damage to the inner ear or auditory (hearing) nerve. In addition, professionals talk about whether the hearing loss is in one (unilateral) or both ears (bilateral). There are also disorders that involve listening and understanding, such as Central Auditory Processing (CAP) Disorder. CAP Disorder is also referred to as Auditory Processing Disorder. Causes of hearing loss (impairment): There are many causes of hearing loss. These may include: - repeated middle ear infections - holes in the ear drum - disorders that damage nerves involved in hearing (degenerative disorders) - an inherited condition or genetic cause, such as Usher's syndrome - infections that occur during pregnancy, such as rubella (German measles) and toxoplasmosis - infections after birth, such as meningitis and mumps - exposure to very loud noise over long periods - abnormalities of the head and face that affect the structure of the ear - premature birth, especially when the birth weight is less than 1500 grams - head injury, including loss of consciousness or skull fracture. However, in some cases it may not be possible to identify the cause of deafness or hearing loss. - The treatment for hearing loss depends on the reason for the impairment, and the severity of the impairment. - Most conductive hearing losses can be improved by medication or surgery. - Sensorineural hearing loss usually cannot be treated. Different types of technology are used to help children with permanent hearing loss. Hearing aids and cochlear implants are used most often to improve hearing in children with permanent disabling hearing loss. A hearing aid is a device that makes sounds louder. Hearing aids are fitted to match the hearing loss of your child. Hearing aids will increase your child's hearing but will not make hearing normal. Different types of hearing aids are named according to where they are worn. Hearing aids can be: - in-the-ear (ITE) - in-the-canal (ITC); the canal is the passage between the outer and middle ear - behind-the-ear (BTE) - body-level hearing aids that clip on to a belt or clothing - hearing aids that are built onto the ear pieces of spectacles (eye glasses). In school classrooms, FM soundfield amplification systems can be used to make the teacher's voice louder for students with a hearing loss or who are deaf. A cochlear implant is sometimes called a bionic ear because it uses technology to allow the person to hear. A cochlear implant is designed to stimulate the surviving nerve cells in the inner ear (cochlea). This allows messages about sound to be sent from the inner ear to the brain. Some parts of the cochlear implant (the speech processors) are worn in a pocket, belt pouch or body harness. Other parts are surgically fitted into the head and inner ear. People with hearing impairments may communicate in different ways. - Many use speech as their main method of communication. However, they usually rely on hearing aids or cochlear implants to help them do this. - Other people use a type of signed or written language. - Some people use a combination of signing and talking known as Total Communication.
- In Australia, people with hearing impairment may learn Auslan, the language of Australia's Deaf Community. Auslan is the sign language that has developed in the Australian Deaf Community. Auslan is a unique language that uses hand signs, body movements, facial expressions, mime and gestures. It is not another form of English. Methods to support use of English: There are different ways that people with hearing impairments can communicate using English. - Signed English is a way of communicating in English using signs for each English word. Often, a person speaks in English and signs the same message while she speaks. This is an artificial system used in schools to teach English to hearing impaired children. - Fingerspelling uses set hand positions for each letter of the English alphabet. This way, words can be spelled out using the fingers and hands. In Australia, the fingerspelling alphabet uses both hands. In other countries, like the United States, this alphabet requires only one hand. - Lip reading or speech reading is used by some people with hearing impairment when speaking with people who do not sign, fingerspell, etc. Usually, a person will watch facial expressions and body language as well as lip movements. People with hearing impairments can use a special telephone called a teletypewriter (TTY). A TTY is a typewriter or computer that is connected to a telephone or modem. This means that people with hearing impairments can use the telephone by exchanging typed messages with other people who have TTY connections. Sometimes you will see a TTY telephone number listed for some services. When people with a hearing impairment wish to contact someone who does not have a TTY connection, they can use: - a fax - the Short Message Service (SMS) - text messages sent between mobile phones - the National Relay Service. The National Relay Service can help by providing: - Voice Carry Over (VCO). VCO is for people with hearing impairment who are able to use their voice to speak but cannot properly hear what is said to them over the telephone. When Voice Carry Over is used, the relay officer at the National Relay Service translates information spoken to the person with hearing impairment into typed messages and sends it to their teletypewriter. - Hearing Carry Over (HCO). HCO is for people who can hear but who cannot speak clearly enough to communicate over the telephone. Messages from the person with speech impairment are typed and sent to a relay officer from the National Relay Service. The relay officer then reads the message for the other caller. Dual sensory loss or deafblindness: - The combination of hearing and vision loss is often referred to as 'dual sensory loss'. - Most children will have some degree of hearing or sight. Very few are both profoundly deaf and totally blind. - The term 'Deafblind' is used for children who are totally blind and profoundly deaf. Some people also use this term when speaking about children who have both a significant hearing loss and a significant vision loss but may have some degree of hearing or vision. - Individuals will have different needs depending upon the combination of hearing and vision impairments affecting them. - See a doctor promptly if you feel that your child is not responding to sounds, and ask to be referred for a hearing test. - In some places, you can visit your doctor or local health clinic for a hearing screening. - There may be an audiology service or clinic in your local area. Many audiologists do not require a referral from a doctor.
- However, if you are not sure about your choices, your doctor or the staff from local community health services can advise you. They can help you make an appointment with an audiologist (hearing specialist) in your area. - See a doctor if your child complains of pain in the ears. (Young children may cry and pull on their ears if they are not yet able to talk.) - If your child wears hearing aids or has a cochlear implant, keep the equipment clean and cared for. Teach the child to do this as well. - Even if your child has a recognised hearing impairment, he will still need regular hearing checks, at least once a year. - Some parents find it helpful to join support groups. Some groups are listed at the end of this topic. - At preschool or school, teachers will think about seating and classroom acoustics. - Teachers will also think about special needs for sound amplification, communication devices and changes to their teaching. - Depending on the needs of your child, teachers may get extra help in the classroom or advice from visiting specialist teachers. - Parents can assist teachers by giving them all necessary and up-to-date information about their child's hearing. There are different ways to teach communication to children with hearing impairments. These include: - Bilingual/bicultural approach using Auslan: In this approach, children are taught the two languages of English and Auslan. They learn the different cultures and concepts that are part of each language. - Oral/aural approach: This approach concentrates on developing spoken language in the child with hearing impairment. The child is able to listen to and understand what others say by using hearing aids or cochlear implants. Usually, this method requires intense teaching on a one-to-one basis, especially if the hearing loss is severe. Auditory/Verbal Therapy is one method of oral/aural teaching that is used to help children with hearing impairment learn listening skills. - Total communication: an educational philosophy that uses all types of communication including signed language and speech. The approaches selected will depend upon your child's degree of hearing loss, personality, age and general abilities. Better Start initiative: The Better Start for Children with Disability (Better Start) initiative aims to assist eligible children with developmental disabilities to access funding for early intervention services. Australian Government Department of Families, Housing, Community Services and Indigenous Affairs (FaHCSIA) - Child and Youth Health Hearing Assessment Centre (clinics are held at South Tce, Adelaide, Noarlunga Hospital, Elizabeth Vale and major country centres) Address: 295 South Terrace, Adelaide, 5000 Tel. 8303 1530 (Country callers - 1300 364 100) For a more comprehensive list of resources in South Australia, and books about hearing loss, see the topic Children with hearing loss - resources. Prepared in collaboration with the Department of Education, Training and Employment Ministerial Advisory Committee on Students with Disabilities. The information on this site should not be used as an alternative to professional care. If you have a particular problem, see a doctor, or ring the Parent Helpline on 1300 364 100 (local call cost from anywhere in South Australia). This topic may use 'he' and 'she' in turn - please change to suit your child's sex.
http://www.cyh.sa.gov.au/HealthTopics/HealthTopicDetails.aspx?p=114&np=306&id=1878
Evidence of Vitrified Stonework in the Inca Vestiges of Peru Vitrified stones are simply stones that have been melted to a point where they form a glass or glaze. There is much debate in archaeological circles over the ancient examples under study for two reasons. Firstly, few cases are known to have been tested and even if they have been, there are many questions over how they were made. Glassy rocks form naturally under conditions of high temperature and pressures found in and around volcanoes. Glass or glazes are traditionally created using a furnace. Furnace or kiln examples are found on everyday objects such as glassware and ceramics. Ceramic glazes are created by pasting certain finely crushed stones, sometimes with tinctures, onto fired pots and plates. The whole is then fired to temperatures usually in excess of 1000 degrees centigrade. Many of the ancient vitrified examples are found on objects so large that they cannot be placed in a furnace. Previous analysis concluded that the temperatures needed to produce the vitrification were up to 1,100°C. There are confirmed cases from Scotland, Ireland, France and Germany. These are mostly forts and buildings with vitrified ramparts. This fusion is often uneven throughout the various forts and even on a single wall. Some stones are only partially melted, whilst in others their adjoining edges are fused firmly together. In many instances, pieces of rock are enveloped in a glassy enamel-like coating, which binds them into a whole. At times, the entire length of the wall presents one solid mass of vitreous substance. There are many more examples from Malta, Egypt, Iraq, Sudan, South East Asia and others that are speculated to fall into the grouping. However, these have not all been subjected to scientific testing like the European cases. They simply appear to be glazed finishes on equally large objects or on walls that are impossible to fire conventionally. There has been much discussion about the Inca vestiges in the Peruvian Andes. It mostly revolves around whether the stones are vitrified or not. This article focuses on these Peruvian cases where there are indications of heat treatment. THE PERUVIAN CASE STUDY The vitrified examples under study come from famous Peruvian sites, in South America. Without testing, the debate is open to claims of unusual polishing techniques, natural degradation, lava flows and many other odd explanations. The analysis below eliminates some of these ideas. The vitrified stones of Peru were first brought to popular attention by Erich von Daniken in the 1970s. He noted the vitrification at Sacsayhuaman in his book Chariots of the Gods. Peruvian Alfredo Gamarra had identified this vitrification earlier. The identification and cataloging of these intriguing stones has been carried on by Alfredo’s son Jesus Gamara, and Jan Peter de Jong. In Sacsayhuaman, there are many other indications of the use of heat. Strange marks on the stones like the one pictured can be found; shiny, completely smooth and with another color to the rest of the rock: Vitrification appears on different kinds of stones and structures, as the photos show. It is found on the perfectly fitted walls with irregular blocks. It is also observed on walls made with regular oblong blocks. It has been spotted on mountainsides, caves and rocks in situ. The location arrangements vary as well. Some sites are surrounded or overbuilt by walls whilst others have single exposed isolated stones. 
There seems to have been some very adaptable ancient technology at work. A list of vestiges where stonework seems to have been treated with this technology includes: in Cusco, the walls of Koricancha, Loreto Street, Sacsayhuaman, Kenko, Tetecaca, Templo de la Luna (or Amaru Machay), Zona X, Tambo Machay, Puca Pucara, Pisac, Ollantaytambo, Chinchero, Machu Picchu, Raqchi, and in Bolivia, Tiahuanaco. Archaeologists assume that the perfectly fitted stones are the most developed style of the Incas. Regardless, there is no explanation of the shiny surfaces that can be observed. These often appear on the borders where the stones join perfectly. It is normally assumed that these parts were simply polished by the Incas. During many visits to the vestiges mentioned, Jesus Gamarra and Jan Peter de Jong have examined these stones with highly reflective surfaces. They have captured many of them on video. Through personal observations and analysis of the video material, they have concluded that something other than polishing has occurred. Identifying vitrified stones: Many cases display some or all of the qualities mentioned below. The vitrified spots show discoloration and smoothness around the particular areas. It clearly looks as if the stone has been melted just in those spots. A simple flashlight test was developed to help identify the layers of glaze or glass. Filming was carried out at night with a flashlight beam passing through the glaze. This shows the reflection and diffraction of the light as it passes through the surface. Sacsayhuaman, Kenko and Loreto Street were all filmed at night using a flashlight or nocturnal illumination to capture the effect. The following traits help to identify vitrified stones: - The melted effect is obvious - Reflection is high - The layer refracts, diffracts and diffuses light - A separate vitrified layer is present on the surface - Damaged layers show a ´film´ on the stone - The glazed layer is independent of rock type - The surface is smooth to the touch even if the surface is irregular - There is often associated heat discoloration surrounding the glaze. The diffraction effect can be seen in the video of ‘the Inca Throne’ at Sacsayhuaman. The rainbow effect is clearly captured by the camera. This is directly linked to the light passing through the glass layer and splitting into its constituent parts. After noticing this effect, it was also detected on videos of other vitrified stones. This can be viewed in this short video: http://www.youtube.com/watch?v=ae_8ri2fiwI, and on the DVD that will be available shortly. The DVD ”The Cosmogony of the 3 Worlds” shows an overview of this phenomenon in the chapter on Vitrified Stones. VITRIFIED STONE SAMPLE ANALYSIS A small sample from the Peruvian site called Tetecaca was collected and then analyzed by the University of Utrecht, Holland. The sample is from a rock outcrop above Cuzco. Inside the cave there is an altar formed from rectangular shapes made of the rock. Several lines in the rock have a shiny surface, as if they were branded into the rock. They run in straight lines along the wall of the cave. The walls are cut out with curved and rectangular forms in them. These are man-made structures, which rules out natural phenomena. The photos show the site: pictures from inside the cave, walls with long, straight reflecting lines, and an altar structure. Below is a picture of the spot where the sample was found. The white line indicates where the thin section was made.
The smooth layer in the picture is about 2 cm wide and 1.3 cm deep. The sample was carefully cut into two parts and a thin section was taken for analysis. RESULTS & CONCLUSION The microscope photograph of the sample shows two distinct regions: the surface layer and the body of the stone. There is a less distinct intermediate area between the two that seems to transition from stone body to surface layer. Samples from all three regions were subjected to detailed analysis. Photo 1: The vitrified surface of the stone (the line at the bottom is 21 micrometres). Composition of the surface layer. Note: the full set of photos, spectra, tables and text can be found in the full article. The body of the stone is limestone, which is not surprising. However, the vitrified surface of the stone shows a different spectrum of elements (see spectra above). The glaring difference is that silicon is present in much higher concentrations. The trace elements aluminum and magnesium are also significantly higher than in the body. Oxygen is also present in double the quantities. Calcium and carbon are much lower than in the body sample. The intermediate regions show a gradation between the surface and body of the stone. This implies that either the surface layer was somehow ground and mixed with the stone body, or the body limestone merged or melted with the surface layer. Alternatively, the limestone constituents could have been part of the added surface layer. The surface shows some similarity to wollastonite, which forms when impure limestone is subjected to high temperatures and pressures. However, the impurities in the surface are not present in the stone body. This indicates that the compounds in the surface layer were added. It appears they were applied and then treated with heat. This option does have some merits, but it is moving towards the techniques of the ceramist. Whilst the spectra do not show explicitly that the surface is vitrified, the layer does have the composition, sheen, hardness and glassy texture of a glaze. It is very likely that the glaze was made from a ceramic paste applied to the limestone surface. This is reinforced by a comparison to ancient glazed ceramic pottery shards. If an antique ceramic is compared to the spectrum of the glaze above, there is little to separate the two. In the paper X-Ray Techniques Applied to Surface Paintings of Ceramic Pottery Pieces From Aguada Culture (Catamarca, Argentina) there are several comparable results. Ignoring the gold leaf and colorants, the key constituents silicon, aluminum, magnesium, carbon and oxygen are present in the same ratios. The glazed results strongly indicate that heat was used to produce the finish. This raises several questions. Even if a layer of a ceramic paste was applied, how was it heated to the requisite temperatures without cracking the limestone? How was the heat produced to treat these structures? Whilst this sample is from a cave, there are similar structures outside with the same kind of glaze. The same conclusion may not necessarily be applied to these other cases. Analysis is needed, but the similarities between the investigated sample and other photographed cases are clear. It is likely that these other cases are also vitrified. The amount of heat needed to fire the huge stones on which these glazes are found is enormous. In furnaces, the whole body has to be raised to the temperature of the surface glaze. The stones pictured above provoke much debate.
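As a brief aside, and not part of the original laboratory report, the reference to wollastonite can be made concrete with the standard reaction by which wollastonite forms when silica-bearing, impure limestone is strongly heated. The equation below is offered only as a hedged illustration of the kind of heat-driven chemistry being implied:

\[
\mathrm{CaCO_3\ (calcite)} + \mathrm{SiO_2\ (quartz)} \;\longrightarrow\; \mathrm{CaSiO_3\ (wollastonite)} + \mathrm{CO_2\uparrow}
\]

In nature this reaction generally requires sustained temperatures of several hundred degrees Celsius, so if the surface layer really does resemble wollastonite, that in itself points to substantial heating rather than mechanical polishing.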
Explanations of how they were produced range from the use of advanced machines, through simple metal or stone tools and molded stonework, to concentrated sunlight and fire methods. Whilst the analysis above says little about the way the shapes were made, it does eliminate some ideas about the means of producing these exquisite finishes. Heat is used to form glazes; how the heat was applied is not clear. To create ceramics on this scale, the heat production would have to have been greater than in normal ceramic methods. Protzen has looked at these effects and suggested they could be achieved with polishing. To date, only andesite has been attempted, with very limited success. After the analysis of the surface layer above, it is clear that polishing alone will not produce the heat needed to create a ceramic glaze. This eliminates polishing as a means of creation. The Peruvian Alfredo Gamarra has identified vitrification on many stones and has argued that the ancients had a technology to treat stone with heat and that the stone was soft at the moment of construction. The comparison at the spectrum level with clay and ceramic pastes is interesting: ceramic pastes and clay are soft prior to being treated with heat. Conventional geological understanding is not compatible with this idea. However, the impression from the vitrified stonework is that the stone was once soft. In many of the stones there are places where it looks as if objects or molds were pressed into the stone. The perfectly fitting stones in the walls of Cusco and the other Inca vestiges could have been obtained more easily this way. Another option is the use of sun dishes and concentrated sunlight by the ancients. This is discussed by Prof. Watkins in his 1990 paper on fine Inca stonework. In that seminal paper, his chief concern was the method of cutting the stone; since he was proposing intense heat to cut the stones, it was not a great step to consider the stones melted. His conclusions have been much maligned, since no analyses had been performed. The present analysis does support his idea, but the location of the sample was on a wall inside a cave, so the ceramic paste had to be heated while on the stone vestige. This means light would have had to be reflected deep into the cave. Whilst it is possible that the ancients were capable of producing flat mirrors for the task, it does seem complicated. This method could work for stones on the surface, but is clearly limited in this case. If the stones were fired in a kiln, the glaze could be a result of the extremely high temperatures. The knowledge of ceramics in ancient Peru suggests this is a distinct possibility. This prospect, however, only arises for stones that can be placed in a kiln or stonework that is part of a kiln. The examples laid onto the sides of huge natural rocks cannot have been produced by standard fire techniques, as European studies of vitrified forts and experimental work have shown. There is the possibility that the cave itself was a kiln: pots or vases may have been fired in the cave, and the ceramic pastes may have been applied to protect the stone structure. There is much discoloration within the cave and innumerable glazed areas. The comparison to vitrified vestiges in the open air, or in places without a smoke escape, leaves many questions. On balance, it has to be admitted that a method is difficult to define. Further analysis of samples from the other sites needs to be undertaken to confirm the use of heat in all of the sites.
However, the sample tested shows that the similarity to ceramic pastes is near certain, and it is easy to conclude that heat was used. The treatment method may have been similar to the technology used for ceramic pastes, but on a much larger scale.
We want to thank the following persons for their cooperation and feedback:
- Jesús Gamarra Farfan especially, for showing, explaining and filming these stones.
- Prof. Schuiling, Tilly Bouten and Anita van Leeuwen, Geology department, University of Utrecht.
- Prof. Kars, Institute for Geo and Bioarchaeology (IGBA), Faculty of Earth and Life Sciences, Vrije Universiteit, Amsterdam.
- David Campbell, http://www.anarchaeology.com
- Paul D. Burley, http://www.pauldburley.com
References:
- Gamarra Farfán, J.B., Parawayso, April 2008.
- de Jong, Jan Peter, www.ancient-mysteries-explained.com
- Morris, M., The Great Pyramid Secret, Scribal Arts, 2010.
- Protzen, J.-P. 1986. "Inca Stonemasonry." Scientific American 254: 94-105.
- Watkins, I. 1990. "How Did the Incas Create Such Beautiful Stonemasonry?" Rocks and Minerals, Vol. 65, Nov/Dec 1990.
- Thurlings, B., Wie hielp de mens? Uitgeverij Aspekt, 2008.
- X-ray spectra of minerals and materials: http://www.cannonmicroprobe.com/XRay_%20Spectra.htm
- Silvano R. Bertolino, Victor Galván Josa, Alejo C. Carreras, Andrés Laguens, Guillermo de la Fuente and José A. Riveros. "X-Ray Techniques Applied to Surface Paintings of Ceramic Pottery Pieces From Aguada Culture (Catamarca, Argentina)." Wiley Interscience Online, Dec. 2008.
Jan Peter de Jong, website Ancient Mysteries Explained
Website Secrets of the Sun Sects
http://secretsofthesunsects.wordpress.com/2011/12/05/incan-vitrified-stones/
In atoms, as you know, electrons reside in orbitals of differing energy levels such as 1s, 2s, 3d, etc. These orbitals represent the probability distribution for finding an electron anywhere around the atom. Molecular orbital theory posits that electrons in molecules likewise exist in different orbitals that give the probability of finding the electron at particular points around the molecule. To produce the set of orbitals for a molecule, we add together the valence atomic wavefunctions for the bonded atoms in the molecule. This is not as complicated as it may sound. Let's consider the bonding in homonuclear diatomic molecules--molecules of the formula A2. Perhaps the simplest molecule we can imagine is hydrogen, H2. To produce the molecular orbitals for hydrogen, we add together its valence atomic wavefunctions. Each hydrogen atom in H2 has only the 1s orbital, so we add the two 1s wavefunctions. As you have learned in your study of atomic structure, atomic wavefunctions can have either plus or minus phases--this means the value of the wavefunction ψ is either positive or negative. There are therefore two ways to add the wavefunctions: in-phase (both plus or both minus) or out-of-phase (one plus and the other minus). The in-phase overlap combination produces a build-up of electron density between the two nuclei, which results in a lower energy for that orbital. The electrons occupying the σ(H-H) orbital represent the bonding pair of electrons from the Lewis structure of H2, and the orbital is aptly named a bonding molecular orbital. The other molecular orbital produced, σ*(H-H), shows a decrease in electron density between the nuclei, reaching a value of zero at the midpoint between the nuclei, where there is a nodal plane. Since the σ*(H-H) orbital shows a decrease in bonding between the two nuclei, it is called an antibonding molecular orbital. Due to the decrease in electron density between the nuclei, the antibonding orbital is higher in energy than both the bonding orbital and the hydrogen 1s orbitals. In the molecule H2, no electrons occupy the antibonding orbital. To summarize these findings about the relative energies of the bonding, antibonding, and atomic orbitals, we can construct an orbital correlation diagram. In such a diagram, the orbitals of the separated atoms are written on either side as horizontal lines at heights denoting their relative energies, and the electrons in each atomic orbital are represented by arrows. In the middle of the diagram, the molecular orbitals of the molecule of interest are written. Dashed lines connect the parent atomic orbitals with the daughter molecular orbitals. In general, bonding molecular orbitals are lower in energy than either of their parent atomic orbitals. Similarly, antibonding orbitals are higher in energy than either of their parent atomic orbitals. Because we must obey the law of conservation of energy, the amount of stabilization of the bonding orbital must equal the amount of destabilization of the antibonding orbital. You may be wondering whether the Lewis structure and the molecular orbital treatment of the hydrogen molecule agree with one another. In fact, they do. The Lewis structure for H2 is H-H, predicting a single bond between the hydrogen atoms with two electrons in the bond.
The orbital correlation diagram predicts the same thing--two electrons fill a single bonding molecular orbital. To further demonstrate the consistency of the Lewis structures with M.O. theory, we will formalize a definition of bond order--the number of bonds between atoms in a molecule. The bond order is the number of electron pairs occupying bonding molecular orbitals minus the number of electron pairs occupying antibonding molecular orbitals. Because hydrogen has one electron pair in its bonding orbital and none in its antibonding orbital, molecular orbital theory predicts that H2 has a bond order of one--the same result that is derived from Lewis structures. To demonstrate why it is important to take the number of antibonding electrons into account in our bond order calculation, let us consider the possibility of making a molecule of He2. From the orbital correlation diagram for He2 you should notice that the amount of stabilization due to bonding is equal to the amount of destabilization due to antibonding, because there are two electrons in the bonding orbital and two electrons in the antibonding orbital. Therefore, there is no net stabilization due to bonding, so the He2 molecule will not exist. The bond order calculation shows that there will be a bond order of zero for the He2 molecule--exactly what we should predict given that helium is a noble gas and does not form covalent compounds. Both hydrogen and helium have only 1s atomic orbitals, so they produce very simple correlation diagrams. However, we have already developed the techniques necessary to draw a correlation diagram for a more complex homonuclear diatomic like diboron, B2. Before we can draw a correlation diagram for B2, we must first find the in-phase and out-of-phase overlap combinations for boron's atomic orbitals. Then, we rank them in order of increasing energy. Each boron atom has one 2s and three 2p valence orbitals. Due to the great difference in energy between the 2s and 2p orbitals, we can ignore the overlap of these orbitals with each other. All molecular orbitals composed primarily of the 2s orbitals will be lower in energy than those composed of the 2p orbitals. The molecular orbitals for diboron are produced by combining the orbitals of the two boron atoms. Note that the orbitals of lowest energy have the most constructive overlap (fewest nodes) and the orbitals with the highest energy have the most destructive overlap (most nodes). Notice that there are two different kinds of overlap for p-orbitals--end-on and side-on. For the p-orbitals, there is one end-on overlap possible, which occurs between the two pz orbitals. Two side-on overlaps are possible--one between the two px orbitals and one between the two py orbitals. P-orbitals overlapping end-on create σ bonds. When p-orbitals bond in a side-on fashion, they create π bonds. The difference between a π bond and a σ bond is the symmetry of the molecular orbital produced. σ bonds are cylindrically symmetric about the bonding axis, the z-direction; that means one can rotate the σ bond about the z-axis and the bond remains the same. In contrast, π bonds lack that cylindrical symmetry and have a node passing through the bonding axis. Now that we have determined the energy levels for B2, we can draw its orbital correlation diagram. The orbital correlation diagram for diboron, however, is not generally applicable for all homonuclear diatomic molecules.
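To make the bond-order bookkeeping concrete, here is a minimal Python sketch (an illustration rather than part of the original text; the electron counts for H2, He2 and B2 follow the molecular orbital fillings discussed above, and the function name is our own):

```python
def bond_order(bonding_electrons: int, antibonding_electrons: int) -> float:
    """Bond order = (electrons in bonding MOs - electrons in antibonding MOs) / 2,
    i.e. the number of bonding pairs minus the number of antibonding pairs."""
    return (bonding_electrons - antibonding_electrons) / 2

# Valence electrons placed in (bonding, antibonding) molecular orbitals:
examples = {
    "H2":  (2, 0),  # both electrons in the sigma bonding MO
    "He2": (2, 2),  # bonding and antibonding filling cancel out
    "B2":  (4, 2),  # sigma(2s) + two pi(2p) electrons bonding, sigma*(2s) antibonding
}

for molecule, (bonding, antibonding) in examples.items():
    print(f"{molecule}: bond order = {bond_order(bonding, antibonding)}")
# Expected output: H2 -> 1.0, He2 -> 0.0, B2 -> 1.0
```

Running the sketch reproduces the results derived in the text: a single bond for H2, no bond for He2, and a bond order of one for B2.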
It turns out that only when the bond lengths are relatively short (as in B2, C2, and N2) can the two p-orbitals on the bonded atoms overlap efficiently enough to form a strong π bond. Some textbooks explain this observation in terms of a concept called s-p mixing. For any atom with an atomic number greater than seven, the π bond is less stable and higher in energy than the σ bond formed by the two end-on overlapping p orbitals. Therefore, the orbital correlation diagram for fluorine is representative of all homonuclear diatomic molecules with atomic numbers greater than seven. To draw the correlation diagrams for heteronuclear diatomic molecules, we face a new problem: where do we place the atomic orbitals of one atom relative to those of the other? For example, how can we predict whether a fluorine 2s or a lithium 2s orbital is lower in energy? The answer comes from our understanding of electronegativity. Fluorine is more electronegative than lithium, so electrons are more stable (lower in energy) when they are lone pairs on fluorine rather than on lithium. The more electronegative element's orbitals are placed lower on the correlation diagram than those of the more electropositive element. Since lithium has only one occupied valence orbital, only one bonding and one antibonding orbital are possible. Furthermore, the electrons in orbitals on F that cannot bond with Li are left on F as lone pairs. The electrons in the Li-F σ bond are quite close in energy to fluorine's 2p orbitals, so the bonding orbital is composed primarily of a fluorine 2p orbital, and the M.O. diagram predicts that the bond should be polarized toward fluorine--exactly what is found by measuring the bond dipole. Such an extreme polarization of electron density towards fluorine represents a transfer of an electron from lithium to fluorine and the creation of an ionic compound. The construction of other heteronuclear diatomic orbital correlation diagrams follows exactly the same principles as those we employed for LiF. To see more examples of such diagrams, consult your favorite chemistry textbook. As you can imagine, to describe the bonding in polyatomic molecules we would need a molecular orbital diagram with more than two dimensions, so that we could describe the bonds both between the central atom and each terminal atom and between the terminal atoms themselves. Such diagrams are impractically difficult to draw, or require complex methods to collapse multidimensional figures into two dimensions. Instead we will describe a simple yet powerful method for describing the bonding in polyatomic molecules, called hybridization. By adding together certain atomic orbitals, we can produce a set of hybridized atomic orbitals that have the correct shape and directionality to account for the known bond angles in polyatomic molecules. Hybrid orbitals describe the bonding in polyatomic molecules one bond at a time. From the geometry of a molecule, as predicted by VSEPR, we can deduce the hybridization of the central atom. Linear molecules are sp hybridized; each hybrid orbital is composed of a combination of an s and a p orbital on the central atom. The other geometries are produced by the proper mixture of atomic orbitals: molecules based on a triangle are sp2 hybridized, tetrahedrally based molecules are sp3 hybridized, trigonal bipyramidally based molecules are dsp3 hybridized, and octahedrally based molecules are d2sp3 hybridized.
To illustrate how hybrid orbitals are used to describe the bonding in polyatomic molecules, we will examine the bonds that form water, H2O. Water is AB2e2, therefore, its geometry is based on a tetrahedron, and it is sp3 hybridized. Two sp3 hybrid orbitals on oxygen with one electron each can form a bond with the singly occupied 1s orbitals on the hydrogen atoms. The remaining two sp3 hybrid orbitals on oxygen each have two electrons in them and are, therefore, lone pairs. A model of the bonding in water is shown in : To produce hybrid bonding descriptions of any compound, first decide what is the hybridization of the central atom based on its geometry. Next, form bonds between the hybrid or atomic orbitals on terminal atoms and the central atom. Finally, check to make sure that your bonding description agrees with the Lewis structure in the number of bonds formed and the number of lone pairs.
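The geometry-to-hybridization assignments above lend themselves to a simple lookup. The following Python sketch illustrates that procedure (the dictionary and function names are our own, and only the geometries listed in the text are covered):

```python
# Hybridization of the central atom implied by its electron-domain geometry,
# following the assignments given in the text.
HYBRIDIZATION = {
    "linear": "sp",
    "trigonal planar": "sp2",
    "tetrahedral": "sp3",
    "trigonal bipyramidal": "dsp3",
    "octahedral": "d2sp3",
}

def central_atom_hybridization(geometry: str) -> str:
    """Return the hybridization implied by a VSEPR electron-domain geometry."""
    return HYBRIDIZATION[geometry.lower()]

# Water (AB2E2): two bonding pairs + two lone pairs = four electron domains,
# so the underlying geometry is tetrahedral and oxygen is sp3 hybridized.
print(central_atom_hybridization("tetrahedral"))  # -> "sp3"
```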
http://www.sparknotes.com/chemistry/bonding/molecularorbital/section1.rhtml
Earth-Moon-Earth, also known as moon bounce, is a radio communications technique which relies on the propagation of radio waves from an Earth-based transmitter directed via reflection from the surface of the Moon back to an Earth-based receiver. The use of the Moon as a passive communications satellite was proposed by W.J. Bray of the British General Post Office in 1940. It was calculated that with the available microwave transmission powers and low-noise receivers, it would be possible to beam microwave signals up from Earth and reflect them off the Moon. It was thought that at least one voice channel would be possible. The "moon bounce" technique was developed by the United States military in the years after World War II, with the first successful reception of echoes off the Moon being carried out at Fort Monmouth, New Jersey on January 10, 1946 by John H. DeWitt as part of Project Diana. The Communication Moon Relay project that followed led to more practical uses, including a teletype link between the naval base at Pearl Harbor, Hawaii and United States Navy headquarters in Washington, DC. In the days before communications satellites, a link free of the vagaries of ionospheric propagation was revolutionary. Later, the technique was used by non-military commercial users, and the first amateur detection of signals from the Moon took place in 1953.
EME communications technical details
As the albedo of the Moon is very low (at most 12% but usually closer to 7%), and the path loss over the 770,000-kilometre return distance is extreme (around 250 to 310 dB depending on the VHF-UHF band used, modulation format and Doppler shift effects), high power (more than 100 watts) and high-gain antennas (more than 20 dB) must be used. In practice, this limits the use of the technique to the spectrum at VHF and above. The Moon must be above the horizon in order for EME communications to be possible.
To determine EME path loss we need to know:
- the Moon's distance from either the transmitting or receiving station
- the transmitting station output in watts, expressed as ERP [roughly transmitter power output (minus feedline loss) x forward antenna gain]
- the receiving station gain (actual receiver gain minus feedline loss, x antenna gain)
- the operating frequency of the transmitter and receiver
Free-space loss from an isotropic omnidirectional antenna is described by the formula below, which reflects the surface area of the imaginary sphere of radius d that the radio wave illuminates uniformly:
- Loss = (4 * pi * d / lambda)^2, where pi ≈ 3.14, d = distance and lambda = wavelength, both in meters
- lambda = c/F, where F is in Hz and c is in meters/sec
- lambda = 300/F meters when F is in MHz
Substituting F into the free-space loss formula and converting d into km:
- Loss(dB) = 32.45 + 20*log(F) + 20*log(d), with F in MHz and d in km
Adding factors for reflection from the Moon results in:
- Loss-eme(dB) = 32.45 + 20*log(F) + 20*log(2*d) + 50.21 - 10*log(0.065)
The standard radar path link formula is the basis for EME path-loss calculations:
- Loss = (4*pi)^3 * d^4 / (lambda^2 * sigma), where sigma is the radar cross-section of the target
After including the factor for surface reflectivity (rho) and the Moon's physical cross-section, the loss depends on D, the Moon's diameter. Since the diameter of the Moon is ≈ 3500 km, the formula becomes:
- Loss-eme(dB) = 20*log(F) + 40*log(d) - 17.49, with F in MHz and d in km
For some reason not specified, Josef has increased the loss by 3 dB, producing:
- Loss-eme(dB) = 103.4 + 20*log(F) + 40*log(d) - 10*log(rho), or
- Loss-eme(dB) = 20*log(F) + 40*log(d) - 14.49
Note that the distance from the Earth to the Moon varies because the orbit of the Moon is not perfectly circular; it is somewhat elliptical, with a mean radius of about 240,000 miles.
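As a rough illustration of the dB-form expressions above, here is a minimal Python sketch (an illustrative aid, not part of the original article; the function names are ours, it uses the radar-equation form quoted above with F in MHz and d in km, and it should be treated as an approximation rather than a complete link budget):

```python
import math

def free_space_loss_db(f_mhz: float, d_km: float) -> float:
    """One-way free-space path loss: 32.45 + 20*log10(F_MHz) + 20*log10(d_km)."""
    return 32.45 + 20 * math.log10(f_mhz) + 20 * math.log10(d_km)

def eme_path_loss_db(f_mhz: float, d_km: float) -> float:
    """Two-way Earth-Moon-Earth path loss, using the radar-equation form
    quoted in the text: 20*log10(F_MHz) + 40*log10(d_km) - 17.49."""
    return 20 * math.log10(f_mhz) + 40 * math.log10(d_km) - 17.49

mean_distance_km = 384_400
for band_mhz in (144, 432, 1296):  # 2 m, 70 cm and 23 cm amateur bands
    loss = eme_path_loss_db(band_mhz, mean_distance_km)
    print(f"{band_mhz} MHz: ~{loss:.1f} dB EME path loss at mean Earth-Moon distance")
```

Evaluating the same function at the apogee and perigee distances given below reproduces the roughly 2.25 dB spread in path loss mentioned in the text.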
This means there is an apogee (the largest distance) and a perigee (the shortest distance). In addition, the orbital plane precesses with a principal period of 18.6 years. Depending on the position of the Moon with respect to the Earth, apogee can be as much as 406,700 km, while perigee can be as little as 356,400 km.
- This translates to as much as a 2.25 dB difference in path loss from apogee to perigee.
- The mean distance from Earth to Moon is given as 384,400 km.
- These calculations take into account that the Moon is only about 7% efficient as a reflector, use the radar equation (which defines a two-way path-loss model), and assume that the Moon is a spherical reflector.
Current EME communications
Amateur radio (ham) operators utilize EME for two-way communications. EME presents significant challenges to amateur operators interested in weak-signal work. Currently, EME provides the longest communications path any two stations on Earth can utilize for bi-directional communications. Amateur operations use VHF, UHF and microwave frequencies. All amateur frequency bands from 50 MHz to 47 GHz have been used successfully, but most EME communications take place on the 2-meter, 70-centimeter, or 23-centimeter bands. Common modulation modes utilized by amateurs are continuous wave with Morse code, digital (JT65) and, when the link budgets allow, voice. World Moon Bounce Day, June 29, 2009, was created by Echoes of Apollo and celebrated worldwide as an event preceding the 40th anniversary of the Apollo 11 Moon landing. A highlight of the celebrations was an interview via the Moon with Apollo 8 astronaut Bill Anders, who was also part of the backup crew for Apollo 11. The University of Tasmania in Australia, with its 26 m dish, was able to bounce a data signal off the surface of the Moon which was received by a large dish in the Netherlands, the Dwingeloo Radio Observatory. The data signal was successfully resolved back to data, setting a world record for the lowest-power data signal returned from the Moon, with a transmit power of 3 milliwatts - about one-thousandth of the power of a strong flashlight filament globe. World Moon Bounce Day 2010 was planned for early 2010 to mark the 40th anniversary of the Apollo 13 mission; the second World Moon Bounce Day was held on April 17, 2010, coinciding with the 40th anniversary of Apollo 13's return to Earth. In October 2009, visual artist Daniela de Paulis and the CAMRAS radio amateurs association based at the Dwingeloo radio telescope (NL) developed a new application of moonbounce, called Visual Moonbounce, which allows images to be bounced off the Moon using the MMSSTV software. The technology was applied to a live performance called OPTICKS, during which digital images are sent to the Moon and back in real time and projected live.
Modulation types and frequencies optimal for EME
Other factors influencing EME communications
Doppler effect - 300 Hz at moonrise/set
- At moonrise, returned signals will be shifted approximately 300 Hz higher in frequency due to the Doppler effect between the Earth and Moon.
- As the Moon traverses the sky to a point due south, the Doppler effect approaches zero. As the Moon sets, signals are shifted lower in frequency, until at moonset they are shifted 300 Hz lower.
- Doppler effects cause many problems when tuning into and locking onto signals from the Moon.
A single sideband contact between IZ1BPN in Italy and PI9CAM at the Dwingeloo Radio Observatory.
IZ1BPN's transmission is shifted up in pitch slightly to compensate for PI9CAM's transmission being Doppler-shifted down. At the end of IZ1BPN's transmission you can hear the echo of his signal returning from the Moon, again pitched down by the Doppler shift.
[Photo caption: An array of 8 Yagi antennas for 144 MHz EME at EA6VQ, Balearic Islands, Spain]
References:
- Pether, John (1998). The Post Office at War. Bletchley Park Trust. p. 25.
- Butrica, Andrew J. (1996). To See the Unseen: A History of Planetary Radar Astronomy. NASA.
See also:
- Information theory
- Lunar Laser Ranging experiment
- Meteor burst communications
- Passive repeater
- Radar equation
External links:
- NASA, Beyond the Ionosphere: the development of satellite communications
- http://www.k5rmg.org/tech/EME.html (another calculator)
- http://www.df9cy.de/tech-mat/pathloss.htm (gives formulas for EME path loss calculation)
- Site of the CAMRAS radio amateurs association at the Dwingeloo radio telescope
- World Moon Bounce Day - Echoes of Apollo
- Amateur Radio - August 2009 - Wireless Institute of Australia
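To put the Doppler figures quoted above in context, here is a small hedged Python sketch (not from the original article; the radial-velocity value is an illustrative assumption chosen to reproduce a shift of roughly 300 Hz on the 2 m band):

```python
SPEED_OF_LIGHT_MS = 299_792_458.0  # metres per second

def eme_doppler_shift_hz(freq_hz: float, radial_velocity_ms: float) -> float:
    """Approximate two-way Doppler shift for a signal reflected off the Moon.
    The factor of 2 accounts for the out-and-back path."""
    return 2.0 * radial_velocity_ms * freq_hz / SPEED_OF_LIGHT_MS

# Illustrative value: a station-to-Moon closing speed of about 310 m/s at moonrise
# (largely due to the Earth's rotation) gives roughly the 300 Hz quoted for 144 MHz.
print(round(eme_doppler_shift_hz(144e6, 310)))   # ~298 Hz on the 2 m band
print(round(eme_doppler_shift_hz(1296e6, 310)))  # proportionally larger on 23 cm
```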
http://en.wikipedia.org/wiki/EME_(communications)
Lesson Plans and Worksheets Browse by Subject Consumer Math and Personal Finance Teacher Resources Find teacher approved Consumer Math and Personal Finance educational resource ideas and activities Learners explore budgeting myths. For this personal finance lesson, students complete a series of activities that help them recognize the pros and cons of credit. Learners also discover the process for obtaining loans. This lesson includes several worksheets and supplementary materials. Bring Consumer Mathematics and Economics to life with this lesson, where learners investigate personal finance and budgeting. They use the newspaper’s classified section to determine a future job and potential earnings and determine a gross and monthly income as they use the data to calculate the cost of living. Students investigate the concept of personal finance. They take a look at some of the misconceptions that surround many financial responsibilities. Students examine statistics and other factors in order to comprehend sound principles that can benefit those who practice them. Ready to delve into personal finance? Learners discover how to organize a check book register. They practice debits and credits in a math game involving the register they set up. While they gain valuable practical knowledge, they also spend time practicing addition and subtraction of decimals. Students discover what debt, saving, and credit are. In this personal finance lesson, the teacher reads Not for a Billion Gazillion Dollars, and the students discuss what the main character does in the book in relation to debt, saving, and credit. At the end of the lesson students have a chance to borrow homework passes on credit. Students determine whether or not to save or spend and defend a decision. In this personal finance lesson, students identify opportunity cost of various spending and saving decisions. Students read a story where two girls share profits made by selling tissue roses, but they make differing choices with their profits. Students solve saving/spending choice problems. Students study personal finance and building wealth. In this economics lessons, students use Federal Reserve Bank publications to research answers and to make written recommendations for solutions to problems presented by several callers to a show viewed. Students also increase awareness of budgeting, saving, credit cards and insurance. Investigate personal finances and budgeting with your middle schoolers. They calculate their cost of living given various costs of amenities. They use their calculations to determine if their expenses exceed their income, if so they must re-do their budget. A great activity, that really brings economics to life. What do figures of speech have to do with financial literacy? Take an interdisciplinary look at The Berenstain Bears' Trouble with Money to find out. Young analysts read about the cubs' spendthrift ways and how Mama and Papa Bear teach them to save money. They explore figures of speech and create "critter banks" in which they begin to save both coins and interesting language.
http://www.lessonplanet.com/lesson-plans/consumer-math-and-personal-finance
History of Germany during World War I
During World War I, the German Empire was one of the Central Powers that ultimately lost the war. It entered the conflict after the declaration of war against Serbia by its ally, Austria-Hungary. German forces fought the Allies on both the Eastern and Western fronts, although German territory itself remained relatively safe from widespread invasion for most of the war, except for a brief period in 1914 when East Prussia was invaded. A tight blockade imposed by the British Navy caused severe food shortages in the cities, especially in the winter of 1916-1917, known as the turnip winter. The German population responded to the outbreak of war in 1914 with a complex mix of emotions, in a similar way to the populations in other countries of Europe; notions of overt enthusiasm, known as the Spirit of 1914, have been challenged by more recent scholarship. The German government, dominated by the Junkers, thought of the war as a way to end Germany's disputes with its rivals France, Russia and Britain. The beginning of the war was presented in authoritarian Germany as the chance for the nation to secure "our place under the sun," as Foreign Minister Bernhard von Bülow had put it, a goal readily supported by prevalent nationalism among the public. The Kaiser and the German establishment hoped the war would unite the public behind the monarchy and lessen the threat posed by the dramatic growth of the Social Democratic Party of Germany, which had been the most vocal critic of the Kaiser in the Reichstag before the war. Despite its membership in the Second International, the Social Democratic Party of Germany ended its differences with the Imperial government and abandoned its principles of internationalism to support the war effort. It soon became apparent that Germany was not prepared for a war lasting more than a few months. At first, little was done to regulate the economy for a wartime footing, and the German war economy would remain badly organized throughout the war. Germany depended on imports of food and raw materials, which were stopped by the British blockade of Germany. Food prices were first limited, then rationing was introduced. The winter of 1916/17 was called the "turnip winter" because the potato harvest was poor and people ate animal feed, especially vile-tasting turnips. During the war, from August 1914 to mid-1919, excess civilian deaths over peacetime levels, caused by malnutrition, exhaustion, disease and despair, came to about 474,000. The German army opened the war on the Western Front with a modified version of the Schlieffen Plan, designed to quickly attack France through neutral Belgium before turning southwards to encircle the French army on the German border. The Belgians fought back and sabotaged their rail system to delay the Germans. The Germans had not expected this and, delayed, responded with systematic reprisals against civilians, killing nearly 6,000 Belgian noncombatants, including women and children, and burning 25,000 houses and buildings. The plan called for the right flank of the German advance to converge on Paris, and initially the Germans were very successful, particularly in the Battle of the Frontiers (14–24 August). By 12 September, the French, with assistance from the British forces, halted the German advance east of Paris at the First Battle of the Marne (5–12 September).
The last days of this battle signified the end of mobile warfare in the west. The French offensive into Germany launched on 7 August with the Battle of Mulhouse had limited success. In the east, only one field army defended East Prussia, and when Russia attacked in this region it diverted German forces intended for the Western Front. Germany defeated Russia in a series of battles collectively known as the First Battle of Tannenberg (17 August – 2 September), but this diversion exacerbated problems of insufficient speed of advance from rail-heads not foreseen by the German General Staff. The Central Powers were thereby denied a quick victory and forced to fight a war on two fronts. The German army had fought its way into a good defensive position inside France and had permanently incapacitated 230,000 more French and British troops than it had lost itself. Despite this, communications problems and questionable command decisions cost Germany the chance of obtaining an early victory. 1916 was characterized by two great battles on the Western Front, at Verdun and the Somme. They each lasted most of the year, achieved minimal gains, and drained away the best soldiers of both sides. Verdun became the iconic symbol of the murderous power of modern defensive weapons, with 280,000 German casualties and 315,000 French. At the Somme, there were over 600,000 German casualties, against over 400,000 British and nearly 200,000 French. At Verdun, the Germans attacked what they considered to be a weak French salient, one which the French would nevertheless defend for reasons of national pride. The Somme was part of a multinational plan of the Allies to attack on different fronts simultaneously. German experts are divided in their interpretation of the Somme. Some say it was a standoff, but most see it as a British victory and argue it marked the point at which German morale began a permanent decline and the strategic initiative was lost, along with irreplaceable veterans and confidence. Enthusiasm faded with the enormous numbers of casualties, the dwindling supply of manpower, the mounting difficulties on the home front, and the never-ending flow of casualty reports. A grimmer and grimmer attitude began to prevail amongst the general population. Morale was helped by victories against Serbia, Greece, Italy, and Russia, which made great gains for the Central Powers. Morale was at its greatest since 1914 at the end of 1917 and beginning of 1918, with the defeat of Russia following her descent into revolution, and the German people braced for what Ludendorff said would be the "Peace Offensive" in the west. In spring 1918, Germany realized that time was running out. It prepared for the decisive strike with new armies and new tactics, expecting to win the war on the Western Front before millions of American soldiers appeared in battle. General Erich von Ludendorff and Field Marshal Paul von Hindenburg had full control of the army; they had a large supply of reinforcements moved from the Eastern Front, and they trained storm troopers with new tactics that raced through the trenches and attacked the enemy's command and communications centers. The new tactics would indeed restore mobility to the Western Front, but the German army was too optimistic. During the winter of 1917-18 it was "quiet" on the Western Front—British casualties averaged "only" 3,000 a week. Serious attacks were impossible in the winter because of the deep, caramel-thick mud.
Quietly the Germans brought in their best soldiers from the Eastern Front, selected elite storm troops, and trained them all winter in the new tactics. With stopwatch timing, the German artillery would lay down a sudden, fearsome barrage just ahead of its advancing infantry. Moving in small units, firing light machine guns, the storm troopers would bypass enemy strongpoints and head directly for critical bridges, command posts, supply dumps and, above all, artillery parks. By cutting enemy communications they would paralyze response in the critical first half hour. By silencing the artillery they would break the enemy's firepower. Rigid schedules sent in two more waves of infantry to mop up the strong points that had been bypassed. The shock troops always frightened and disoriented the first line of defenders, who would flee in panic. In one instance an easy-going Allied regiment broke and fled; reinforcements rushed in on bicycles. The panicky men seized the bikes and beat an even faster retreat. The stormtrooper tactics provided mobility, but not increased firepower. Eventually—in 1939 and 1940—the formula would be perfected with the aid of dive bombers and tanks, but in 1918 the Germans lacked both. Ludendorff erred by attacking the British first in 1918, instead of the French. He mistakenly thought the British to be too uninspired to respond rapidly to the new tactics. The exhausted, dispirited French perhaps might have folded. The German assaults on the British were ferocious—the largest of the entire war. At the Somme River in March, 63 divisions attacked in a blinding fog. No matter: the German lieutenants had memorized their maps and their orders. The British lost 270,000 men, fell back 40 miles, and then held. They quickly learned how to handle the new German tactics: fall back, abandon the trenches, let the attackers overextend themselves, and then counterattack. They gained an advantage in firepower from their artillery and from tanks used as mobile pillboxes that could retreat and counterattack at will. In April Ludendorff hit the British again, inflicting 305,000 casualties—but he lacked the reserves to follow up. Ludendorff launched five great attacks between March and July, inflicting a million British and French casualties. The Western Front had now opened up—the trenches were still there, but the importance of mobility reasserted itself. The Allies held. The Germans suffered as many casualties as they inflicted, including most of their precious stormtroopers. The new German replacements were underage youths or embittered middle-aged family men in poor condition. They were not inspired by the elan of 1914, nor thrilled with battle—they hated it, and some began talking of revolution. Ludendorff could not replace his losses, nor could he devise a new brainstorm that might somehow snatch victory from the jaws of defeat. The British likewise were bringing in boys and men aged 50, but since their home front was in good condition, and since they could see the Yanks pouring in, their morale held firm. The great German spring offensive was a race against time, for everyone could see that the Americans were training millions of fresh young men who would eventually arrive on the Western Front. The attrition warfare now caught up with both sides. Germany had used up all the good fighters it had, and still had not conquered much territory. The British were out of fresh manpower, the French nearly so.
Berlin had calculated it would take months for the Americans to ship all their men and supplies—but the Yanks came much sooner, for they left their supplies behind and relied on British and French artillery, tanks, airplanes, trucks and equipment. Berlin also assumed that Americans were fat, undisciplined and unaccustomed to hardship and severe fighting. They soon discovered that these supposedly soft, materialistic Yankees really could fight. The Germans reported that "The qualities of the [Americans] individually may be described as remarkable. They are physically well set up, their attitude is good... They lack at present only training and experience to make formidable adversaries. The men are in fine spirits and are filled with naive assurance." By September 1918, the Central Powers were exhausted from fighting, and the American forces were pouring into France at 10,000 a day. The decisive Allied counteroffensive, known as the Hundred Days Offensive, began on 8 August 1918—what Ludendorff called the "Black Day of the German army." The Allied armies advanced steadily as German defenses faltered. Although German armies were still on enemy soil as the war ended, the generals, the civilian leadership—and indeed the soldiers and the people—knew all was hopeless. They started looking for scapegoats. The hunger and popular dissatisfaction with the war precipitated revolution throughout Germany. By 11 November Germany had virtually surrendered, the Kaiser and all the royal families had abdicated, and the Empire had been replaced by the Weimar Republic.
Home front
Germany had no plans for mobilizing its civilian economy for the war effort, and no stockpiles of food or critical supplies had been made. Germany had to improvise rapidly. All major political sectors supported the war, at least at first, including the Socialists. The "spirit of 1914" was the overwhelming, enthusiastic support of all elements of the population for war in 1914. In the Reichstag, the vote for credits was unanimous, with all the Socialists joining in. One professor testified to a "great single feeling of moral elevation, of soaring religious sentiment, in short, the ascent of a whole people to the heights." At the same time, there was a level of anxiety; most commentators predicted a short, victorious war – but that hope was dashed in a matter of weeks, as the invasion of Belgium bogged down and the French Army held in front of Paris. The Western Front became a killing machine, as neither army moved more than a few hundred yards at a time. Industry in late 1914 was in chaos: unemployment soared while it took months to reconvert to munitions production. In 1916, the Hindenburg Program called for the mobilization of all economic resources to produce artillery, shells, and machine guns. Church bells and copper roofs were ripped out and melted down. The German economy was severely handicapped by the British blockade, which cut off food supplies. The mobilization of so many farmers – and horses – steadily reduced the food supply. Supplies that had once come in from Russia and Austria were cut off. The concept of "total war" in World War I meant that supplies had to be redirected towards the armed forces and, with German commerce being stopped by the British blockade, German civilians were forced to live in increasingly meager conditions. Food prices were first controlled. Bread rationing was introduced in 1915, but apart from Berlin it never worked well.
Hundreds of thousands of civilians died from malnutrition—usually from typhus or a disease their weakened bodies could not resist. (Starvation itself rarely caused death.) Conditions deteriorated rapidly on the home front, with severe food shortages reported in all urban areas. The causes involved the transfer of so many farmers and food workers into the military, combined with the overburdened railroad system, shortages of coal, and the British blockade that cut off imports from abroad. The winter of 1916-1917 was known as the "turnip winter," because that barely edible vegetable, usually fed to livestock, was used by people as a substitute for potatoes and meat, which were increasingly scarce. Thousands of soup kitchens were opened to feed the hungry people, who grumbled that the farmers were keeping the food for themselves. Even the army had to cut the rations for soldiers. The morale of both civilians and soldiers continued to sink. The drafting of miners reduced the main energy source, coal. The textile factories produced army uniforms, and warm clothing for civilians ran short. The device of using ersatz materials, such as paper and cardboard for cloth and leather, proved unsatisfactory. Soap was in short supply, as was hot water. All the cities reduced tram services, cut back on street lighting, and closed down theaters and cabarets. The food supply increasingly focused on potatoes and bread; it was harder and harder to buy meat. The meat ration in late 1916 was only 31% of peacetime levels, and it fell to 12% in late 1918. The fish ration was 51% in 1916, and none at all by late 1917. The rations for cheese, butter, rice, cereals, eggs and lard were less than 20% of peacetime levels. In 1917 the harvest was poor, the potato supply ran short, and Germans substituted almost inedible turnips; the "turnip winter" of 1917–18 was remembered with bitter distaste for generations. German women were not employed in the army, but large numbers took paid employment in industry and factories, and even larger numbers engaged in volunteer services. Housewives were taught how to cook without milk, eggs or fat; agencies helped widows find work. Banks, insurance companies and government offices for the first time hired women for clerical positions. Factories hired them for unskilled labor – by December 1917, half the workers in chemicals, metals, and machine tools were women. Laws protecting women in the workplace were relaxed, and factories set up canteens to provide food for their workers, lest their productivity fall off. The food situation in 1918 was better, because the harvest was better, but serious shortages continued, with high prices and a complete lack of condiments and fresh fruit. Many migrants had flocked into the cities to work in industry, which made for overcrowded housing. Reduced coal supplies left everyone in the cold. Daily life involved long working hours, poor health, little or no recreation, and increasing fears for the safety of loved ones in the army and in prisoner-of-war camps. The men who returned from the front were those who had been permanently crippled; wounded soldiers who had recovered were sent back to the trenches.
Defeat and revolt
Many Germans wanted an end to the war, and increasing numbers began to associate with the political left, such as the Social Democratic Party and the more radical Independent Social Democratic Party, which demanded an end to the war.
The third reason was the entry of the United States into the war in April 1917, which changed the long-run balance of power in favor of the Allies. The end of October 1918, in Kiel, in northern Germany, saw the beginning of the German Revolution of 1918–19. Civilian dock workers led a revolt and convinced many sailors to join them; the revolt quickly spread to other cities. Meanwhile, Hindenburg and the senior generals lost confidence in the Kaiser and his government. In November 1918, with internal revolution, a stalemated war, Bulgaria and the Ottoman Empire suing for peace, Austria-Hungary falling apart from multiple ethnic tensions, and pressure from the German high command, the Kaiser and all German ruling princes abdicated. On 9 November 1918, the Social Democrat Philipp Scheidemann proclaimed a Republic, in cooperation with the business and middle classes, not the revolting workers. The new government led by the German Social Democrats called for and received an armistice on 11 November 1918; in practice it was a surrender, and the Allies kept up the food blockade to guarantee an upper hand. The war was over; the history books closed on the German Empire. It was succeeded by the democratic, yet flawed, Weimar Republic. Seven million soldiers and sailors were quickly demobilized, and they became a conservative voice that drowned out the radical left in cities such as Kiel and Berlin. The radicals formed the Spartakusbund and later the Communist Party of Germany (KPD). Germany lost the war because it was decisively defeated by a stronger military power; it was out of soldiers and ideas, and was losing ground every day by October 1918. Nevertheless it was still in France when the war ended on Nov. 11 giving die-hard nationalists the chance to blame the civilians back home for betraying the army and surrendering. This was the false "Stab-in-the-back legend" that soured German politics in the 1920s and caused a distrust of democracy and the Weimar government. War deaths Out of a population of 65 million, Germany suffered 2.1 million military deaths and 430,000 civilian deaths due to wartime causes (especially the food blockade), plus about 17,000 killed in Africa and the other overseas colonies. The Allied blockade continued until July 1919, causing severe additional hardships. - Jeffrey Verhey, The Spirit of 1914: Militarism, Myth and Mobilization in Germany (Cambridge U.P., 2000). - N.P. Howard, "The Social and Political Consequences of the Allied Food Blockade of Germany, 1918-19," German History (1993) 11#2 pp 161-88 online p 166, with 271,000 excess deaths in 1918 and 71,000 in 1919. - Hew Strachan (1998). World War 1. Oxford University Press. p. 125. - Jeff Lipkes, Rehearsals: The German Army in Belgium, August 1914 (2007) - Barbara Tuchman, The Guns of August (1962) - Fred R. Van Hartesveldt, The Battles of the Somme, 1916: Historiography and Annotated Bibliography (1996) pp 26-27 - C.R.M.F. Cruttwell, A History of the Great War: 1914-1918 (1935) ch 15-29 - Holger H. Herwig, The First World War: Germany and Austria-Hungary 1914-1918 (1997) ch 4-6 - Bruce I. Gudmundsson, Stormtroop Tactics: Innovation in the German Army, 1914-1918 (1989) pp 155-70 - David Stevenson, With Our Backs to the Wall: Victory and Defeat in 1918 (2011) pp 30-111 - C.R.M.F. Cruttwell, A History of the Great War: 1914-1918 (1935) pp 505-35r - Allan Millett (1991). Semper Fidelis: The History of the United States Marine Corps. Simon and Schuster. p. 304. - Spencer C. Tucker (2005). World War I: A - D.. 
ABC-CLIO. p. 1256. - Roger Chickering, Imperial Germany and the Great War, 1914-1918 (1998) p. 14 - Richie, Faust's Metropolis pp 272-75 - Feldman, Gerald D. "The Political and Social Foundations of Germany's Economic Mobilization, 1914-1916," Armed Forces & Society (1976) 3#1 pp 121-145. online - Keith Allen, "Sharing scarcity: Bread rationing and the First World War in Berlin, 1914-1923," Journal of Social History, Winter 1998, Vol. 32 Issue 2, pp 371-93 - N. P. Howard, "The Social and Political Consequences of the Allied Food Blockade of Germany, 1918-19," German History, April 1993, Vol. 11 Issue 2, pp 161-188, - Roger Chickering, Imperial Germany and the Great War, 1914-1918 (2004) p. 141-42 - David Welch, Germany, Propaganda and Total War, 1914-1918 (2000) p.122 - Chickering, Imperial Germany pp 140-145 - Alexandra Richie, Faust's Metropolis (1998) pp 277-80 - A. J. Ryder, The German Revolution of 1918: A Study of German Socialism in War and Revolt (2008) - Wilhelm Diest and E. J. Feuchtwanger, "The Military Collapse of the German Empire: the Reality Behind the Stab-in-the-Back Myth," War in History, April 1996, Vol. 3 Issue 2, pp 186-207 - Leo Grebler and Wilhelm Winkler, The Cost of the World War to Germany and Austria-Hungary (Yale University Press, 1940) - N.P. Howard, N.P. "The Social and Political Consequences of the Allied Food Blockade of Germany, 1918-19," German History (1993) p 162 - Cecil, Lamar (1996), Wilhelm II: Emperor and Exile, 1900-1941 II, Chapel Hill, North Carolina: University of North Carolina Press, p. 176, ISBN 0-8078-2283-3, OCLC 186744003 - Chickering, Roger, et al. eds. Great War, Total War: Combat and Mobilization on the Western Front, 1914-1918 (Publications of the German Historical Institute) (2000). ISBN 0-521-77352-0. 584 pgs. - Cowin, Hugh W. German and Austrian Aviation of World War I: A Pictorial Chronicle of the Airmen and Aircraft That Forged German Airpower (2000). Osprey Pub Co. ISBN 1-84176-069-2. 96 pgs. - Cross, Wilbur (1991), Zeppelins of World War I, ISBN 1-55778-382-9 - Herwig, Holger H. The First World War: Germany and Austria-Hungary 1914-1918 (1996), mostly military - Hubatsch, Walther; Backus, Oswald P (1963), Germany and the Central Powers in the World War, 1914–1918, Lawrence, Kansas: University of Kansas, OCLC 250441891 - Kitchen, Martin. The Silent Dictatorship: The Politics of the German High Command under Hindenburg and Ludendorff, 1916–1918 (London: Croom Helm, 1976) - Sheldon, Jack (2005). The German Army on the Somme: 1914 - 1916. Barnsley: Pen and Sword Books Ltd. ISBN 1-84415-269-3. - Tuchman, Barbara. The Guns of August (1962), tells of the opening diplomatic and military manoeuvres. - Morrow, John. German Air Power in World War I (U. of Nebraska Press, 1982); Contains design and production figures, as well as economic influences. - Allen, Keith. "Sharing Scarcity: Bread Rationing and the First World War in Berlin, 1914– 1923," Journal of Social History (1998) 32#2 pp 371–96. - Armeson, Robert. Total Warfare and Compulsory Labor: A Study of the Military-Industrial Complex in Germany during World War I (The Hague: M. Nijhoff, 1964) - Bailey, S. “The Berlin Strike of 1918," Central European History (1980) 13#2 pp 158–74. - Bell, Archibald. A History of the Blockade of Germany and the Countries Associated with Her in the Great War, Austria-Hungary, Bulgaria, and Turkey, 1914–1918 (London: H. M. Stationery Office, 1937) - Broadberry, Stephen and Mark Harrison, eds. The Economics of World War I (2005) ISBN 0-521-85212-9. 
Covers France, UK, USA, Russia, Italy, Germany, Austria-Hungary, the Ottoman Empire, and the Netherlands - Burchardt, Lothar. “The Impact of the War Economy on the Civilian Population of Germany during the First and the Second World Wars," in The German Military in the Age of Total War, edited by Wilhelm Deist, 111–36. Leamington Spa: Berg, 1985. - Chickering, Roger. Imperial Germany and the Great War, 1914–1918 (1998), wide-ranging survey - Daniel, Ute. The War from Within: German Working-Class Women in the First World War (1997) - Dasey, Robyn. "Women's Work and the Family: Women Garment Workers in Berlin and Hamburg before the First World War," in The German Family: Essays on the Social History of the Family in Nineteenth-and Twentieth-Century Germany, edited by Richard J. Evans and W. R. Lee, (London: Croom Helm, 1981), pp 221–53. - Davis, Belinda J. Home Fires Burning: Food, Politics, and Everyday Life in World War I Berlin (2000) online edition - Dobson, Sean. Authority and Upheaval in Leipzig, 1910–1920 (2000). - Domansky, Elisabeth. "Militarization and Reproduction in World War I Germany," in Society, Culture, and the State in Germany, 1870–1930, edited by Geoff Eley, (University of Michigan Press, 1996), pp 427–64. - Donson, Andrew. "Why did German youth become fascists? Nationalist males born 1900 to 1908 in war and revolution," Social History, Aug2006, Vol. 31 Issue 3, pp 337–358 - Feldman, Gerald D. "The Political and Social Foundations of Germany's Economic Mobilization, 1914-1916," Armed Forces & Society (1976) 3#1 pp 121–145. online - Feldman, Gerald. Army, Industry, and Labor in Germany, 1914–1918 (1966) - Ferguson, Niall The Pity of War (1999), cultural and economic themes, worldwide - Hardach, Gerd. The First World War 1914-1918 (1977), economics - Herwig, Holger H. The First World War: Germany and Austria-Hungary 1914-1918 (1996), one third on the homefront - Howard, N.P. "The Social and Political Consequences of the Allied Food Blockade of Germany, 1918-19," German History (1993) 11#2 pp 161-88 online - Kocka, Jürgen. Facing total war: German society, 1914-1918 (1984). online at ACLS e-books - Lee, Joe. "German Administrators and Agriculture during the First World War," in War and Economic Development, edited by Jay M. Winter. (Cambridge University Press, 1975) - Marquis, H. G. "Words as Weapons: Propaganda in Britain and Germany during the First World War." Journal of Contemporary History (1978) 12: 467–98. - McKibbin, David. War and Revolution in Leipzig, 1914–1918: Socialist Politics and Urban Evolution in a German City (University Press of America, 1998) - Moeller, Robert G. "Dimensions of Social Conflict in the Great War: A View from the Countryside," Central European History (1981) 14#2 pp 142–68 - Moeller, Robert G. German Peasants and Agrarian Politics, 1914–1924: The Rhineland and Westphalia (1986). online edition - Offer, Avner. The First World War: An Agrarian Interpretation (1991), on food supply of Britain and Germany - Osborne, Eric. Britain's Economic Blockade of Germany, 1914-1919 (2004) - Richie, Alexandra. Faust's Metropolis: a History of Berlin (1998) pp 234–83 - Ryder, A. J. The German Revolution of 1918 (Cambridge University Press, 1967) - Siney, Marion. The Allied Blockade of Germany, 1914–1916 (1957) - Steege, Paul. Black Market, Cold War: Everyday Life in Berlin, 1946-1949 (2008) excerpt and text search - Terraine, John. 
"'An Actual Revolutionary Situation': In 1917 there was little to sustain German morale at home," History Today (1978) 28#1 pp 14–22, online - Tobin, Elizabeth. "War and the Working Class: The Case of Düsseldorf, 1914–1918," Central European History (1985) 13#3 pp 257–98 - Triebel, Armin. "Consumption in Wartime Germany," in The Upheaval of War: Family, Work, and Welfare in Europe, 1914–1918 edited by Richard Wall and Jay M. Winter, (Cambridge University Press, 1988), pp 159–96. - Usborne, Cornelie. "Pregnancy Is a Woman's Active Service," in The Upheaval of War: Family, Work, and Welfare in Europe, 1914–1918 edited by Richard Wall and Jay M. Winter, (Cambridge University Press, 1988) pp 289–416. - Verhey, Jeffrey. The Spirit of 1914. Militarism, Myth and Mobilization in Germany (Cambridge University Press 2000) - Welch, David. Germany, Propaganda and Total War, 1914-1918 (2003) - Winter, Jay, and Jean-Louis Robert, eds. Capital Cities at War: Paris, London, Berlin 1914-1919 (2 vol. 1999, 2007), 30 chapters 1200pp; comprehensive coverage by scholars vol 1 excerpt; vol 2 excerpt and text search - Winter, Jay. Sites of Memory, Sites of Mourning: The Great War in European Cultural History (1995) - Ziemann, Benjamin. War Experiences in Rural Germany, 1914-1923 (Berg, 2007) online edition - (German) "Der Erste Weltkrieg" ("The First World War", in German (use Chrome for English translation) - (German) WWI at German Historic Museum online
http://en.wikipedia.org/wiki/History_of_Germany_during_World_War_I
The general understanding of affirmative action is that it is about providing opportunities for previously disadvantaged people, which includes people of colour and women. Although disability and homosexuality continue to be issues of concern, this research does not address them. Homosexual staff, however, expressed problems of not having their partners recognised and not being given the same rights as partners of heterosexual staff. The questions that arise, then, are "what is affirmative action?" and "who should benefit from affirmative action policies?" The definition affects implementation and is very important in assessing its results.

1. A Contentious Concept

Various groupings see affirmative action as a contentious concept with a variety of meanings. Innes (1993:6) argues that it has two meanings and purposes, namely to: i) overcome discriminatory obstacles that stand in the way of achieving equality of employment; and ii) introduce preferential policies aimed at promoting one group over others to achieve equality of employment. The implementation of affirmative action depends on the specific emphasis of the company and the government, expressed through their policies and laws.

2. Origins of Affirmative Action

Affirmative action originated in the United States in the 1960s. It was a response to pressure by the civil rights movement; thus, race was instrumental in deciding its beneficiaries (Sikhosana 1996). In the United States, unlike South Africa, its purpose was to uplift the position of oppressed minority groups, rather than that of an oppressed majority. Thus, its application and impact in the US would be different from that in South Africa. Nevertheless, affirmative action is a process of transformation. It is evident that the context of the particular country within which affirmative action operates is of utmost importance (Schreiner 1996).

3. Definition of Affirmative Action

a) General Definition in South Africa

The implementation of affirmative action began in South Africa in 1992. It is thus firmly located in the political transition from apartheid to democracy. The South African transition brought with it a strong belief that, in addition to political freedom, blacks must also be provided with access to means and resources to overcome their past economic marginalisation. Unless this occurs, the patterns of economic control, ownership and management produced by the apartheid system will remain unchanged even in a non-racial, non-sexist, democratic South Africa (Nkuhlu 1993). Deracialisation and equalisation of economic opportunity will not automatically occur with the abolition of apartheid laws (Sikhosana 1993). Redressing the effects of past discrimination via social measures is necessary. In achieving these goals, blacks should receive preferential support, have access to resources and be given the opportunity and space to contribute to the development of the organisation and to the economy of the country. Hence the mindset of both blacks and whites has to be changed (Nkuhlu 1993). Affirmative action is thus conceptualised as a tool to bring about a changing set of social and economic relations in the transition to democracy.
Therefore, in South Africa, affirmative action in general is a ... part of transformation away from apartheid, poverty and exploitation, towards a non-racial, non-sexist and democratic nation in which the socio-economic conditions of the majority, that is, black working women and men, are substantially transformed in a manner which is empowering (Schreiner 1996:80).

In the early days of political transition, companies implemented affirmative action policies in anticipation of a change in government. They feared that unless they voluntarily changed their policies, blacks would revolt. However, they also acknowledged, and continue to acknowledge, the need to remove obstacles to black advancement. Further, at present, companies must tread a careful path in the implementation of affirmative action policies. This is necessary so that informal discrimination does not replace formal discrimination. Informal discrimination is embedded in attitudes, behaviour, subconscious values and beliefs and is therefore harder to remove (Innes 1993). This is the general context and definition of affirmative action.

b) Closer examination of the definition and beneficiaries of affirmative action in terms of Management and Trade Unions

On closer examination, the definitions of affirmative action held by trade unions and management are very different. Trade unions view affirmative action in South Africa as a comprehensive strategy to overcome the imbalances caused by apartheid and racism. It is therefore collective empowerment, the aim of which is to make up for long-term deficits. Affirmative action seen in this light should address wide-ranging goals of workplace equality rather than simply developing a small number of management trainees. Further, trade unions, unlike management, target gender as well as racial discrimination. Affirmative action for trade unions reflects Schreiner's (1996) view that it is a process of development and empowerment. Management, on the other hand, generally sees affirmative action as any action that is taken specifically to overcome the results of past discriminatory practices (Hall and Albrecht in Hugo 1986:55), and as a process of identifying, recruiting, training and promoting blacks (and less often women) into junior management positions (Alperson 1993a:120). For business, affirmative action is necessary to increase access, affordability and the creation of opportunities. This ensures that blacks take an interest in the business, which would then serve to enhance the company (Thomas 1994; Montsi 1994). Thus, the focus by management is on individual empowerment. (This view is, in fact, reflective of earlier beliefs of white managers that blacks are not interested in the company.) The search for consensus between management and trade unions continues.

4. Beneficiaries of Affirmative Action

It is clear that people of colour should benefit from affirmative action. This view is also accepted by managers and trade unions. Less examined is gender.

Human activity, or material life, not only structures but also sets limits on human understanding: what we do shapes and constrains what we know (Harding 1987:185).

In South Africa, most past research on affirmative action has focused on race. However, women in South Africa and internationally are also recipients of discrimination in the workplace. Like racial discrimination, gender discrimination is historic. Distinct trends have occurred internationally regarding women in the workplace. Initially there was a disregard of women as workers.
It was only out of "necessity" in World War 2 that they were encouraged to enter the workforce. Thereafter, they were urged to return to the home (Rappaport and Rappaport 1993). In the 1970s, through the Contemporary Women's Movement, women expressed their desire to re-enter the workplace. There were no concessions or provisions made for those issues perceived as women's responsibilities, such as child-care. Career women were therefore disadvantaged. Pressure from women's movements resulted in discussions that the structures had to change. There were, however, no tangible outcomes. It was only in the late 1980s, when research revealed that these concerns were no longer female concerns alone, but that males expressed them as well, that some structures changed to address these needs. These took the form of child-care facilities, maternity leave and, in some countries, paternity leave. However, the real structure and role stereotypes remained unchanged (Rappaport and Rappaport 1993). Today, society remains structured in ways which favour men and disfavour women in the competitive race for goods with which our society rewards us: power, prestige and money (Tong 1989:29). Ideally, society should acknowledge and consider the differences (biological, social or cultural) between men and women and then equalise the problem. Instead, there is often a disregard for women as active members of society. Further, men regard them as not fitting into the workplace (Hersch 1993). The point is that women and men must be treated as equals, and this requires that women are not penalised for the ways in which they are different from men... (Hersch 1993:171).

The above trends are also true for the media industry (Beasley 1989). Although most women experience the general difficulties described above, they are not a homogeneous group. Power relations among men and women, racial and ethnic groups, classes, and women and men in rural and urban areas differ across cultures (Harding 1987; Mbilinyi 1992; Steeves 1989). This is particularly relevant in a multicultural society such as South Africa. Black women feel particularly alienated from the broader research agenda of the women's movement. They feel that past research focused on the needs of white middle-class women. Further, they feel that a broader analysis is needed which focuses on their experiences as black working-class women, that is, as experiencing triple discrimination. This discrimination is related to them being women, black and workers (Alperson 1993b; Matabane 1989; Rhodes 1989). Harding (1987), while acknowledging that women have different experiences because of culture, race and class, argues that women can and should come together to form a resistance to fight oppression at the general level.

c) Evidence of Race and Gender as part of affirmative action

The interview discussions reflected the differences in definitions and beneficiaries. With regard to definitions: the general understanding was that affirmative action is about providing opportunities for previously disadvantaged people. Managerial and editorial staff focused on individual empowerment and were thus more in keeping with the 'managerial' viewpoint. Journalists, however, focused on collective empowerment (except for interviewee 19). They were thus more inclined to support the trade union definition of affirmative action. Regarding beneficiaries: the interviewees had different perceptions of who qualifies for affirmative action.
Generally, white and older males felt that it applied to race alone, while females and younger males, both usually from disadvantaged communities, felt that it included both race and gender. The question "What do you understand by affirmative action?" was put to all the interviewees. Some of the responses follow. The responses not only related to their understanding but also indicated the beneficiaries of affirmative action.

A young black female journalist (3) responded: My understanding is that there have been people who have been disadvantaged. These include blacks, Indians, coloureds and women. They are disadvantaged by not being able to enter the job market equally to another group of people, and to me affirmative action seems to address that issue by allowing a fair chance to all.

An older coloured male and union representative (2) stated: Affirmative action for me is the redressing of that situation where people who were previously disadvantaged (blacks) are given the opportunity to develop ... even if they do not have the highest qualification.

A white male editor (17) said: A mechanism to address the wrongs of the past, in as much as they impacted on people who weren't given the opportunity. My understanding is that people from previously disadvantaged backgrounds... (in terms of race ... women are not seen as having the same priority...) will be given the same opportunity to reach higher status on the basis that, should the candidate be of the same quality as a white candidate, that person will get the job. With a view to bring to our company, through all the steps of seniority, a balance which somewhat reflects the society we live in.

An Indian female manager (5) expressed: Affirmative action is an opportunity for people, the less privileged from before; it doesn't necessarily mean that you are a black person, you could be anyone like a woman, black, coloured, Malay, anyone from the disadvantaged sector who must now be given the opportunity and training to advance. You must take a person who shows potential and develop them to work their way up; it doesn't just apply to managers or senior positions. Affirmative action doesn't mean that just because you are of the right skin colour you will get the job; it also doesn't mean kick a person out if they are doing a good job.

A black male manager (13) stated: To me it is to affirm people who have been disadvantaged. We see it as being organised as priorities: 1. African women; 2. African men; 3. Indian and coloured males and females. To a certain extent, white women, but they would be the last priority.

Similar views were reported by Manhando (1994), who found that people regarded affirmative action as racially neutral and did not fully understand who should be benefiting and why. One such person was an Indian presenter and producer who stated (Manhando 1994:56): My understanding is that people who have the ability to do certain tasks, but who for some reason have not had the opportunity to do so, will be given the opportunity, making certain that neither race nor gender should come into question. We are talking about affirmative action, relating it to blacks and women in particular, but this will create problems in the future. What about the white people, especially the young? Twenty years from now, we will need another policy to redress a policy that disadvantaged the minority. I cannot help but feel that we are going to live through this programme again, targeted at the white population.
A young black male journalist (19), whose view differed in relating affirmative action to individual rather than collective empowerment, responded: Affirmative action in most cases is linked to black empowerment, or so-called black empowerment. And my theory in life is that there is no such thing as black empowerment. There's always self-empowerment.

A white male working on the technical side (10), whose job was threatened, offered a completely different view of affirmative action: Affirmative action is a way to get cheap labour. They want to get rid of the qualified people, and bring in semi-trained people. I say get the person trained, give him the opportunity to learn and then compete fairly. Don't just get them into the company and don't train them. Uplift them to a point, help the person.

All interviewees expressed the need for both affirmative action and a company policy document specifically addressing affirmative action. In keeping with international trends and academic theory, affirmative action was seen as both a way of overcoming past discrimination and a way of introducing policy aimed at promoting one group over another to achieve equality of employment. Hence, affirmative action cannot be racially neutral (Degenaar 1980; Innes 1993).

d) Disability and homosexuality as part of affirmative action

The Employment Equity Act states that affirmative action will apply to race, gender and disability. Homosexuals experience specific problems that need addressing. Discrimination against both homosexuals and the disabled receives little attention from business in South Africa. I was fortunate to interview a male homosexual administrator (interviewee 6). While exploring the questions he remarked: When people talk about gender, they always refer to male/female, but you know gays also have very distinct problems. It seems as though you are focusing on the promotion of women and I'd like it if you would include my experience as a gay person. Although our initial aim was to focus on women, we do recognise the needs and rights of gay people and do feel it is important to address these as well, hence feel free to talk about your specific experience.

He went on to say: They (the company) talk about equality, but there's no gender equality and you talk about man and woman and not gay and lesbian ... with your pension fund, provident fund, medical aid they don't have provision for your partner. I would put a man but I can't because he's not my husband. Our policy fund, pension funds don't give you the opportunity to make your boyfriend, your partner, the right to claim your pension/provident fund if you should die. I think they should actually include this by law. Just recently somebody fought the case where his partner died and the company just paid him a lump sum and not the provident fund. I didn't know that the constitution actually gave you that right until I read the article. But I think that the company should in black and white make provision for that as well, and the unions should fight for it.

Thus, it is possible that homosexual staff experience problems of not having their partners recognised and accorded the same rights as partners of heterosexual staff. I proceeded to ask interviewees 1 (white male editor) and 12 (black male senior journalist) about gay rights: How are the rights of gays addressed by the company?

Interviewee 1 (white male editor) responded: To be honest I don't know. I think it's not something that would come up because I think we've got a tolerant environment.
Certainly any kind of discrimination against gays would have to be stopped. They are treated like anybody else, I think. And certainly I have never heard a complaint. But I think if there were complaints we would take them seriously.

Interviewee 12 (black senior male journalist) responded: There is a commitment not to discriminate against people based on race, gender, sexual orientation, etc. ... sexual orientation is not a problem.

Farhana asked interviewee 12: Are you aware if homosexual partners are given the same recognition as heterosexual partners? I am aware that the company feels the need to review certain issues and chapters in their policies. I do believe the administration has taken great strides to have a policy in place that serves the interest of its employees.

Unfortunately, this study failed to explore the experience of gays more profoundly and did not tackle the experience of the disabled at all. From these responses, one may wonder if homosexuals experience discrimination. Further, there is ignorance around the company's policies towards homosexuals.

5. The Need for Affirmative Action as Seen by the Interviewees

Given that the government has passed legislation to which companies are responding, and that we know what constitutes affirmative action and who should benefit, the next question asked was whether staff felt that affirmative action was necessary. Most interviewees replied "definitely". Some interviewees expanded on this belief.

A white male technician (10) expanded: Yes, everybody needs an opportunity. But I mean at the end of the day, you must pick the best person applying for the job, irrespective of his colour, because your company can only benefit; if the company benefits, the worker benefits.

A young black female journalist (3) responded: Definitely, without a doubt there is. For instance, when I look at myself, I don't think that I would have had this job 10 years ago, but I do now and I am allowed to compete ... at least I have a fair chance at competing. So most definitely there is a need for it.

A coloured female administrator and union representative (4) responded: I think so, ... right now affirmative action is keeping them (the company) under control and not putting 90% of the whites in positions again.

The idea that careful implementation of affirmative action is necessary was expressed by an Indian female manager (5), who responded: I think to a degree there is a need for affirmative action, but it must be implemented properly. I don't believe it's been implemented properly. There's no use putting someone in a position that they can't do ... you are setting them up to fail. I think you have to start training people to fill positions. And why do you have to always take people from outside and put them in top positions when you can actually develop people from the bottom and work them to the top, within the company?

In general, people of colour and women are considered to be beneficiaries. There was also a belief that affirmative action is necessary.
http://ccms.ukzn.ac.za/index.php?option=com_content&task=view&id=684&Itemid=86
Reelection in 1832

In the meantime, Jackson acquiesced to the pressure of friends and sought a second term. As the election of 1832 approached, Jackson's opponents hoped to embarrass him by posing a new dilemma. The charter of the Bank of the United States was due to expire in 1836. The president had not clearly defined his position on the bank, but he was increasingly uneasy about how it was then organized. More significant in an election year was the fact that large blocs of voters who favoured Jackson were openly hostile to the bank. In the summer of 1832, Jackson's opponents rushed through Congress a bill to recharter the bank, thus forcing Jackson either to sign the measure and alienate many of his supporters or to veto it and appear to be a foe of sound banking.

Jackson's cabinet was divided between friends and critics of the bank, but the obviously political motives of the recharter bill reconciled all of them to the necessity of a veto. The question before Jackson actually was whether the veto message should leave the door open to future compromise. Few presidential vetoes have caused as much controversy in their own time or later as the one Jackson sent to Congress on July 10, 1832. The veto of the bill to recharter the bank was the prelude to a conflict over financial policy that continued through Jackson's second term, which he nevertheless won easily (see primary source document: Second Inaugural Address).

Efforts to persuade Congress to enact legislation limiting the circulation of bank notes failed, but there was one critical point at which Jackson was free to apply his theories. Nearly all purchasers of public lands paid with bank notes, many of which had to be discounted because of doubts as to the continuing solvency of the banks that issued them. Partly to protect federal revenues against loss and partly to advance his concept of a sound currency, Jackson issued the Specie Circular in July 1836, requiring payment in gold or silver for all public lands. This measure created a demand for specie that many of the banks could not meet; banks began to fail, and the effect of bank failures in the West spread to the East. By the spring of 1837 the entire country was gripped by a financial panic. The panic did not come, however, until after Jackson had had the pleasure of seeing Van Buren inaugurated as president on March 4, 1837 (see primary source document: Farewell Address).

During Jackson's time, the President's House underwent noteworthy alterations. The North Portico, which had long been advocated by James Hoban, its architect, was added to the mansion. The appropriation that Jackson obtained for this work included a sum for refurbishing the interior of the building, and the public rooms were refitted on a grand scale. A system of iron pipes was also installed in order to convey water from a well to a small reservoir on the grounds from which it could be pumped to various parts of the building. For the first time, the occupants' needs for water could be met without relying on the time-honoured system of filling pails and carrying them where required.

Jackson retired to his home, the Hermitage. For decades in poor health, he was virtually an invalid during the remaining eight years of his life, but he continued to have a lively interest in public affairs.
http://britannica.com/presidents/article-3619
In This Issue
- Taking Strides for the Fox on Stilts
- Biodiversity Hotspot Highlight: Western Ghats, India
- Current Literature

Researchers at the National Zoo's Conservation & Research Center in Front Royal, Virginia, are collaborating with scientists in Brazil's Associação Pró-Carnívoros to study the impact of human development on maned wolf ecology, behavior, reproduction, and health in the Serra da Canastra National Park, Minas Gerais State. Besides being the largest canid of South America, the maned wolf (Chrysocyon brachyurus) is one of the most unusual canids in the world. It is the only member of its genus, and has an evolutionary history that dates back six million years in South America. Adult maned wolves weigh 50-60 pounds and usually travel alone, staying in pairs only during breeding seasons. Their thick red coat is long at the neck and shoulders, forming a mane that may become erect when they feel threatened. Having evolved to live in the tall grasses of the South American savannas, the wolves have absurdly long black legs, an elongated snout, a fox-like head, and huge, erect ears, earning them the moniker "fox on stilts."

Maned wolves live in the Cerrado, the second-largest biome in South America, encompassing about 23 percent of Brazil's land mass. Currently, more than 80 percent of the Cerrado has been converted or modified in some way by humans. The greatest impact comes from the growing agricultural frontier, increased colonization, and the creation of many new highways. Since March 2004, the study's researchers have captured and radio-collared 8 wolves, and have obtained blood and urine samples for analysis of hematology, blood biochemistry, parasitology, and potential exposure to any infectious diseases transmitted by domestic dogs living in 50 farms surrounding the national park. The relatively high density of domestic dogs around the park's boundary represents a disease transmission threat that could potentially wipe out the entire maned wolf population. Currently, the researchers are setting 19 traps in the park and farms to capture more wolves for the study.

The study's collective findings will eventually be offered in a formal report to the National Brazilian Environmental Agency to assist in the development of conservation action plans for the maned wolf and other species sharing the same habitat. The results could provide the basis for more convincing arguments for expanding protected areas, establishing corridors, and limiting changes in land use. The findings will also be useful for adopting captive husbandry and management protocols that are closer to the species' natural conditions, with the ultimate goal of establishing viable, healthy captive populations.

By Jayanti Ray Mukherjee <[email protected]>

Kalakad and Mundanthurai, in the southern Western Ghats mountain range of India, were two separate entities until 1988, when owing to their importance for conservation of threatened plants and animals, the province was proclaimed the Kalakad-Mundanthurai Tiger Reserve (KMTR). These verdant hills lie along the south-western coast of the Indian Peninsula, which is well known as a global biodiversity hotspot. The KMTR harbors five broad forest types ranging from tropical dry to evergreen forests. Its entire stretch of pristine evergreen forests houses a rich repository of rare and endangered species of flora and fauna, which can be attributed to the biogeography and isolation of this region along with its varied climates.
The area has high plant diversity harboring 1,500 plant species of which 150 are narrow endemics. This domain also provides more than 250 species of medicinal plants and wild relatives of cultivated plants like mango, banana, jackfruit, cardamom, ginger, pepper, tea and coffee. Sixty-six species of orchids have found a home in this region, 8 species with a very narrow distribution. Recently, Paphiopedilum druryi Pfitz., was rediscovered in the wild after having been thought to be extinct for a hundred years. KMTR has 77 mammal species, 273 bird species, 37 amphibian species, 81 reptile species and 33 fish species. It is the southernmost home for the Indian tiger (Panthera tigris), and also retains several endemic and threatened mammals such as the Nilgiri tahr (Hemitragus hylocrius), lion-tailed macaque (Macaca silenus), the Nilgiri marten (Martes gwatkinsi sub sp.), and others. Like any other protected area in India, KMTR has threats to its biodiversity. It is bounded by 145 villages along the 5-km stretch of buffer zone, and widespread disturbance processes, such as livestock grazing, fuelwood collection, and sudden outbreaks of fire, occur in parallel with rare instances of poaching, gem stone collection and extraction of minor forest products. The area was used as a model for World Bank's successful Ecodevelopment Project during which the Wildlife Institute of India, Dehradun, accepted the challenge of conducting a multi-disciplinary research project in KMTR. The major goal of the project was to document various components of biodiversity and to quantify the dependence of the local people on its natural resources for formulating long-term conservation and ecodevelopment goals. Although the project successfully identified a range of important ecological and socio-economic issues facing the KTMR, there remains a long way to go to implement a management strategy based on these findings. Adams, W.M., Aveling, R., Brockington, D., Dickson, B., Elliott, J., Hutton, J., Roe, D., Vira, B., and Wolmer, W. 2004. Biodiversity conservation and the eradication of poverty. Science 306(5699):1146-1149. Akhani, H. 2004. A new spiny, cushion-like Euphorbia (Euphorbiaceae) from south-west Iran with special reference to the phytogeographic importance of local endemic species. Bot. J. Linn. Soc. 146(1):107-121. Allendorf, F.W., Leary, R.F., Hitt, N.P., Knudsen, K.L., Lundquist, L.L., and Spruell, P. 2004. Intercrosses and the US Endangered Species Act: should hybridized populations be included as westslope cutthroat trout? Conserv. Biol. 18(5):1203-1213. Alley, H., and Affolter, J.M. 2004. Experimental comparison of reintroduction methods for the endangered Echinacea laevigata (Boynton and Beadle) Blake. Nat. Areas J. 24(4):345-350. Allison, E.H., and Badjeck, M.C. 2004. Livelihoods, local knowledge and the integration of economic development and conservation concerns in the lower Tana River basin. Hydrobiologia 527(1):19-23. Als, T.D., Vila, R., Kandul, N.P., Nash, D.R., Yen, S.H., Hsu, Y.F., Mignault, A.A., Boomsma, J.J., and Pierce, N.E. 2004. The evolution of alternative parasitic life histories in large blue butterflies. Nature 432(7015):386-390. Andelman, S.J., Groves, C., and Regan, H.M. 2004. A review of protocols for selecting species at risk in the context of US Forest Service viability assessments. Acta Oecol. 26(2):75-83. Anderson, P.K., Cunningham, A.A., Patel, N.G., Morales, F.J., Epstein, P.R., and Daszak, P. 2004. 
Emerging infectious diseases of plants: pathogen pollution, climate change and agrotechnology drivers. TREE 19(10):535-544. Angelibert, S., Marty, P., Cereghino, R., and Giani, N. 2004. Seasonal variations in the physical and chemical characteristics of ponds: implications for biodiversity conservation. Aquat. Conserv. 14(5):439-456. Aquilani, S.M., and Brewer, J.S. 2004. Area and edge effects on forest songbirds in a non-agricultural upland landscape in northern Mississippi, USA. Nat. Areas J. 24(4):326-335. Arkoosh, M.R., Johnson, L., Rossignol, P.A., and Collier, T.K. 2004. Predicting the impact of perturbations on salmon (Oncorhynchus spp.) communities: implications for monitoring. Can. J. Fish. Aquat. Sci. 61(7):1166-1175. Aung, M., Swe, K.K., Oo, T., Moe, K.K., Leimgruber, P., Allendorf, T., Duncan, C., and Wemmer, C. 2004. The environmental history of Chatthin Wildlife Sanctuary, a protected area in Myanmar (Burma). J. Environ. Manage. 72(4):205-216. Avilés, J.M., and Parejo, D. 2004. Farming practices and roller Coracias garrulus conservation in south-west Spain. Bird Conserv. Int. 14(3):173-181. Barker, N.H.L., and Roberts, C.M. 2004. Scuba diver behaviour and the management of diving impacts on coral reefs. Biol. Conserv. 120(4):481-489. Barlow, J., and Peres, C.A. 2004. Avifaunal responses to single and recurrent wildfires in Amazonian forests. Ecol. Appl. 14(5):1358-1373. Beck, M.W., Marsh, T.D., Reisewitz, S.E., and Bortman, M.L. 2004. New tools for marine conservation: the leasing and ownership of submerged lands. Conserv. Biol. 18(5):1214-1223. Bienen, L. 2004. Thamin conservation hinges on park's history. Front. Ecol. Environ. 2(7):344. Bird, B.L., Branch, L.C., and Miller, D.L. 2004. Effects of coastal lighting on foraging behavior of beach mice. Conserv. Biol. 18(5):1435-1439. Blake, S., and Hedges, S. 2004. Sinking the flagship: the case of forest elephants in Asia and Africa. Conserv. Biol. 18(5):1191-1202. Blumstein, D.T., and Fernández-Juricic, E. 2004. The emergence of conservation behavior. Conserv. Biol. 18(5):1175-1177. Bonn, D. 2004. Iraq marshland restoration begins. Front. Ecol. Environ. 2(7):343. Bradford, B.M. 2004. Basics of backyard conservation. Front. Ecol. Environ. 2(7):386. Brashares, J.S., Arcese, P., Sam, M.K., Coppolillo, P.B., Sinclair, A.R.E., and Balmford, A. 2004. Bushmeat hunting, wildlife declines, and fish supply in West Africa. Science 306(5699):1180-1183. Bro, E., Mayot, P., Corda, E., and Reitz, F. 2004. Impact of habitat management on grey partridge populations: assessing wildlife cover using a multisite BACI experiment. J. Appl. Ecol. 41(5):846-857. Brown, J.H., and Sax, D.E. 2004. An essay on some topics concerning invasive species. Austral Ecol. 29(5):530-536. Calvete, C., and Estrada, R. 2004. Short-term survival and dispersal of translocated European wild rabbits. Improving the release protocol. Biol. Conserv. 120(4):507-516. Cardoso, P., Silva, I., de Oliveira, N.G., and Serrano, A.R.M. 2004. Indicator taxa of spider (Araneae) diversity and their efficiency in conservation. Biol. Conserv. 120(4):517-524. Ceballos, C.P., and Fitzgerald, A.A. 2004. The trade in native and exotic turtles in Texas. Wildlife Soc. Bull. 32(3):881-892. Cederbaum, S.B., Carroll, J.P., and Cooper, R.J. 2004. Effects of alternative cotton agriculture on avian and arthropod populations. Conserv. Biol. 18(5):1272-1282. Chee, Y.E. 2004. An ecological perspective on the valuation of ecosystem services. Biol. Conserv. 120(4):549-565. 
Chen, Z.Y., Li, B., Zhong, Y., and Chen, J.K. 2004. Local competitive effects of introduced Spartina alterniflora on Scirpus mariqueter at Dongtan of Chongming Island, the Yangtze River estuary and their potential ecological consequences. Hydrobiologia 528(1-3):99-106. Choquenot, D., Nicol, S.J., and Koehn, J.D. 2004. Bioeconomic modelling in the development of invasive fish policy. New Zeal. J. Mar. Fresh. 38(3):419-428. Coma, R., Pola, E., Ribes, M., and Zabala, M. 2004. Long-term assessment of temperate octocoral mortality patterns, protected vs. unprotected areas. Ecol. Appl. 14(5):1466-1478. Coomes, O.T., and Ban, N. 2004. Cultivated plant species diversity in home gardens of an Amazonian peasant village in northeastern Peru. Econ. Bot. 58(3):420-434. Corney, P.M., Le Duc, M.G., Smart, S.M., Kirby, K.J., Bunce, R.G.H., and Marrs, R.H. 2004. The effect of landscape-scale environmental drivers on the vegetation composition of British woodlands. Biol. Conserv. 120(4):491-505. Cox, J. 2004. Commentary: Population declines and generation lengths can bias estimates of vulnerability. Wildlife Soc. Bull. 32(3):979-982. Cuevas, J.G., Marticorena, A., and Cavieres, L.A. 2004. New additions to the introduced flora of the Juan Fernandez Islands: origin, distribution, life history traits, and potential of invasion. Rev. Chil. Hist. Nat. 77(3):523-538. Cumming, G.S. 2004. The impact of low-head dams on fish species richness in Wisconsin, USA. Ecol. Appl. 14(5):1495-1506. Cuthbert, R. 2004. Breeding biology of the Atlantic petrel, Pterodroma incerta, and a population estimate of this and other burrowing petrels on Gough Island, South Atlantic Ocean. Emu 104(3):221-228. Davis, A.P., and Mvungi, E.F. 2004. Two new and endangered species of Coffea (Rubiaceae) from the Eastern Arc Mountains (Tanzania) and notes on associated conservation issues. Bot. J. Linn. Soc. 146(2):237-245. Davis, S.K. 2004. Area sensitivity in grassland passerines: effects of patch size, patch shape, and vegetation structure on bird abundance and occurrence in southern Saskatchewan. Auk 121(4):1130-1145. Decandido, R., Muir, A.A., and Gargiullo, M.B. 2004. A first approximation of the historical and extant vascular flora of New York City: implications for native plant species conservation. J. Torrey Bot. Soc. 131(3):243-251. Dech, J.P., and Nosko, P. 2004. Rapid growth and early flowering in an invasive plant, purple loosestrife (Lythrum salicaria L.) during an El Nino spring. Int. J. Biometeorol. 49(1):26-31. del Viejo, A.M., Vega, X., González, M.A., and Sánchez, J.M. 2004. Disturbance sources, human predation and reproductive success of seabirds in tropical coastal ecosystems of Sinaloa State, Mexico. Bird Conserv. Int. 14(3):191-202. Donmez, A.A., and Mutlu, B. 2004. A new species of Nigella (Ranunculaceae) from Turkey. Bot. J. Linn. Soc. 146(2):251-255. Drake, J.M., and Bossenbroek, J.M. 2004. The potential distribution of zebra mussels in the United States. BioScience 54(10):931-941. Driscoll, M.J.L., and Donovan, T.M. 2004. Landscape context moderates edge effects: nesting success of wood thrushes in central New York. Conserv. Biol. 18(5):1330-1338. Dukes, J.S., and Mooney, H.A. 2004. Disruption of ecosystem processes in western North America by invasive species. Rev. Chil. Hist. Nat. 77(3):411-437. Duval, M.A., Rader, D.N., and Lindeman, K.C. 2004. Linking habitat protection and marine protected area programs to conserve coral reefs and associated back reef habitats. Bull. Mar. Sci. 75(2):321-334.
Ebenman, B., Law, R., and Borrvall, C. 2004. Community viability analysis: the response of ecological communities to species loss. Ecology 85(9):2591-2600. Elderkin, C.L., Perkins, E.J., Leberg, P.L., Klerks, P.L., and Lance, R.F. 2004. Amplified fragment length polymorphism (AFLP) analysis of the genetic structure of the zebra mussel, Dreissena polymorpha, in the Mississippi River. Freshwater Biol. 49(11):1487-1494. Estoup, A., Beaumont, M., Sennedot, F., Moritz, C., and Cornuet, J.M. 2004. Genetic analysis of complex demographic scenarios: spatially expanding populations of the cane toad, Bufo marinus. Evolution 58(9):2021-2036. Ewel, J.J., and Putz, F.E. 2004. A place for alien species in ecosystem restoration. Front. Ecol. Environ. 2(7):354-360. Fabricius, K.E., and De'ath, G. 2004. Identifying ecological change and its causes: a case study on coral reefs. Ecol. Appl. 14(5):1448-1465. Fashing, P.J. 2004. Mortality trends in the African cherry (Prunus africana) and the implications for colobus monkeys (Colobus guereza) in Kakamega Forest, Kenya. Biol. Conserv. 120(4):449-459. Fensham, R.J., Fairfax, R.J., and Sharpe, P.R. 2004. Spring wetlands in seasonally arid Queensland: floristics, environmental relations, classification and conservation values. Aust. J. Bot. 52(5):583-595. Fernández, J., Toro, M.A., and Caballero, A. 2004. Managing individuals' contributions to maximize the allelic diversity maintained in small, conserved populations. Conserv. Biol. 18(5):1358-1367. Fischer, J., Lindenmayer, D.B., and Fazey, I. 2004. Appreciating ecological complexity: habitat contours as a conceptual landscape model. Conserv. Biol. 18(5):1245-1253. Ford, W.M., Stephenson, S.L., Menzel, J.M., Black, D.R., and Edwards, J.W. 2004. Habitat characteristics of the endangered Virginia northern flying squirrel (Glaucomys sabrinus fuscus) in the central Appalachian mountains. Am. Midl. Nat. 152(2):430-438. Forseth, I.N., and Innis, A.F. 2004. Kudzu (Pueraria montana): history, physiology, and ecology combine to make a major ecosystem threat. Critical Rev. Plant Sci. 23(5):401-413. Ganas, J., Robbins, M.M., Nkurunungi, J.B., Kaplin, B.A., and McNeilage, A. 2004. Dietary variability of mountain gorillas in Bwindi Impenetrable National Park, Uganda. Int. J. Primatol. 25(5):1043-1072. Gerber, L.R., Tinker, M.T., Doak, D.F., Estes, J.A., and Jessup, D.A. 2004. Mortality sensitivity in life-stage simulation analysis: a case study of southern sea otters. Ecol. Appl. 14(5):1554-1565. Gleason, R.A., Euliss, N.H., Hubbard, D.E., and Duffy, W.G. 2004. Invertebrate egg banks of restored, natural, and drained wetlands in the prairie pothole region of the United States. Wetlands 24(3):562-572. Gleason, S.M., and Ares, A. 2004. Photosynthesis, carbohydrate storage and survival of a native and an introduced tree species in relation to light and defoliation. Tree Physiol. 24(10):1087-1097. Godfree, R.C., Young, A.G., Lonsdale, W.M., Woods, M.J., and Burdon, J.J. 2004. Ecological risk assessment of transgenic pasture plants: a community gradient modelling approach. Ecol. Lett. 7(11):1077-1089. Golladay, S.W., Gagnon, P., Kearns, M., Battle, J.M., and Hicks, D.W. 2004. Response of freshwater mussel assemblages (Bivalvia: Unionidae) to a record drought in the Gulf Coastal Plain of southwestern Georgia. J. N. Am. Benthol. Soc. 23(3):494-506. González-Astorga, J.G., Cruz-Angón, A., Flores-Palacios, A., and Vovides, A.P. 2004. Diversity and genetic structure of the Mexican endemic epiphyte Tillandsia achyrostachys E. Morr. 
ex Baker var. achyrostachys (Bromeliaceae). Ann. Botany 94(4):545-551. Goolsby, J.A. 2004. Potential distribution of the invasive Old World climbing fern, Lygodium microphyllum in North and South America. Nat. Areas J. 24(4):351-353. Gower, D.J., Bhatta, G., Giri, V., Oommen, O.V., Ravichandran, M.S., and Wilkinson, M. 2004. Biodiversity in the Western Ghats: the discovery of new species of caecilian amphibians. Curr. Sci. 87(6):739-740. Graham, L.E. 2004. Foreword to the special issue on invasive plants. Critical Rev. Plant Sci. 23(5):365. Grant, T.A., Madden, E., and Berkey, G.B. 2004. Tree and shrub invasion in northern mixed-grass prairie: implications for breeding grassland birds. Wildlife Soc. Bull. 32(3):807-818. Gray, M.J., Smith, L.M., and Brenes, R. 2004. Effects of agricultural cultivation on demographics of Southern High Plains amphibians. Conserv. Biol. 18(5):1368-1377. Guidetti, P., Fraschetti, S., Terlizzi, A., and Boero, F. 2004. Effects of desertification caused by Lithophaga lithophaga (Mollusca) fishery on littoral fish assemblages along rocky coasts of southeastern Italy. Conserv. Biol. 18(5):1417-1423. Haig, S.M., Mullins, T.D., Forsman, E.D., Trail, P.W., and Wennerberg, L. 2004. Genetic identification of spotted owls, barred owls, and their hybrids: legal implications of hybrid identity. Conserv. Biol. 18(5):1347-1357. Halbert, N.D., Raudsepp, T., Chowdhary, B.P., and Derr, J.N. 2004. Conservation genetic analysis of the Texas state bison herd. J. Mammal. 85(5):924-931. Harden, G.J., Fox, M.D., and Fox, B.J. 2004. Monitoring and assessment of restoration of a rainforest remnant at Wingham Brush, NSW. Austral Ecol. 29(5):489-507. Hardie, S.A., Barmuta, L.A., and White, R.W.G. 2004. Threatened fishes of the world: Galaxias auratus Johnston, 1883 (Galaxiidae). Environ. Biol. Fish. 71(2):126. Harveson, P.M., Tewes, M.E., and Anderson, G.L. 2004. Habitat use by ocelots in south Texas: implications for restoration. Wildlife Soc. Bull. 32(3):948-954. Hawbaker, T.J., and Radeloff, V.C. 2004. Roads and landscape pattern in northern Wisconsin based on a comparison of four road data sources. Conserv. Biol. 18(5):1233-1244. Heikkinen, R.K., Luoto, M., Virkkala, R., and Rainio, K. 2004. Effects of habitat cover, landscape structure and spatial variables on the abundance of birds in an agricultural-forest mosaic. J. Appl. Ecol. 41(5):824-835. Heilmann-Clausen, J., and Christensen, M. 2004. Does size matter? On the importance of various dead wood fractions for fungal diversity in Danish beech forests. Forest Ecol. Manag. 201(1):105-119. Hewitt, C.L., Willing, J., Bauckham, A., Cassidy, A.M., Cox, C.M.S., Jones, L., and Wotton, D.M. 2004. New Zealand marine biosecurity: delivering outcomes in a fluid environment. New Zeal. J. Mar. Fresh. 38(3):429-438. Hickey, A.J.R., Lavery, S.D., Eyton, S.R., and Clements, K.D. 2004. Verifying invasive marine fish species using molecular techniques: a model example using triplefin fishes (family Tripterygiidae). New Zeal. J. Mar. Fresh. 38(3):439-446. Holl, K.D., and Crone, E.E. 2004. Applicability of landscape and island biogeography theory to restoration of riparian understorey plants. J. Appl. Ecol. 41(5):922-933. Homan, R.N., Windmiller, B.S., and Reed, J.M. 2004. Critical thresholds associated with habitat loss for two vernal pool-breeding amphibians. Ecol. Appl. 14(5):1547-1553. Hook, P.B., Olson, B.E., and Wraith, J.M. 2004. Effects of the invasive forb Centaurea maculosa on grassland carbon and nitrogen pools in Montana, USA. 
Ecosystems 7(6):686-694. Howe, H.F., and Lane, D. 2004. Vole-driven succession in experimental wet-prairie restorations. Ecol. Appl. 14(5):1295-1305. Jackson, J.E., Raadik, T.A., Lintermans, M., and Hammer, M. 2004. Alien salmonids in Australia: impediments to effective impact management, and future directions. New Zeal. J. Mar. Fresh. 38(3):447-455. Jiang, L., and Morin, P.J. 2004. Productivity gradients cause positive diversity-invasibility relationships in microbial communities. Ecol. Lett. 7(11):1047-1057. Jongepierova, I., Jongepier, J.W., and Klimes, L. 2004. Restoring grassland on arable land: an example of a fast spontaneous succession without weed-dominated stages. Preslia 76(4):361-369. Juutinen, A., and Monkkonen, M. 2004. Testing alternative indicators for biodiversity conservation in old-growth boreal forests: ecology and economics. Ecol. Econ. 50(1-2):35-48. Kati, V., Devillers, P., Dufrêne, M., Legakis, A., Vokou, D., and Lebrun, P. 2004. Hotspots, complementarity or representativeness? Designing optimal small-scale reserves for biodiversity conservation. Biol. Conserv. 120(4):471-480. Keith, D.A., McCarthy, M.A., Regan, H., Regan, T., Bowles, C., Drill, C., Craig, C., Pellow, B., Burgman, M.A., Master, L.L., Ruckelshaus, M., Mackenzie, B., Andelman, S.J., and Wade, P.R. 2004. Protocols for listing threatened species can forecast extinction. Ecol. Lett. 7(11):1101-1108. Kercher, S.M., Carpenter, Q.J., and Zedler, J.B. 2004. Interrelationships of hydrologic disturbance, reed canary grass (Phalaris arundinacea L.), and native plants in Wisconsin wet meadows. Nat. Areas J. 24(4):316-325. Koehn, J.D., and MacKenzie, R.F. 2004. Priority management actions for alien freshwater fish species in Australia. New Zeal. J. Mar. Fresh. 38(3):457-472. Koehn, J.D., and McDowall, R.M. 2004. Invasive species: fish and fisheries workshop overview, then and now - foreword. New Zeal. J. Mar. Fresh. 38(3):383-389. Kolar, C. 2004. Risk assessment and screening for potentially invasive fishes. New Zeal. J. Mar. Fresh. 38(3):391-397. Kremen, C., Williams, N.M., Bugg, R.L., Fay, J.P., and Thorp, R.W. 2004. The area requirements of an ecosystem service: crop pollination by native bee communities in California. Ecol. Lett. 7(11):1109-1119. Kumara, H.N., and Singh, M. 2004. Distribution and abundance of primates in rain forests of the Western Ghats, Karnataka, India and the conservation of Macaca silenus. Int. J. Primatol. 25(5):1001-1018. Laffaille, P., Baisez, A., Rigaud, C., and Feunteun, E. 2004. Habitat preferences of different European eel size classes in a reclaimed marsh: a contribution to species and ecosystem conservation. Wetlands 24(3):642-651. Larson, M.A., Thompson, F.R., Millspaugh, J.J., Dijak, W.D., and Shifley, S.R. 2004. Linking population viability, habitat suitability, and landscape simulation models for conservation planning. Ecol. Model. 180(1):103-118. Lavergne, S., and Molofsky, J. 2004. Reed canary grass (Phalaris arundinacea) as a biological model in the study of plant invasions. Critical Rev. Plant Sci. 23(5):415-429. Lee, D.K., Kang, H.S., and Park, Y.D. 2004. Natural restoration of deforested woodlots in South Korea. Forest Ecol. Manag. 201(1):23-32. Lee, D.K., and Sayer, J. 2004. Restoration research on degraded forest ecosystems - preface. Forest Ecol. Manag. 201(1):1. Lee, K.A., and Klasing, K.C. 2004. A role for immunology in invasion biology. TREE 19(10):523-529. Lee, S., Ma, S., Lim, Y., Choi, H.K., and Shin, H. 2004. 
Genetic diversity and its implications in the conservation of endangered Zostera japonica in Korea. J. Plant Biol. 47(3):275-281. Lesica, P., and McCune, B. 2004. Decline of arctic-alpine plants at the southern margin of their range following a decade of climatic warming. J. Veg. Sci. 15(5):679-690. Levine, J.M., and Rees, M. 2004. Effects of temporal variability on rare plant persistence in annual systems. Am. Nat. 164(3):350-363. Li, W.H. 2004. Degradation and restoration of forest ecosystems in China. Forest Ecol. Manag. 201(1):33-41. Li, Y., Cheng, Z.M., Smith, W.A., Ellis, D.R., Chen, Y.Q., Zheng, X.L., Pei, Y., Luo, K.M., Zhao, D.G., Yao, Q.H., Duan, H., and Li, Q. 2004. Invasive ornamental plants: problems, challenges, and molecular tools to neutralize their invasiveness. Critical Rev. Plant Sci. 23(5):381-389. Ling, N. 2004. Gambusia in New Zealand: really bad or just misunderstood? New Zeal. J. Mar. Fresh. 38(3):473-480. Lintermans, M. 2004. Human-assisted dispersal of alien freshwater fish in Australia. New Zeal. J. Mar. Fresh. 38(3):481-501. Lotts, K.C., Waite, T.A., and Vucetich, J.A. 2004. Reliability of absolute and relative predictions of population persistence based on time series. Conserv. Biol. 18(5):1224-1232. Lotze, H.K., and Milewski, I. 2004. Two centuries of multiple human impacts and successive changes in a North Atlantic food web. Ecol. Appl. 14(5):1428-1447. Luken, J.O. 2004. An index of invasion for the ground layer of riparian forest vegetation. Nat. Areas J. 24(4):336-340. Lunney, D., Gresser, S.M., Mahon, P.S., and Matthews, A. 2004. Post-fire survival and reproduction of rehabilitated and unburnt koalas. Biol. Conserv. 120(4):567-575. Lynch, A.J.J., and Balmer, J. 2004. The ecology, phytosociology and stand structure of an ancient endemic plant Lomatia tasmanica (Proteaceae) approaching extinction. Aust. J. Bot. 52(5):619-627. MacDonald, G.E. 2004. Cogongrass (Imperata cylindrica) - biology, ecology, and management. Critical Rev. Plant Sci. 23(5):367-380. Madhusudan, M.D. 2004. Recovery of wild large herbivores following livestock decline in a tropical Indian wildlife reserve. J. Appl. Ecol. 41(5):858-869. Marchetti, M.P., Light, T., Moyle, P.B., and Viers, J.H. 2004. Fish invasions in California watersheds: testing hypotheses using landscape patterns. Ecol. Appl. 14(5):1507-1525. Marrero, P., Oliveira, P., and Nogales, M. 2004. Diet of the endemic Madeira Laurel pigeon Columba trocaz in agricultural and forest areas: implications for conservation. Bird Conserv. Int. 14(3):165-172. Matoso, D.A., Artoni, R.F., and Galetti, P.M. 2004. Genetic diversity of the small characid fish Astyanax sp., and its significance for conservation. Hydrobiologia 527(1):223-225. Matter, S.F., Roland, J., Moilanen, A., and Hanski, I. 2004. Migration and survival of Parnassius smintheus: detecting effects of habitat for individual butterflies. Ecol. Appl. 14(5):1526-1534. McCarthy, M.A., Keith, D., Tietjen, J., Burgman, M.A., Maunder, M., Master, L., Brook, B.W., Mace, G., Possingham, H.P., Medellin, R., Andelman, S., Regan, H., Regan, T., and Ruckelshaus, M. 2004. Comparing predictions of extinction risk using models and subjective judgement. Acta Oecol. 26(2):67-74. McDowall, R.M. 2004. Shoot first, and then ask questions: a look at aquarium fish imports and invasiveness in New Zealand. New Zeal. J. Mar. Fresh. 38(3):503-510. McPherson, J.M., and Vincent, A.C.J. 2004. Assessing East African trade in seahorse species as a basis for conservation under international controls. Aquat. 
Conserv. 14(5):521-538. Medhi, R., Chetry, D., Bhattacharjee, P.C., and Patiri, B.N. 2004. Status of Trachypithecus geei in a rubber plantation in western Assam, India. Int. J. Primatol. 25(6):1331-1337. Menke, C.A., and Muir, P.S. 2004. Short-term influence of wildfire on canyon grassland plant communities and Spalding's catchfly, a threatened plant. Northwest Sci. 78(3):192-203. Miller, J.R., Dixon, M.D., and Turner, M.G. 2004. Response of avian communities in large-river floodplains to environmental variation at multiple scales. Ecol. Appl. 14(5):1394-1410. Miskelly, C.M., and Taylor, G.A. 2004. Establishment of a colony of common diving petrels (Pelecanoides urinatrix) by chick transfers and acoustic attraction. Emu 104(3):205-211. Moerke, A.H., Gerard, K.J., Latimore, J.A., Hellenthal, R.A., and Lamberti, G.A. 2004. Restoration of an Indiana, USA, stream: bridging the gap between basic and applied lotic ecology. J. N. Am. Benthol. Soc. 23(3):647-660. Morgan, D.L., Gill, H.S., Maddern, M.G., and Beatty, S.J. 2004. Distribution and impacts of introduced freshwater fishes in Western Australia. New Zeal. J. Mar. Fresh. 38(3):511-523. Morita, K., Tsubo, J.I., and Matsuda, H. 2004. The impact of exotic trout on native charr in a Japanese stream. J. Appl. Ecol. 41(5):962-972. Musante, S. 2004. A new approach to combat invasive species: project-based training for graduate students. BioScience 54(10):893. Mysterud, A., and Østbye, E. 2004. Roe deer (Capreolus capreolus) browsing pressure affects yew (Taxus baccata) recruitment within nature reserves in Norway. Biol. Conserv. 120(4):545-548. Neilson, K., Kelleher, R., Barnes, G., Speirs, D., and Kelly, J. 2004. Use of fine-mesh monofilament gill nets for the removal of rudd (Scardinius erythrophthalmus) from a small lake complex in Waikato, New Zealand. New Zeal. J. Mar. Fresh. 38(3):525-539. Nemesio, A., and Silveira, F.A. 2004. Biogeographic notes on rare species of Euglossina (Hymenoptera: Apidae: Apini) occurring in the Brazilian Atlantic rain forest. Neotrop. Entomol. 33(1):117-120. Newbold, S., and Eadie, J.M. 2004. Using species-habitat models to target conservation: a case study with breeding mallards. Ecol. Appl. 14(5):1384-1393. Newton, I. 2004. The recent declines of farmland bird populations in Britain: an appraisal of causal factors and conservation actions. Ibis 146(4):579-600. Nicholls, H. 2004. Marine conservation: sink or swim. Nature 432(7013):12-14. Nicol, S.J., Lieschke, J.A., Lyon, J.P., and Koehn, J.D. 2004. Observations on the distribution and abundance of carp and native fish, and their responses to a habitat restoration trial in the Murray River, Australia. New Zeal. J. Mar. Fresh. 38(3):541-551. Nijboer, R.C., and Verdonschot, P.F.M. 2004. Rare and common macroinvertebrates: definition of distribution classes and their boundaries. Arch. Hydrobiol. 161(1):45-64. Normile, D. 2004. Expanding trade with China creates ecological backlash. Science 306(5698):968-969. Nowak, A., and Nowak, S. 2004. The effectiveness of plant conservation: a case study of Opole Province, southwest Poland. Environ. Manage. 34(3):363-371. O'Connell, A.F., Gilbert, A.T., and Hatfield, J.S. 2004. Contribution of natural history collection data to biodiversity assessment in national parks. Conserv. Biol. 18(5):1254-1261. Oppel, S., and Beaven, B.M. 2004. Habitat use and foraging behaviour of mohua (Mohoua ochrocephala) in the podocarp forest of Ulva Island, New Zealand. Emu 104(3):235-240. Pacheco, L.F. 2004.
Large estimates of minimum viable population sizes. Conserv. Biol. 18(5):1178-1179. Palacios, C.J. 2004. Current status and distribution of birds of prey in the Canary Islands. Bird Conserv. Int. 14(3):203-213. Pärtel, M., Helm, A., Ingerpuu, N., Reier, Ü., and Tuvi, E.L. 2004. Conservation of Northern European plant diversity: the correspondence with soil pH. Biol. Conserv. 120(4):525-531. Phillips, B.L., Brown, G.P., and Shine, R. 2004. Assessing the potential for an evolutionary response to rapid environmental change: invasive toads and an Australian snake. Evol. Ecol. Res. 6(6):799-811. Pina, G.P.L., Gamez, R.A.C., and Gonzalez, C.A.L. 2004. Distribution, habitat association, and activity patterns of medium and large sized mammals of Sonora, Mexico. Nat. Areas J. 24(4):354-357. Pinheiro, P.S., Hartmann, P.A., and Geise, L. 2004. New record of Rhagomys rufescens (Thomas 1886) (Rodentia: Muridae: Sigmodontinae) in the Atlantic forest of southeastern Brazil. Zootaxa 431:1-11. Pressey, R.L., Watts, M.E., and Barrett, T.W. 2004. Is maximizing protection the same as minimizing loss? Efficiency and retention as alternative measures of the effectiveness of proposed reserves. Ecol. Lett. 7(11):1035-1046. Price, P.W., Abrahamson, W.G., Hunter, M.D., and Melika, G. 2004. Using gall wasps on oaks to test broad ecological concepts. Conserv. Biol. 18(5):1405-1416. Pujadas-Salvà, A.J., and Crespo, M.B. 2004. A new species of Orobanche (Orobanchaceae) from south-eastern Spain. Bot. J. Linn. Soc. 146(1):97-102. Pywell, R.F., Bullock, J.M., Walker, K.J., Coulson, S.J., Gregory, S.J., and Stevenson, M.J. 2004. Facilitating grassland diversification using the hemiparasitic plant Rhinanthus minor. J. Appl. Ecol. 41(5):880-887. Reed, D.H., O'Grady, J.J., Brook, B.W., Ballou, J.D., and Frankham, R. 2004. Large estimates of minimum viable population sizes. Conserv. Biol. 18(5):1179. Regan, T.J., Master, L.L., and Hammerson, G.A. 2004. Capturing expert knowledge for threatened species assessments: a case study using NatureServe conservation status ranks. Acta Oecol. 26(2):95-107. Reynolds, J.C., Short, M.J., and Leigh, R.J. 2004. Development of population control strategies for mink Mustela vison, using floating rafts as monitors and trap sites. Biol. Conserv. 120(4):533-543. Ricketts, T.H. 2004. Tropical forest fragments enhance pollinator activity in nearby coffee crops. Conserv. Biol. 18(5):1262-1271. Rickey, M.A., and Anderson, R.C. 2004. Effects of nitrogen addition on the invasive grass Phragmites australis and a native competitor Spartina pectinata. J. Appl. Ecol. 41(5):888-896. Ripple, W.J., and Beschta, R.L. 2004. Wolves, elk, willows, and trophic cascades in the upper Gallatin Range of Southwestern Montana, USA. Forest Ecol. Manag. 200(1-3):161-181. Ross, R.M., Redell, L.A., Bennett, R.M., and Young, J.A. 2004. Mesohabitat use of threatened hemlock forests by breeding birds of the Delaware river basin in northeastern United States. Nat. Areas J. 24(4):307-315. Rossi, C.M.R., Lessa, E.P., and Pascual, M.A. 2004. The origin of introduced rainbow trout (Oncorhynchus mykiss) in the Santa Cruz River, Patagonia, Argentina, as inferred from mitochondrial DNA. Can. J. Fish. Aquat. Sci. 61(7):1095-1101. Rothermel, B.B. 2004. Migratory success of juveniles: a potential constraint on connectivity for pond-breeding amphibians. Ecol. Appl. 14(5):1535-1546. Royle, J.A. 2004. Modeling abundance index data from anuran calling surveys. Conserv. Biol. 18(5):1378-1385. 
Sadlier, R.A., Smith, S.A., Bauer, A.M., and Whitaker, A.H. 2004. A new genus and species of live-bearing scincid lizard (Reptilia: Scincidae) from New Caledonia. J. Herpetol. 38(3):320-330. Safi, K., and Kerth, G. 2004. A comparative analysis of specialization and extinction risk in temperate-zone bats. Conserv. Biol. 18(5):1293-1303. Samejima, H., Marzuki, M., Nagamitsu, T., and Nakasizuka, T. 2004. The effects of human disturbance on a stingless bee community in a tropical rainforest. Biol. Conserv. 120(4):577-587. Sánchez-Fernández, D., Abellán, P., Velasco, J., and Millán, A. 2004. Selecting areas to protect the biodiversity of aquatic ecosystems in a semiarid Mediterranean region using water beetles. Aquat. Conserv. 14(5):465-479. Sayer, J., Chokkalingam, U., and Poulsen, J. 2004. The restoration of forest biodiversity and ecological values. Forest Ecol. Manag. 201(1):3-11. Schierenbeck, K.A. 2004. Japanese honeysuckle (Lonicera japonica) as an invasive species; history, ecology, and context. Critical Rev. Plant Sci. 23(5):391-400. Schulze, C.H., Waltert, M., Kessler, P.J.A., Pitopang, R., Shahabuddin, Veddeler, D., Mühlenberg, M., Gradstein, S.R., Leuschner, C., Steffan-Dewenter, I., and Tscharntke, T. 2004. Biodiversity indicator groups of tropical land-use systems: comparing plants, birds, and insects. Ecol. Appl. 14(5):1321-1333. Shin, J.H., and Lee, D.K. 2004. Strategies for restoration of forest ecosystems degraded by forest fire in Kangwon ecoregion of Korea. Forest Ecol. Manag. 201(1):43-56. Silliman, B.R., and Bertness, M.D. 2004. Shoreline development drives invasion of Phragmites australis and the loss of plant diversity on New England salt marshes. Conserv. Biol. 18(5):1424-1434. Sirén, A., Hambäck, P., and Machoa, E. 2004. Including spatial heterogeneity and animal dispersal when evaluating hunting: a model analysis and an empirical assessment in an Amazonian community. Conserv. Biol. 18(5):1315-1329. Solan, M., Cardinale, B.J., Downing, A.L., Engelhardt, K.A.M., Ruesink, J.L., and Srivastava, D.S. 2004. Extinction and ecosystem function in the marine benthos. Science 306(5699):1177-1180. Solomon, B.D., Corey-Luse, C.M., and Halvorsen, K.E. 2004. The Florida manatee and eco-tourism: toward a safe minimum standard. Ecol. Econ. 50(1-2):101-115. Sorensen, P.W., and Stacey, N.E. 2004. Brief review of fish pheromones and discussion of their possible uses in the control of non-indigenous teleost fishes. New Zeal. J. Mar. Fresh. 38(3):399-417. Spyreas, G., Ellis, J., Carroll, C., and Molano-Flores, B. 2004. Non-native plant commonness and dominance in the forests, wetlands, and grasslands of Illinois, USA. Nat. Areas J. 24(4):290-299. Stauffer, H.B., Ralph, C.J., and Miller, S.L. 2004. Ranking habitat for marbled murrelets: new conservation approach for species with uncertain detection. Ecol. Appl. 14(5):1374-1383. Stoner, K.J.L., and Joern, A. 2004. Landscape vs. local habitat scale influences to insect communities from tallgrass prairie remnants. Ecol. Appl. 14(5):1306-1320. Szymanski, J., Shuey, J.A., and Oberhauser, K. 2004. Population structure of the endangered Mitchell's satyr, Neonympha mitchellii mitchellii (French): implications for conservation. Am. Midl. Nat. 152(2):304-322. Tempel, D.J., Gilimburg, A.B., and Wright, V. 2004. The status and management of exotic and invasive species in national wildlife refuge wilderness areas. Nat. Areas J. 24(4):300-306. Tenhumberg, B., Tyre, A.J., Shea, K., and Possingham, H.P. 2004. 
Linking wild and captive populations to maximize species persistence: optimal translocation strategies. Conserv. Biol. 18(5):1304-1314. Terer, T., Ndiritu, G.G., and Gichuki, N.N. 2004. Socio-economic values and traditional strategies of managing wetland resources in lower Tana River, Kenya. Hydrobiologia 527(1):3-14. Ticktin, T., and Nantel, P. 2004. Dynamics of harvested populations of the tropical understory herb Aechmea magdalenae in old-growth versus secondary forests. Biol. Conserv. 120(4):461-470. Timm, R.M., and Genoways, H.H. 2004. The Florida bonneted bat, Eumops floridanus (Chiroptera: Molossidae): distribution, morphometrics, systematics, and ecology. J. Mammal. 85(5):852-865. Tremetsberger, K., Talavera, S., Stuessy, T.F., Ortiz, M.A., Weiss-Schneeweiss, H., and Kadlec, G. 2004. Relationship of Hypochaeris salzmanniana (Asteraceae, Lactuceae), an endangered species of the Iberian Peninsula, to H. radicata and H. glabra and biogeographical implications. Bot. J. Linn. Soc. 146(1):79-95. Trombulak, S.C., Omland, K.S., Robinson, J.A., Lusk, J.J., Fleischner, T.L., Brown, G., and Domroese, M. 2004. Principles of conservation biology: recommended guidelines for conservation literacy from the Education Committee of the Society for Conservation Biology. Conserv. Biol. 18(5):1180-1190. Ture, C., Bingol, N.A., and Middleton, B.A. 2004. Characterization of the habitat of Lythrum salicaria L. in floodplain forests in western Turkey - effects on stem height and seed production. Wetlands 24(3):711-716. Uthicke, S., Welch, D., and Benzie, J.A.H. 2004. Slow growth and lack of recovery in overfished holothurians on the Great Barrier Reef: evidence from DNA fingerprints and repeated large scale surveys. Conserv. Biol. 18(5):1395-1404. van Mantgem, P.J., Stephenson, N.L., Keifer, M., and Keeley, J. 2004. Effects of an introduced pathogen and fire exclusion on the demography of sugar pine. Ecol. Appl. 14(5):1590-1602. Waltert, M., Mardiastuti, A., and Mühlenberg, M. 2004. Effects of land use on bird species richness in Sulawesi, Indonesia. Conserv. Biol. 18(5):1339-1346. Walther, B.A., Wisz, M.S., and Rahbek, C. 2004. Known and predicted African winter distributions and habitat use of the endangered Basra reed warbler (Acrocephalus griseldis) and the near-threatened cinereous bunting (Emberiza cineracea). J. Ornithol. 145(4):287-299. Ward, M.D., and Labisky, R.F. 2004. Post-dispersal germination success of native black gum (Nyssa sylvatica) and introduced camphor tree (Cinnamomum camphora) in Florida, USA. Nat. Areas J. 24(4):341-344. Watson, J.E.M., Whittaker, R.J., and Dawson, T.P. 2004. Avifaunal responses to habitat fragmentation in the threatened littoral forests of south-eastern Madagascar. J. Biogeogr. 31(11):1791-1807. Welk, E. 2004. Constraints in range predictions of invasive plant species due to non-equilibrium distribution patterns: purple loosestrife (Lythrum salicaria) in North America. Ecol. Model. 179(4):551-567. Wickramasinghe, L.P., Harris, S., Jones, G., and Jennings, N.V. 2004. Abundance and species richness of nocturnal insects on organic and conventional farms: effects of agricultural intensification on bat foraging. Conserv. Biol. 18(5):1283-1292. Wilson, R.J., Thomas, C.D., Fox, R., Roy, D.B., and Kunin, W.E. 2004. Spatial patterns in species distributions reveal biodiversity change. Nature 432(7015):393-396. Wood, D.R., Burger, L.W., Bowman, J.L., and Hardy, C.L. 2004. Avian community response to pine-grassland restoration. Wildlife Soc. Bull. 32(3):819-828. 
Wotton, D.M., and Hewitt, C.L. 2004. Marine biosecurity post-border management: developing incursion response systems for New Zealand. New Zeal. J. Mar. Fresh. 38(3):553-559. Yamaguchi, N., Driscoll, C.A., Kitchener, A.C., Ward, J.M., and Macdonald, D.W. 2004. Craniological differentiation between European wildcats (Felis silvestris silvestris), African wildcats (F. s. lybica) and Asian wildcats (F. s. ornata): implications for their evolution and conservation. Biol. J. Linn. Soc. 83(1):47-63. Zak, M.R., Cabido, M., and Hodgson, J.G. 2004. Do subtropical seasonal forests in the Gran Chaco, Argentina, have a future? Biol. Conserv. 120(4):589-598. Zavaleta, E.S., and Hulvey, K.B. 2004. Realistic species losses disproportionately reduce grassland resistance to biological invaders. Science 306(5699):1175-1177. Zedler, J.B., and Kercher, S. 2004. Causes and consequences of invasive plants in wetlands: opportunities, opportunists, and outcomes. Critical Rev. Plant Sci. 23(5):431-452. Zhang, B., Fang, S.G., and Xi, Y.M. 2004. Low genetic diversity in the endangered crested ibis Nipponia nippon and implications for conservation. Bird Conserv. Int. 14(3):183-190. Zhou, Z.H., and Jiang, Z.G. 2004. International trade status and crisis for snake species in China. Conserv. Biol. 18(5):1386-1394. [ TOP ]
http://botany.si.edu/pubs/bcn/issue/241.htm
13
32
Science Fair Project Encyclopedia General equilibrium theory is a branch of theoretical microeconomics. It seeks to explain production, consumption and prices in a whole economy. General equilibrium tries to give an understanding of the whole economy using a bottom-up approach, starting with individual markets and agents. Macroeconomics, as developed by so-called Keynesian economists, uses a top-down approach where the analysis starts with larger aggregates. Since modern macroeconomics has emphasized microeconomic foundations, this distinction has been slightly blurred. However, many macroeconomic models simply have a 'goods market' and study its interaction with, for instance, the financial market. General equilibrium models typically model a multitude of different goods markets. Modern general equilibrium models are typically complex and require computers to help with numerical solutions. Under capitalism, the prices and production of all goods are interrelated. A change in the price of one good, say bread, may affect another price, for example, the wages of bakers. If bakers differ in tastes from others, the demand for bread might be affected by a change in bakers' wages, with a consequent effect on the price of bread. Calculating the equilibrium price of just one good, in theory, requires an analysis that accounts for all of the millions of different goods that are available. History of general equilibrium modelling The first attempt in Neoclassical economics to model prices for a whole economy was made by Léon Walras. Walras' Elements of Pure Economics provides a succession of models, each taking into account more aspects of a real economy (two commodities, many commodities, production, growth, money). Many think Walras was unsuccessful and the later models in this series inconsistent. Nevertheless, Walras first laid down a research programme much followed by 20th century economists. In particular, Walras' agenda included the investigation of when equilibria are unique and stable. Walras also first introduced a restriction into general equilibrium theory that some think has never been overcome, that of the tâtonnement, or groping, process. The tâtonnement process is a tool for investigating the stability of equilibria. Prices are cried out, and agents register how much of each good they would like to offer (supply) or purchase (demand). No transactions and no production take place at disequilibrium prices. Instead, prices are lowered for goods with positive prices and excess supply, and prices are raised for goods with excess demand. The question for the mathematician is under what conditions such a process will terminate in an equilibrium in which demand equates to supply for goods with positive prices and demand does not exceed supply for goods with a price of zero. Walras was not able to provide a definitive answer to this question. In partial equilibrium analysis, the determination of the price of a good is simplified by just looking at the price of one good, and assuming that the prices of all other goods remain constant. The Marshallian theory of supply and demand is an example of partial equilibrium analysis. Partial equilibrium analysis is adequate when the first-order effects of a shift in, say, the demand curve do not shift the supply curve.
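The price-adjustment rule described above lends itself to a small simulation. The sketch below is purely illustrative and is not Walras' own formulation: it assumes a two-consumer, two-good pure exchange economy with Cobb-Douglas preferences, and the endowments, expenditure shares, step size, and function names are all invented for the example. Following the tâtonnement rule, it raises the price of any good in excess demand and lowers the price of any good in excess supply until the quoted prices approximately clear both markets; no trades take place along the way.

```python
# Illustrative tatonnement ("groping") price adjustment in a two-good pure
# exchange economy with Cobb-Douglas consumers. All parameters are invented
# for the example; this is a sketch, not a general solver.

def excess_demand(prices, endowments, alphas):
    """Aggregate excess demand for each good at the quoted prices."""
    demand = [0.0, 0.0]
    for (e1, e2), a in zip(endowments, alphas):
        wealth = prices[0] * e1 + prices[1] * e2
        demand[0] += a * wealth / prices[0]        # Cobb-Douglas demand for good 1
        demand[1] += (1 - a) * wealth / prices[1]  # Cobb-Douglas demand for good 2
    supply = [sum(e[0] for e in endowments), sum(e[1] for e in endowments)]
    return [demand[i] - supply[i] for i in range(2)]

def tatonnement(endowments, alphas, step=0.05, tol=1e-6, max_iter=10_000):
    prices = [1.0, 1.0]  # good 1 is renormalized to act as the numeraire
    for _ in range(max_iter):
        z = excess_demand(prices, endowments, alphas)
        if all(abs(zi) < tol for zi in z):
            break
        # Raise prices of goods in excess demand, lower those in excess supply.
        prices = [max(p + step * zi, 1e-9) for p, zi in zip(prices, z)]
        prices = [p / prices[0] for p in prices]  # renormalize so p1 = 1
    return prices

if __name__ == "__main__":
    endowments = [(10.0, 2.0), (2.0, 10.0)]  # each consumer's holdings of (good 1, good 2)
    alphas = [0.6, 0.6]                      # Cobb-Douglas expenditure shares on good 1
    print("Approximate equilibrium prices:", tatonnement(endowments, alphas))
```

For these made-up parameters the quoted price of good 2 gropes its way toward roughly two-thirds of the price of good 1; whether such a process converges in general is exactly the stability question Walras left open.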
Anglo-American economists became more interested in general equilibrium in the late 1920s and 1930s after Piero Sraffa's demonstration that Marshallian economists cannot explain the forces thought to account for the upward slope of the supply curve for a consumer good. If an industry uses little of a factor of production, a small increase in the output of that industry will not bid the price of that factor up. To a first-order approximation, firms in the industry will not experience decreasing costs and the industry supply curves will not slope up. If an industry uses an appreciable amount of that factor of production, an increase in the output of that industry will exhibit increasing costs. But such a factor is likely to be used in substitutes for the industry's product, and an increased price of that factor will have effects on the supply of those substitutes. Consequently, the first-order effects of a shift in the supply curve of the original industry under these assumptions include a shift in the original industry's demand curve. General equilibrium is designed to investigate such interactions between markets. Continental European economists made important advances in the 1930s. Walras' proofs of the existence of general equilibrium often were based on the counting of equations and variables. Such arguments are inadequate for non-linear systems of equations and do not rule out negative equilibrium prices and quantities, which would be meaningless solutions for his models. The replacement of certain equations by inequalities and the use of more rigorous mathematics improved general equilibrium modeling. Classical economics, as well as Marxist economics, also has its own analyses of natural prices or prices of production. Other theoretical models of the economy as a whole are Wassily Leontief's input-output analysis and John von Neumann's linear programming model of growth. Modern concept of general equilibrium in economics The modern conception of general equilibrium is provided by a model developed jointly by Kenneth Arrow and Gerard Debreu in the 1950s. Gerard Debreu presents this model in Theory of Value (1959) as an axiomatic model, following the style of mathematics promoted by Bourbaki. In such an approach, the interpretation of the terms in the theory (e.g., goods, prices) is not fixed by the axioms. Three important theorems have been proved in this framework. First, existence theorems show that equilibria exist under certain abstract conditions. The first fundamental theorem of welfare economics states that every market equilibrium is Pareto optimal under certain conditions. The second fundamental theorem of welfare economics states that every Pareto optimum is supported by a price system, again under certain conditions. These conditions were stated in the language of mathematical topology. The proofs used such concepts as separating hyperplanes and fixed point theorems. Three important interpretations of the terms of the theory have often been cited. First, suppose commodities are distinguished by the location where they are delivered. Then the Arrow-Debreu model is a spatial model of, for example, international trade. Second, suppose commodities are distinguished by when they are delivered. That is, suppose all markets equilibrate at some initial instant of time. Agents in the model purchase and sell contracts, where a contract specifies, for example, a good to be delivered and the date at which it is to be delivered.
The Arrow-Debreu model of intertemporal equilibrium contains forward markets for all goods at all dates. No markets exist at any future dates. Third, suppose contracts specify states of nature which affect whether or not a commodity is to be delivered: "A contract for the transfer of a commodity now specifies, in addition to its physical properties, its location and its date, an event on the occurrence of which the transfer is conditional. This new definition of a commodity allows one to obtain a theory of [risk] free from any probability concept..." (Debreu 1959) These interpretations can be combined. So the complete Arrow-Debreu model can be said to apply when goods are identified by when they are to be delivered, where they are to be delivered, and under what circumstances they are to be delivered, as well as their intrinsic nature. So there would be a complete set of prices for contracts such as "1 ton of Winter red wheat, delivered on 3rd of January in Minneapolis, if there is a hurricane in Florida during December". A general equilibrium model with complete markets of this sort seems to be a long way from describing the workings of real economies. Unresolved problems in general equilibrium Research building on the Arrow-Debreu model has revealed some problems with the model. The Sonnenschein-Mantel-Debreu results show that, essentially, any restrictions on the shape of aggregate excess demand functions are arbitrary. Some think this implies that the Arrow-Debreu model lacks empirical content. At any rate, Arrow-Debreu equilibria cannot be expected to be unique, stable, or determinate. A model organized around the tâtonnement process has been said to be a model of a centrally planned economy, not a decentralized market economy. Some research has tried, not very successfully, to develop general equilibrium models with other processes. In particular, some economists have developed models in which agents can trade at out-of-equilibrium prices and such trades can affect the equilibria to which the economy tends. Particularly noteworthy are the Hahn process, the Edgeworth process, and the Fisher process. The Arrow-Debreu model of intertemporal equilibrium, in which forward markets exist at the initial instant for goods to be delivered at each future point in time, can be transformed into a model of sequences of temporary equilibrium. Sequences of temporary equilibrium contain spot markets at each point in time. Roy Radner found that in order for equilibria to exist in such models, agents (e.g., firms and consumers) must have unlimited computational capabilities. Although the Arrow-Debreu model is set out in terms of some arbitrary numeraire, the model does not encompass money. Frank Hahn, for example, has investigated whether general equilibrium models can be developed in which money enters in some essential way. The (unsatisfied) goal is to find models in which whether or not money exists alters equilibrium solutions, perhaps because the initial position of agents depends on monetary prices, for example, when they have debts. Some critics of general equilibrium modeling contend that much research in these models constitutes exercises in pure mathematics with no connection to actual economies. "There are endeavors that now pass for the most desirable kind of economic contributions although they are just plain mathematical exercises, not only without any economic substance but also without any mathematical value" (Nicholas Georgescu-Roegen 1979).
Georgescu-Roegen cites as an example a paper that assumed more traders than there are points on the real line. Although modern models in general equilibrium theory demonstrate that under certain circumstances prices will indeed converge to equilibria, critics hold that the assumptions necessary for these results are completely unrealistic. The necessary assumptions include perfect rationality of individuals; complete information about all prices both now and in the future; and the conditions necessary for perfect competition. Frank Hahn defends general equilibrium modeling on the grounds that it serves a negative function: general equilibrium models show what the economy would have to be like for an unregulated economy to be Pareto efficient. Note that Hahn's defense drops any claim that general equilibrium models describe actual capitalist economies. Some economists reject equilibrium theory outright in favour of more pragmatic models based more closely on observation of the economy. The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License.
http://www.all-science-fair-projects.com/science_fair_projects_encyclopedia/General_equilibrium
13
14
Education is central to development. It empowers people and strengthens nations. It is a powerful “equalizer”, opening doors to all to lift themselves out of poverty. It is critical to the world’s attainment of the Millennium Development Goals (MDGs). Two of the eight MDGs pertain to education—namely, universal primary completion and gender parity in primary and secondary schooling. Moreover, education—especially girls’ education—has a direct and proven impact on the goals related to child and reproductive health and environmental sustainability. Education also promotes economic growth, national productivity and innovation, and values of democracy and social cohesion. Benefits of Education Investment in education benefits the individual, society, and the world as a whole. Broad-based education of good quality is among the most powerful instruments known to reduce poverty and inequality. With proven benefits for personal health, it also strengthens nations’ economic health by laying the foundation for sustained economic growth. For individuals and nations, it is key to creating, applying, and spreading knowledge—and thus to the development of dynamic, globally competitive economies. And it is fundamental for the construction of democratic societies. Benefits to the individual Improves health and nutrition: Education greatly benefits personal health. Particularly powerful for girls, it profoundly affects reproductive health, and it also reduces child mortality and improves child welfare through better nutrition and higher immunization rates. Education may be the single most effective preventive weapon against HIV/AIDS. Increases productivity and earnings: Research has established that every year of schooling increases individual wages for both men and women by a worldwide average of about 10 percent. In poor countries, the gains are even greater. Education is a great “leveler”, illiteracy being one of the strongest predictors of poverty. Primary education plays a catalytic role for those most likely to be poor, including girls, ethnic minorities, orphans, disabled people, and rural families. By enabling larger numbers to share in the growth process, education can be the powerful tide that lifts all boats. Benefits to society Drives economic competitiveness: An educated and skilled workforce is one of the pillars of the knowledge-based economy. Increasingly, comparative advantages among nations come less from natural resources or cheap labor and more from technical innovations and the competitive use of knowledge. Studies also link education to economic growth: education contributes to improved productivity, which in theory should lead to higher income and improved economic performance. Has synergistic, poverty-reducing effects: Education can vitally contribute to the attainment of the Millennium Development Goals. While two of the goals pertain directly to education, education also helps to reduce poverty, promote gender equality, lower child mortality rates, protect against HIV/AIDS, reduce fertility rates, and enhance environmental awareness. Contributes to democratization: Countries with higher primary schooling and a smaller gap between rates of boys’ and girls’ schooling tend to enjoy greater democracy. Democratic political institutions (such as power-sharing and clean elections) are more likely to exist in countries with higher literacy rates and education levels.
Promotes peace and stability: Peace education—spanning issues of human security, equity, justice, and intercultural understanding—is of paramount importance. Education also reduces crime: poor school environments lead to deficient academic performance, absenteeism, and dropout—precursors of delinquent and violent behavior. Promotes concern for the environment: Education can enhance natural resource management and national capacity for disaster prevention and adoption of new, environmentally friendly technologies. Benefits of Girls’ education: a wise investment. Investment in girls’ education yields some of the highest returns of all development investments, generating both private and social benefits that accrue to individuals, families, and society at large: Reduces women’s fertility rates: Women with formal education are much more likely to use reliable family planning methods, delay marriage and childbearing, and have fewer and healthier babies than women with no formal education. It is estimated that one year of female schooling reduces fertility by 10 percent. The effect is particularly pronounced for secondary schooling. Lowers infant and child mortality rates: Women with some formal education are more likely to seek medical care, ensure their children are immunized, be better informed about their children's nutritional requirements, and adopt improved sanitation practices. As a result, their infants and children have higher survival rates and tend to be healthier and better nourished. Lowers maternal mortality rates: Women with formal education tend to have better knowledge about health care practices, are less likely to become pregnant at a very young age, tend to have fewer, better-spaced pregnancies, and seek pre- and post-natal care. It is estimated that an additional year of schooling for 1,000 women helps prevent two maternal deaths. Protects against HIV/AIDS infection: Girls’ education ranks among the most powerful tools for reducing girls’ vulnerability. It slows and reduces the spread of HIV/AIDS by contributing to female economic independence, delayed marriage, family planning, and work outside the home, as well as greater information about the disease and how to prevent it. Increases women’s labor force participation rates and earnings: Education has been proven to increase income for wage earners and increase productivity for employers, yielding benefits for the community and society. Creates intergenerational education benefits: Mothers’ education is a significant variable affecting children’s educational attainment and opportunities. A mother with a few years of formal education is considerably more likely to send her children to school. In many countries, each additional year of formal education completed by a mother translates into her children remaining in school for an additional one-third to one-half year.
http://web.worldbank.org/WBSITE/EXTERNAL/TOPICS/EXTEDUCATION/0%2C%2CcontentMDK:20591648~menuPK:1463858~pagePK:148956~piPK:216618~theSitePK:282386%2C00.html
13
16
Richmond, Virginia, Named the Capital of the Confederacy When delegates from six seceded states (Alabama, Florida, Georgia, Louisiana, Mississippi and South Carolina) convened in Montgomery, Alabama, on Feb. 4, 1861, they had an enormous task ahead of them: form a new country, the Confederate States of America. (Delegates from the seventh seceded state, Texas, joined them on March 2.) These delegates of the Provisional Confederate Congress went immediately to work: adopting a new constitution, choosing an interim president, and setting up a new government. While the delegates were busying themselves, momentous events occurred which dramatically affected their thinking and task at hand. On April 12, 1861, Confederate forces in Charleston, South Carolina, fired upon the Union Fort Sumter, beginning the Civil War. On April 15, U.S. President Abraham Lincoln put out a call for 75,000 volunteers to serve for 90 days to put down the Southern rebellion. Suddenly, four other Southern states (Arkansas, North Carolina, Tennessee and Virginia) faced the prospect of their country asking them to help attack their Southern brethren. This they could not do, and all four decided to secede from the Union and join the Confederacy. The Virginia Convention voted to secede just two days after Lincoln’s call to arms, on April 17, 1861. Even though this decision could not be official until ratified in a statewide referendum—which did not occur until May 23—members of the Virginia Convention went ahead on May 4 and offered their capital city, Richmond, to be the capital of the Confederate States of America. On the last day of their second and final session in Montgomery, May 21, 1861, the Provisional Confederate Congress accepted Virginia’s offer—despite grumblings from the Alabama delegates, who thought Montgomery would make a fine capital for the new country. There were many reasons why Virginia was a desirable location for the Confederate capital. First and foremost, perhaps, was the prestige of Virginia: after all, four of the United States’ first five presidents came from Virginia (George Washington, Thomas Jefferson, James Madison and James Monroe). This meant that except for the four years of John Adams’s presidency, the U.S. had been led by a Virginian throughout its first 36 years of existence. Virginia was a populous, proud, rich and productive state with many fine traditions and institutions. There were other, more practical reasons why Virginia was a logical choice to host the new country’s capital. Chief among them was the mighty Tredegar ironworks complex in Richmond, the only industrial plant in the South capable of producing heavy ordnance essential for the war effort. Richmond had other industrial plants as well, and was the center of a large railroad network. It had the largest population in the Confederacy and a robust, varied economy. It also was blessed with a wide range of natural resources. The following four newspaper articles give some idea of how the news of Richmond becoming the Confederate capital was reported. This first article was published (one suspects rather gleefully) by the Richmond Whig and reprinted by the Plain Dealer (Cleveland, Ohio) on May 8, 1861: The Capital of the South Circumstances render it highly probable that Richmond will speedily become the Capital of the great Southern Confederacy. Its position—political, commercial, strategical, moral and sanitary—gives it vast advantages over all competitors. 
President Davis, it is supposed, will make it his headquarters at an early day. The following on this subject was adopted by the [Virginia] Convention Saturday: Resolved, by this Convention, that the President of the Confederate States and the constituted authorities of the Confederacy be, and they are hereby, cordially and respectfully invited, whenever in their opinion the public interest or convenience may require it, to make the City of Richmond, or some other place in this State, the seat of the Government of the Confederacy. This article was published by another Virginia paper, the Alexandria Gazette (Alexandria, Virginia) on May 9, 1861: Our contemporaries of the South are debating the final establishment of the Capital of the Confederate States, and nearly all unite upon Richmond as the most appropriate location. Accessibility, climate, the beauty of the city, and the ancient prestige of the State, plead most eloquently in its behalf. Many consider it an admirable stroke of policy, in the harmonizing effect it would have upon all the border States. The Montgomery press say that there is a strong probability that the permanent Capital will be removed within the period of a month, and suggest Richmond as the most eligible location. The Alabama delegation voted alone for it to remain in Montgomery. The news that the Provisional Confederate Congress had voted on May 21 to accept Virginia's offer of Richmond was reported by the Richmond Whig (Richmond, Virginia) on the front page of its May 24, 1861, issue: The Confederate Capital A dispatch, yesterday, states that Congress has adjourned, and that Richmond has been selected for the seat of Government of the Confederate States. Bravely done! President Davis is expected here some of these days—on his way to pay his respects to his next door neighbor, at Washington! This follow-up report was published by the Daily Constitutionalist (Augusta, Georgia) on May 27, 1861: Removal of the Government Montgomery, Ala., May 27.—The business of the several departments of the Government here is pretty much suspended. The officials and clerks are all busily engaged in packing up papers, documents, furniture, &c., and directing them to Richmond. In a day or two, everything belonging to the Government will be en route for Richmond, Va., the new Capital of the Southern Confederacy.
http://www.newsinhistory.com/blog/richmond-virginia-named-capital-confederacy
13
21
The proportions and expressions of the human face are important for identifying origin, emotional tendencies, health qualities, and some social information. From birth, faces are important in the individual's social interaction. Face perception is very complex, as the recognition of facial expressions involves extensive and diverse areas in the brain. Damage to certain parts of the brain can cause specific impairments in understanding faces, a condition known as prosopagnosia. From birth, infants possess rudimentary facial processing capacities. Infants as young as two days of age are capable of mimicking the facial expressions of an adult, displaying their capacity to note details like mouth and eye shape as well as to move their own muscles in a way that produces similar patterns in their faces. However, despite this ability, newborns are not yet aware of the emotional content encoded within facial expressions. Five-month-olds, when presented with an image of a person making a fearful expression and a person making a happy expression, pay the same amount of attention to and exhibit similar event-related potentials for both. When seven-month-olds are given the same treatment, they focus more on the fearful face, and their event-related potential for the scared face shows a stronger initial negative central component than that for the happy face. This result indicates an increased attentional and cognitive focus toward fear that reflects the threat-salient nature of the emotion. In addition, infants’ negative central components did not differ for new faces that varied in the intensity of an emotional expression but portrayed the same emotion as a face they had been habituated to, yet were stronger for faces portraying a different emotion, showing that seven-month-olds regarded happy and sad faces as distinct emotive categories. The recognition of faces is an important neurological mechanism that an individual uses every day. Jeffrey and Rhodes said that faces "convey a wealth of information that we use to guide our social interactions." For example, emotions play a large role in our social interactions. The perception of a positive or negative emotion on a face affects the way that an individual perceives and processes that face. A face that is perceived to have a negative emotion is processed in a less holistic manner than a face displaying a positive emotion. The ability to recognize faces is apparent even in early childhood. By age five, the neurological mechanisms responsible for face recognition are present. Research shows that the way children process faces is similar to that of adults; however, adults process faces more efficiently. This may be because of improvements in memory and cognitive functioning that occur with age. Infants are able to comprehend facial expressions as social cues representing the feelings of other people before they have been alive for a year. At seven months, the object of an observed face’s apparent emotional reaction is relevant in processing the face. Infants at this age show greater negative central components to angry faces that are looking directly at them than to angry faces looking elsewhere, although the direction of fearful faces’ gaze produces no difference. In addition, two ERP components in the posterior part of the brain are differently aroused by the two negative expressions tested. These results indicate that infants at this age can at least partially understand the higher level of threat from anger directed at them as compared to anger directed elsewhere.
By at least seven months of age, infants are also able to use others’ facial expressions to understand their behavior. Seven-month-olds will look to facial cues to understand the motives of other people in ambiguous situations, as shown by a study in which they watched an experimenter’s face longer if she took a toy from them and maintained a neutral expression than if she made a happy expression. Interest in the social world is increased by interaction with the physical environment. Training three-month-old infants to reach for objects with Velcro-covered “sticky mitts” increases the amount of attention that they pay to faces as compared to passively moving objects through their hands and non-trained control groups. In line with the notion that seven-month-olds have categorical understandings of emotion, they are also capable of associating emotional prosody with corresponding facial expressions. When presented with a happy or angry face, shortly followed by an emotionally-neutral word read in a happy or angry tone, their ERPs follow different patterns. Happy faces followed by angry vocal tones produce more changes than the other incongruous pairing, while there is no such difference between happy and angry congruous pairings; the greater reaction implies that infants held greater expectations of a happy vocal tone after seeing a happy face than of an angry tone following an angry face. Considering an infant’s relative immobility and thus their decreased capacity to elicit negative reactions from their parents, this result implies that experience has a role in building comprehension of facial expressions. Several other studies indicate that early perceptual experience is crucial to the development of capacities characteristic of adult visual perception, including the ability to identify familiar others and to recognize and comprehend facial expressions. The capacity to discern between faces, much like language, appears to have a broad potential early in life that is whittled down to the kinds of faces experienced during that period. Infants can discern between macaque faces at six months of age, but, without continued exposure, cannot at nine months of age. Being shown photographs of macaques during this three-month period gave nine-month-olds the ability to reliably tell unfamiliar macaque faces apart. The neural substrates of face perception in infants are likely similar to those of adults, but the limits of imaging technology feasible for use with infants currently prevent very specific localization of function, as well as specific information from subcortical areas like the amygdala, which is active in the perception of facial expression in adults. In a study on healthy adults, it was shown that faces are likely to be processed, in part, via a retinotectal (subcortical) pathway. However, there is activity near the fusiform gyrus, as well as in occipital areas, when infants are exposed to faces, and it varies depending on factors including facial expression and eye gaze direction. Adult face perception Theories about the processes involved in adult face perception have largely come from two sources: research on normal adult face perception and the study of impairments in face perception that are caused by brain injury or neurological illness. Novel optical illusions such as the Flashed Face Distortion Effect, in which scientific phenomenology outpaces neurological theory, also provide areas for research.
One of the most widely accepted theories of face perception argues that understanding faces involves several stages: from basic perceptual manipulations on the sensory information to derive details about the person (such as age, gender or attractiveness), to being able to recall meaningful details such as their name and any relevant past experiences of the individual. This model (developed by psychologists Vicki Bruce and Andrew Young) argues that face perception might involve several independent sub-processes working in unison. A "view centered description" is derived from the perceptual input. Simple physical aspects of the face are used to work out age, gender or basic facial expressions. Most analysis at this stage is on a feature-by-feature basis. That initial information is used to create a structural model of the face, which allows it to be compared to other faces in memory, and across views. This explains why the same person seen from a novel angle can still be recognized. This structural encoding can be seen to be specific for upright faces, as demonstrated by the Thatcher effect. The structurally encoded representation is transferred to notional "face recognition units" that are used with "personal identity nodes" to identify a person through information from semantic memory. The natural ability to produce someone's name when presented with their face has been shown in experimental research to be damaged in some cases of brain injury, suggesting that naming may be a separate process from the memory of other information about a person. The study of prosopagnosia (an impairment in recognizing faces which is usually caused by brain injury) has been particularly helpful in understanding how normal face perception might work. Individuals with prosopagnosia may differ in their abilities to understand faces, and it has been the investigation of these differences which has suggested that multi-stage theories might be correct. Face perception is an ability that involves many areas of the brain; however, some areas have been shown to be particularly important. Brain imaging studies typically show a great deal of activity in an area of the temporal lobe known as the fusiform gyrus, an area also known to cause prosopagnosia when damaged (particularly when damage occurs on both sides). This evidence has led to a particular interest in this area, and it is sometimes referred to as the fusiform face area for that reason. Neuroanatomy of facial processing There are several parts of the brain that play a role in face perception. Rossion, Hanseeuw, and Dricot used BOLD fMRI mapping to identify activation in the brain when subjects viewed both cars and faces. The majority of BOLD fMRI studies use blood oxygen level dependent (BOLD) contrast to determine which areas of the brain are activated by various cognitive functions. They found that the occipital face area, located in the occipital lobe, the fusiform face area, the superior temporal sulcus, the amygdala, and the anterior/inferior cortex of the temporal lobe all played roles in contrasting the faces from the cars, with the initial face perception beginning in the fusiform face area and occipital face areas. This entire region links to form a network that acts to distinguish faces. The processing of faces in the brain is known as a "sum of parts" perception. However, the individual parts of the face must be processed first in order to put all of the pieces together.
In early processing, the occipital face area contributes to face perception by recognizing the eyes, nose, and mouth as individual pieces. Furthermore, Arcurio, Gold, and James used BOLD fMRI mapping to determine the patterns of activation in the brain when parts of the face were presented in combination and when they were presented singly. The occipital face area is activated by the visual perception of single features of the face, for example, the nose and mouth, and shows a preference for the combination of the two eyes over other combinations. This research supports the idea that the occipital face area recognizes the parts of the face at the early stages of recognition. In contrast, the fusiform face area shows no preference for single features, because the fusiform face area is responsible for "holistic/configural" information, meaning that it puts all of the processed pieces of the face together in later processing. This theory is supported by the work of Gold et al., who found that, regardless of the orientation of a face, subjects were impacted by the configuration of the individual facial features. Subjects were also impacted by the coding of the relationships between those features. This shows that processing is done by a summation of the parts in the later stages of recognition. Facial perception has well-identified neuroanatomical correlates in the brain. During the perception of faces, major activations occur in the extrastriate areas bilaterally, particularly in the fusiform face area (FFA), the occipital face area (OFA), and the superior temporal sulcus (fSTS). The FFA is located in the lateral fusiform gyrus. It is thought that this area is involved in holistic processing of faces and it is sensitive to the presence of facial parts as well as the configuration of these parts. The FFA is also necessary for successful face detection and identification. This is supported by fMRI activation and studies on prosopagnosia, which involves lesions in the FFA. The OFA is located in the inferior occipital gyrus. Similar to the FFA, this area is also active during successful face detection and identification, a finding that is supported by fMRI activation. The OFA is involved in, and necessary for, the analysis of facial parts but not the spacing or configuration of those parts. This suggests that the OFA may be involved in a facial processing step that occurs prior to the FFA processing. The fSTS is involved in recognition of facial parts and is not sensitive to the configuration of these parts. It is also thought that this area is involved in gaze perception. The fSTS has demonstrated increased activation when attending to gaze direction. Bilateral activation is generally shown in all of these specialized facial areas. However, some studies report increased activation on one side over the other. For instance, McCarthy (1997) has shown that the right fusiform gyrus is more important for facial processing in complex situations. Gorno-Tempini and Price have shown that the fusiform gyri are preferentially responsive to faces, whereas the parahippocampal/lingual gyri are responsive to buildings. It is important to note that while certain areas respond selectively to faces, facial processing involves many neural networks. These networks include visual and emotional processing systems as well. Research on emotional face processing has demonstrated that some of these other functions are also at work.
When subjects look at faces displaying emotions (especially fearful facial expressions) compared to neutral faces, there is increased activity in the right fusiform gyrus. This increased activity also correlates with increased amygdala activity in the same situations. The emotional processing effects observed in the fusiform gyrus are decreased in patients with amygdala lesions. This demonstrates possible connections between the amygdala and facial processing areas. Another aspect that affects activation of both the fusiform gyrus and the amygdala is the familiarity of faces. Having multiple regions that can be activated by similar face components indicates that facial processing is a complex process. Ishai and colleagues have proposed the object form topology hypothesis, which posits that there is a topological organization of neural substrates for object and facial processing. However, Gauthier disagrees and suggests that the category-specific and process-map models could accommodate most other proposed models for the neural underpinnings of facial processing. Most neuroanatomical substrates for facial processing are perfused by the middle cerebral artery (MCA). Therefore, facial processing has been studied using measurements of mean cerebral blood flow velocity in the middle cerebral arteries bilaterally. During facial recognition tasks, greater changes in the right middle cerebral artery (RMCA) than the left (LMCA) have been observed. It has been demonstrated that men are right-lateralized and women left-lateralized during facial processing tasks. Just as memory and cognitive function separate the abilities of children and adults to recognize faces, the familiarity of a face may also play a role in the perception of faces. Zheng, Mondloch, and Segalowitz recorded event-related potentials to determine the timing of face recognition in the brain. The results of the study showed that familiar faces are indicated and recognized by a stronger N250, an event-related potential component that plays a role in the visual memory of faces. Similarly, Moulson et al. found that all faces elicit the N170 response in the brain. Hemispheric asymmetries in facial processing capability The mechanisms underlying gender-related differences in facial processing have not been studied extensively. Studies using electrophysiological techniques have demonstrated gender-related differences during a face recognition memory (FRM) task and a facial affect identification task (FAIT). Male subjects used a right-hemisphere, and female subjects a left-hemisphere, neural activation system in the processing of faces and facial affect. Moreover, in facial perception there was no association with estimated intelligence, suggesting that face recognition performance in women is unrelated to several basic cognitive processes. Gender-related differences may suggest a role for sex hormones. In females, there may be variability in psychological functions related to differences in hormone levels during different phases of the menstrual cycle. Data obtained in both normal and pathological conditions support asymmetric face processing. In 2001, Gorno-Tempini and others suggested that the left inferior frontal cortex and the bilateral occipitotemporal junction respond equally to all face conditions. Some neuroscientists contend that both the left inferior frontal cortex (Brodmann area 47) and the occipitotemporal junction are implicated in facial memory.
The right inferior temporal/fusiform gyrus responds selectively to faces but not to non-faces. The right temporal pole is activated during the discrimination of familiar faces and scenes from unfamiliar ones. Right asymmetry in the mid temporal lobe for faces has also been shown using 133-Xenon-measured cerebral blood flow (CBF). Other investigators have observed right lateralization for facial recognition in previous electrophysiological and imaging studies. The implication of the observation of asymmetry for facial perception would be that different hemispheric strategies would be implemented. The right hemisphere would be expected to employ a holistic strategy, and the left an analytic strategy. In 2007, Philip Njemanze, using a novel functional transcranial Doppler (fTCD) technique called functional transcranial Doppler spectroscopy (fTCDS), demonstrated that men were right lateralized for object and facial perception, while women were left lateralized for facial tasks but showed a right tendency or no lateralization for object perception. Using fTCDS, Njemanze demonstrated summation of responses related to facial stimulus complexity, which could be taken as evidence for topological organization of these cortical areas in men. It may suggest that the latter extends from the area implicated in object perception to a much greater area involved in facial perception. This agrees with the object form topology hypothesis proposed by Ishai and colleagues in 1999. However, the relatedness of object and facial perception was process based, and appears to be associated with their common holistic processing strategy in the right hemisphere. Moreover, when the same men were presented with a facial paradigm requiring analytic processing, the left hemisphere was activated. This agrees with the suggestion made by Gauthier in 2000 that the extrastriate cortex contains areas that are best suited for different computations, described as the process-map model. Therefore, the proposed models are not mutually exclusive, and this underscores the fact that facial processing does not impose any new constraints on the brain other than those used for other stimuli. It may be suggested that each stimulus was mapped by category into face or non-face, and by process into holistic or analytic. Therefore, a unified category-specific process-mapping system was implemented for either right or left cognitive styles. Njemanze, in 2007, concluded that, for facial perception, men use a category-specific process-mapping system for the right cognitive style, whereas women use the same system for the left. Cognitive neuroscientists Isabel Gauthier and Michael Tarr are two of the major proponents of the view that face recognition involves expert discrimination of similar objects (see the Perceptual Expertise Network). Other scientists, in particular Nancy Kanwisher and her colleagues, argue that face recognition involves processes that are face-specific and that are not recruited by expert discriminations in other object classes (see domain specificity).
Studies by Gauthier have shown that an area of the brain known as the fusiform gyrus (sometimes called the "fusiform face area" (FFA) because it is active during face recognition) is also active when study participants are asked to discriminate between different types of birds and cars, and even when participants become expert at distinguishing computer-generated nonsense shapes known as greebles. This suggests that the fusiform gyrus may have a general role in the recognition of similar visual objects. Yaoda Xu, then a postdoctoral fellow with Nancy Kanwisher, replicated the car and bird expertise study using an improved fMRI design that was less susceptible to attentional accounts. The activity found by Gauthier when participants viewed non-face objects was not as strong as when participants were viewing faces; however, this could be because we have much more expertise for faces than for most other objects. Furthermore, not all findings of this research have been successfully replicated; for example, other research groups using different study designs have found that the fusiform gyrus is specific to faces and that other nearby regions deal with non-face objects. However, these failures to replicate are difficult to interpret, because studies vary on too many aspects of the method. It has been argued that some studies test experts with objects that are slightly outside of their domain of expertise. More to the point, failures to replicate are null effects and can occur for many different reasons. In contrast, each replication adds a great deal of weight to a particular argument. With regard to "face-specific" effects in neuroimaging, there are now multiple replications with greebles, with birds and cars, and two unpublished studies with chess experts. Although it is sometimes found that expertise recruits the FFA (e.g. as hypothesized by a proponent of this view in the preceding paragraph), a more common and less controversial finding is that expertise leads to focal category-selectivity in the fusiform gyrus—a pattern similar in terms of antecedent factors and neural specificity to that seen for faces. As such, it remains an open question as to whether face recognition and expert-level object recognition recruit similar neural mechanisms across different subregions of the fusiform or whether the two domains literally share the same neural substrates. Moreover, at least one study argues that the issue as to whether expertise-predicated category-selective areas overlap with the FFA is nonsensical in that multiple measurements of the FFA within an individual person often overlap no more with each other than do measurements of FFA and expertise-predicated regions. At the same time, numerous studies have failed to replicate these expertise effects altogether. For example, four published fMRI studies have asked whether expertise has any specific connection to the FFA in particular, by testing for expertise effects in both the FFA and a nearby but not face-selective region called LOC (Rhodes et al., JOCN 2004; Op de Beeck et al., JN 2006; Moore et al., JN 2006; Yue et al., VR 2006). In all four studies, expertise effects are significantly stronger in the LOC than in the FFA, and indeed expertise effects were only borderline significant in the FFA in two of the studies, while the effects were robust and significant in the LOC in all four studies.
Therefore, it is still not clear in exactly which situations the fusiform gyrus becomes active, although it is certain that face recognition relies heavily on this area and damage to it can lead to severe face recognition impairment. Differences in own- versus other-race face recognition and perceptual discrimination were first researched in 1914. Humans tend to perceive people of races other than their own as all looking alike: Other things being equal, individuals of a given race are distinguishable from each other in proportion to our familiarity, to our contact with the race as a whole. Thus, to the uninitiated American all Asiatics look alike, while to the Asiatics, all White men look alike. This phenomenon is known as the cross-race effect, own-race effect, other-race effect, own race bias or interracial-face-recognition-deficit. The effect appears in the brain as early as 170 ms, with the N170 brain response to faces. A meta-analysis by Mullen found evidence that the other-race effect is larger among White subjects than among African American subjects, whereas Brigham and Williamson (1979, cited in Shepherd, 1981) obtained the opposite pattern. Shepherd also reviewed studies that found a main effect of race of face like that of the present study, with better performance on White faces; other studies in which no difference was found; and yet other studies in which performance was better on African American faces. Overall, Shepherd reports a reliable positive correlation between the size of the effect of target race (indexed by the difference in proportion correct on same- and other-race faces) and self-ratings of amount of interaction with members of the other race, r(30) = .57, p < .01. This correlation is at least partly an artifact of the fact that African American subjects, who performed equally well on faces of both races, almost always responded with the highest possible self-rating of amount of interaction with white people (M = 4.75), whereas their white counterparts both demonstrated an other-race effect and reported less other-race interaction (M = 2.13); the difference in ratings was reliable, t(30) = 7.86, p < .01. Further research points to the importance of other-race experience in own- versus other-race face processing (O'Toole et al., 1991; Slone et al., 2000; Walker & Tanaka, 2003). In a series of studies, Walker and colleagues showed the relationship between amount and type of other-race contact and the ability to perceptually differentiate other-race faces (Walker & Tanaka, 2003; Walker & Hewstone, 2006a,b; 2007). Participants with greater other-race experience were consistently more accurate at discriminating between other-race faces than were participants with less other-race experience. In addition to other-race contact, there is some suggestion that the own-race effect is linked to increased ability to extract information about the spatial relationships between different features. Richard Ferraro writes that facial recognition is an example of a neuropsychological measure that can be used to assess cognitive abilities that are salient within African-American culture. Daniel T. Levin writes that the deficit occurs because people emphasize visual information specifying race at the expense of individuating information when recognizing faces of other races. Further research using perceptual tasks could shed light on the specific cognitive processes involved in the other-race effect.
The question of whether the own-race effect can be overcome was already indirectly addressed by Ekman & Friesen in 1976 and by Ducci, Arcuri, Georgis & Sineshaw in 1982. They had observed that people from New Guinea and Ethiopia who had had prior contact with white people had a significantly better emotion recognition rate. Studies on adults have also shown sex differences in face recognition. Men tend to recognize fewer faces of women than women do, whereas there are no sex differences with regard to male faces.
In individuals with autism spectrum disorder
Autism spectrum disorder (ASD) is a comprehensive neurodevelopmental disorder that produces many deficits, including social, communicative, and perceptual deficits. Of specific interest, individuals with autism exhibit difficulties in various aspects of facial perception, including facial identity recognition and recognition of emotional expressions. These deficits are suspected to be a product of abnormalities occurring in both the early and late stages of facial processing.
Speed and methods
People with ASD process face and non-face stimuli with the same speed. In typically developing individuals, there is a preference for face processing, resulting in a faster processing speed in comparison to non-face stimuli. These individuals primarily utilize holistic processing when perceiving faces. By contrast, individuals with ASD employ part-based, or bottom-up, processing, focusing on individual features rather than the face as a whole. When focusing on the individual parts of the face, persons with ASD direct their gaze primarily to the lower half of the face, specifically the mouth, unlike the eye-directed gaze of typically developing people. This deviation from holistic face processing does not employ the use of facial prototypes, which are templates stored in memory that make for easy retrieval. Additionally, individuals with ASD display difficulty with recognition memory, specifically memory that aids in identifying faces. The memory deficit is selective for faces and does not extend to other objects or visual inputs. Some evidence lends support to the theory that these face-memory deficits are products of interference between connections of face processing regions.
Associated difficulties
The atypical facial processing style of people with ASD often manifests in constrained social ability, due to decreased eye contact, joint attention, interpretation of emotional expression, and communicative skills. These deficiencies can be seen in infants as young as 9 months, specifically in terms of poor eye contact and difficulties engaging in joint attention. Some experts have even used the term 'face avoidance' to describe the phenomenon in which infants who are later diagnosed with ASD preferentially attend to non-face objects over faces. Furthermore, some have proposed that the demonstrated impairment in the ability of children with ASD to grasp the emotional content of faces is not a reflection of an incapacity to process emotional information, but rather the result of a general inattentiveness to facial expression. The constraints of these processes, which are essential to the development of communicative and social-cognitive abilities, are viewed as the cause of impaired social engagement and responsivity.
Furthermore, research suggests that there exists a link between decreased face processing abilities in individuals with ASD and later deficits in Theory of Mind; for example, while typically developing individuals are able to relate others' emotional expressions to their actions, individuals with ASD do not demonstrate this skill to the same extent. There is some contention about this causation, however, resembling a chicken-or-egg dispute. Others theorize that social impairment leads to perceptual problems rather than vice versa. In this perspective, a biological lack of social interest inherent to ASD inhibits the development of facial recognition and perception processes due to underutilization. Continued research is necessary to determine which theory is best supported. Many of the obstacles that individuals with ASD face in terms of facial processing may be derived from abnormalities in the fusiform face area and amygdala, which have been shown to be important in face perception as discussed above. Typically, the fusiform face area in individuals with ASD has reduced volume compared to normally developed persons. This volume reduction has been attributed to deviant amygdala activity that does not flag faces as emotionally salient and thus decreases activation levels of the fusiform face area. This hypoactivity in the fusiform face area has been found in several studies. Studies are not conclusive as to which brain areas people with ASD use instead. One study found that, when looking at faces, people with ASD exhibit activity in brain regions normally active when typically developing individuals perceive objects. Another study found that during facial perception, people with ASD use different neural systems, with each of them using their own unique neural circuitry.
Compensation mechanisms
As individuals with ASD age, scores on behavioral tests assessing the ability to perform face-emotion recognition increase to levels similar to controls. Yet, it is apparent that the recognition mechanisms of these individuals are still atypical, though often effective. In terms of face identity-recognition, compensation can take many forms, including a more pattern-based strategy, which was first seen in face inversion tasks. Alternatively, evidence suggests that older individuals compensate by using mimicry of others' facial expressions and rely on motor feedback from their own facial muscles for face emotion-recognition. These strategies help overcome the obstacles individuals with ASD face in interacting within social contexts.
Artificial face perception
A great deal of effort has been put into developing software that can recognize human faces. Much of the work has been done by a branch of artificial intelligence known as computer vision, which uses findings from the psychology of face perception to inform software design. Recent breakthroughs using noninvasive functional transcranial Doppler spectroscopy, as demonstrated by Njemanze (2007), to locate specific responses to facial stimuli have led to improved systems for facial recognition. The new system uses input responses called cortical long-term potentiation (CLTP), derived from Fourier analysis of mean blood flow velocity, to trigger a target face search from a computerized face database system. Such a system provides a brain-machine interface for facial recognition, and the method has been referred to as cognitive biometrics. Another interesting application is the estimation of human age from face images.
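Within computer vision, age estimation is typically framed as a regression problem: features extracted from a face image are mapped to an age in years. A minimal sketch of that formulation using scikit-learn, with placeholder feature vectors and ages (illustrative only, not any specific published system; real systems use learned face descriptors rather than random numbers):

    import numpy as np
    from sklearn.linear_model import Ridge

    # Hypothetical training data: each row stands in for a feature vector extracted
    # from a face image; y holds the corresponding known ages in years.
    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(200, 64))    # 200 faces, 64-dimensional features (placeholder)
    y_train = rng.uniform(5, 80, size=200)  # placeholder ages

    model = Ridge(alpha=1.0)                # simple regularized linear regression
    model.fit(X_train, y_train)

    X_new = rng.normal(size=(1, 64))        # features of a new face image (placeholder)
    predicted_age = model.predict(X_new)[0]
    print(round(predicted_age, 1))

Swapping the placeholder arrays for real face descriptors and known ages is all that separates this sketch from a working baseline; the real difficulty, discussed below, lies in gathering enough labeled data and in the variability of aging itself.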
As important cues in human communication, facial images contain a great deal of useful information, including gender, expression, and age. Unfortunately, compared with other such recognition problems, age estimation from facial images is still very challenging. This is mainly because the aging process is influenced not only by a person's genes but also by many external factors: physical condition, lifestyle, and so on may accelerate or slow the aging process. In addition, because aging is slow and unfolds over a long duration, collecting sufficient data for training is demanding work.
See also
- Capgras delusion - Fregoli syndrome - Cognitive neuropsychology - Delusional misidentification syndrome - Facial recognition system - Prosopagnosia, or face blindness - Recognition of human individuals - Social cognition - Thatcher effect - The Greebles - Pareidolia, perceiving faces in random objects and shapes - Apophenia, seeing meaningful patterns in random data - Hollow face illusion - N170, an event-related potential associated with viewing faces - Cross-race effect
Further reading
- Bruce, V. and Young, A. (2000) In the Eye of the Beholder: The Science of Face Perception. Oxford: Oxford University Press. ISBN 0-19-852439-0 - Tiffany M. Field, Robert Woodson, Reena Greenberg, Debra Cohen (8 October 1982). "Discrimination and imitation of facial expressions by neonates". Science 218 (4568): 179–181. doi:10.1126/science.7123230. PMID 7123230. - Mikko J. Peltola, Jukka M. Leppanen, Silja Maki & Jari K. Hietanen (June 2009). "Emergence of enhanced attention to fearful faces between 5 and 7 months of age". Social cognitive and affective neuroscience 4 (2): 134–142. doi:10.1093/scan/nsn046. PMC 2686224. PMID 19174536. - Leppanen, Jukka; Richmond, Jenny; Vogel-Farley, Vanessa; Moulson, Margaret; Nelson, Charles (May 2009). "Categorical representation of facial expressions in the infant brain". Infancy: the official journal of the International Society on Infant Studies 14 (3): 346–362. doi:10.1080/15250000902839393. PMC 2954432. PMID 20953267. - Jeffery, L.; Rhodes, G. (2011). "Insights into the development of face recognition mechanisms revealed by face aftereffects". British Journal of Psychology 102 (4): 799–815. doi:10.1111/j.2044-8295.2011.02066.x. - Curby, K.M.; Johnson, K.J., & Tyson A. (2012). "Face to face with emotion: Holistic face processing is modulated by emotional state". Cognition and Emotion 26 (1): 93–102. doi:10.1080/02699931.2011.555752. - Stefanie Hoehl & Tricia Striano (November–December 2008). "Neural processing of eye gaze and threat-related emotional facial expressions in infancy". Child development 79 (6): 1752–1760. doi:10.1111/j.1467-8624.2008.01223.x. PMID 19037947. - Tricia Striano & Amrisha Vaish (2010). "Seven- to 9-month-old infants use facial expressions to interpret others' actions". British Journal of Developmental Psychology 24 (4): 753–760. doi:10.1348/026151005X70319. - Klaus Libertus & Amy Needham (November 2011). "Reaching experience increases face preference in 3-month-old infants". Developmental science 14 (6): 1355–1364. 
doi:10.1111/j.1467-7687.2011.01084.x. PMID 22010895. - Tobias Grossmann, Tricia Striano & Angela D. Friederici (May 2006). "Crossmodal integration of emotional information from face and voice in the infant brain". Developmental science 9 (3): 309–315. doi:10.1111/j.1467-7687.2006.00494.x. PMID 16669802. - Charles A. Nelson (March–June 2001). "The development and neural bases of face recognition". nfant and Child Development 10 (1–2): 3–18. doi:10.1002/icd.239. - O. Pascalis, L. S. Scott, D. J. Kelly, R. W. Shannon, E. Nicholson, M. Coleman & C. A. Nelson (April 2005). "Plasticity of face processing in infancy". Proceedings of the National Academy of Sciences of the United States of America 102 (14): 5297–5300. doi:10.1073/pnas.0406627102. PMC 555965. PMID 15790676. - Emi Nakato, Yumiko Otsuka, So Kanazawa, Masami K. Yamaguchi & Ryusuke Kakigi (January 2011). "Distinct differences in the pattern of hemodynamic response to happy and angry facial expressions in infants--a near-infrared spectroscopic study". NeuroImage 54 (2): 1600–1606. doi:10.1016/j.neuroimage.2010.09.021. PMID 20850548. - Awasthi B, Friedman J, Williams, MA (2011). "Processing of low spatial frequency faces at periphery in choice reaching tasks". Neuropsychologia 49 (7): 2136–41. doi:10.1016/j.neuropsychologia.2011.03.003. PMID 21397615. - Bruce V, Young A (August 1986). "Understanding face recognition". Br J Psychology 77 (Pt 3): 305–27. doi:10.1111/j.2044-8295.1986.tb02199.x. PMID 3756376. - Kanwisher N, McDermott J, Chun MM (1 June 1997). "The fusiform face area: a module in human extrastriate cortex specialized for face perception". J. Neurosci. 17 (11): 4302–11. PMID 9151747. - Rossion, B.; Hanseeuw, B., & Dricot, L. (2012). "Defining face perception areas in the human brain: A large scale factorial fMRI face localizer analysis.". Brain and Cognition 79 (2): 138–157. doi:10.1016/j.bandc.2012.01.001. - KannurpattiRypmaBiswal, S.S.B. (March 2012). "Prediction of task-related BOLD fMRI with amplitude signatures of resting-state fMRI". Frontiers in Systems Neuroscience 6: 1–7. doi:10.3389/fnsys.2012.00007. - Gold, J.M.; Mundy, P.J., & Tjan, B.S. (2012). "The perception of a face is no more than the sum of its parts". Psychological Science 23 (4): 427–434. doi:10.1177/0956797611427407. - Pitcher, D.; Walsh, V., & Duchaine, B. (2011). "The role of the occipital face area in the cortical face perception network". Experimental Brain Research 209 (4): 481–493. doi:10.1007/s00221-011-2579-1. - Arcurio, L.R.; Gold, J.M., & James, T.W. (2012). "The response of face-selective cortex with single face parts and part combinations". Neuropsychologia 50 (10): 2454–2459. doi:10.1016/j.neuropsychologia.2012.06.016. - Arcurio, L.R.; Gold, J.M., & James, T.W. (2012). "The response of face-selective cortex with single face parts and part combinations". Neuropsychologia 50 (10): 2458. doi:10.1016/j.neuropsychologia.2012.06.016. - Liu J, Harris A, Kanwisher N. (2010). Perception of face parts and face configurations: An fmri study. Journal of Cognitive Neuroscience. (1), 203–211. - Rossion, B., Caldara, R., Seghier, M., Schuller, A-M., Lazeyras, F., Mayer, E., (2003). A network of occipito-temporal face-sensitive areas besides the right middle fusiform gyrus is necessary for normal face processing. A Journal of Neurology, 126 11 2381-2395 - McCarthy, G., Puce, A., Gore, J., Allison, T., (1997). Face-Specific Processing in the Human Fusiform Gyrus. 
Journal of Cognitive Neuroscience, 9 5 605-610 - Campbell, R., Heywood, C.A., Cowey, A., Regard, M., and Landis, T. (1990). Sensitivity to eye gaze in prosopagnosic patients and monkeys with superior temporal sulcus ablation. Neuropsychologia, 28(11), 1123-1142 - 8 (2). 1996. pp. 139–46. PMID 9081548. Missing or empty - Haxby JV, Horwitz B, Ungerleider LG, Maisog JM, Pietrini P, Grady CL (1 November 1994). "The functional organization of human extrastriate cortex: a PET-rCBF study of selective attention to faces and locations". J. Neurosci. 14 (11 Pt 1): 6336–53. PMID 7965040. - Haxby JV, Ungerleider LG, Clark VP, Schouten JL, Hoffman EA, Martin A (January 1999). "The effect of face inversion on activity in human neural systems for face and object perception". Neuron 22 (1): 189–99. doi:10.1016/S0896-6273(00)80690-X. PMID 10027301. - Puce A, Allison T, Asgari M, Gore JC, McCarthy G (15 August 1996). "Differential sensitivity of human visual cortex to faces, letterstrings, and textures: a functional magnetic resonance imaging study". J. Neurosci. 16 (16): 5205–15. PMID 8756449. - Puce A, Allison T, Gore JC, McCarthy G (September 1995). "Face-sensitive regions in human extrastriate cortex studied by functional MRI". J. Neurophysiol. 74 (3): 1192–9. PMID 7500143. - Sergent J, Ohta S, MacDonald B (February 1992). "Functional neuroanatomy of face and object processing. A positron emission tomography study". Brain 115 (Pt 1): 15–36. doi:10.1093/brain/115.1.15. PMID 1559150. - Gorno-Tempini ML, Price CJ (October 2001). "Identification of famous faces and buildings: a functional neuroimaging study of semantically unique items". Brain 124 (Pt 10): 2087–97. doi:10.1093/brain/124.10.2087. PMID 11571224. - Vuilleumier P, Pourtois G, Distributed and interactive brain mechanisms during emotion face perception: Evidence from functional neuroimaging. Neuropsychologia 45 (2007) 174–194 - Ishai A, Ungerleider LG, Martin A, Schouten JL, Haxby JV (August 1999). "Distributed representation of objects in the human ventral visual pathway". Proc. Natl. Acad. Sci. U.S.A. 96 (16): 9379–84. doi:10.1073/pnas.96.16.9379. PMC 17791. PMID 10430951. - Gauthier I (January 2000). "What constrains the organization of the ventral temporal cortex?". Trends Cogn. Sci. (Regul. Ed.) 4 (1): 1–2. doi:10.1016/S1364-6613(99)01416-3. PMID 10637614. - Droste DW, Harders AG, Rastogi E (August 1989). "A transcranial Doppler study of blood flow velocity in the middle cerebral arteries performed at rest and during mental activities". Stroke 20 (8): 1005–11. doi:10.1161/01.STR.20.8.1005. PMID 2667197. - Harders AG, Laborde G, Droste DW, Rastogi E (July 1989). "Brain activity and blood flow velocity changes: a transcranial Doppler study". Int. J. Neurosci. 47 (1–2): 91–102. doi:10.3109/00207458908987421. PMID 2676884. - Njemanze PC (September 2004). "Asymmetry in cerebral blood flow velocity with processing of facial images during head-down rest". Aviat Space Environ Med 75 (9): 800–5. PMID 15460633. - Zheng, X.; Mondloch, C.J. & Segalowitz, S.J. (2012). "The timing of individual face recognition in the brain". Neuropsychologia 50 (7): 1451–1461. doi:10.1016/j.neuropsychologia.2012.02.030. - Eimer, M.; Gosling, A., & Duchaine, B. (2012). "Electrophysiological markers of covert face recognition in developmental prosopagnosia". Brain: A Journal of Neurology 135 (2): 542–554. doi:10.1093/brain/awr347. - Moulson, M.C.; Balas, B., Nelson, C., & Sinha, P. (2011). "EEG correlates of categorical and graded face perception.". 
Neuropsychologia 49 (14): 3847–3853. doi:10.1016/j.neuropsychologia.2011.09.046. - Everhart DE, Shucard JL, Quatrin T, Shucard DW (July 2001). "Sex-related differences in event-related potentials, face recognition, and facial affect processing in prepubertal children". Neuropsychology 15 (3): 329–41. doi:10.1037/0894-418.104.22.1689. PMID 11499988. - Herlitz A, Yonker JE (February 2002). "Sex differences in episodic memory: the influence of intelligence". J Clin Exp Neuropsychol 24 (1): 107–14. doi:10.1076/jcen.22.214.171.1240. PMID 11935429. - Smith WM (July 2000). "Hemispheric and facial asymmetry: gender differences". Laterality 5 (3): 251–8. doi:10.1080/135765000406094. PMID 15513145. - Voyer D, Voyer S, Bryden MP (March 1995). "Magnitude of sex differences in spatial abilities: a meta-analysis and consideration of critical variables". Psychol Bull 117 (2): 250–70. doi:10.1037/0033-2909.117.2.250. PMID 7724690. - Hausmann M (2005). "Hemispheric asymmetry in spatial attention across the menstrual cycle". Neuropsychologia 43 (11): 1559–67. doi:10.1016/j.neuropsychologia.2005.01.017. PMID 16009238. - De Renzi E (1986). "Prosopagnosia in two patients with CT scan evidence of damage confined to the right hemisphere". Neuropsychologia 24 (3): 385–9. doi:10.1016/0028-3932(86)90023-0. PMID 3736820. - De Renzi E, Perani D, Carlesimo GA, Silveri MC, Fazio F (August 1994). "Prosopagnosia can be associated with damage confined to the right hemisphere--an MRI and PET study and a review of the literature". Neuropsychologia 32 (8): 893–902. doi:10.1016/0028-3932(94)90041-8. PMID 7969865. - Mattson AJ, Levin HS, Grafman J (February 2000). "A case of prosopagnosia following moderate closed head injury with left hemisphere focal lesion". Cortex 36 (1): 125–37. doi:10.1016/S0010-9452(08)70841-4. PMID 10728902. - Barton JJ, Cherkasova M (July 2003). "Face imagery and its relation to perception and covert recognition in prosopagnosia". Neurology 61 (2): 220–5. doi:10.1212/01.WNL.0000071229.11658.F8. PMID 12874402. - Sprengelmeyer R, Rausch M, Eysel UT, Przuntek H (October 1998). "Neural structures associated with recognition of facial expressions of basic emotions". Proc. Biol. Sci. 265 (1409): 1927–31. doi:10.1098/rspb.1998.0522. PMC 1689486. PMID 9821359. - Verstichel P (2001). "[Impaired recognition of faces: implicit recognition, feeling of familiarity, role of each hemisphere]". Bull. Acad. Natl. Med. (in French) 185 (3): 537–49; discussion 550–3. PMID 11501262. - Nakamura K, Kawashima R, Sato N et al. (September 2000). "Functional delineation of the human occipito-temporal areas related to face and scene processing. A PET study". Brain 123 (Pt 9): 1903–12. doi:10.1093/brain/123.9.1903. PMID 10960054. - Gur RC, Jaggi JL, Ragland JD et al. (September 1993). "Effects of memory processing on regional brain activation: cerebral blood flow in normal subjects". Int. J. Neurosci. 72 (1–2): 31–44. doi:10.3109/00207459308991621. PMID 8225798. - Ojemann JG, Ojemann GA, Lettich E (February 1992). "Neuronal activity related to faces and matching in human right nondominant temporal cortex". Brain 115 (Pt 1): 1–13. doi:10.1093/brain/115.1.1. PMID 1559147. - Bogen JE (April 1969). "The other side of the brain. I. Dysgraphia and dyscopia following cerebral commissurotomy". Bull Los Angeles Neurol Soc 34 (2): 73–105. PMID 5792283. - Bogen JE (1975). "Some educational aspects of hemispheric specialization". UCLA Educator 17: 24–32. - Bradshaw JL, Nettleton NC (1981). "The nature of hemispheric specialization in man". 
Behavioral and Brain Science 4: 51–91. doi:10.1017/S0140525X00007548. - Galin D (October 1974). "Implications for psychiatry of left and right cerebral specialization. A neurophysiological context for unconscious processes". Arch. Gen. Psychiatry 31 (4): 572–83. doi:10.1001/archpsyc.1974.01760160110022. PMID 4421063. - Njemanze PC (January 2007). "Cerebral lateralisation for facial processing: gender-related cognitive styles determined using Fourier analysis of mean cerebral blood flow velocity in the middle cerebral arteries". Laterality 12 (1): 31–49. doi:10.1080/13576500600886796. PMID 17090448. - Gauthier I, Skudlarski P, Gore JC, Anderson AW (February 2000). "Expertise for cars and birds recruits brain areas involved in face recognition". Nat. Neurosci. 3 (2): 191–7. doi:10.1038/72140. PMID 10649576. - Gauthier I, Tarr MJ, Anderson AW, Skudlarski P, Gore JC (June 1999). "Activation of the middle fusiform 'face area' increases with expertise in recognizing novel objects". Nat. Neurosci. 2 (6): 568–73. doi:10.1038/9224. PMID 10448223. - Grill-Spector K, Knouf N, Kanwisher N (May 2004). "The fusiform face area subserves face perception, not generic within-category identification". Nat. Neurosci. 7 (5): 555–62. doi:10.1038/nn1224. PMID 15077112. - Xu Y (August 2005). "Revisiting the role of the fusiform face area in visual expertise". Cereb. Cortex 15 (8): 1234–42. doi:10.1093/cercor/bhi006. PMID 15677350. - Righi G, Tarr MJ (2004). "Are chess experts any different from face, bird, or greeble experts?". Journal of Vision 4 (8): 504–504. doi:10.1167/4.8.504. - My Brilliant Brain, partly about grandmaster Susan Polgar, shows brain scans of the fusiform gyrus while Polgar viewed chess diagrams. - Kung CC, Peissig JJ, Tarr MJ (December 2007). "Is region-of-interest overlap comparison a reliable measure of category specificity?". J Cogn Neurosci 19 (12): 2019–34. doi:10.1162/jocn.2007.19.12.2019. PMID 17892386. - Feingold CA (1914). "The influence of environment on identification of persons and things". Journal of Criminal Law and Police Science 5: 39–51. - Walker PM, Tanaka JW (2003). "An encoding advantage for own-race versus other-race faces". Perception 32 (9): 1117–25. doi:10.1068/p5098. PMID 14651324. - Vizioli L, Rousselet GA, Caldara R (2010). "Neural repetition suppression to identity is abolished by other-race faces". Proc Natl Acad Sci U S A 107 (46): 20081–20086. doi:10.1073/pnas.1005751107. PMC 2993371. PMID 21041643. - Malpass & Kravitz, 1969; Cross, Cross, & Daly, 1971; Shepherd, Deregowski, & Ellis, 1974; all cited in Shepherd, 1981 - Chance, Goldstein, & McBride, 1975; Feinman & Entwistle, 1976; cited in Shepherd, 1981 - Brigham & Karkowitz, 1978; Brigham & Williamson, 1979; cited in Shepherd, 1981 - Other-Race Face Perception D. Stephen Lindsay, Philip C. Jack, Jr., and Marcus A. Christian. Williams College - Diamond & Carey, 1986; Rhodeset al.,1989 - F. Richard Ferraro (2002). Minority and Cross-cultural Aspects of Neuropsychological Assessment. Studies on Neuropsychology, Development and Cognition 4. East Sussex: Psychology Press. p. 90. ISBN 90-265-1830-7. - Levin DT (December 2000). "Race as a visual feature: using visual search and perceptual discrimination tasks to understand face categories and the cross-race recognition deficit". J Exp Psychol Gen 129 (4): 559–74. doi:10.1037/0096-34126.96.36.1999. PMID 11142869. - Rehnman J, Herlitz A (April 2006). "Higher face recognition ability in girls: Magnified by own-sex and own-ethnicity bias". Memory 14 (3): 289–96. 
doi:10.1080/09658210500233581. PMID 16574585. - Tanaka, J.W.; Lincoln, S.; Hegg, L. (2003). "A framework for the study and treatment of face processing deficits in autism". In Schwarzer, G.; Leder, H. The development of face processing. Ohio: Hogrefe & Huber Publishers. pp. 101–119. ISBN 9780889372641. - Behrmann, Marlene; Avidan, Galia; Leonard, Grace L.; Kimchi, Rutie; Beatriz, Luna; Humphreys, Kate; Minshew, Nancy (2006). "Configural processing in autism and its relationship to face processing". Neuropsychologia 44: 110–129. doi:10.1016/j.neuropsychologia.2005.04.002. - Schreibman, Laura (1988). Autism. Newbury Park: Sage Publications. pp. 14–47. ISBN 0803928092. - Weigelt, Sarah; Koldewyn, Kami; Kanwisher, Nancy (2012). "Face identity recognition in autism spectrum disorders: A review of behavioral studies". Neuroscience & Biobehavioral Reviews 36: 1060–1084. doi:10.1016/j.neubiorev.2011.12.008. - Dawson, Geraldine; Webb, Sara Jane; McPartland, James (2005). "Understanding the nature of face processing impairment in autism: Insights from behavioral and electrophysiological studies". Developmental Neuropsychology 27: 403–424. PMID 15843104. - Kita, Yosuke; Inagaki, Masumi (2012). "Face recognition in patients with Autism Spectrum Disorder". Brain and Nerve 64: 821–831. PMID 22764354. - Grelotti, David; Gauthier, Isabel; Schultz, Robert (2002). "Social interest and the development of cortical face specialization: What autism teaches us about face processing". Developmental Psychobiology 40: 213–235. doi:10.1002/dev.10028. Retrieved 2/24/2012. - Riby, Deborah; Doherty-Sneddon Gwyneth (2009). "The eyes or the mouth? Feature salience and unfamiliar face processing in Williams syndrome and autism". The Quarterly Journal of Experimental Psychology 62: 189–203. doi:10.1080/17470210701855629. - Joseph, Robert; Tanaka, James (2003). "Holistic and part-based face recognition in children with autism". Journal of Child Psychology and Psychiatry 44: 529–542. doi:10.1111/1469-7610.00142. - Langdell, Tim (1978). "Recognition of Faces: An approach to the study of autism". Journal of Psychology and Psychiatry and Allied Disciplines (Blackwell) 19: 255–265. Retrieved 2/12/2013. - Spezio, Michael; Adolphs, Ralph; Hurley, Robert; Piven, Joseph (28 Sept 2006). "Abnormal use of facial information in high functioning autism". Journal of Autism and Developmental Disorders 37: 929–939. doi:10.1007/s10803-006-0232-9. - Revlin, Russell (2013). Cognition: Theory and Practice. Worth Publishers. pp. 98–101. ISBN 9780716756675. - Triesch, Jochen; Teuscher, Christof; Deak, Gedeon O.; Carlson, Eric (2006). "Gaze following: why (not) learn it?". Developmental Science 9: 125–157. doi:10.1111/j.1467-7687.2006.00470.x. - Volkmar, Fred; Chawarska, Kasia; Klin, Ami (2005). "Autism in infancy and early childhood". Annual Reviews of Psychology 56: 315–316. doi:10.1146/annurev.psych.56.091103.070159. - Nader-Grosbois, N.; Day, J.M. (2011). "Emotional cognition: theory of mind and face recognition". In Matson, J.L; Sturmey, R. International handbook of autism and pervasive developmental disorders. New York: Springer Science & Business Media. pp. 127–157. ISBN 9781441980649. - Pierce, Karen; Muller, R.A., Ambrose, J., Allen, G.,Chourchesne (2001). "Face processing occurs outside the fusiform 'face area' in autism: evidence from functional MRI". Brain 124: 2059–2073. Retrieved 2/13/2013. - Harms, Madeline; Martin, Alex; Wallace, Gregory (2010). 
"Facial emotion recognition in autism spectrum disorders: A review of behavioral and neuroimaging studies". Neuropsychology Review 20: 290–322. doi:10.1007/s11065-010-9138-6. - Wright, Barry; Clarke, Natalie; Jordan, Jo; Young, Andrew; Clarke, Paula; Miles, Jermey; Nation, Kate; Clarke, Leesa; Williams, Christine (2008). "Emotion recognition in faces and the use of visual context Vo in young people with high-functioning autism spectrum disorders". Autism 12: 607-. doi:10.1177/1362361308097118. - Njemanze, P.C. Transcranial doppler spectroscopy for assessment of brain cognitive functions. United States Patent Application No. 20040158155, August 12th, 2004 - Njemanze, P.C. Noninvasive transcranial doppler ultrasound face and object recognition testing system. United States Patent No. 6,773,400, August 10th, 2004 - YangJing Long (2009). "Human age estimation by metric learning for regression problems". Proc. International Conference on Computer Analysis of Images and Patterns: 74–82. - Face Recognition Homepage - Are Faces a "Special" Class of Objects? - Science Aid: Face Recognition - FaceResearch – Scientific research and online studies on face perception - Face Blind Prosopagnosia Research Centers at Harvard and University College London - Face Recognition Tests - online tests for self-assessment of face recognition abilities. - Perceptual Expertise Network (PEN) Collaborative group of cognitive neuroscientists studying perceptual expertise, including face recognition. - Face Lab at the University of Western Australia - Perception Lab at the University of St Andrews, Scotland - The effect of facial expression and identity information on the processing of own and other race faces by Yoriko Hirose, PhD thesis from the University of Stirling - Global Emotion Online-Training to overcome Caucasian-Asian other-race effect
http://en.wikipedia.org/wiki/Face_perception
Nuclear transmutation is the conversion of one chemical element or isotope into another. In other words, atoms of one element can be changed into atoms of another element by 'transmutation'. This occurs either through nuclear reactions (in which an outside particle reacts with a nucleus) or through radioactive decay (where no outside particle is needed). Though all transmutation is caused either by radioactive decay or by nuclear reaction, the reverse is not true, as not all types of either decay or nuclear reaction cause transmutation. The most common types of radioactive decay that do not cause transmutation are gamma decay and the related process of internal conversion. However, most other types of decay do cause transmutation of the decaying radioisotope. Similarly, a few nuclear reactions do not cause transmutation (for example, the gain or loss of a neutron might not cause transmutation), although in practice most nuclear reactions, and types of nuclear reactions, do result in transmutation. Nuclear transmutation can occur through various natural processes, or it may be artificially induced by human intervention.
Natural vs. artificial transmutation
Natural transmutation is responsible for the creation of all the chemical elements we observe naturally. Most of this happened in the distant past, however (see the section below on transmutation in the universe). One type of natural transmutation observable in the present occurs when certain radioactive elements present in nature spontaneously decay by a process that causes transmutation, such as alpha or beta decay. An example is the natural decay of potassium-40 to argon-40, which forms most of the argon in air. Also on Earth, natural transmutations from the different mechanism of natural nuclear reactions occur, due to cosmic ray bombardment of elements (for example, to form carbon-14), and also occasionally from natural neutron bombardment (for example, see natural nuclear fission reactor). Artificial transmutation may occur in machinery that has enough energy to cause changes in the nuclear structure of the elements. Machines that can cause artificial transmutation include particle accelerators and tokamak reactors. Conventional fission power reactors also cause artificial transmutation, not from the power of the machine, but by exposing elements to neutrons produced by fission from an artificially produced nuclear chain reaction. Artificial nuclear transmutation has been considered as a possible mechanism for reducing the volume and hazard of radioactive waste. The term transmutation dates back to alchemy. Alchemists pursued the philosopher's stone, capable of chrysopoeia – the transformation of base metals into gold. While alchemists often understood chrysopoeia as a metaphor for a mystical or religious process, some practitioners adopted a literal interpretation and tried to make gold through physical experiment. The impossibility of the metallic transmutation had been debated amongst alchemists, philosophers and scientists since the Middle Ages. Pseudo-alchemical transmutation was outlawed and publicly mocked beginning in the fourteenth century. Alchemists like Michael Maier and Heinrich Khunrath wrote tracts exposing fraudulent claims of gold making. By the 1720s, there were no longer any respectable figures pursuing the physical transmutation of substances into gold.
Antoine Lavoisier, in the 18th century, replaced the alchemical theory of elements with the modern theory of chemical elements, and John Dalton further developed the notion of atoms (from the alchemical theory of corpuscles) to explain various chemical processes. The disintegration of atoms is a distinct process involving much greater energies than could be achieved by alchemists. It was first consciously applied to modern physics by Frederick Soddy when he, along with Ernest Rutherford, discovered that radioactive thorium was converting itself into radium in 1901. At the moment of realization, Soddy later recalled, he shouted out: "Rutherford, this is transmutation!" Rutherford snapped back, "For Christ's sake, Soddy, don't call it transmutation. They'll have our heads off as alchemists." Rutherford and Soddy were observing natural transmutation as a part of radioactive decay of the alpha decay type. However, in 1919, Rutherford was able to accomplish transmutation of nitrogen into oxygen, using alpha particles directed at nitrogen: 14N + α → 17O + p. This was the first observation of a nuclear reaction, that is, a reaction in which particles from one decay are used to transform another atomic nucleus. Eventually, in 1932, a fully artificial nuclear reaction and nuclear transmutation was achieved by Rutherford's colleagues John Cockcroft and Ernest Walton, who used artificially accelerated protons against lithium-7 to split the nucleus into two alpha particles. The feat was popularly known as "splitting the atom," although it was not the modern nuclear fission reaction discovered in 1938 by Otto Hahn and his assistant Fritz Strassmann in heavy elements.
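Both reactions can be checked by simple bookkeeping: the total mass number (A) and the total charge (Z) must be the same on each side of the arrow. A minimal sketch of that check, with the standard (Z, A) values hard-coded for illustration:

    # Check conservation of mass number (A) and charge (Z) in the reactions above.
    # Each species is written as a (Z, A) pair.
    def balanced(reactants, products):
        totals = lambda side: (sum(z for z, a in side), sum(a for z, a in side))
        return totals(reactants) == totals(products)

    N14, He4, O17, H1, Li7 = (7, 14), (2, 4), (8, 17), (1, 1), (3, 7)

    # Rutherford, 1919: 14N + alpha -> 17O + p
    print(balanced([N14, He4], [O17, H1]))   # True: Z = 9, A = 18 on both sides
    # Cockcroft and Walton, 1932: 7Li + p -> 2 alpha
    print(balanced([Li7, H1], [He4, He4]))   # True: Z = 4, A = 8 on both sides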
Later in the twentieth century the transmutation of elements within stars was elaborated, accounting for the relative abundance of heavier elements in the universe. Save for the first five elements, which were produced in the Big Bang and other cosmic ray processes, stellar nucleosynthesis accounted for the abundance of all elements heavier than boron. In their 1957 paper Synthesis of the Elements in Stars, William Alfred Fowler, Margaret Burbidge, Geoffrey Burbidge, and Fred Hoyle explained how the abundances of essentially all but the lightest chemical elements could be explained by the process of nucleosynthesis in stars. Author Ken Croswell summarised their discoveries thus:
Burbidge, Burbidge, Fowler, Hoyle
Took the stars and made them toil:
Carbon, copper, gold, and lead
Formed in stars, is what they said
It transpired that, under true nuclear transmutation, it is far easier to turn gold into lead than the reverse reaction, which was the one the alchemists had ardently pursued. Nuclear experiments have successfully transmuted lead into gold, but the expense far exceeds any gain. It would be easier to convert gold into lead via neutron capture and beta decay by leaving gold in a nuclear reactor for a long period of time. For more information on gold synthesis, see Synthesis of precious metals.
Transmutation in the universe
As noted above, the Big Bang is thought to be the origin of the hydrogen (including all deuterium) and helium in the universe. Hydrogen and helium together account for 98% of the mass of ordinary matter in the universe. The Big Bang also produced small amounts of lithium, beryllium and perhaps boron. More lithium, beryllium and boron were produced later, in a natural nuclear reaction, cosmic ray spallation. Stellar nucleosynthesis is responsible for all of the other elements occurring naturally in the universe as stable isotopes and primordial nuclides, from carbon to plutonium. These occurred after the Big Bang, during star formation. Some lighter elements from carbon to iron were formed in stars and released into space by asymptotic giant branch (AGB) stars. These are a type of red giant that "puffs" off its outer atmosphere, containing some elements from carbon to nickel and iron. All elements with atomic weight greater than 64 atomic mass units are produced in supernova stars by means of nuclear reactions of lighter nuclei with other particles, mostly neutrons. The Solar System is thought to have condensed approximately 4.6 billion years before the present, from a cloud of hydrogen and helium containing heavier elements in dust grains formed previously by a large number of such stars. These grains contained the heavier elements formed by transmutation earlier in the history of the universe. All of these natural processes of transmutation in stars are continuing today, in our own galaxy and in others. For example, the observed light curves of supernova stars such as SN 1987A show them blasting large amounts (comparable to the mass of Earth) of radioactive nickel and cobalt into space. However, little of this material reaches Earth. Most natural transmutation on the Earth today is mediated by cosmic rays (such as the production of carbon-14) and by the radioactive decay of radioactive primordial nuclides left over from the initial formation of the solar system (such as potassium-40, uranium and thorium), plus the radioactive decay of products of these nuclides (radium, radon, polonium, etc.). See decay chain.
Artificial transmutation of nuclear waste
Transmutation of transuranium elements (actinides) such as the isotopes of plutonium, neptunium, americium, and curium has the potential to help solve the problems posed by the management of radioactive waste, by reducing the proportion of long-lived isotopes it contains. When irradiated with fast neutrons in a nuclear reactor, these isotopes can be made to undergo nuclear fission, destroying the original actinide isotope and producing a spectrum of radioactive and nonradioactive fission products. Ceramic targets containing actinides can be bombarded with neutrons to induce transmutation reactions to remove the most difficult long-lived species. These can consist of actinide-containing solid solutions such as (Am,Zr)N, (Am,Y)N, (Zr,Cm)O2, (Zr,Cm,Am)O2, (Zr,Am,Y)O2 or just actinide phases such as AmO2, NpO2, NpN, AmN mixed with some inert phases such as MgO, MgAl2O4, (Zr,Y)O2, TiN and ZrN. The role of the non-radioactive inert phases is mainly to provide stable mechanical behaviour to the target under neutron irradiation.
Reactor types
For instance, plutonium can be reprocessed into MOX fuels and transmuted in standard reactors. The heavier elements could be transmuted in fast reactors, but probably more effectively in a subcritical reactor, which is sometimes known as an energy amplifier and which was devised by Carlo Rubbia. Fusion neutron sources have also been proposed as well suited.
Fuel types
There are several fuels that can incorporate plutonium in their initial composition at Beginning of Cycle (BOC) and have a smaller amount of this element at the End of Cycle (EOC). During the cycle, plutonium can be burnt in a power reactor, generating electricity.
This process is not only interesting from a power generation standpoint, but also because of its capability of consuming the surplus weapons-grade plutonium from the weapons program and the plutonium resulting from reprocessing Spent Nuclear Fuel (SNF). Mixed Oxide fuel (MOX) is one of these. Its blend of oxides of plutonium and uranium constitutes an alternative to the Low Enriched Uranium (LEU) fuel predominantly used in Light Water Reactors (LWR). Since uranium is present in MOX, although plutonium will be burnt, second-generation plutonium will be produced through the radiative capture of U-238 and the two subsequent beta minus decays. Fuels with plutonium and thorium are also an option. In these, the neutrons released in the fission of plutonium are captured by Th-232. After this radiative capture, Th-232 becomes Th-233, which undergoes two beta minus decays, resulting in the production of the fissile isotope U-233. The radiative capture cross section for Th-232 is more than three times that of U-238, yielding a higher conversion to fissile fuel than that from U-238. Due to the absence of uranium in the fuel, there is no second-generation plutonium produced, and the amount of plutonium burnt will be higher than in MOX fuels. However, U-233, which is fissile, will be present in the SNF. Weapons-grade and reactor-grade plutonium can be used in plutonium-thorium fuels, with weapons-grade plutonium showing the larger reduction in the amount of Pu-239.
Reasoning behind transmutation
Isotopes of plutonium and other actinides tend to be long-lived, with half-lives of many thousands of years, whereas radioactive fission products tend to be shorter-lived (most with half-lives of 30 years or less). From a waste management viewpoint, transmutation of actinides eliminates a very long-term radioactive hazard and replaces it with a much shorter-term one. It is important to understand that the threat posed by a radioisotope is influenced by many factors, including the chemical and biological properties of the element. For instance, caesium has a relatively short biological half-life (1 to 4 months) while strontium and radium both have very long biological half-lives. As a result, strontium-90 and radium are much more able to cause harm than caesium-137 when a given activity is ingested. Many of the actinides are very radiotoxic because they have long biological half-lives and are alpha emitters. In transmutation the intention is to convert the actinides into fission products. The fission products are very radioactive, but the majority of the activity will decay away within a short time. The most worrying short-lived fission products are those that accumulate in the body, such as iodine-131, which accumulates in the thyroid gland, but it is hoped that by good design of the nuclear fuel and transmutation plant such fission products can be isolated from humans and their environment and allowed to decay. In the medium term the fission products of highest concern are strontium-90 and caesium-137; both have a half-life of about 30 years. Caesium-137 is responsible for the majority of the external gamma dose experienced by workers in nuclear reprocessing plants and, in 2005, by workers at the Chernobyl site. When these medium-lived isotopes have decayed, the remaining isotopes will pose a much smaller threat.
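A rough sense of that timescale comes from simple exponential decay. Assuming a half-life of about 30 years (the Sr-90/Cs-137 case) and ignoring chemistry, daughter products, and dose factors, a minimal sketch:

    # Fraction of a radionuclide remaining after t years, given its half-life.
    def remaining_fraction(t_years, half_life_years):
        return 0.5 ** (t_years / half_life_years)

    for t in (30, 100, 300, 1000):
        print(t, "years:", remaining_fraction(t, 30.0))
    # roughly 50% after 30 years, 10% after 100 years, 0.1% after 300 years,
    # and a vanishingly small fraction (~1e-10) after 1000 years

This is why the strontium and caesium isotopes dominate the hazard for a few centuries at most, whereas actinides with half-lives of thousands of years remain hazardous far longer.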
Long-lived fission products
Some radioactive fission products can be converted into shorter-lived radioisotopes by transmutation. Transmutation of all fission products with a half-life greater than one year has been studied in Grenoble, with varying results. Sr-90 and Cs-137, with half-lives of about 30 years, are the largest radiation emitters in used nuclear fuel on a scale of decades to a few hundred years, and are not easily transmuted because they have low neutron absorption cross sections. Instead, they should simply be stored until they decay. Given that this length of storage is necessary, the fission products with shorter half-lives can also be stored until they decay. The next longer-lived fission product is Sm-151, which has a half-life of 90 years and is such a good neutron absorber that most of it is transmuted while the nuclear fuel is still being used; however, effectively transmuting the remaining Sm-151 in nuclear waste would require separation from other isotopes of samarium. Given the smaller quantities and its low-energy radioactivity, Sm-151 is less dangerous than Sr-90 and Cs-137 and can also be left to decay. Finally, there are seven long-lived fission products. They have much longer half-lives, in the range of 211,000 years to 16 million years. Two of them, Tc-99 and I-129, are mobile enough in the environment to be potential dangers, are free or mostly free of mixture with stable isotopes of the same element, and have neutron cross sections that are small but adequate to support transmutation. Also, Tc-99 can substitute for U-238 in supplying Doppler broadening for negative feedback for reactor stability. Most studies of proposed transmutation schemes have assumed Tc-99, I-129, and transuranics as the targets for transmutation, with other fission products, activation products, and possibly reprocessed uranium remaining as waste. Of the remaining five long-lived fission products, Se-79, Sn-126 and Pd-107 are produced only in small quantities (at least in today's thermal-neutron, U-235-burning light water reactors) and the last two should be relatively inert. The other two, Zr-93 and Cs-135, are produced in larger quantities, but are also not highly mobile in the environment. They are also mixed with larger quantities of other isotopes of the same element.
References
- John Hines, II, R. F. Yeager. John Gower, Trilingual Poet: Language, Translation, and Tradition. Boydell & Brewer. 2010. p.170 - Lawrence Principe. New Narratives in Eighteenth-Century Chemistry. Springer. 2007. p.8 - Muriel Howorth, Pioneer Research on the Atom: The Life Story of Frederick Soddy, New World, London 1958, pp 83-84; Lawrence Badash, Radium, Radioactivity and the Popularity of Scientific Discovery, Proceedings of the American Philosophical Society 122, 1978: 145-54; Thaddeus J. Trenn, The Self-Splitting Atom: The History of the Rutherford-Soddy Collaboration, Taylor & Francis, London, 1977, pp 42, 58-60, 111-17. - Cockcroft and Walton split lithium with high energy protons April 1932. - William Alfred Fowler, Margaret Burbidge, Geoffrey Burbidge, and Fred Hoyle, 'Synthesis of the Elements in Stars', Reviews of Modern Physics, vol. 29, Issue 4, pp. 547–650 - Ken Croswell, The Alchemy of the Heavens - Anne Marie Helmenstine, Turning Lead into Gold: Is Alchemy Real?, About.com:Chemistry, retrieved January 2008 - B.E. Burakov, M.I Ojovan, W.E. Lee. Crystalline Materials for Actinide Immobilisation, Imperial College Press, London, 198 pp. (2010). 
http://www.icpress.co.uk/engineering/p652.html - Rita Plukiene, Evolution Of Transuranium Isotopic Composition In Power Reactors And Innovative Nuclear Systems For Transmutation, PhD Thesis, Vytautas Magnus University, 2003, retrieved January 2008 - Takibayev A., Saito M., Artisyuk V., and Sagara H., 'Fusion-driven transmutation of selected long-lived fission products', Progress in nuclear energy, Vol. 47, 2005, retrieved January 2008. - Transmutation of Transuranic Elements and Long Lived Fission Products in Fusion Devices, Y. Gohar, Argonne National Laboratory - M. I. Ojovan, W.E. Lee. An Introduction to Nuclear Waste Immobilisation, Elsevier, Amsterdam, 315pp. (2005). - Schwenk-Ferrero, A. (2013). "German Spent Nuclear Fuel Legacy: Characteristics and High-Level Waste Management Issues". Science and Technology of Nuclear Installations: 293792. doi:10.1155/2013/293792. Retrieved 5 April 2013. - "Cesium-RELEVANCE TO PUBLIC HEALTH". cdc.gov. Retrieved 5 April 2013. - Method for net decrease of hazardous radioactive nuclear waste materials - US Patent 4721596 Description - Transmutation of Selected Fission Products in a Fast Reactor - The Nuclear Alchemy Gamble - Institute for Energy and Environmental Research
http://en.wikipedia.org/wiki/Nuclear_transmutation
- slide 1 of 13 The Simple Present Tense It is never too late for your students to get to know their classmates better, and interviews are a great way to accomplish this. In addition, interviews make a perfect opportunity to practice the simple present. Pair your students and give them enough time to interview one another. After that, ask each student to introduce his partner to the rest of the class (if you are early in the semester) or to share some unusual facts with his classmates. Your students should focus on using the simple present tense to describe their partners. English speakers use the present tense most often to express habitual actions. Habitual actions are those that occur on a regular basis, whether they are daily, weekly or even yearly. One activity you can do with your students to practice the present tense is talking about daily routines. What do you do to get ready in the morning before facing your day? Most people probably do many of the same things, but they do not perform those actions in the same order. With your class, brainstorm a list of habits someone might practice when getting ready in the morning. Then have your students use these ideas to share their personal routine with a classmate. Another way to practice using the present tense and bring culture into the classroom at the same time is to talk about holiday traditions. Not all cultures celebrate the same holidays, and even when two groups of people share the same holiday, they may not share the traditions associated with the day. Ask each of your students to give a class presentation about a holiday that he or she celebrates in his or her home culture. Ask each student to talk about what he or she and his or her family and friends do every year when the holiday rolls around. The more diversity your students bring to this activity, the more appreciation each member of your class will have for his classmates. - slide 2 of 13 The Simple Past Tense Charades can be a high-intensity game that serves many functions in the ESL classroom, but a more leisurely charade activity can be used to practice using the simple past. Have a student volunteer to act out an activity. It should be an activity that takes several steps, for example, getting ready in the morning or making dinner. Allow your student to act out the entire event while your class watches. Then have your class relay the steps in the process using the simple past. Make sure they are able to articulate each step in the process. You can take this activity a step further by having your students write a paragraph on one of the processes they observed in class. Encourage your students to use transitions of time like first, next, then, second, after that, and finally to connect their ideas and help their paragraphs flow. When teaching the simple past to ESL students, it is worth spending some time addressing the pronunciation of the –ed ending. For verbs that end in a voiced sound other than /d/ (b, g, v, and z, for example), the ending is pronounced /d/. For verbs that end in a voiceless consonant other than /t/ (p, k, s, and f, for example), the same spelling is pronounced /t/. For verbs that already end in /t/ or /d/ (want, need), the ending is pronounced as an extra syllable, /ɪd/. To clarify the pronunciation pattern, review the concept of voiced and voiceless consonants and then brainstorm a list of verbs for each category. Then, practice using the past tense of these verbs in pairs. If you want to take a lighter approach to reviewing the past tense in English, use dice to challenge your students. Have each person roll two dice to get a number. That number represents the number of years ago he/she must talk about. 
Your student should then share something he/she did in that year. For example, if your student rolls a six, he/she might say, "Six years ago, I flew to Germany." If you want, bring the activity a little closer to home and let the number on the dice represent how many days ago an event happened. In that case, your student might share, "Six days ago, I studied for my biology test." Though your students will primarily be practicing the past tense, rolling dice, which affects their answers, adds an element of fun and frivolity to the exercise. - slide 3 of 13 The Simple Future Tense You can revisit the dice game with the future tense, as well. Again, have each student roll the dice to determine how far in the future he/she will talk about. In this case, the number on the dice might best represent the number of days in the future the event will happen. Make sure you have a calendar available for your students, and then, roll away. Each student, after rolling the dice, should share something that he/she will do that many days from now. For example, if he/she rolls a three, the reply might be, "In three days, I will go to a birthday party." If your students have traveled overseas to continue their English studies, it may be a long time until they are able to return to their home countries. Ask them to imagine what they will do on the first day they return home after their studies in English are complete. Have each person make a list of at least ten things he or she will do when he or she returns home. Tell your students that they are going to make plans to see a movie. Give each student a copy of a theater schedule. Ask the students to read the schedule and decide which movies they would like to see. Ask each student what movie he/she plans on seeing on the field trip. - slide 4 of 13 The Present Progressive If you look around you, there is limitless inspiration for speaking in the present progressive. Ask your students to share their observations while looking at their classmates. What is each person doing right now? You could even assign specific actions to students to make the activity more interesting. After that, have your students move on to describing what is going on outside by looking out the window to see what they can see. What are you doing right now? If we are in class together, I can just look over and see for myself, but if we are talking on the telephone, I have to ask. With this in mind, have your students role-play telephone conversations in which each person asks his or her partner what the other is doing. A not-so-typical use of the present progressive tense is to talk about future time. This happens when an English speaker uses the phrase “be going to…” to speak about future plans. For example, Jane might say, “Tomorrow, I am going to take a test.” Though the verb tense is present progressive, the intent is future time. Have your students share what they are going to do tomorrow, next week, next month and next year. - slide 5 of 13 The Past Progressive You can use a crime role-play to give your students practice using the past progressive. In Agatha Christie style, explain to your students that a crime has been committed. Give each student a small slip of paper. On one, write the crime that was committed and tell the recipient of that paper that he or she committed the crime; all other slips of paper will be blank. Of course, murder is a natural crime to investigate, but you can use any crime you like. Then, one student should play the detective, who is trying to solve the mystery. 
He should ask his classmates what they were doing when the crime was committed. Each student should respond with the past progressive, giving his alibi in the process. Once the detective has spoken with all the students in the class, he should make an accusation as to the perpetrator. Students sometimes have difficulty understanding the difference between the simple past and the past progressive. To help them practice deciding which tense to use, try the following activity. On each of several small slips of paper, write a general time in the past. You may want to include times like yesterday, last week, in 2010, etc. Then, make a second set of slips on which you give a specific moment in the past, like 5 p.m. yesterday, Tuesday night, etc. Have your students take turns drawing a slip of paper and then sharing what he did or was doing at the time his or her slip says. If they draw a specific time, like 5 p.m. yesterday, they should use the past progressive tense. If they draw a more general time, they should use the simple past, "In 2010, I bought a car." Mix up your usual homework by asking your students to take a mini field trip to an area with many people. They may choose a park, a food court at the mall, the zoo or any other place they may like to go. Explain to them the term, “people watching” and have your students take notes on what people are doing. Then, as a homework assignment, ask your students to describe what the people around them were doing by using verbs in the past progressive tense. - slide 6 of 13 The Future Progressive What are your students doing right now that they do every day? Have your class share their daily experiences with a partner by making observations about their lives, both in the present and in the future. Each person should use the present progressive to describe what he or she is doing right now, and then express his or her plans for doing that same activity tomorrow at this same time by using the future progressive tense. For example, they might say, “Right now, I am explaining a class activity. Tomorrow at this time, I will be explaining another class activity.” Have each of your students write a short list of their daily activities using the past tense. Then have students exchange papers. Each person should describe tomorrow in their partner’s life by changing all of his past sentences to future sentences. - slide 7 of 13 The Present Perfect “Have you ever,” can be a useful phrase to teach your students when you are talking about the present perfect tense. You can start your class interviews by brainstorming a list of interesting activities that you and your students have done or would like to do. Then, give your students turns to ask one another if they have ever done one of the activities on the board. The student should answer using the present perfect tense. You can make the previous activity into a lively game if you have some room for your students to run around. Arrange enough chairs in a circle to accommodate all but one of your students. The last student stands in the middle of the circle and thinks of something he or she has never done and then says it aloud to the class. They may say something like, “I have never eaten sushi.” Once they say it, any student who has done that activity must get up from his/her seat and move into an empty one. At this point, the student in the center of the circle should also try to get a seat in the circle. 
The person who is left with no seat stands in the center and takes a turn at, “I have never.” - slide 8 of 13 The Past Perfect Cross-cultural experiences are often a challenge as well as an adventure. Any students that leave their countries and families to study English overseas will find themselves having new experiences every day. Give your class a few minutes to think of some things they had never done before coming to the United States. Then, ask each person to share at least one thing he or she had never done by completing the sentence, “Before coming to the United States, I had never…” - slide 9 of 13 The Future Perfect Do your students have a plan for their lives? If they have traveled overseas to study English, they may have had to make all kinds of plans for the present time as well as the future. Ask your students to share their ideas by creating five-year plans and ten-year plans. Then, have each person share with the class what he/she will have accomplished by the end of either five years or ten years. To do this, they should use the future perfect tense to form sentences like, “I will have gotten married. I will have bought a house.” - slide 10 of 13 The Present Perfect Progressive To review this tense, make sure your students understand how to use the words, since and for (as in “for two weeks”). Then, give your students a list of time references using these words. You may choose phrases such as “since I was six years old” or “for three days.” Using these time phrases, your students must come up with grammatical sentences written in the present perfect progressive. - slide 11 of 13 The Past Perfect Progressive Students studying English overseas leave everything behind to pursue their education. Ask your students to share some of the things they had been doing before they came to the United States. It may be that they had been trying to get visas or that they had been planning their weddings. - slide 12 of 13 The Future Perfect Progressive Challenge your students to think of things they have been doing since the beginning of the semester. Then, ask how long they will have been doing these same things by the end of the semester. Ask them to write a short paragraph about these activities using the future perfect progressive tense. - slide 13 of 13 Whether you are teaching beginning students or advanced students, verbs will be a part of your curriculum. The next time you are teaching verbs to ESL students, try some of these activities. You are sure to see the results of your actions. - Understanding and Using English Grammar by Betty Schrampfer Azar. - Multi-ethnic students by iStockphoto on office.microsoft.com used by permission. - Author's personal experience
http://www.brighthubeducation.com/esl-lesson-plans/126436-teaching-verbs-with-comprehensive-list-of-activities/
Genetic material/Genome : Genetic material refers to the material made of DNA in each cell of any organism. The DNA is divided into genes. Each gene contains the information required to produce one polypeptide/protein needed by the organism.
Chromosome : The thread-like DNA in a cell is divided into several separate lengths. Each length forms a structure called a chromosome. There are two copies of each chromosome in every cell. Human cells contain 23 pairs of chromosomes.
Gene : A gene is a length of DNA that contains the information needed to make one polypeptide. For example, the beta globin gene contains the information needed to make the beta globin polypeptide found in the hemoglobin of red blood cells. More than one gene may be involved in making one protein, and more than one polypeptide may be formed from one gene as a result of alternate splicing.
Genetic modification : It is the process of changing the genetic material of an animal, plant or other organism. The main method of genetically modifying an organism is transgenesis.
Heterozygote : Each cell of an organism contains two copies of each gene. In a heterozygote, the two genes of a pair are different from each other.
Homozygote : Each cell of an organism contains two copies of each gene. In a homozygote, both copies of the gene are identical to each other.
Mutation : A process by which the DNA of an organism changes or mutates. In humans this can lead to disease, such as thalassemia, in which the mutation results in decreased production of beta or alpha globin. The mutant gene is passed down from a parent to the offspring and so the condition is inherited. In viruses and other infectious organisms, mutations can lead to the emergence of organisms with new characteristics, which can make them more virulent, or resistant to antibiotics, thus increasing their infectivity.
Patent : A patent is a monopoly right, granted for a limited period, given to an inventor in return for the publication to the world at large of the details of an invention.
Recombination : A cross-over between two members of a pair of chromosomes results in the formation of a recombined chromosome in which a new arrangement of genes is created.
Transgenesis : This refers to the introduction of a foreign gene into an animal or other organism. The transferred gene is called a transgene.
Transplantation : Transplantation involves the removal of organs, tissue or cells from one organism and their implantation into another organism.
The 12 principles laid down under the Statement on General Principles are common to all areas of biomedical research. The specific issues are mentioned under the relevant topics.
Review Committee in Human Genetics
All institutions where research is carried out on human genetics should have an Ethical Review Committee with adequate expertise in the field. Scientific competence of the investigator and sound scientific methodology should be essential prerequisites for genetic research. This includes appropriate training, planning, pilot and field testing of the protocols, containment where necessary and quality control of laboratory techniques. For all biogenetic research involving human subjects the investigator must obtain the informed consent of the prospective subject or, in the case of an individual who is not capable of giving informed consent, the proxy consent of a properly authorized representative/legal guardian.
Research involving children Before undertaking research involving children, the investigator must ensure that : - children will not be involved in research that might be carried out equally with adults; - the purpose of the research is to obtain knowledge relevant to the health needs of children; - a parent or legal guardian of each child has given proxy consent; - the consent of each child has been obtained to the extent of the child's capabilities; - the child's refusal to participate in research must always be respected unless according to the research protocol the child would receive therapy for which there is no medically acceptable alternative; - the risk presented by interventions not intended to benefit the individual child-subject is low and commensurate with the importance of the knowledge to be gained; and - interventions that are intended to provide therapeutic benefit are likely to be at least as advantageous to the individual child-subject as any available alternative Essential information for prospective research subjects Before requesting an individual's consent to participate in research, the investigator must provide the individual with the following information, in language that he or she is capable of understanding. The communication should not only be scientifically accurate but should be sensitive to their social and cultural context: - that each individual is invited to participate as a subject in research. The aims and methods of the research should be fully explained to the concerned individual. - the expected duration of the subject's participation;- the benefits that might reasonably be expected to result to the subject or to others as an outcome of the research; - any foreseeable risks or discomfort to the subject, associated with participation in the research; - any alternative procedures or courses of treatment that might be as advantageous to the subject as the procedure or treatment being tested; - the extent to which confidentiality of records in which the subject is identified will be maintained; - the extent of the investigator's responsibility, if any, to provide medical services to the subject for any unexpected injury/illness resulting from the research free of charge; - research subjects who suffer physical injury as a result of their participation are entitled to medical care as an institutional responsibility; - that the individual is free to refuse to participate and will be free to withdraw from the research at any time without penalty or loss of benefits to which he or she would otherwise be entitled. All possible means of coercion or direct or indirect rewards for participation should be scrupulously avoided. Equitable distribution of burdens and benefits Individuals or communities to be invited to be subjects of genetic research should be selected in such a way that the burdens and benefits of the research will be equitably distributed. Special justification is required for inviting vulnerable individuals (prisoners, mentally retarded subjects, medical students, nurses, subordinates, employees etc.) and if they are selected, the means of protecting their rights and wishes must be strictly applied. Persons who are economically or socially disadvantaged should not be used as research subjects to benefit those who are financially better off. 
Pregnant or Nursing women as research subject Pregnant or nursing women should in no circumstances be the subject of genetic research unless the research carries no more than minimal risk to the fetus or nursing infant and the object of the research is to obtain new knowledge about the fetus, pregnancy and lactation. As a general rule, pregnant or nursing women should not be subjects of any Clinical Trials except such trials as are designed to protect or advance the health of pregnant or nursing women or fetuses or nursing infants, and for which women who are not pregnant or nursing would not be suitable subjects. Confidentiality of data The investigator must establish secure safeguards for the confidentiality of the research data. Subjects should be told of the limits to the investigator's ability to safeguard confidentiality and of the anticipated consequences of breaches of confidentiality. When commercial companies are involved in research, it is necessary to protect researchers and subjects from possible coercion/inducement to participate in the study. Academic institutions conducting research in alliance with industries/commercial companies require a strong review to probe possible conflicts of interest between scientific responsibilites of researchers and business interests (e.g. ownership or part-ownership of a company developing a new product). In cases where the review board determines that a conflict of interest may damage the scientific integrity of a project or cause harm to research participants, the board should advise accordingly. Institutions need self-regulatory processes to monitor, prevent and resolve such conflicts of interest. Prospective participants in research should also be informed of the sponsorship of research, so that they can be aware of the potential for conflicts of interest and commercial aspects of the research. Undue inducement through compensation for individual participants, families and populations should be prohibited. This prohibition, however, does not include agreements with individuals, families, groups, communities or populations that foresee technology transfer, local training, joint ventures, provision of health care or of information infrastructures, reimbursement costs of travel and loss of wages and the possible use of a percentage of any royalties for humanitarian purposes. I. HUMAN GENETIC DISEASES/DISORDERS These involve obtaining history of other members of the family of the proband under investigations. It may reveal information about the likelihood that individual members of the family either are carriers of genetic defects or may be affected by the disease. Special privacy and confidentiality concerns arise in genetic family studies because of the special relationship between the participants. It should be kept in mind that within families each person is an individual who deserves to keep the information about himself or herself confidential. Family members are not entitled to know each other's diagnosis. Before revealing medical or personal information about individuals to other family members, investigator must obtain the consent of the individual. In our country revealing the information that the wife has balanced chromosomal translocation (leading to recurrent abortions or a genetic syndrome in her child) or that she is a carrier of a single gene i.e. 
'X' linked or recessive disease, may lead to the husband asking for a divorce, in spite of the fact that in some of these cases the husband himself may be a carrier of a recessive disorder. While general principles of counselling require the presence of both spouses, care must be taken not to end up breaking families. The familial nature of the research cohorts involved in pedigree studies can pose a challenge for ensuring that recruitment procedures are free of elements that unduly influence the decision to participate. The very nature of the research exerts pressure on family members to take part, because the more complete the pedigree, the more reliable the resulting information will be. Problems could arise in situations such as the following:
- Revealing who else in the family agreed to participate may lead to a breach of confidentiality.
- A proband, acting in his/her personal interest, may put undue pressure on relatives to enroll in the study.
- Direct recruitment (by telephone calls) may be seen as an invasion of privacy by family members.
- Contact through personal physicians may imply that their health care will be compromised if they do not agree to participate.
Defining risks and benefits
Potential risks and benefits should be discussed thoroughly with prospective subjects. In genetic research, the primary risks outside of gene therapy are psychosocial rather than physical. Adequate counselling should be given to subjects on the meaning of the genetic information they receive. Genetic counselling should be done by persons qualified and experienced in communicating the meaning of genetic information.
II. GENETIC SCREENING
Definition : A search in a population to identify individuals who may have, or be susceptible to, a serious genetic disease or who, though not at risk themselves, are gene carriers and thus may be at risk of having children with a particular genetic disease. Depending on the nature of the genetic defect that is identified and its pattern of inheritance, siblings and other blood relations as well as existing and future offspring may be affected. Thus the status of genetic information raises ethical questions that differ significantly from the normal rules and standards applied to the handling of personal medical records. Adequately informed consent is therefore essential. Those being screened are entitled to receive sufficient information in a way that:
- they can understand what is proposed to be done;
- they are made aware of any substantial risk;
- they are given time to decide whether or not to agree to what is proposed, and they are free to withdraw from the investigation at any time.
The disorder to be screened and its inheritance pattern should be explained, as should the reliability of the screening test, the procedure for informing individuals of the results, what will be done with the samples, the implications of a positive (abnormal) screening test, and a warning to pregnant women that genetic screening may reveal unexpected and awkward information, for example about paternity. Confidentiality should be maintained in handling the results, with an emphasis on the responsibility of individuals with a positive (abnormal) result to inform partners and family members. It should be emphasised that consent for screening or a subsequent confirmatory test does not imply consent to any specific treatment or to the termination of a pregnancy.
General guidelines have to be followed for vulnerable individuals, i.e. minors, the mentally ill, prisoners, students, subordinates, people who do not speak the language of the investigator, etc. Genetic counselling should be readily available for those being screened. Confidentiality of medical information is protected by law, but this is not absolute. Information may be disclosed where it is in the public interest to do so.
Screening newborns : Screening of newborns should be allowed only to detect those genetic diseases, like phenylketonuria, where the serious effects of the disease can be prevented by a special diet or treatment. The same applies to investigations to detect genetic, chromosomal, metabolic abnormalities, etc., if the general principles mentioned earlier are followed. Other diseases can be screened as and when interventions/therapy become available in future.
Prenatal testing : It is aimed at detecting the presence of genetic or chromosomal abnormalities in fetuses. Examination of the genetic make-up of the fetus is done through amniocentesis, chorionic villi sampling, placentocentesis, cordocentesis (blood sampling from the umbilical cord), skin and other biopsies, and also examination of blood samples from the mother. Embryoscopy may be used to detect external malformations.
Anonymous testing : Researchers may conduct anonymous testing or screening on the general population in order to establish the prevalence of genetic anomalies and deleterious genes. This is now possible by PCR (polymerase chain reaction) amplification, which uses a single blood spot or a small sample of blood for multiple tests. Blood spots collected in screening newborns for treatable disorders could be used to collect epidemiologic information about genetic predispositions to disorders of late onset. In cases where the information derived from stored specimens might be useful to individuals, the code of anonymity may be broken. All the criteria mentioned in the general principles, like informed consent, confidentiality, etc., should be observed.
Genetic Registers : Computer-based genetic registers are subject to the Data Protection Act, but there is a need for additional safeguards for all genetic registers, including storage of information in a safe place and manner, restriction of access to only those specifically responsible for the register, and the removal of identifying information when data are used for research purposes.
The practice of genetic screening in employment : It may be done only when justified and in the interest of the employees, e.g. sickle cell disease screening for those in the aviation industry who are likely to be exposed to atypical atmospheric conditions. An employer may use genetic screening procedures with the consent of entrants (this issue is not decided in many countries). This screening may be only for a disorder which might be harmful to the employee or any disorder which may jeopardise other people in the relevant function or job. (Any possibility of direct or indirect threat to the job should be scrupulously avoided.)
Subject to prior consultation with workplace representatives, and with appropriate health authorities, it is recommended that genetic screening of employees for increased occupational risks ought only to be contemplated where:
- there is strong evidence of a clear connection between the working environment and the development of the condition for which genetic screening is to be conducted;
- the condition in question is one which seriously endangers the health of the employee or is one in which an affected employee is likely to present a serious danger to third parties;
- the condition is one for which the dangers cannot be eliminated or significantly reduced by reasonable measures taken by the employer to modify or respond to the environmental risks.
Insurance companies should adhere to the current policy of not requiring any genetic tests as a prerequisite for obtaining insurance. This is forbidden by law in some countries, e.g. the USA.
Public policy & genetic screening
There is a very great need for improving public awareness and understanding of human genetics. There should be a central coordination and monitoring mechanism for a genetic screening programme in the interest of the public, the majority of whom have little knowledge of genetics.
III. THERAPEUTIC APPROACHES INCLUSIVE OF GENE THERAPY
Genetic disorders which require nutritional replacement therapy, like phenylketonuria, do not pose any ethical problem. Replacement with animal products should follow the rules stipulated for other diseases. The goal of human genetic research is to alleviate human suffering. Gene therapy is a proper and logical part of this effort. Gene therapy should be subject to all the ethical codes that apply to research involving patients.
i) Somatic gene therapy is the only one of the four types of genetic engineering that may be allowed, for the purpose of preventing or treating a serious disease when it is an ethical therapeutic option. It should be restricted to the alleviation of disease (life-threatening or seriously disabling genetic disease) in individual patients and should not be permitted to change normal human traits. Safety should be ensured, especially because of the possibility of unpredicted consequences of gene insertion, and long-term surveillance should be provided for. Informed consent must be taken, especially regarding uncertainties about outcome, as children could be candidates for therapy.
ii) Germ line therapy should not be attempted at present because there is insufficient knowledge to evaluate the risk to future generations. Unpredictable outcome is a more valid reason than fear of unscrupulous people in power acquiring undue powers.
iii) Enhancement genetic engineering for altering human traits should not be attempted, as we possess insufficient information at present to understand the effects of attempts to alter/enhance the genetic machinery of humans. It is not wise, safe or ethical for parents to give, for example, growth hormone to their normal offspring in order to produce very large football or basketball players. Similarly, it would be unethical to use genetic engineering for improvement of intelligence, memory, etc., even if specific genes are identified in future.
iv) Eugenic genetic engineering: personality, character, formation of body organs, fertility, intelligence, and physical, mental and emotional characteristics are enormously complex. Dozens, perhaps hundreds, of unknown genes that interact in totally unknown ways probably contribute to each such trait.
Environmental influences also interact with these genetic backgrounds in poorly understood ways. The concept of remaking a human i.e. eugenic genetic engineering is not realistic and has grave risks of this being misused by unscrupulous people in power. This should not be allowed. IV. ISSUES RELATED TO NATIONAL AND INTERNATIONAL COLLABORATIVE RESEARCH It is important that all research with human subjects adequately protect the rights and welfare of the subjects. All human genetic research in India will be subject to guidelines of the funding agencies and rules and regulations laid down by the Govt. of India if it were conducted wholly within the country.International collaborative projects should not only follow the guidelines for collaboration but make sure that the investigations should follow the guidelines given by the financial agencies/national bodies especially with regard to ethical guidelines. This includes international standards, declaration of Helsinki or Nuremberg code. Written descriptions of the specific procedural implementation of such policies that have been adopted by the collaborating institutions in their own countries are required. Investigators should be very clear as to which part of the project will be done in a foreign country and also what specific sample will be taken out of the country for the project. It should be strictly forbidden to utilise the sample for any other purpose than for the specific purpose mutually agreed to and sanctioned by the appropriate authority. To be specific no DNA from human subjects should be sent out of the country unless it follows the procedure and guidelines laid down by the Indian Council of Medical Research/Government of India. In the event of failure of agreement the guidelines of the country (India) shall prevail. The human genome in its natural state is not subject to private, national or transnational ownership by claim of right, patent or otherwise. Intellectual property based upon the human genome may be patented or otherwise recognised in accordance with national laws and international treaties. Question of patenting DNA should be clearly stated. Who should benefit should also be specified. The percentage benefit to be given/received should be mentioned in writing through a carefully drawn Memorandum of Understanding. V. HUMAN GENOME DIVERSITY Deptt. of Biotechnology, Ministry of Science & Technology has brought out a document on genomic diversity which envisages the following - - To support a network of laboratories in India for studying genomic diversities of anthropologically well-defined populations following a uniform set of protocols for collecting information, and screening a uniform set of genomic markers by inviting and implementing project proposals under the framework of this programme. - To establish a national repository of biological samples (DNA, cell lines etc.) with appropriate safeguards, regulations and monitoring. - To establish and integrate regional and national statistical databases comprising genomic, epidemiological, cultural and linguistic data on Indian population. The biological tools, materials and analysis of DNA samples will be carried out by Indian scientists in Indian laboratories. The biological samples collected under this programme, as well as the data generated, have a variety of ethical, legal and commercial implications. Scientists involved in this will follow appropriate ethical protocols and respect the rights and sensitivities of the participating individuals and populations. 
The relevant issues pertain to: i) the mechanism for collection of samples, ii) who can have access to the samples and for what purposes, iii) who owns the DNA; and iv) to establish measures for quality control of the laboratories. VI. RESEARCH RELATED TO DNA BANKING DNA samples should not leave the country without following the guidelines evolved by the Govt. of India with clear undertaking that it should not be used for any other purpose other than the original intent for collection. In every case where a new study proposes to use samples collected for a previously conducted study, the ethical committee should consider, whether the consent given for the earlier study also applies to the new study, whether the objectives of the new study diverge significantly from the purpose of the original protocol, and whether fresh consent has been obtained when the new study depends on the familial identifiability of the samples. Internationally the accepted norm is to obtain fresh consent for any secondary use. The consequences of DNA diagnosis for which no treatment is available or for conditions menifesting late in life e.g. breast cancer, Alzheimer's etc. should be seriously considered before embarking on DNA diagnosis. VII. DNA DIAGNOSIS The general principles of informed consent, confidentiality and other criteria used for any investigation in genetics should be followed. Preimplantation DNA diagnosis- As there are various types of investigations in this area this should be reviewed by an ethical committee. In children - Parents are advised not to get the diagnosis done especially in cases like Huntington's disease till the child reaches the age of proper "consent" to the test. In adults, the vulnerable population should be kept in mind while following the general principles. Unless appropriate counselling services are available DNA diagnosis is fraught with grave psycho-social implications. VIII. ASSISTED REPRODUCTIVE TECHNIQUES Any fertilization involving human sperm and ova that occurs outside the human body. There is no objection ethically as at the moment for IVF or any other related procedure for conducting research or for clinical applications. "Informed consent" should include information regarding use of "spare" embryos. It should be made clear whether embryos that are not used for transfer could or could not be used for research purposes or implanted in another woman's womb, or "preserved" for use at a later date or destroyed. Investigators should ensure that participants are informed and consent is taken in writing. Investigators should clarify the ownership of the embryos whether they belong to the biological mother or the laboratory. Abortions should never be encouraged for research purposes. A National Advisory Board for ethics in reproduction should be constituted which can evaluate research proposals in this area. Fetuses as research subjects - Research involving human fetuses raises special concerns. The fetus has a unique and inextricable relationship to the mother. It cannot consent to be a research subject. The fetus may also be an indirect subject of research when women, who may be pregnant, participate in the research. Respect for safeguarding of personal and parental reproductive choices - Reproductive decisions should be the province of those who will be directly responsible for the biological and social aspects of child bearing and child rearing. Usually this means the family. 
However, when a couple is unable to reach an agreement, the mother should have the final authority of decision. Women have a special position as care givers for children with disabilities. Since the bulk of care falls upon the woman, she should make the final decision among reproductive options, without coercion from her partner, her doctor, or the law. Choice is more than the absence of legal prohibition or coercion. Choice should include the economic and social ability to act upon a decision, including disability. There should be a positive right to affordable genetic services, safe abortion and medically indicated care for children with disabilities. (i) through Nuclear transplantation : This seems to be a possibility in the near future as sheep and monkeys have already been cloned. The ethical implications need not be expanded. Research on human cloning definitely should be forbidden by law. (ii) through embryo splitting: Embryo splitting is ethically acceptable provided that the resulting embryos are not damaged or destroyed in the process. There are many issues involved here which require separate discussion. a. It is ethically acceptable to use embryo splitting to produce embryos for simultaneous implantation in the same woman. (Not more than four embryos shall be produced from a single embryo) and to cryopreserve embryos resulting from embryo splitting for transfer and implantation in a subsequent IVF cycle, should an initial IVF cycle using split embryos prove unsuccessful. b. It is unacceptable to split embryo and retain them in a cryopreserved state for the sole purpose of : - providing an adult with an identical twin to raise as his or her own child - having a large family of genetically identical children - retaining a "back-up" embryo as a potential replacement for a child who dies - retaining a "back-up" embryo as a potential organ or tissue donor for an identical twin already born - retaining a "back-up" embryo as a potential source of fetal tissue, organs or ovaries - donation to others - sale to others. Whether it is ethically acceptable to split embryos for the specific purpose of allowing preimplanation diagnosis on one of the resulting embryos if that embryo would be damaged in the process is debatable. Research involving human embryos: This should be permitted with appropriate safeguards. Studies of "normal" embryos will lead to understanding the process of fertilization, which cannot be entirely accomplished by animal research. Additionally, studies of "abnormal" embryos are a potential source of scientific information at the molecular level about the origins and development of genetic disorders, malformations and pediatric cancers. To understand the natural history of some genetic diseases, it will be necessary to obtain sperm and eggs from parents who are at higher risk to transmit these conditions to offspring, and to study the genetic mechanisms involved compared to those in "normal" embryos. Thus, restricting embryo research only to spare embryos donated after infertility treatment will not be sufficient. The embryo does not have the same moral status as infant or child, although it deserves respect and moral consideration as a developing form of human life. This judgement is based on three characteristics of pre-implantation embryos; absence of developmental individuation, no possibility of sentience (feeling) and a high rate of natural mortality at this stage. Harm cannot be done to such an organism until the capacity for sentience has been established. 
From this perspective there is a clear difference between the moral status of living children and embryos. It is possible to damage an embryo in research. The damage would become "harmful" in the moral sense only if the embryo was transferred to a human uterus and a future sentient person was harmed by the damage once done to the embryo. This possibility can be avoided by regulations forbidding the transfer of any embryo that has been involved in research to a human uterus. Respect for the embryo can be shown by (1) accepting limits on what can be done in embryo research, (2) committing to an inter-disciplinary process of peer group review of planned research, and (3) carrying out an informed consent process for gamete and embryo donors. Further, respect for the embryo's limited moral status can be shown by careful regulation of the conditions of research, safeguards against commercial exploitation of embryo research, and limiting the time within which research can be done to 14 days. This last restriction is in keeping with the policy in several nations that permit research with embryos (Australia, Great Britain, American College of Obstetrics and Gynaecology 1986; Human Fertilization and Embryology Authority, 1993; Royal Commission on New Reproductive Technologies, 1993) until the developmental stage when the "primitive streak" appears. At this time, the development of the nervous system begins and the embryo begins to become a distinct individual.
Adoption: Adopted children or children born from the use of donor gametes, and their social parents, should have the right to know whatever medical or genetic information about the genetic parents may be relevant to the child's health. Genetic testing of adopted children or children awaiting adoption should fall under the same guidelines as testing of biological children.
IX. HUMAN GENOME PROJECT (HGP)
The Human Genome Project (HGP) is an international research effort, the goal of which is to analyse the structure of human DNA and to determine the locations of the estimated 100,000 genes. Another component of the programme is to analyse the DNA of a set of non-human model organisms to provide comparative information that is essential for understanding how the human genome functions. The project began formally in 1990. The investigators have been able to identify and isolate human genes, particularly those associated with diseases. The project has the potential to profoundly alter our approach to medical care, from one of treatment of advanced disease to prevention based on the identification of individuals at risk. The HGP is arguably the single most important organised research project in the history of biomedicine.
Ethical considerations
Implications of using this genetic knowledge pose a number of questions for:
- individuals and families - whether to participate in testing, with whom to share the results, and how to act on them;
- health professionals - when to offer testing, how to ensure its quality, how to interpret the results and to whom to disclose information;
- employers, insurers, the courts and other social institutions - the relative value of genetic information to the decisions they must make about individuals;
- governments - how to regulate the production and use of genetic tests and the information they provide, and how to provide access to testing and counselling services; and
- society - how to improve public understanding of science and its social implications and increase participation of the public in science policy making.
X. RESEARCHER'S RELATIONS WITH THE MEDIA AND PUBLICATION PRACTICES
Researchers have a responsibility to make sure that the public is accurately informed about results without raising false hopes or expectations. Researchers should take care to avoid talking with journalists or reporters about preliminary findings. Sometimes the media report potentially promising research that subsequently cannot be validated. Sometimes the media report research on animals in such a way that the public thinks that the step to treatment for humans is an easy one. Retractions almost never appear in the popular press or on television. Therefore it is important to avoid premature reports. The best safeguard against inaccurate reporting is for the researcher to require, as a condition for talking with the media, that the reporter supply a full written rather than oral version of what will be reported, so that the researcher can make necessary corrections. Investigators' publication plans should not threaten the privacy or confidentiality of subjects (publication of pedigrees can easily result in the identification of study participants). It is recommended that consent for publication be obtained separately, rather than as part of the consent to participation in research or treatment.
XI. GUIDELINES ON ETHICAL ISSUES FOR PROFESSIONALS AND PRACTITIONERS OF GENETICS IN THE FIELD OF HUMAN GENETICS
General ethical guidelines in medical genetics for health workers and the public are outlined. Respect for persons includes informed consent, right to referral, full disclosure, protection of confidentiality, and respect for children and adolescents in the context of genetic testing.
- Access to genetics services - Access to genetics services should not depend upon social class or ability to pay. Whatever services exist in a nation should be available equally to everyone. Genetic services should be provided first to those whose need for them is greatest. Hence, there is a great need to set up genetic centres for counselling as well as therapy where available.
- Non-directive counselling - Genetic counselling should be non-directive, i.e. the various options available should be explained to the couple, while the final choice should be left to the couple. Illiterate subjects with no or poor understanding of scientific facts may be told what other persons in their situation may opt to do.
- Voluntary approach - It is essential to ensure that the individual voluntarily approaches genetic services, including genetic counselling, screening for susceptibility to common diseases or to occupationally-related diseases, presymptomatic testing, testing children, and prenatal diagnosis.
Persons who choose or refuse genetic services should not be the object of discrimination or stigmatization. Persons who choose or refuse genetic testing or services should not be penalized in terms of health care, employment, or insurance. - The only exception to the rule of voluntary screening should be newborns, if, and only if, early treatment is available that would benefit the newborn. Therefore, the government may mandate screening for newborns who would be harmed by the absence of prompt treatment. When this is done the government would have the ethical obligation to provide prompt, affordable treatment for the disorders for which they screen. Otherwise the screening would be in vain. - Disclosure of information - There should be full disclosure of clinically relevant information to patients. Professionals should disclose all test results relevant to an individual's own health or the health of a fetus, including results indicative of any genetic condition, even if the professional regards the condition as not serious. Those who will bear and rear the child should decide, after receiving full and unbiased information, about the effects of the conditions on their family and their socio-cultural situation. Test results should be disclosed even if ambiguous or conflicting. New or controversial interpretations of test results should also be disclosed. Test results without direct relevance to health (e.g. nonpaternity, fetal sex in the absence of X-linked disorders) may be withheld if this appears necessary to protect a vulnerable party. Disclosure includes the duty to recontact individuals or families if new developments arise that are relevant to health. - Duties to family members - In genetics, the true patient is a family with a shared genetic heritage. Family members have a moral obligation to share genetic information with each other. If children are intended, individuals should share information with their partners. Individuals have a duty to inform other family members who may be at high risk. If an individual will not do so, the medical geneticist may issue a general warning to family members, but without revealing information about the affected individual. Preserving patient confidentiality is a well-known duty in medicine. This duty is mitigated if it conflicts with another well-known duty, preventing harm to other parties. - Protection of privacy from institutional third parties - Medical geneticists should recognize the potential for harm when institutions are allowed access to genetic information about individuals, even with the individual's consent. Therefore, such institutions should not have access to such data and should not be permitted to require genetic tests. - Prenatal diagnosis - This should be performed only for reasons relevant to the health of the fetus or the mother. Prenatal diagnosis should not be performed solely to select the sex of the child (in the absence of an X-linked disorder). Sex selection, whether for male or female, denigrates the fundamental personhood of those already born, and has the power to harm societies by unbalancing sex ratios. The potential harm to large groups of people outweighs any immediate benefits to individuals or families. The Government of India has already passed legislation banning diagnosis of sex for non-medical reasons. - Prenatal diagnosis can be used to prepare parents for the birth of a child with a disability. 
Therefore, prenatal diagnosis should be available to such parents who request it but oppose abortion, provided that they understand and are willing to accept the risks to the fetus.
- In some cases, prenatal diagnosis may be performed to protect the health of the mother. These include clinically confirmed cases of morbid anxiety or situations where prenatal paternity testing would benefit the mother's mental health (e.g. if rape occurred while a couple was trying to conceive). Professionals should recognize the human and economic costs involved in prenatal diagnosis and should limit its use to situations where there is a clear benefit.
References
- The Human Genome Project and the Future of Medicine, Mark S. Guyer & Francis S. Collins, Vol. 147, Nov 1993, pp. 1143-1151.
- Genetic Screening: Ethical Issues, Nuffield Council on Bioethics, UK.
- Human Tissue: Ethical and Legal Issues, Nuffield Council on Bioethics, UK.
- Safeguards for Gene Therapy, Notice Board, The Lancet, Vol. 339, 25 Jan 1992, p. 238.
- The Prenatal Diagnostic Techniques Act, India (1994).
- DBT Guidelines for Gene Therapy, India (1996).
- Guidelines for Exchange of Human Biological Material for Biomedical Research Purposes, Ministry of Health & Family Welfare, India (1997).
http://www.healthlibrary.com/book8_chapter554.htm
Ending Slavery in the District of Columbia This booklet describes events related to the abolition of slavery in Washington, DC, which occurred on April 16, 1862, nearly nine months before the more famous “Emancipation Proclamation” was issued. The District of Columbia, which became the nation’s capital in 1791, was by 1862 a city of contrasts: a thriving center for slavery and the slave trade, and a hub of anti-slavery activity among abolitionists of all colors. Members of Congress represented states in which slavery was the backbone of the economy, and those in which slavery was illegal. One result of the intense struggle over slavery was the DC Compensated Emancipation Act of 1862, passed by the Congress and signed by President Abraham Lincoln. The act ended slavery in Washington, DC, freed 3,100 individuals, reimbursed those who had legally owned them and offered the newly freed women and men money to emigrate. It is this legislation, and the courage and struggle of those who fought to make it a reality, that we commemorate every April 16, DC Emancipation Day. Though the Compensated Emancipation Act was an important legal and symbolic victory, it was part of a larger struggle over the meaning and practice of freedom and citizenship. These two words continue to be central to what it means to be a participating member of society. We invite you to think about what these concepts have meant in the past and what they mean to you today. A New National Capital The area we know as the District of Columbia was selected as the site for the capital of the United States in 1791. It was created by land ceded to the federal government by Virginia and Maryland, two slave-holding states of the Chesapeake region. The District of Columbia, which included Washington City, Georgetown, Washington County and Alexandria (until 1846), became a center for slavery and the slave trade. Slavery was a legal, economic and social institution. In legal terms, it meant that certain individuals had the right to purchase and “own” other human beings as property. These individuals were then able to profit from the labor of the people they “owned” who were forced to work without getting paid. Slavery, however, was not simply an institution that benefited propertied individuals; it was an economic system that allowed the United States, particularly the southern states, to develop as it did. Slavery also hinged on the modern and pseudo-scientific concept of race, which is based on skin color. By constructing a belief in biological differences based on color, people who were called “white” justified the oppression of people who were called “black.” Early African-American Population In 1800, African Americans were 25 percent of the District’s population of 14,093, sharing the new capital with Native American and white people. The majority of these African-American people were enslaved. The image most of us have of slavery is large plantations or farms in the rural South where large numbers of women, men and children labored. In the District, as in cities across the South, black people labored and lived in a range of settings, often singly or in small numbers. As the nation’s capital was developing, there was a great need for skilled and unskilled laborers. African Americans helped to construct the U.S. Capitol building, the White House and other public and private projects. 
While the vast majority of those enslaved did not earn money or wages, there were some who were permitted by their owners to earn money, and eventually purchased their freedom. And because there were no laws in Washington, DC requiring the newly freed to leave the District upon gaining their legal freedom, the free black population continued to grow. Other enslaved people gained their legal freedom, or manumission, when their owners provided for it in their wills. Once their owners died, they were legally free. Limits on Freedom The growing free African American population in the capital worried pro-slavery white people, including the mayor, Robert Brent, and the Board of Aldermen, the precursor to the Council of the District of Columbia. Through the introduction of laws known as “Black Codes,” they sought to solidify slavery as an institution and to strengthen the concept of racial segregation in the city. They also restricted the meaning and practice of legal freedom for free black people. The mayor and aldermen legislated the first set of Black Codes in 1808. These codes made it unlawful for “Negroes” or “loose, idle, disorderly persons” to be on the streets after 10 p.m. Free black people who violated this curfew could be fined five dollars (equal to $65 in 2007). Enslaved African Americans had to rely on their owners to pay the fine. The punishment for nonpayment of fines was whipping. The mayor and aldermen enacted a harsher set of Black Codes in 1812. Free black people could be fined $20 if they violated the curfew, and jailed for six months if the fine went unpaid. Enslaved people received the same fine but the punishment for nonpayment was 40 lashes. In addition, free African Americans had to register with the local government and carry their certificates of freedom at all times. In 1821, Mayor Samuel Smallwood and the Board of Aldermen imposed even greater restrictions on free black people in the District. The new set of Black Codes required them to appear before the mayor with documents signed by three white people vouching for their good character, proving their free status. They also had to pay a “peace bond” of $20 to a “respected” white man as a commitment to good behavior. This code illustrates the precarious nature of freedom for non-enslaved African Americans, by attempting to control the movement of people of color. Free African Americans contested the codes. William Costin, for example,refused to pay the peace bond. In court, Costin argued that the Constitution “knows no distinction of color. That all who are not slaves are equally free...equally citizens of the United States." The judge ruled that while the codes were legal they could not be imposed upon free black people who had been residents before the code was enacted. It was a limited, though important, victory. Costin also called into question the logic of the concept of race; his ancestors were Cherokee, European and African. For Costin, any of those could define him. Turning Points During Slavery The US Congress, established in 1789 and consisting solely of white men until 1870, was a focal point for intense debate about the abolition of slavery. Beginning in the late 1820s, abolitionists organized a coordinated campaign to petition Congress to end slavery and the slave trade in the nation’s capital. The effort to send abolitionist petitions to Congress gained strength in the mid-1830s when thousands of petitions flooded the House of Representatives. 
In response, southern Congressmen instituted the “Gag Rule” in 1836, banning the introduction of petitions or bills pertaining to slavery. In all parts of the country where slavery was permitted, communities of free black people were a cause of concern to pro-slavery white people, as demonstrated by several highly publicized incidents. Denmark Vesey’s Plans for Charleston, SC In 1822, Denmark Vesey, a free black minister, planned an insurrection in reaction to the city of Charleston, South Carolina’s suppression of the African Church, a major community institution for African Americans. The conspiracy was revealed two months before the incident was to take place, resulting in the trial and subsequent hanging of Vesey and three dozen co-conspirators. City leaders publicized their accounts of the planned revolt to discourage future attempts. The Nat Turner Rebellion of 1831 In 1831, Nat Turner, an enslaved African American, led a major rebellion in Southampton, Virginia. Turner’s rebellion started the night he murdered the family that owned him, before moving on to attack other nearby white families. As he went, he was joined by more and more enslaved people, and by the time they approached the closest town, Turner and his cohort had killed more than fifty white men, women and children. The implications of this rebellion reverberated throughout the country. White District residents became even more fearful of African Americans questioning slavery and desiring freedom; some responded by attacking and arresting black people throughout the District. The Snow Riot of 1835 In August 1835, local white-owned newspapers reported that the District had its own “Nat Turner.” They alleged that Arthur Bowen, an 18-year old enslaved African American, attempted to murder Anna Maria Thornton, the wealthy white widow of William Thornton, the Architect of the Capitol. Mrs. Thornton legally owned Bowen, and he and his mother lived in her home in the 1300 block of F Street NW. When Arthur Bowen was arrested and jailed, a white mob of mostly Irish mechanics gathered at the city jail, then located at Judiciary Square, and threatened to hang Bowen. The mechanics’ anger was also directed at white abolitionists who worked to get Congress to end the slave trade in the District. Dr. Reuben Crandall, a botanist and doctor with an office in Georgetown and brother of Prudence Crandall, a vocal Connecticut abolitionist, was the primary target. Assumed guilty by association, police searched Dr. Crandall’s office and found antislavery publications. He was arrested and jailed on charges of incitement to rebellion. The mob outside the jail sought hanging as a punishment for both Bowen and Crandall and hoped to inflict the punishment themselves. Prevented by the police from gaining access to Bowen and Crandall, they redirected their anger toward Mr. Beverly Snow’s popular Epicurean Eating House, located nearby at the corner of Sixth Street and Pennsylvania Avenue NW. They ransacked the restaurant, destroying furniture and breaking liquor bottles, forcing Snow to flee the District. After looting Snow’s restaurant, they continued their rampage by vandalizing other black-owned businesses and institutions, including Rev. John F. Cook, Sr.’s church and school at the corner of 14th and H streets, NW. Fearing that the mob would come after him, Rev. Cook fled to Pennsylvania. The impact of the Snow Riot lasted far beyond the few days of violence. 
As one of a number of clashes in the 1830s and 1840s, it was emblematic of the continued centrality of slavery in the nation’s capital. The Pearl Incident of 1848 On the evening of April 15, 1848, at least 75 enslaved adults and children from Washington, Georgetown and Alexandria sought freedom on the Pearl, a 64 foot cargo schooner waiting for them in the Potomac River at a wharf in Southwest DC. The escape was facilitated by two white men: Daniel Drayton, who chartered the ship for $100, and Edward Sayres, the captain of the Pearl. After dark on that Saturday night, the freedom seekers made their way to the wharf in small family groups. The Pearl set off to sail by night down the Potomac River to Alexandria, Virginia, and subsequently to the Chesapeake Bay where the captain planned to turn and head toward Pennsylvania, but bad weather slowed the voyage. The next morning, when the 41 white families that owned the fugitives discovered the escape, a posse was formed to capture them. Having learned about the escape route from an informer, the posse of 30 white men traveled by steamboat and overtook the Pearl at Point Lookout, about 100 miles southeast of the capital, and returned all aboard to Washington. As the news of the escape attempt spread, pro-slavery rioters attacked known abolitionist businesses for three days. Drayton and Sayres were held in the city jail, from which a mob attempted to remove them for hanging. Most of the escapees were jailed before being sold to slave dealers in New Orleans and Georgia. A few secured their freedom and became abolitionists. Though unsuccessful, historians believe that it was the nation’s largest single escape attempt. The Pearl incident also increased national attention to the existence of slavery and the slave trade in the nation’s capital. Retrocession of Alexandria In 1846, Congress voted to permit the portion of the District of Columbia that was south of the Potomac River to “retrocede” or return to Virginia, resulting in the oddly-shaped outline of the nation’s capital we have now. Though the impetus for retrocession was not clearly related to the institution of slavery, the return of this land to Virginia’s jurisdiction had immediate and dire consequences to African Americans living there: the loss of access to education. Unlike DC, Virginia had laws against educating black people, so all schools for African Americans were closed for almost fifteen years until the Union Army occupied Alexandria during the Civil War, and reopened them. The Compromise of 1850 As conflicts between pro- and anti-slavery factions continued, and the country continued to grow, Congress decided to step in to address the regional disputes over slavery. The “Compromise of 1850” sought to appease both sides by ending or preventing the introduction of slavery and the slave trade in new states while allowing slavery and the slave trade to continue in states where already legal. The effect of the compromise in the District of Columbia was the introduction of a slave-trade act that prevented the importation of enslaved people into the District for resale or transportation elsewhere, but continued to allow the sale of enslaved District residents to slave holders. This was done by a slave-owning Congressman from Kentucky, in an effort to appear to make concessions to abolitionists. 
The public auctions of enslaved women, men and children continued, as did slave prisons and the sight of groups of handcuffed, or coffled, black people walking through the city on their way to or from being sold. The Abolition Movement in the District of Columbia By 1830, there were more free African Americans than enslaved people in Washington, DC. This growing population, together with those enslaved, organized churches, private schools, benevolent societies and businesses. Building these community institutions gave black District residents a sense of ownership and control over parts of their lives, and provided opportunities for organized resistance to slavery. By 1850, free African Americans outnumbered those enslaved by almost two to one. According to the US Census, there were 8,461 free and 4,694 enslaved African Americans. The District's role as a center of abolitionism gained momentum with the repeal of the Gag Rule in 1844, and the passage of the Compromise of 1850. Beginning in the early 1850s, anti-slavery Congressmen pushed for Congress to use its constitutional power to "exercise exclusive legislation" to end slavery in the District. It would take another decade for that to happen. Washington, DC also served as an important stop on what was popularly called the "Underground Railroad," a network of black and white abolitionists who worked "underground" or clandestinely, at great risk, to assist enslaved people seeking freedom in northern states and Canada. The National Era Newspaper Anti-slavery newspapers were another important aspect of the abolition movement that required commitment and fearlessness from those involved. The National Era newspaper, for example, was a target of a pro-slavery mob following the Pearl incident. The paper was founded in Washington, DC by the American and Foreign Anti-Slavery Society. Gamaliel Bailey, a well-known white anti-slavery journalist, took over as the principal editor in 1847. Much of Bailey's focus was on the abolition of the slave trade in the District. In 1851-1852, Bailey serialized Harriet Beecher Stowe's popular novel, Uncle Tom's Cabin, making it the first time the story was widely available to the reading public. The Civil War The Civil War, also known as the "War Between the States," was essentially a struggle over keeping the United States of America united, and the issue that divided the states was the institution of slavery. With the 1860 election of Abraham Lincoln as President, the slaveholding South became increasingly nervous that its livelihood and way of life were threatened. By February 1861, all "deep South" states had seceded from the Union and formed the Confederate States of America. None of the "border states" with slavery (Maryland, Missouri, Kentucky and Delaware) seceded. After the first clash at Fort Sumter, South Carolina, in April 1861, most of the upper South states, including Virginia, left the Union and joined the Confederacy. The Civil War had begun. African-American Refugees Arrive Drawn by the relatively large black population in Washington, DC, and the headquarters of Union forces, African-American refugees began entering the District in 1861 from Maryland, Virginia and other southern states. Although the District was mostly pro-Union, it was still a dangerous place for enslaved blacks seeking freedom. Many "slave catchers" and "slave hunters" combed the city looking for fugitives to return South. 
By 1864, when fugitive slave laws were repealed and slavery was abolished in Maryland, Washington, DC was safe for refugees. By the end of the Civil War, more than 25,000 African Americans had moved to DC. Refugee camps were created to accommodate the new residents, often near the sites of forts that are preserved throughout the District. There were camps at Duff Green's Row on First Street between East Capitol and A Streets SE, at Camp Barker at 12th Street and Vermont Avenue, NW, and at Freedmen's Village just across the river in Arlington. Most of the refugees in the camps were women, children, the infirm and the elderly. Most young men had either fled further north or had enlisted as soldiers, sailors or laborers in the war effort. African-American Soldiers and Sailors On April 23, 1861, a few days after Ft. Sumter was attacked, Jacob Dodson wrote a letter to the U.S. Secretary of War informing him that "I have some three hundred reliable colored free citizens of this City, who desire to enter the services for the defense of the City." The reply was "this Department has no intention at present to call into the service of Government any colored soldiers." It would be two years into the war before the U.S. Army's policy changed. The US Navy was more receptive to employing African Americans. Black sailors began serving in September 1861. The Navy's role was to blockade southern ports, control major rivers, and repel Confederate privateers and cruisers that attempted to prey on Union merchant ships. Approximately 480 black men born in the District served in the Navy during the Civil War. The Army's First Regiment, US Colored Troops, was organized and trained in the spring and summer of 1863 in Washington, DC. They trained at Analostan Island (now Roosevelt Island). There were also District men who served in regiments raised elsewhere in the Union. James T. Wormley, who owned the hotel at the corner of 15th and H Streets NW, served in the Massachusetts 5th Cavalry. Of the more than 209,000 black men who served as Civil War soldiers, 3,265 were from Washington, DC. Their names appear on the African American Civil War Memorial at Vermont Avenue and U Street NW. Black women served as nurses and in other ways in the war effort. Elizabeth Keckley, the formerly enslaved memoirist, organized the Contraband Relief Association to help women and children; Sojourner Truth worked at Freedmen's Village in Arlington. The issue of African Americans serving in the US military turned out to be a key issue in ending slavery and, eventually, the war. 1862: A Pivotal Year Toward Ending Slavery The DC Compensated Emancipation Act During the Civil War, Charles Sumner, the senior senator from Massachusetts and a vocal abolitionist, asked President Lincoln: "Do you know who is at this moment the largest slaveholder in the United States?" Sumner informed Lincoln that he was the largest slaveholder because the President "holds all the slaves of the District of Columbia." Sumner was referring to the fact that the federal government was empowered in the US Constitution to "exercise exclusive legislation" over the federal district. Though this interpretation of the federal government's constitutional power continues to be a source of conflict, abolitionists used it as a way to end slavery in the national capital. In December 1861, Henry Wilson, the junior Massachusetts senator, introduced a bill in Congress to end slavery in Washington, DC. 
Despite considerable opposition from slaveholding Congressmen, aldermen and residents, the bill passed. The Senate approved the bill on April 3, 1862 and the House of Representatives on April 12, 1862. President Lincoln signed the legislation on April 16, 1862. Titled "An Act for the release of certain persons held to service or labor in the District of Columbia," it freed the 3,100 women, men and children who were still enslaved in 1862. The act also allowed slaveowners to be compensated up to $300 for each individual they had legally owned. In addition, newly freed African Americans could receive up to $100 if they chose to emigrate to another country. A three-member Emancipation Commission was established to determine who could legally claim compensation and to disburse funds. The claimants had to show papers proving that they had legally owned the formerly enslaved people, and were required to pledge loyalty to the Union. Though the majority of claimants were white, there were African Americans who received compensation for family members whose titles they had purchased in order to keep them from being sold. At the end of the compensation process, the federal government had spent close to $1 million to compensate individuals for their "property." Emancipation by Legislation and Proclamation On July 12, 1862, Congress passed an addendum to the April 16 act, permitting formerly enslaved people whose former owners had not filed claims for compensation to do so. Additionally, the DC Supplemental Emancipation Act permitted African Americans to testify to the veracity of others' claims. Because the admissibility of testimony given by African Americans had been challenged in the past, this was a new and heartening development to those who argued for equality of treatment under law. Five days later, on July 17, 1862, Congress passed the Second Confiscation and Militia Act, which freed enslaved people throughout the country whose owners were serving in the Confederate Army. Slavery was abolished in the US territories on July 19, 1862, again in an effort to cut off support to the Confederate states. Ten days after signing the DC Supplemental Emancipation Act, President Lincoln told his cabinet of his intention to threaten the Confederate states with freeing the enslaved people in their states if they did not re-join the Union. This plan was not implemented until September 22, 1862, when President Lincoln signed the Preliminary Emancipation Proclamation, which announced his deadline of January 1, 1863. The Emancipation Proclamation Nine months after signing the DC Emancipation Act, and one hundred days after issuing the Preliminary Emancipation Proclamation, President Lincoln issued the final Emancipation Proclamation, on January 1, 1863. The Emancipation Proclamation was primarily of symbolic importance. No enslaved people were immediately freed by the proclamation because it excluded the slave-holding border states (Maryland, Delaware, Missouri and Kentucky) out of fear of sending them into rebellion. Enslaved people living in states controlled by the Confederacy could only be freed if and when the Union Army arrived and liberated them in person. Yet the Emancipation Proclamation clarified that slavery would end in states that did not return to the Union. Six months after the last Confederate general surrendered his troops to the Union Army, the 13th Amendment to the U.S. 
Constitution, passed by Congress in December 1865, finally outlawed slavery throughout the entire United States, including those areas earlier excluded by the Emancipation Proclamation. Emancipation Celebrations and Parades African Americans responded immediately and enthusiastically to the DC Emancipation Act and the Emancipation Proclamation. The first Emancipation Parade took place on April 19, 1866, the fourth anniversary of the DC Emancipation Act. It was a huge, joyous event, which brought out close to half of the city's African American population. Thousands participated in the parade that began at Franklin Square, wound its way throughout the city, and returned to Franklin Square for speeches. Many thousands more lined the main thoroughfares of the District, including Pennsylvania Avenue, NW, to watch the parade. The Washington Bee newspaper claimed that the Emancipation Parade was the "grandest event in the history of the colored race." On May 12, 1866, a wood engraving sketched by F. Dielman, a white artist, was published in Harper's Weekly, a popular white-owned magazine. It is the only known representation of the first Emancipation Parade. DC Emancipation parades continued from 1866 to 1901. Church celebrations, which had begun in 1862, continued after 1901. The tradition of Emancipation commemorations was revived in 1991, in large part due to the initiative and research of Loretta Carter Hanes, a District native. Mrs. Hanes, an avid student of Washington, DC history, and founder of Reading Is Fundamental in the District of Columbia, began an annual wreath-laying ceremony in Lincoln Park (on East Capitol Street between 11th and 13th Streets) at the statue of Lincoln, installed in 1876, that was paid for entirely by donations from formerly enslaved people. The parades, organized to celebrate the abolition of slavery, were also used to make public demands for full citizenship. African Americans recognized that legal freedom (through the DC Emancipation Act, the Emancipation Proclamation and the 13th Amendment) did not automatically confer full citizenship. As a result, African Americans began a larger struggle over the meaning and practice of freedom and citizenship in the United States. The overwhelming joy engendered by DC emancipation was expressed by poet James Madison Bell in his poem "EMANCIPATION IN THE DISTRICT OF COLUMBIA":
Unfurl your banners to the breeze!
Let Freedom's tocsin sound amain,
Until the islands of the seas
Re-echo with the glad refrain!
Columbia's free! Columbia's free!
Her teeming streets, her vine-clad groves,
Are sacred now to Liberty,
And God, who every right approves.
Abolition in the District of Columbia. American Memory, Library of Congress, http://memory.loc.gov/ammem/today/apr16.html. Accessed March 3, 2009. Brawley, Benjamin, ed., Early American Negro Writers. New York: Dover Publications, Inc., 1970, 288-289. Carbone, Elisa. Stealing Freedom. New York: Random House Children's Book, 1998. Civil War Soldiers and Sailors System, National Park Service, http://www.itd.nps.gov/cwss. Accessed March 3, 2009. DC Emancipation and Supplemental Acts of 1862. http://www.archives.gov/exhibits/featured_documents/dc_emancipation_act/. Accessed February 28, 2009. Denmark Vesey: The Vesey Conspiracy. http://www.pbs.org/wgbh/aia/part3/3p2976.html. Accessed February 28, 2009. Clark-Lewis, Elizabeth, ed. First Freed: Washington, D.C., in the Emancipation Era. Washington, DC: Howard University Press, 2002. Corrigan, Mary Beth. 
“Imaginary Cruelties?: A History of the Slave Trade in Washington, D.C.,” Washington History Fall/Winter 2001‐2002, 13/2, pp. 4‐27. Freeman, Elsie, Wynell Burroughs Schamel, and Jean West. "The Fight for Equal Rights: A Recruiting Poster for Black Soldiers in the Civil War." Social Education 56, 2 (February 1992): 118‐120. [Revised and updated in 1999 by Budge Weidman.] Gamaliel Bailey. In Encyclopædia Britannica Online: http://www.britannica.com/ EBchecked/topic/49252/Gamaliel‐Bailey. Accessed March 4, 2009. Gibbs, C.R., Black, Copper & Bright: The District of Columbia’s Black War Regiment Silver Spring: Three Dimensional Publishing, 2002. McLaughlin Green, Constance. The Secret City: A History of Race Relations in the Nation’s Capital, Princeton: Princeton University Press, 1965. Harrold, Stanley. Subversives: Antislavery Community In Washington, D.C., 1828‐1865 Baton Rouge: Louisiana State University Press, 2003. John H. Holman Papers, Western Historical Manuscript Collection, Columbia‐University of Missouri. Lesko, Kathleen M., Babb, Valerie, and Gibbs, Carroll R.. Black Georgetown Remembered: A History of Its Community From the Founding of The Town of George in 1751 to the Present Day, Washington, D.C.: Georgetown University Press, 1999. McPherson, James. The Negro’s Civil War: How American Negroes Felt and Acted During the War, New York: Random House, 1965. McPherson, James. Battle Cry of Freedom: The Civil War Era. New York, Oxford Univ. Press. 1988. McQuirter, Marya Annette. African American Heritage Trail, Washington, DC Washington, DC: Historic Preservation Office and Cultural Tourism DC, 2003. Pacheco, Josephine F. The Pearl A Failed Slave Escape on the Potomac, Chapel Hill and London: The University of North Carolina Press, 2005. Paynter, John H. Fugitives of the Pearl, Washington, DC: Associated Publishers, Inc., 1930. Researching Slavery and Freedom in the National Archives. http://www.archives.gov/ midatlantic/public/slavery‐research.pdf . Accessed February 28, 2009. Richards, Mark David. “The Debates Over Retrocession, 1801‐2004,” Washington History Spring/Summer 2004, pp.54‐82. Ricks, Mary Kay. Escape on the Pearl: The Heroic Bid for Freedom on the Underground Railroad. New York: Harper Collins, 2007. Russell, Hilary. “Final Research Report: The Operation of the Underground Railroad in Washington, D.C., 1800‐1860,” Washington, DC: Historical Society of Washington, DC and the National Park Service, July 2001. Whyte, James H. The Uncivil War: Washington During the Reconstruction, 1865‐1878. New York: Twayne Publishers, 1958. Woods Brown, Letitia. Free Negroes in the District of Columbia 1790‐1846, New York: Oxford University Press, 1972.
http://emancipation.dc.gov/page/ending-slavery-district-columbia
Did you know? Lots of "real" Aztec gold was only tumbaga. What the Spanish Conquistadors thought was gold was often only an alloy called tumbaga. As they explored the New World, the early conquistadors were spurred on by the possibility of finding treasure and riches. Captive Indians told convincing stories of cities far to the north even more fabulous than the Aztec capital Tenochtitlan. The Spaniards' greed was sufficient to fuel determined drives into ever more remote territory in the hopes of striking it rich. The Aztecs certainly had lots of gold, but nowhere near as much as the conquistadors believed. It turned out that all that glittered was not necessarily gold - much of it was an alloy called tumbaga. The metallurgical skills of the pre-Columbian Indians went unrecognized for centuries prior to the pioneering work of Dora M. K. de Grinberg and others in the past fifty years or so. De Grinberg, an Argentinian archeologist working in Mexico, doggedly followed tenuous leads to uncover ample evidence that the ancient Indian metalworkers were far more knowledgeable than had previously been supposed. The conventional wisdom was that pre-Columbian tribes worked only gold, copper and platinum found in their native state (i.e. almost pure, and not requiring any smelting). In addition, it was accepted that some groups knew how to take advantage of any rich ores found in placer deposits in streams. But it was rare for anyone to suggest that the Indians had their own underground mines, or knew how to control the mix of metals required for the production of alloys. De Grinberg became interested in the drawings on one particular piece of cotton cloth, about 3 meters in length, which dated back to sometime in the mid-16th century. The drawings, using black and red pigments, provided historical information in a series of scenes, linked together by lines, which archaeologists believed represented routes. Each drawing was of a distinct place, only some of which could be identified with certainty at the time de Grinberg took up the challenge. All the known places were in the state of Michoacán. De Grinberg thought that many of the drawings in this codex (now known as the Lienzo de Jicalán) showed activities connected with mineral exploitation, and that the routes were mining routes. But how could she prove it? She guessed that the codex might have been produced to accompany the report about copper working commissioned in 1533 by Vasco de Quiroga. This written report about copper working still exists, and some details appeared to support de Grinberg's hunch. As Bishop of Michoacán, Vasco de Quiroga went on to encourage the manufacture of handicrafts in the villages of the state, promoting the idea that each village develop specialist skills in one craft or another. Santa Clara, for example, became the center for all things copper, and remains so today. Village and mine The history of the Lienzo de Jicalán is somewhat murky. It is believed that it was stolen, early on, by a Luisa Magaña, who then gave it to Pablo García in payment of a medical debt. Later, it was stored in the church of Jucutacato, a tiny village west of Uruapan. It is still sometimes referred to as the Lienzo of Jucutacato. The lienzo was first exhibited in public, in the state capital Morelia, during the first state exhibition in 1877. It later passed into the hands of the Mexican Society for Geography and Statistics, based in Mexico City. 
Following restoration work by the National History and Archaeology Institute (INAH), it was displayed for several years in the Regional Museum in Morelia, before being returned once more to Mexico City. Michoacán state authorities want it returned permanently to their state to ensure its safe keeping. De Grinberg used a combination of field archaeology and clues from the Lienzo de Jicalán to unravel the mystery surrounding some of the locations it depicts. This enabled her, for instance, to locate the previously unknown Churumucuo very precisely, in the hills overlooking the Infiernillo reservoir in southern Michoacán. She was able to find the mine depicted in the rear left of the picture. Further studies suggested that a single mining locality with twenty workers could produce about 1800 kilograms of copper every (pre-Columbian) month of twenty days. The main drawing in the lienzo depicts a settlement called Xiuhquilan, shown as the focal point of five separate routes, each linking a string of villages. This picture shows the smelting process clearly. Workers squat on either side of a fire and use long pipes to oxygenate the fire to ensure high temperatures. Xiuhquilan is now identified with Jicalán el Viejo, an unrestored archaeological site a few kilometers south of Uruapan. The lienzo suggests that the founders of Xiuhquilan were not Purépecha (Tarascan) Indians, but Náhuatl-speaking Toltecs, who migrated into the area from somewhere far to the east. After the village was subsumed into the Tarascan Empire in the late 15th century, it had to send regular tributes of painted gourds and copper items to its new masters. The lienzo reveals that Xiuhquilan's indigenous leaders held authority over several mineral deposits, sources of the copper ore essential to support the settlement's main economic activity of copper-smelting. Three of the lienzo's five routes link the town to copper mines. The first of these routes leads southeast towards the headwaters of the River Balsas. The second heads south for the vicinity of what is today the Infiernillo dam, on the highway from Uruapan to Lázaro Cárdenas. The third route goes southwest to the Pinzándaro region, on the banks of the River Tepalcatepec. These metal-working Indians were extremely skilled. It is well known that it is much harder to work and shape copper than it is gold. The Indians not only knew to melt and hammer native metals into shape, they also knew how to locate ores, mine them (by open pit or shaft mining as appropriate) and smelt them. By 900 AD they knew how to reduce both carbonates (a relatively easy process) and sulfates (a much harder one) in order to extract the metals. Metallurgical tests of artifacts and slag from waste tips have shown that the Indians of Michoacán even produced several alloys of copper, and were able to color them. These alloys included various bronzes and tumbaga, a mix of gold with copper and usually some silver. Tumbaga looks just like gold, leading the Spaniards to believe that there was far more gold than there really was. The rest, as they say, is history... Krasnopolsky de Grinberg, Dora M. Los señores del metal. Minería y metalurgia en Mesoamérica. Consejo nacional para la cultura y las artes, Pangea. 1990. Krasnopolsky de Grinberg, Dora M. "¿Qué sabían de fundición los antiguos habitantes de Mesoamérica?" Ingenierías, Enero-marzo 2004, Vol VII, No. 22 pp 64-70. Marquez, Carlos F. "Gestionarán que el lienzo de Jucutacato sea resguardado en el estado". La Jornada Michoacana, Nov 9, 2005. 
Once, Grecia. "El Lienzo de Jucutacato, testigo de la historia." Cambio de Michoacán. Oct 30, 2006. Pre-Hispanic and Colonial Metallurgy in Jicalán, Michoacán, México: An Archaeological Survey , with contributions by: Mario Retiz, Anyul Cuellar, and Efraín Cárdenas. Report submitted to FAMSI (Foundation for the Advancement of Mesoamerican Studies Inc.) Research year 2003. Text © Copyright 2007 by Tony Burton. All rights reserved. Photos in public domain unless otherwise credited.
http://www.mexconnect.com/articles/1238-did-you-know-lots-of-real-aztec-gold-was-only-tumbaga
Grains are the seeds of cereal crops such as wheat, rye, rice, oats and barley and have been a staple food for humans for thousands of years. In pre-industrial times grains were commonly eaten whole but advances in the milling and processing of grains allowed large-scale separation and removal of the bran and germ, resulting in refined flour that consists mainly of the starchy endosperm. Refined flour became popular because it produced baked goods with a softer texture and extended freshness. However the bran and germ contain a host of important nutrients, which are lost when the grain is refined. Nowadays it is increasingly recognised that foods made with whole grain can make an important contribution to our health and wellbeing and that the whole grain ‘package’ provides benefits relating to the individual nutrients they contain. Research consistently shows that regular consumption of whole grain foods as part of a healthy diet can reduce the risk of heart disease, certain types of cancer, type 2 diabetes, and may also help in weight management. What does whole grain mean? Each cereal grain is made up of three distinct sections: the outer fibre-rich bran, the inner micronutrient-rich germ and the starchy main ‘body’ of the kernel known as the endosperm. Whole grain means that all three sections of the kernel are included and they can be eaten whole, cracked, split, flaked, or ground. Most often whole grains are milled into flour and used to make breads, cereals, pasta, crackers, and other grain-based foods. Regardless of how the grain is handled, a whole grain food product must deliver approximately the same relative proportions of bran, germ, and endosperm found in the original grain.1 Anatomy of a Whole Grain Kernel Bran: The multi-layered outer skin of the kernel that helps to protect the other two parts of the kernel from sunlight, pests, water, and disease. It contains fibre, important antioxidants, iron, zinc, copper, magnesium, B vitamins, and phytonutrients. Germ: The embryo which, if fertilised by pollen, will sprout into a new plant. It contains B vitamins, vitamin E, antioxidants, phytonutrients, and unsaturated fats. Endosperm: The germ's food supply, which, if the grain were allowed to grow would provide essential energy to the young plant. As the largest portion of the kernel, the endosperm contains starchy carbohydrates, proteins, and small amounts of vitamins and minerals. A whole grain can be a food on its own, such as oatmeal, brown rice, barley, or popcorn, or used as an ingredient in food, such as whole-wheat flour in bread or cereal. Types of whole grains include whole wheat, whole oats/oatmeal, whole grain cornmeal, popcorn, brown rice, whole rye, whole-grain barley, wild rice, buckwheat, triticale, bulgur (cracked wheat), millet, quinoa, and sorghum. Other less common whole grains include amaranth, emmer, farro, grano (lightly pearled wheat), spelt, and wheat berries. Intake of whole grains Research suggests that health benefits can be obtained at relatively low levels of whole grain consumption, typically one to three servings per day, however it seems that many people do not reach this level. 
Some of the specific barriers to whole grain consumption include the lack of knowledge as to what a whole grain is, the lack of awareness of its health benefits, the difficulties some consumers have identifying whole grain foods, the perception of the taste and flavour of these products, as well as their cost.2 In the UK about a third of adults and 27% of children do not consume any whole grain at all and only 5-6% of the population achieve three portions per day.3,4 This is similar to intakes in the US where according to a recent report of the Department of Agriculture (USDA), only 7% of Americans achieved three whole grain portions a day.5 By contrast to the USA and UK, Scandinavians tend to have higher intakes of whole grain mainly due to their reliance on whole-grain rye bread as a staple food. Due to differences in measurement it is difficult to compare studies but data suggest that intakes in Norway are four times greater than in the UK, and in Finland intakes are even higher. Men seem to consume more whole grain than women but this may simply be because of a greater food intake overall. In the UK higher levels of education and income are linked with a greater intake of whole grain, whereas in Finland the highest intakes of rye bread were observed in the lower social grades.3 Not just the fibre Whole grain is rich in fibre, and although the benefits of fibre for gut and heart health have been known for some time, it seems that whole grain provides protection over and above that provided by the fibre. Studies show that in women, the health effects of whole grain on heart disease go beyond those linked to the fibre, whereas in men, the bran or fibre component of whole grains provided a significant portion of the protection.6,7 The health advantages of whole grains are largely associated with consuming the entire whole-grain “package,” which includes vitamins (B vitamins, vitamin E), minerals (iron, magnesium, zinc, potassium, selenium), essential fatty acids, phytochemicals (physiologically active components of plants that have functional health benefits) and other bioactive food components. Most of the health-promoting substances are found in the germ and bran of a grain kernel and include resistant starch, oligosaccharides, inulin, lignans, phytosterols, phytic acid, tannins, lipids, and antioxidants, such as phenolic acids and flavonoids.8 It is believed that these nutrients and other compounds, when consumed together, have an additive and synergistic effect on health.9 Recommendations for cereal grain consumption Cereal grains are a good source of carbohydrates and fibre and national dietary guidelines have always encouraged the consumption of starchy and fibre-rich foods, but it is only recently that scientific knowledge has evolved to consider whole grains worth a separate mention from other refined cereals. Dietary guidelines around the world give recommendations for a healthy balanced diet and emphasise the importance of grain foods, particularly whole grains in the diet. - In the UK the Balance of Good Health is a pictorial representation of the recommended balance of foods in the diet and aims to help people understand and enjoy healthy eating. 
The plate recommends that people “base a third of their food intake around the bread, cereals and potatoes group, aiming to include one food from this group at each meal” (British Nutrition Foundation), and “to eat wholemeal, whole grain, brown or high fibre versions where possible” (Food Standards Agency).10 - The dietary recommendations of Germany, Austria and Switzerland suggest five servings of cereals, cereal products and potatoes a day, preferably whole grain products.11 - In the USA the Dietary Guidelines for Americans, give advice on food and physical activity choices for health (www.healthierus.gov/dietaryguidelines). The guidelines were updated in 2005 and emphasise whole-grain foods including the recommendation to “make half your grains whole”. To help consumers choose a balanced diet the guidelines quantify the amount of whole-grain foods consumers should aim to eat each day as 3 or more ounce equivalents. To help consumers put the dietary guidelines into practice the USDA developed the Food Guide Pyramid.12 (supported by the website www.mypyramid.gov) - A similar plate is used in the Dietary Guidelines for Australians. The guidelines emphasise the importance of cereals as “the foundation of our daily meals” and recommend between 6 to 12 servings of grain-based foods per day including plenty of whole grain varieties.13 - The Canadian Food Guide to Healthy Eating recommends 5-8 servings of grain products per day and advises to make at least half of the grain products consumed each day, whole grain.14 - In Greece, dietary guidelines recommend 8 servings of non refined cereal products and emphasise whole grain varieties.15 - There has also been a recent recommendation for 4 servings of whole grain per day in Denmark.16 How to recognise whole grain foods? It might seem simple to find whole grain products, but just because it is brown or states that it is high in fibre does not necessarily mean it is whole grain. Additional label reading is required to correctly identify foods that qualify as whole grain. To verify that a product is whole grain, consumers should be encouraged to look beyond a product’s name. Descriptive words in the product's name, such as stone-ground, multi-grain, 100% wheat, or bran, do not necessarily indicate that a product is whole grain. As a general guide it is necessary to look out for the word ‘whole’ as in “wholemeal”, “whole grain” or “100% whole wheat”on the packaging. The ingredient statement will list whole grains by the specific grain, such as whole-wheat flour, whole oats, or whole-grain corn. In many whole-grain foods, a whole grain is among the first ingredients listed. Where foods have been made with several different whole grains these may be noted further down on the list of ingredients but may also qualify as a whole-grain food. However, the ingredient list does not clearly indicate the amount of whole grain present in the food, nor does whole grain appear on the nutrition information panel on packs. Colour and Texture: The brown colour of a food does not determine whole grain (e.g. some breads may be brown because molasses or caramel colouring have been added). Many whole-grain products, such as cereals, are light in colour. Also, whole-grain foods are not always dry or gritty, some may be dense with a pleasant “nutty” flavour or light and flaky like a cereal grain. Just because a product is high in fibre does not automatically mean it is whole grain. 
On the other hand, the fibre content of a whole grain food varies depending on the type of grain, amount of bran, density of the product, and moisture content. Food enriched with wheat or oat bran may be high in fibre but does not necessarily contain the whole grain. Health claims are only allowed where there is adequate scientific evidence. They are designed to help educate consumers and encourage consumption of healthier foods. To ensure harmonisation of health claims across Europe, EU Regulation 1924/2006, Nutrition and Health Claims made on Foods, came into force on 1st July 2007. The European Commission and Member States acting together will authorise health claims for use. They will be advised by the European Food Safety Authority (EFSA), which assumes responsibility for the assessment of claims.17 However local health claims will remain valid until full EU integration in 2010. For example UK products composed of 51% or more whole grain can claim ‘People with a healthy heart tend to eat more whole grain foods as part of a healthy lifestyle’.18 In Sweden products with at least 50% whole grain can state ‘A healthy lifestyle and a balanced diet rich in whole grain products reduce the risk of heart disease. Product X is rich in whole grains’.19 Innovations in the food supply A recent pan-European study conducted by the European Food Information Council (EUFIC) on consumers nutrition knowledge shows that in the UK, Sweden, Hungary, Germany and Poland over 73% of the respondents knew experts recommend to eat more whole grain, with only 49% in France.20 Other consumer research conducted by the International Food Information Council (IFIC) shows that consumers that are aware of whole grain, are increasingly interested in consuming more whole-grain foods (78%).21 Food manufacturers can help by creating new products and reformulating existing products to contain increased levels of whole grains. Some whole-grain products are being made with "white wheat flour", which comes from a naturally occurring albino variety of wheat. White wheat flour has a mild, sweet flavour more similar to that of a refined grain than a whole grain and resembles typical refined flour, but it has the nutritional value and fibre content of whole grain. This can increase the acceptance of the products made with such flour. However, white wheat does not contain tannins and phenolic acids, compounds found in the outer bran of the red wheat commonly used to make whole-wheat flour. Another wheat flour offers the nutritional benefits of 100% whole wheat, yet functions and tastes like refined white flour. This flour is produced by a patented milling technique applied to traditional hard spring wheat, which preserves the mild flavour, colour, and texture of refined flour. The HEALTHGRAIN project (Exploiting Bioactivity of European Cereal Grains for Improved Nutrition and Health Benefits), is looking to improve the well-being and reduce the risk of metabolic syndrome-related diseases in Europe by increasing the intake of protective compounds in whole grains. 
The project is developing new methods to incorporate grain-based concentrates and ingredients with high nutritional impact into consumer products with sensory quality appealing to European consumers, such as whole meal flours with diminished levels of fractions with less beneficial nutritional properties, ingredients with different compositions in terms of fibre, micronutrients and phytochemicals and products for individuals sensitive to wheat gluten.2 Other traditional whole grains, such as oats and barley, are gaining popularity with consumers. Whole-grain barley, wheat and rice are also now available in quick-cooking varieties (for wheat and rice, pre-cooked varieties that can be prepared in a few minutes in the microwave). Other innovative grain products have additional beneficial ingredients including oat-based products fortified with omega-3 fatty acids and vitamin E, and enriched pasta made with wheat, oats, spelt, legumes, and flaxseed. Health effects of whole grains Research demonstrates an association between consuming whole grain as part of a low-fat diet and a reduced risk of heart disease. Studies have consistently found that individuals taking three or more servings of whole grain foods per day have a 20 to 30 percent lower risk of cardiovascular events compared to individuals with lower intakes of whole grain.6,7, 22-24 This level of protection is not seen with refined grains and is even greater than that seen with fruit and vegetables.25 Potential mechanisms for this health effect have been proposed, but are not fully understood. Components of some whole grains, including soluble fibre, beta-glucan, alpha-tocotrienol, and the arginine-lysine ratio, are believed to play a role in lowering blood cholesterol. Whole grains may decrease risk of heart disease through their antioxidant content. Oxidative stress and inflammation are predominant pathological factors for several major diseases and it has been suggested that the variety of phytochemicals found in whole grains may directly or indirectly inhibit oxidative stress and inflammation.26 Other bioactive components are believed to play a role in vascular reactivity, clotting, and insulin sensitivity.27-29 Studies have not isolated the exact mechanisms for the positive effect of whole grain on cardiovascular health and it is likely that (as for fruit and vegetables) the whole grain ‘package’ is more protective than its individual components.8 Whole grains appear to be associated, in a number of studies, with a reduced risk of several gastrointestinal cancers. A review of 40 studies on gastrointestinal cancers found a 21 to 43 percent lower cancer risk with high intake of whole grains compared to low intakes.8 In recent large prospective cohort studies, whole grain consumption was associated with a modest reduced risk of colorectal cancer.27,30,31 The studies examining the risk of hormone-dependent cancers are limited. Several mechanisms have been proposed for this action. Fibre and certain starches found in whole grains ferment in the colon to help reduce transit time and improve gastrointestinal health. Whole grain also contains antioxidants that may help protect against oxidative damage, which may play a role in cancer development. Other bioactive components in whole grain may affect hormone levels and possibly lower the risk of hormone-dependent cancers. 
Other potential mechanisms could be alterations in blood glucose levels and weight loss.8 However, a recent report published jointly by the World Cancer Research Fund (WCRF) and the American Institute for Cancer Research (AICR) reviewed existing studies on the relative risk of different types of cancer through lifestyle choices. It concluded that dietary fibre probably protects against colorectal cancer, but there is limited evidence suggesting that such foods protect against oesophageal cancer. The report did not find supportive data to conclude that the degree of refinement may be a factor modifying cancer risk, but acknowledged the difficulty in assessing whole grain intake in the absence of an internationally accepted definition, and the possible confounding between dietary fibre and other dietary constituents and in general with ”healthier” dietary patterns and lifestyles.32 Components of whole grain, including fibre, resistant starch, and oligosaccharides play roles in supporting gastrointestinal health. Studies suggest that dietary fibre from whole grain increases stool weight by absorbing water and the partial fermentation of fibre and oligosaccharides, which increases the amount of beneficial bacteria in stool.8,33 Resistant starch is not digested and absorbed like ordinary starch, which means it passes into the large intestine and behaves in a similar way to fibre. This larger and softer mass of residue speeds the movement of the bowel contents towards excretion. The effect of promoting normal intestinal regularity makes whole grain products integral components of diet plans to help alleviate constipation and decrease the risk of developing diverticulosis and diverticulitis.34 Major epidemiological studies show a reduced risk of 20 to 30 percent for type 2 diabetes associated with higher intakes of whole grain or cereal fibre.35 Evidence from observational studies and clinical trials suggests improved blood glucose control in people with diabetes and, in non-diabetic individuals, whole grains may lower fasting insulin levels and decrease insulin resistance.8,35 Whole grain intake is inversely associated with the risk of type 2 diabetes, and this association is stronger for the bran than for the germ. Findings from prospective cohort studies consistently support increasing whole grain consumption for the prevention of type 2 diabetes.36,37 Components of whole grain, including magnesium, fibre, vitamin E, phytic acids, lectins, and phenolic compounds, are believed to contribute to risk reduction of type 2 diabetes as well as lowering blood glucose and blood insulin levels. In studies that examined the source of fibre, researchers found that fibre from whole grain, but not from fruit or vegetable sources, appears to exert the protective effect in reducing risk for developing type 2 diabetes.37-39 A recent Cochrane review on the preventive effects of whole-grain foods on diabetes mellitus shows that the beneficial effects of whole grain are mainly explained via their effects on BMI and that the current evidence does not allow to draw a definite conclusion about the preventive effect of whole-grain foods on the development of type 2 diabetes and that properly designed long-term randomised controlled trials are needed.40 Emerging evidence suggests that whole grain intake may contribute to achieving and maintaining a healthy weight. 
Studies show that people who include whole grain as part of a healthful diet are less likely to gain weight over time.41,42 Eating a diet high in whole grains is associated with lower body mass index and weight, smaller waist circumference, and reduced risk of being overweight.43-45 People who consume more whole grains are likely to have healthier lifestyles.46 The mechanisms by which whole grain may support weight management include enhanced and extended satiation (regulation of energy intake per eating occasion to lower daily energy intake), and prolonged gastric emptying to delay the return of hunger.8 Although preliminary evidence suggests that whole grain may influence body-weight regulation, additional epidemiological studies and clinical trials are needed. The HEALTHGRAIN project, which will finish in 2010, is studying the mechanisms responsible for the health benefits of whole grain products on risk factors for cardiovascular disease, type 2 diabetes and overweight.2 How to eat more whole grains To reap the many health benefits of whole grains it is advisable to eat 3 portions a day. It is easy to include whole grain in the diet simply by swapping some portions of refined starchy staples for whole grain varieties. Scientific studies support the recommendation of at least 48 g of whole grain daily. Increasing the consumption of whole grain should be done progressively to let the body adapt to higher fibre content. In the serving examples of whole grain, foods do not need to contain 100% whole grain, but do need to contain a minimum of 51% whole grain to be called whole grain, and also contain the three components of whole grain – endosperm, germ and bran. See Table below for suggestions of whole grain choices. Type of Food Whole grain option Porridge made with rolled oats or oatmeal Puffed whole grains Whole-grain muesli and cereal bars Bread and crackers Rye bread (pumpernickel), wholemeal, granary, wheatgerm or mixed grain breads. Whole-wheat crackers, rye crackers and crispbreads Whole-grain rice cakes Wholemeal flour, wheat germ, buckwheat flour, unrefined rye and barley flour, oatmeal and oat flour Brown rice, whole-wheat pasta, whole barley, bulgur wheat (cracked wheat) quinoa, pearl barley Whole grain contains many healthful components, including dietary fibre, starch, essential fatty acids, antioxidants, vitamins, minerals, lignans, and phenolic compounds, that have been linked to reduced risk of heart disease, cancer, diabetes, and other chronic diseases. Since most of the health-promoting components are found in the germ and bran, foods made with whole grain can play an important role in maintaining good health. Eating more whole grain involves making relatively easy changes in grain food selections. With awareness and education, along with increased availability of easy-to-identify whole-grain products, consumers can increase their intake of whole grain to recommended levels. What is a serving size or a portion of whole grains? 120g cooked brown rice or other cooked grain 120g cooked 100% whole-grain pasta 120g cooked hot cereal, such as oatmeal 30g uncooked whole-grain pasta, brown rice or other grain 1 slice 100% whole-grain bread 1 very small (30g) 100% whole-grain muffin 120g 100% whole-grain ready-to-eat cereal - US Food and Drug Administration. 
FDA Provides Guidance on 'Whole Grain' for Manufacturers (Available at: http://www.fda.gov/bbs/topics/news/2006/NEW01317.html) - The HEALTHGRAIN project (Exploiting Bioactivity of European Cereal Grains for Improved Nutrition and Health Benefits), funded by the European Community Sixth Framework Programme, 2005-2010 FOOD-CT-2005-514008 - Lang R, Jebb SA. Who consumes whole grains, and how much? Proceedings of the Nutrition Society 2003:62:123-127 - Thane CW, Jones AR, Stephen AM, Seal CJ, Jebb SA. Whole-grain intake of British young people aged 4-18. British Journal of Nutrition 2005:94(5):825-831 - Centers for Disease Control and Prevention (CDC). National Center for Health Statistics (NCHS). National Health and Nutrition Examination Survey Data. Hyattsville, MD: U.S. Department of Health and Human Services, Centers for Disease Control and Prevention, 1999-2002 - Liu S, Stampfer MJ, Hu FB, Giovannucci E, Rimm E, Manson JE, Hennekens CH, Willett WC. Whole-grain consumption and risk of coronary heart disease: results from the Nurses’ Health Study. American Journal of Clinical Nutrition 1999;70(3):412-9 - Jensen MK, Koh-Banarjee P, Hu FB, Franz MJ, Sampson L, Gronbaek M, Rimm EB. Intake of whole grains, bran, and germ risk of coronary heart disease among men. American Journal of Clinical Nutrition 2004 Dec;80(6):1492-9 - Slavin J. Whole grains and human health. Nutrition Research Review 2004;17:99-110 - Pereira MA, Pins JJ, Jacobs DR, Marquart L, Keenan JM. Whole grains, cereal fiber, and chronic diseases: Epidemiologic evidence. In CRC Handbook of Dietary Fiber in Human Nutrition. Boca Raton, FL: CRC Press; 1993:461-479 - UK Food Standards Agency Health Eating Nutrition Essentials. Available at: www.eatwell.gov.uk/healthydiet/nutritionessentials - The Food Pyramid for Germany, Austria and Switzerland. Available at: www.dge.de/pyramide/pyramide.html - U.S. Department of Health and Human Services and U.S. Department of Agriculture. Dietary Guidelines for Americans, 2005. 6th edition, Washington DC: U.S. Govt Printing Office, Jan 2005. http://www.healthierus.gov/dietaryguidelines - Australian Government, National Health and Medical Research Council. Guidelines for all Australians 2003. Available at: www.nhmrc.gov.au/publications/synopses/dietsyn.htm - Health Canada Food Guide. 2007 (Available at: www.healthcanada.gc.ca/foodguide, accessed December 2008 or download pdf at http://www.hc-sc.gc.ca/fn-an/food-guide-aliment/index-eng.php) - Ministry of Health and Welfare, Supreme Scientific Health Council. Dietary guidelines for adults in Greece, 1999 (Available at http://www.mednet.gr/archives/1999-5/pdf/516.pdf accessed December 2008) - National Food Institute, Technical University of Denmark. Wholegrain – Definition and scientific background for recommendations of wholegrain intake in Denmark. May 2008. http://www.food.dtu.dk/ - Regulation (EC) No 1924/2006 of the European Parliament and of the Council of 20 December 2006 on nutrition and health claims made on foods. Available at: www.eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=OJ:L:2006:404:SOM:EN:HTML - UK Joint Health Claims Initiative (health claims valid until 2010). (Available at: http://www.jhci.co.uk/) - Swedish Nutrition Foundation health claims. (Available at: www.snf.ideon.se/snf/en/rh/Healh_claims_FF.htm) - The European Food Information Council (EUFIC). 
Pan-European consumer research on in-store behaviour, understanding and use of nutrition information on food labels, combined with assessing nutrition knowledge (Available through webinar at http://www.focusbiz.co.uk/webinars/eufic/paneuropeanlabelresearch/europe/) - The International Food Information Council Foundation. Food & Health Survey: Consumer Attitudes toward Food, Nutrition and Health. 2008:1-54. (Available at: http://ific.org/research/foodandhealthsurvey.cfm) - Jensen MK, Koh-Banerjee P, Hu FB, Franz M, Sampson L, Grønbæk M and Rimm EB. Intakes of whole grains, bran, and germ and the risk of coronary heart disease in men. American Journal of Clinical Nutrition December 2004 Vol. 80, No. 6, 1492-1499 - Jacobs DRJ, Meyer KA, Kushi LH, Folsom AR. Is whole grain intake associated with reduced total and cause-specific death rates in older women? The Iowa Women’s Health Study. American Journal of Public Health 1999;89:322 - Pietinen P, Rimm EB, Korhonen P. Intake of dietary fiber and risk of coronary heart disease in a cohort of Finnish men: The alpha-tocopherol, beta-carotene cancer prevention study. Circulation 1996;94:2720-2727 - Steffen LM, Jacobs DRJ, Stevens J. Associations of whole-grain, refined-grain, and fruit and vegetable consumption with risks of all-cause mortality and incident coronary artery disease and ischemic stroke: the Atherosclerosis Risk in Communities (ARIC) Study. American Journal of Clinical Nutrition 2003;78:383-390 - Mellen PB, Walsh TF, Herrington DM. Whole grain intake and cardiovascular disease: a meta-analysis. Nutrition Metabolism and Cardiovascular Disease 2007:Epub ahead of print - Jacobs DR, Andersen LF, Blomhoff R. Whole grain consumption is associated with a reduced risk of noncardiovascular, noncancer death attributed to inflammatory diseases in the Iowa Women’s Health Study American Journal of Clinical Nutrition 2007: 85(6):1606-1614 - Pereira MA, Jacobs DR, Pins JJ, Raatz S, Gross M, Slavin J, Seaquist E. The effect of whole grains on inflammation and fibrinolysis: a controlled feeding study. Circulation 2000:101:711 - Liese AD, Roach AK, Sparks KC, Marquart L, D’Agostino RB, Mayer-Davis EJ. Whole-grain intake and insulin sensitivity: the Insulin Resistance Atherosclerosis Study. American Journal of Clinical Nutrition 2003;78:965-71 - Schatzkin A, Mouw T, Park Y, Subar AF, Kipnis V, Hollenbeck A, Leitzmann MF and Thompson FE. Dietary fiber and whole-grain consumption in relation to colorectal cancer in the NIH-AARP Diet and Health Study. American Journal of Clinical Nutrition May 2007 Vol. 85, No. 5, 1353-1360 - Larsson SC, Giovannucci E, Bergkvist L and Wolk A. Whole grain consumption and risk of colorectal cancer: a population-based cohort of 60 000 women. British Journal of Cancer 2005: 92, 1803–1807 - WCRF/AICR (2007). Food, Nutrition, Physical Activity and the Prevention of Cancer – a Global Perspective. Washington D.C. (Available from http://www.dietandcancerreport.org/) - Kurasawa S, Haack VS, Marlett JA. Plant residue and bacteria as bases for increased stool weight accompanying consumption of higher dietary fiber diets. Journal of the American College of Nutrition 2000; 19:426-433 - Marlett JA, McBurney MI, Slavin J. Position of the American Dietetic Association: health implications of dietary fiber. Journal of the American Dietetic Association 2002;102:993-1000 - Murtaugh MA, Jacobs DRJ, Jacob B, Steffen LM, Marquart L. Epidemiological support for the protection of whole grains against diabetes. 
Proceedings of the Nutrition Society 2003;62:143-149 - Munter JSL, Hu FB, Spiegelman D, Franz M, van Dam RM. Whole grain, bran, and germ intake and risk of type 2 diabetes: a prospective cohort study and systematic review. PloS Medicine 2007;4(8):e261 - Montonen J, Knekt P, Jarvinen R, Arommaa A, Reunanen A. Whole-grain and fiber intake and the incidence of type 2 diabetes. Journal of the American College of Nutrition 2003;77:622-629 - Hu FB, Manson JE, Stampfer MJ. Diet, lifestyle, and the risk of type 2 diabetes mellitus in women. New England Journal of Medicine 2001;345:790-797 - Salmeron J, Manson JE, Stampfer MJ, Colditz GA, Wing AL, Willett WC. Dietary fiber, glycemic load, and risk of noninsulin-dependent diabetes mellitus in women. Journal of American Medical Association 1997;277:472-477 - Priebe MG, van Binsbergen JJ, de Vos R, Vonk RJ. Whole grain foods for the prevention of type 2 diabetes mellitus. Cochrane Database Systematic Reviews 2008;23:CD006061 - Koh-Banerjee P, Rimm EB. Whole-grain consumption and weight gain: a review of the epidemiological evidence, potential mechanisms and opportunities for future research. Proceedings of the Nutrition Society 2003;62:25-29 - Koh-Banerjee P, Franz M, Sampson L, Liu S, Jacobs DRJ, Spiegelman D, Willett W, Rimm E. Changes in whole-grain, bran, and cereal fiber consumption in relation to 8-y weight gain among men. American Journal of Clinical Nutrition 2004;80:1237-45 - Newby PK, Maras J, Bakun P, Muller D, Ferrucci L and Tucker KL. Intake of whole grains, refined grains, and cereal fiber measured with 7-d diet records and associations with risk factors for chronic disease. American Journal of Clinical Nutrition Dec 2007;86(6):1745-1753 - Good CK, Holschuh N, Albertson AN, Eldridge AL. Whole Grain Consumption and Body Mass Index in Adult Women: An Analysis of NHANES 1999-2000 and the USDA Pyramid Servings Database. Journal of American College of Nutrition 2008 Vol 27(1):80-87 - Williams PG, Grafenauer SJ, O’Shea JE. Cereal grains, legumes, and weight management: a comprehensive review of the scientific evidence. Nutrition Reviews 2004 Vol. 66(4):171–182 - Harland JI, Garton LE. Whole-grain intake as a marker of healthy body weight and adiposity. Public Health Nutrition 2008 Jun;11(6):554-63
http://www.eufic.org/article/en/page/BARCHIVE/expid/Whole-grain-Fact-Sheet/
Step Three: Determine the Baseline Any benchmark or baseline should be expressed as a pollution-to-production ratio. It will also be used to determine the cost of the pollution per unit of product. A baseline needs a relevant unit of product for each product that is manufactured with the chemicals being studied. The unit of product must be an accurate measure of a characteristic of the product. If a process is used for the same part at all times, then number of pieces will make a good unit of product. However, if the process works on several parts, then a more specific measure will be needed to determine units of product, such as surface area or weight. Units of Measure How much waste is produced per product? Identifying the correct means of measuring the performance of a manufacturing process is one of the most important steps in pollution prevention planning. The measurement accurately portrays what is happening in the process and provides meaningful data to use in the options analysis step. Pinpointing and solving problems would be difficult without measurement, as would be documenting the impact of pollution prevention. Feedback from measurement will also help in making decisions on facility policies, developing new technologies, and choosing additional pollution prevention options. The unit of product must be carefully chosen. Generally, valid units of product are count (number of pieces), surface area (square feet), volume (cubic feet), etc. Examples of units that are not valid are sales and run time. The unit of product must relate directly to the product or service being measured. In addition, in order to obtain accurate data on the amount of pollution generated during a production run or during a measured time period, rejected product must be included in the calculation of the production volume. This is why sales are not a good indicator of production rate. Conversely, run time is not a good indicator of production because a machine or a process may be operating while no product is being produced and no waste is being generated. Sales figures underestimate production volume and run time overestimates it. The Production Ratio It is necessary to develop a basis of comparison for chemical waste generated in the production process over time. Simply comparing waste generated from year to year can be misleading if there was a significant change in the levels of production involving the chemical being targeted. The production ratio (PR) is used to normalize changes in production levels. It is calculated by dividing the production level for the reporting year by the production level for the previous year. Once a production ratio is determined, it is used as a factor when comparing target chemical waste generated between the two years. For example, a facility paints 1,600 parts in Year A and 1,800 parts in Year B. The production ratio is 1,800/1,600 = 1.13. Simply using the unit counts to determine the PR may not give an accurate result if the parts are not identical. In that case, a more specific attribute must be used, such as surface area, weight or another relevant measure. Either during or after a team has been organized, the performance of the current manufacturing processes must be determined. As a minimum, the processes that use or generate Toxic Release Inventory (TRI) chemicals are targeted for pollution prevention. 
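The production ratio arithmetic described above is simple enough to script. The following is a minimal Python sketch, not part of the original guidance, showing one way a team might compute the PR and apply it as a normalizing factor when comparing waste between two years; the function names, and the choice to multiply prior-year waste by the PR, are illustrative assumptions.

```python
def production_ratio(reporting_year_production, previous_year_production):
    """PR = production level for the reporting year / production level for the previous year."""
    return reporting_year_production / previous_year_production


def normalized_waste_change(previous_year_waste, reporting_year_waste, pr):
    """Compare reporting-year waste against prior-year waste scaled by the PR,
    so a change in production volume is not mistaken for a change in
    pollution prevention performance."""
    expected_waste = previous_year_waste * pr      # waste expected at the new production level
    return reporting_year_waste - expected_waste   # negative result = genuine reduction


# Example from the text: 1,600 parts painted in Year A, 1,800 parts in Year B.
pr = production_ratio(1800, 1600)
print(pr)  # 1.125 (reported in the text, rounded, as 1.13)
```

A negative value from normalized_waste_change would indicate that waste fell faster than the change in production alone would explain, which is the kind of signal the baseline comparison is meant to surface.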
Either during or after a team has been organized, the performance of the current manufacturing processes must be determined. At a minimum, the processes that use or generate Toxic Release Inventory (TRI) chemicals are targeted for pollution prevention. This will be critical for the team to calculate a baseline for future comparisons and must be done prior to options analysis. An important first step is to decide on accurate and relevant units of measurement for the processes involved. The next section provides more details on measuring waste and pollution generation.

Data Gathering for Current Operations

For each and every process that uses a chemical reportable on the TRI Form R, gather and verify information related to the chemical's waste generation and releases. This information must be comprehensive in order to be as accurate and useful as possible. It should include information related to the product being manufactured, the process, the volume produced, and all associated costs.

There should be a description of the product(s) or service(s) related to the chemical being addressed. This may include information about desired quality and the reason why the product manufacturer requires the use of a TRI chemical. Customer input may be desired or required for specifications. Pollution prevention planning is a good way to question the design of a product and ask why the chemical is needed. Are there customer specifications or product quality issues that need to be considered? These will be factors when options are analyzed for pollution prevention.

In order to further pinpoint how and why a chemical waste is being generated, process information must be gathered. Data on the process should include a description of the major steps. Finding out how employees are involved in the process is often helpful; this can include information on employee function, training, and safety/health considerations. Also obtain whatever documentation is available about the process, such as vendor literature, chemical analyses, preventive maintenance schedules, and equipment specifications. Any or all of this information will be needed for the options analysis step, which studies the alternatives for making the process more efficient, thus using less raw material or generating less waste or pollution.

Chemical Handling Data

Because waste can be generated as a result of transfers and spills, data should be gathered on how chemicals are stored, transferred, packaged, and otherwise dispensed. These operations may be a part of the manufacturing process, or they may be auxiliary operations that occur elsewhere in the facility. In order to calculate the costs, savings, and payback of any pollution prevention changes during options analysis, cost data must be gathered on all operations that involve the TRI chemicals in question. Many hidden costs in the use of a chemical are buried in overhead or department charges; these numbers must be isolated and identified in order for the options analysis to be comprehensive.

Some costs to consider are those related to environmental compliance, including analysis of waste, treatment of waste, license fees, and the cost of disposal. As burdensome as these costs might be, they are only a fraction of the cost to manage TRI chemicals. Many of these environmental compliance functions can be done externally or internally; if they are internal costs, remember to include the cost of the staff time it takes to perform these tasks. Another cost is the purchase of the chemical. Add to this the cost to transport the chemical, which must include not only any external charges to get the chemical to the facility but also the internal cost to transport it within the facility.
Then add the cost to store the material, including the cost of the space it occupies. Auxiliary costs to properly store and maintain the chemical must be included: add any cost for temperature or humidity controls required for the chemical's storage and use. In addition, there might be costs to maintain the equipment that stores or transports the chemical, including preventive maintenance. Costs for risk management include the following: insurance to protect against losses caused by accidental release and injury; health and safety equipment and training so employees can work with the chemical as safely as possible; and, for some chemicals, significant costs due to absenteeism caused by perceived or real health effects of the chemical.

From Example 3-1, toluene is used to thin the paint at one pound of toluene per gallon of paint. This toluene is released to the air as the paint dries. In Year A, 100 pounds of toluene was released in this way when the 1,600 parts were painted. So if Year A is the baseline year, the pollution-to-production ratio is 100 divided by 1,600, or 0.063 pound of toluene released per part painted. If the toluene costs a dollar per pound, the cost is 6.3 cents per part painted. During Year A, tests showed that paint quality did not deteriorate when only 0.80 pound of toluene was used per gallon of paint. This reduced use of toluene released 90 pounds in Year B. The pollution-to-production ratio is 90 divided by 1,800, or 0.05 pound of toluene released per part painted, and the cost is 5 cents per part. Compared to the baseline year, this is a savings of 1.3 cents per part, or $23.40 for 1,800 parts.

Finally, intangible costs should be assessed and recorded by asking:
- Are there any community concerns?
- Are there employee health or safety concerns about using the chemical?
- Are there emergency response concerns regarding the use of the chemical?
- Does the chemical contribute to unpleasant production work areas (e.g., odors)?
- Are there product marketing disadvantages?

In order to obtain a baseline of the present situation, all this information must be gathered and effectively organized. This can be done with charts, graphs, matrices, etc. Each facility will have a unique system for organizing the data to fit its needs. Production ratios and baselines must be determined for each process that generates the chemical being studied. In addition to determining a baseline for measuring the cost of waste generation per unit of product, it is also essential to identify and document current and past pollution prevention efforts. Documentation of efforts will allow the pollution prevention team to avoid repeating work unnecessarily and also provides the groundwork for future feasibility studies if changes in technology or increasing costs of environmental management make yesterday's discarded ideas more attractive today.

Next, sum all of the chemical waste generation data and divide it by the amount of production that generated those wastes. The result is the amount of waste or pollution that is generated per unit of product.
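A minimal sketch (hypothetical Python, not from the source) reproducing the toluene example above, comparing baseline and reporting-year costs per part:

```python
def pollution_per_part(pounds_released, parts_painted):
    """Pollution-to-production ratio: pounds of toluene released per part painted."""
    return pounds_released / parts_painted

toluene_cost_per_lb = 1.00                  # dollars per pound, as in the example
baseline = pollution_per_part(100, 1_600)   # Year A: 0.0625 lb/part (the text rounds to 0.063)
reporting = pollution_per_part(90, 1_800)   # Year B: 0.05 lb/part

savings_per_part = (baseline - reporting) * toluene_cost_per_lb
print(savings_per_part)          # 0.0125 dollars/part; the text's rounded figures give 1.3 cents
print(savings_per_part * 1_800)  # 22.5 dollars; using the rounded 0.063 baseline gives about $23.40
```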
Sources for Data Gathering of Waste and Pollution Information

Waste generated from production processes can take a variety of forms, most notably air emissions, process wastewaters, hazardous waste, and scrap. It is important to be aware of all forms of waste that are produced through manufacturing to ensure an accurate assessment of a production process.

One good approach for gathering this information is to develop a material balance or process map for target chemicals, to account for each waste stream that comes from the process. This can start with a sketch showing the flow of raw materials, products, wastes, and releases involving the target chemical. Make sure to include streams for wastes that are recycled, treated, or otherwise managed on-site. A basic engineering principle is that what goes into a system must come out in some form or another. By measuring the material inputs, the total outputs that must be accounted for can be identified, and through the process of elimination the unknowns can be determined. In some cases, the data needed to fully measure the amount of each waste stream may not be available. In these cases, it becomes necessary to use engineering judgment and knowledge of the production process to develop reasonable estimates of how the system is operating. This occurs more often with water and air releases, particularly "fugitive" (non-stack) air releases.

The primary information source for waste shipped off-site, whether to be recycled, treated, or disposed of, is the hazardous waste manifest. The manifest provides the types and quantities of hazardous wastes shipped. For mixed wastes or sludges that contain target chemicals, a useful tool for determining the fraction of the mixture that consists of the target chemical is to review the waste profile submitted to the off-site hazardous waste management firm when the waste stream was approved for acceptance. The waste management firm your facility is contracted with should supply, upon request, copies of the results of the waste analysis performed when a shipment was received.

Information for scrap waste can be found on the bill of lading for each shipment. Bills of lading are often used in place of the hazardous waste manifest for wastes such as scrap metals, scrap circuit boards, or spent lead-acid batteries that are sent to a metals recycler. Like the hazardous waste manifest, the bill of lading will provide the types and quantities of scrap materials shipped. Product design specifications may be needed to help estimate the amount of the target chemical contained in the total waste shipped.

Wastewater Discharged to POTW

Discharging wastewater to a publicly owned treatment works (POTW) generally requires an industrial discharge permit, which will include limits on the pollutant concentrations allowed in the wastewater discharge. Facilities are required to perform periodic sampling and analysis of their wastewater discharge to ensure compliance with the limits set. This information can also be used to estimate annual releases of a target chemical to the POTW, by combining the concentration levels determined in sampling with the cumulative volume of wastewater discharged from the facility. Some facilities perform in-house sampling and analysis more frequently than required by their permit; these results provide a good tool for estimating the volume of a target chemical that is discharged to a POTW.
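A minimal sketch (hypothetical Python; the unit conversions and the numbers are assumptions, not from the source) of the POTW estimate just described: sampled concentration multiplied by cumulative discharge volume:

```python
def annual_potw_release_kg(avg_concentration_mg_per_l, annual_discharge_m3):
    """Estimate the annual mass of a target chemical discharged to the POTW.

    avg_concentration_mg_per_l: average sampled concentration (mg/L)
    annual_discharge_m3: cumulative wastewater discharge for the year (m^3)
    """
    litres = annual_discharge_m3 * 1_000      # 1 m^3 = 1,000 L
    milligrams = avg_concentration_mg_per_l * litres
    return milligrams / 1_000_000             # 1,000,000 mg = 1 kg

# Illustrative numbers only.
print(annual_potw_release_kg(2.5, 40_000))    # 100.0 kg per year
```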
Stack Air Emissions

Facilities that are required to hold air emissions permits should find that their permit application contains a great deal of information to help estimate a target chemical's releases through stack air emissions. Each manufacturing process that vents emissions through a stack must be thoroughly described in the air permit application, with information regarding the chemicals used, the throughput of the process, and the emissions associated with the process. The calculations contained in an air permit application are performed on a potential-to-emit basis, which assumes constant operation of the manufacturing process equipment and does not account for emissions reductions due to pollution control equipment. Therefore, any use of air permit application data must include appropriate adjustments to reflect the actual operating conditions of the process. Facilities that are not required to hold air emissions permits may estimate their stack air emissions using their knowledge of process conditions and materials balances. Quarterly or annual tests of stack emissions may be worthwhile to provide data to compare against these estimates.

Fugitive Air Emissions

Fugitive (non-stack) air emissions can be difficult to determine directly. They are commonly estimated through a materials balance, with fugitive emissions representing the last remaining unknown after all other outputs have been directly measured or estimated. If a facility employs an industrial hygienist, he or she may have information on employee exposure levels that can also be used in estimating fugitive air emissions.

On-site Waste Management

There are several ways that wastes are managed on-site. Some wastes can be recycled, such as spent solvents or used oils and lubricants. Most facilities keep track of how many batches are processed by the recycling equipment or of the amount of regenerated material. Also track the amounts of solvents, used oils, or other flammable materials that are incinerated on-site; these should be identified in the air emissions permit application. Other wastes are treated on-site prior to disposal, such as spent acids and caustics or polymer waste. Information for measuring the amounts of waste generated should be obtained either from the treatment process description or from direct observation of the process.

Some employees may be hesitant to take all of the necessary steps involved in gathering the information needed for a complete material balance, as it can initially appear to be a daunting task. A recommended first step is to simply document material inputs minus the materials included in the product stream. The result shows the amount of waste that is generated and can serve as a driving force for finding the specific sources of waste in a process.
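A minimal sketch (hypothetical Python; stream names and quantities are illustrative assumptions) of the material-balance approach: fugitive releases estimated as the remainder after all measured outputs are subtracted from the inputs:

```python
def fugitive_release_estimate(total_inputs, measured_outputs):
    """Fugitive releases estimated as the unaccounted-for remainder of the balance.

    total_inputs: pounds of the target chemical entering the process per year
    measured_outputs: dict of measured or estimated output streams (pounds per year)
    """
    return total_inputs - sum(measured_outputs.values())

# Illustrative numbers only.
outputs = {
    "shipped in product": 6_000,
    "stack emissions": 1_200,
    "hazardous waste manifested off-site": 900,
    "wastewater to POTW": 400,
}
print(fugitive_release_estimate(8_800, outputs))   # 300 lb attributed to fugitive releases
```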
http://www.mntap.umn.edu/prevention/P2_chapter3step3.html
13
14
Resource Materials for the Biology Core Courses (Bates College)
How to Make Simple Solutions and Dilutions

Topics: simple dilution | serial dilution | V1C1 = V2C2 method | molar solutions | percent solutions | molar-percent conversions | concentrated stock solutions (X units) | normality-molarity conversion

Working Concentration Calculations

A simple dilution is one in which a unit volume of a liquid material of interest is combined with an appropriate volume of a solvent liquid to achieve the desired concentration. The dilution factor is the total number of unit volumes in which your material will be dissolved. The diluted material must then be thoroughly mixed to achieve the true dilution. For example, a 1:5 dilution (verbalized as a "1 to 5" dilution) entails combining 1 unit volume of solute (the material to be diluted) with 4 unit volumes of the solvent medium (hence, 1 + 4 = 5 = dilution factor). The dilution factor is frequently expressed using negative exponents: a 1:5 dilution is 5^-1, a 1:100 dilution is 10^-2, and so on.

Example 1: Frozen orange juice concentrate is usually diluted with 4 additional cans of cold water (the dilution solvent), giving a dilution factor of 5; i.e., the orange concentrate represents one unit volume to which you have added 4 more cans (same unit volumes) of water, so the concentrate is now distributed through 5 unit volumes. This would be called a 1:5 dilution, and the OJ is now 1/5 as concentrated as it was originally. So, in a simple dilution, add one fewer unit volume of solvent than the desired dilution factor value.

Example 2: Suppose you must prepare 400 ml of a disinfectant that requires a 1:8 dilution of a concentrated stock solution with water. Divide the volume needed by the dilution factor (400 ml / 8 = 50 ml) to determine the unit volume. The dilution is then done as 50 ml concentrated disinfectant + 350 ml water.

A serial dilution is simply a series of simple dilutions that amplifies the dilution factor quickly, beginning with a small initial quantity of material (e.g., a bacterial culture, a chemical, orange juice). The source of dilution material for each step comes from the diluted material of the previous step. In a serial dilution, the total dilution factor at any point is the product of the individual dilution factors in each step up to it: final dilution factor (DF) = DF1 x DF2 x DF3, etc.

Example: In a typical microbiology exercise, students perform a three-step 1:100 serial dilution of a bacterial culture in the process of quantifying the number of viable bacteria in the culture. Each step in this example uses a 1 ml total volume. The initial step combines 1 unit volume of bacterial culture (10 ul) with 99 unit volumes of broth (990 ul), a 1:100 dilution. In the second step, one unit volume of the 1:100 dilution is combined with 99 unit volumes of broth, now yielding a total dilution of 1:100 x 100 = 1:10,000. Repeated again (the third step), the total dilution would be 1:100 x 10,000 = 1:1,000,000. The concentration of bacteria is now one million times less than in the original sample.
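A minimal sketch (hypothetical Python, not part of the original handout) of the two calculations above, the volumes for a simple dilution and the cumulative factor of a serial dilution:

```python
def simple_dilution_volumes(final_volume_ml, dilution_factor):
    """Return (solute_ml, solvent_ml) for a 1:dilution_factor simple dilution."""
    solute = final_volume_ml / dilution_factor
    return solute, final_volume_ml - solute

def serial_dilution_factor(step_factors):
    """Total dilution factor is the product of the per-step dilution factors."""
    total = 1
    for factor in step_factors:
        total *= factor
    return total

# Example 2 above: 400 ml of a 1:8 disinfectant dilution.
print(simple_dilution_volumes(400, 8))          # (50.0, 350.0) -> 50 ml stock + 350 ml water
# Three-step 1:100 serial dilution of a bacterial culture.
print(serial_dilution_factor([100, 100, 100]))  # 1000000 -> a 1:1,000,000 total dilution
```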
Very often you will need to make a specific volume of known concentration from stock solutions, or perhaps because of limited availability of liquid materials (some chemicals are very expensive and are only sold and used in small quantities, e.g., micrograms), or to limit the amount of chemical waste. The formula below is a quick approach to calculating such dilutions, where V = volume and C = concentration, in whatever units you are working:

V1C1 = V2C2 (V1 and C1 are the stock solution's attributes; V2 and C2 are the new solution's attributes)

Example: Suppose you have 3 ml of a stock solution of 100 mg/ml ampicillin (= C1) and you want to make 200 ul (= V2) of a solution having 25 mg/ml (= C2). You need to know what volume (V1) of the stock to use as part of the 200 ul total volume needed.
V1 = the volume of stock you will start with. This is your unknown.
C1 = 100 mg/ml, the concentration of the stock solution
V2 = total volume needed at the new concentration = 200 ul = 0.2 ml
C2 = the new concentration = 25 mg/ml
By algebraic rearrangement: V1 = (V2 x C2) / C1 = (0.2 ml x 25 mg/ml) / 100 mg/ml; after cancelling the units, V1 = 0.05 ml, or 50 ul. So, you would take 0.05 ml = 50 ul of stock solution and dilute it with 150 ul of solvent to get the 200 ul of 25 mg/ml solution needed. Remember that the amount of solvent used is based upon the final volume needed, so you have to subtract the starting volume from the final volume to calculate it.

Sometimes it may be more efficient to use molarity when calculating concentrations. A mole is defined as one gram molecular weight of an element or compound and consists of approximately 6.022 x 10^23 atoms or molecules (Avogadro's number). The mole is therefore a unit expressing the amount of a chemical. The mass (g) of one mole of an element is called its molecular weight (MW). When working with compounds, the mass of one mole of the compound is called the formula weight (FW). The distinction between MW and FW is not always simple, however, and the terms are routinely used interchangeably in practice. The formula (or molecular) weight is always given as part of the information on the label of a chemical bottle. The number of moles in an arbitrary mass of a dry reagent can be calculated as:

# of moles = mass (g) / molecular weight (g/mole)

Molarity is the unit used to describe the number of moles of a chemical or compound in one liter (L) of solution and is thus a unit of concentration. By this definition, a 1.0 molar (1.0 M) solution is equivalent to one formula weight (FW, in g/mole) of a compound dissolved in 1 liter (1.0 L) of solvent (usually water).

Example 1: To prepare a liter of a simple molar solution from a dry reagent, multiply the formula weight (or MW) by the desired molarity to determine how many grams of reagent to use. For a chemical with FW = 194.3 g/mole, to make a 0.15 M solution use 194.3 g/mole x 0.15 mole/L = 29.145 g/L.

Example 2: To prepare a specific volume of a specific molar solution from a dry reagent: a chemical has a FW of 180 g/mole and you need 25 ml (0.025 L) of a 0.15 M (M = mole/L) solution. How many grams of the chemical must be dissolved in 25 ml of water to make this solution?
#grams / desired volume (L) = desired molarity (mole/L) x FW (g/mole)
By algebraic rearrangement: #grams = desired volume (L) x desired molarity (mole/L) x FW (g/mole) = 0.025 L x 0.15 mole/L x 180 g/mole; after cancelling the units, #grams = 0.675 g. So, you need 0.675 g per 25 ml.

For more on molarity, plus molality and normality: EnvironmentalChemistry.com. More examples of worked problems: About.com: Chemistry.
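A minimal sketch (hypothetical Python) of the two workhorse calculations above: solving V1C1 = V2C2 for the stock volume, and grams of dry reagent for a molar solution:

```python
def stock_volume_needed(c_stock, v_final, c_final):
    """V1 = (V2 * C2) / C1; concentrations share one unit, volumes share another."""
    return v_final * c_final / c_stock

def grams_for_molar_solution(formula_weight, molarity, volume_l):
    """Grams of dry reagent = volume (L) * molarity (mol/L) * FW (g/mol)."""
    return volume_l * molarity * formula_weight

# Ampicillin example: 100 mg/ml stock, 200 ul (0.2 ml) of 25 mg/ml needed.
print(stock_volume_needed(100, 0.2, 25))           # 0.05 ml (50 ul) of stock
# 25 ml (0.025 L) of a 0.15 M solution of a chemical with FW 180 g/mole.
print(grams_for_molar_solution(180, 0.15, 0.025))  # 0.675 g
```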
Many reagents are mixed as percent concentrations, either as weight per volume for dry reagents or as volume per volume for liquid reagents. When working with a dry reagent, it is mixed as dry mass (g) per volume and can be calculated simply as the % concentration (expressed as a proportion or ratio) x volume needed = mass of reagent to use.

Example 1: If you want to make 200 ml of 3% NaCl, you would dissolve 0.03 g/ml x 200 ml = 6.0 g NaCl in 200 ml of water.

When using liquid reagents, the percent concentration is based upon volume per volume and is similarly calculated as % concentration x volume needed = volume of reagent to use.

Example 2: If you want to make 2 L of 70% acetone, you would mix 0.70 ml/ml x 2000 ml = 1400 ml acetone with 600 ml water.

To convert from a % solution to molarity, multiply the % solution by 10 to express it in grams/L, then divide by the formula weight:
Molarity = [(grams reagent per 100 ml) x 10] / FW

Example 1: Convert a 6.5% solution of a chemical with FW = 325.6 to molarity:
[(6.5 g/100 ml) x 10] / 325.6 g/mole = [65 g/L] / 325.6 g/mole = 0.1996 M

To convert from molarity to a percent solution, multiply the molarity by the FW and divide by 10:
% solution = (molarity x FW) / 10

Example 2: Convert a 0.0045 M solution of a chemical having FW 178.7 to a percent solution:
[0.0045 moles/L x 178.7 g/mole] / 10 = 0.08% solution

6. Concentrated stock solutions - using "X" units

Stock solutions of stable compounds are routinely maintained in labs as more concentrated solutions that can be diluted to working strength when used in typical applications. The usual working concentration is denoted as 1X. A solution 20 times more concentrated would be denoted as 20X and would require a 1:20 dilution to restore the typical working concentration.

Example: A 1X solution of a compound has a molar concentration of 0.05 M for its typical use in a lab procedure. A 20X stock would be prepared at a concentration of 20 x 0.05 M = 1.0 M. A 30X stock would be 30 x 0.05 M = 1.5 M.

7. Normality (N): Conversion to Molarity

Normality = n x M, where n = the number of protons (H+) per molecule of the acid.

Example: Concentrated sulfuric acid (36 N H2SO4) has two protons per molecule, so its molarity = N/2. Thus 36 N H2SO4 = 36/2 = 18 M.
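A minimal sketch (hypothetical Python) of the conversion formulas above:

```python
def percent_to_molarity(percent_w_v, formula_weight):
    """% (g per 100 ml) -> molarity: multiply by 10 to get g/L, then divide by FW."""
    return percent_w_v * 10 / formula_weight

def molarity_to_percent(molarity, formula_weight):
    """Molarity -> % (g per 100 ml): multiply by FW to get g/L, then divide by 10."""
    return molarity * formula_weight / 10

def normality_to_molarity(normality, protons_per_molecule):
    """M = N / n, where n is the number of acidic protons per molecule."""
    return normality / protons_per_molecule

print(round(percent_to_molarity(6.5, 325.6), 4))     # 0.1996 M
print(round(molarity_to_percent(0.0045, 178.7), 2))  # 0.08 (% solution)
print(normality_to_molarity(36, 2))                  # 18.0 M for 36 N H2SO4
```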
http://abacus.bates.edu/~ganderso/biology/resources/dilutions.html
13
20
Most children have some degree of measurable hearing. Only a very small percentage of children with hearing loss experience complete deafness. The degree of hearing loss refers to how much hearing loss is present. There are several broad categories used to describe the degree of hearing loss. The numbers listed below represent the softest (lowest-intensity) sounds a person can hear.

People with slight hearing loss (20 – 25 dB) may have trouble hearing faint (quiet) speech. They may also have to listen carefully in important or difficult situations.

For people with mild hearing loss, understanding speech can be difficult. They can usually hear well if they are listening to a single person speak in a quiet situation. However, they have trouble hearing faint or distant speech. People with mild hearing loss usually can benefit from hearing aids or FM systems.

Listening is a strain for people with moderate hearing loss. While they can understand what a person says if the person is close, it can be difficult for them to hear someone else in a noisy environment. People with moderate hearing loss may miss 50 – 75% of speech in a conversation, and often need to have part of the conversation repeated. They usually can benefit from hearing aids or FM systems.

People with moderate – severe hearing loss can miss up to 100% of speech in a conversation, and need a conversation to be very loud. Again, people with moderate – severe hearing loss usually can benefit from hearing aids or FM systems.

People with severe hearing loss may hear a loud voice if the person speaking is one foot (12 inches) away from their ear. They may be able to identify noises in their environment (for example, paper rustling or traffic outside), but often appear to be ignoring conversation from the people around them.

People with profound hearing loss are considered to be deaf. They may detect very loud sounds, and are usually aware of vibrations (movements) around them. People with this degree of hearing loss may rely on vision (sight), rather than hearing, as their main way of communicating with other people. They may benefit from treatments or therapies that amplify sound (make sounds louder), but may benefit more from a cochlear implant.

People may have syndromic hearing loss (hearing loss associated with other symptoms or features of a condition) or nonsyndromic hearing loss (usually caused by a change within one of the genes related to hearing). Hearing loss can also happen due to exposure to certain viruses, diseases, or drugs; long-term exposure to noise; complications related to birth; tumors; or aging.
http://www.in.gov/isdh/24478.htm
13
16
Tamaulipas is home to Tampico, one of the country's first ports, as well as many major theater groups! According to archeological evidence, nomadic tribes may have occupied the region as early as 6000 B.C. The first settlements are thought to have occurred around 4000 B.C. Tamaulipas was originally populated by the Olmec people and later by Chichimec and Huastec tribes. Between 1445 and 1466, Mexica (or Aztec) armies commanded by Moctezuma I Ilhuicamina conquered much of the territory and transformed it into a tributary region for the Mexica empire. However, the Aztecs never fully conquered certain indigenous groups in the area, including the Comanche and Apache.

In 1517, Francisco Hernández de Córdoba led the first group of Spaniards into the area. They were defeated by Huastec natives, as were the forces of Francisco de Garay, who arrived a year later. In 1522, Hernán Cortés defeated the indigenous forces. He and his army captured the city of Chila, but the lack of mineral deposits and continued resistance by the natives discouraged their expansion into the northern regions. Efforts to convert the Indians to Catholicism during this time also failed. At the beginning of the 17th century, Franciscan priests founded missions in the region and began converting the indigenous populations. During this time, widespread cattle and sheep ranching by the Spanish bolstered the area's economy while forcing native populations from their original lands. Occasional revolts by the native tribes weakened colonial interest in the region.

During the 18th century, French colonizers settled north of Tamaulipas in what is now the state of Louisiana. Seeing this as an encroachment and a threat to their colonization efforts, the Spanish launched a number of initiatives to populate the local region and promote new economic activities. However, efforts to bolster the population were largely unsuccessful, and economic development was hampered by inadequate transportation methods.

Beginning in 1810, the inhabitants of Mexico clashed with the Spanish colonists in what would become the Mexican War of Independence. Spanish royalist troops succeeded in putting down the rebellion, but by then the spirit of independence had settled upon the people. By 1821, Mexico had gained its independence under the terms of the Plan of Iguala. Three years later, Tamaulipas joined the Mexican federation of states. Nationalist troops from Tamaulipas fought against Texas secession in 1836, but after the Mexican-American War, Tamaulipas lost all of its territories north of the Río Grande to the United States.

Much of the remaining 19th century was characterized by political instability until the state began to experience economic development during the "Porfiriato Period" (1876–1910), when President Porfirio Díaz was in power. The Mexican Revolution, which began in 1910, also reached Tamaulipas. As with all Mexican states, Tamaulipas adopted the country's new national constitution. During the 20th century, Tamaulipas expanded its economy, thanks to increased commerce with the United States. Following the adoption of the North American Free Trade Agreement (NAFTA) in 1994, which established trade provisions among Mexico, the United States, and Canada, Tamaulipas emerged as a manufacturing center for products exported to the United States. At present, about 350 maquiladoras (assembly plants) employing over 150,000 workers line the U.S.-Tamaulipas border.
In the southern part of the state, chemical and oil production facilities manufacture acrylic fiber, plastic resins, synthetic rubber, and polymers. Tamaulipas, which is part of the fertile lowland area known as "La Huasteca," has an ideal agricultural climate and is Mexico's main producer of sorghum; other major crops are corn, cotton, and wheat. More than half the state's land area is devoted to raising cattle, goats, pigs, and sheep. Tamaulipas' prime location on the Gulf of Mexico makes it a center of the country's fishing industry. The primary harvests include shrimp, crayfish, oysters, and crabs. Freshwater fish such as tilapia and catfish are also abundant throughout the state. Agriculture, fishing, and tourism are the state's primary economic activities, although manufacturing accounts for about 21 percent of the total. Trade activities compose about 19 percent of the economy, followed by service-based companies at 17 percent, transportation and communications at 14 percent, finance and insurance at 13 percent, agriculture and livestock at 9 percent, construction at 6 percent, and mining at 1 percent.

Facts & Figures
- Capital: Victoria (278,773)
- Major Cities (population): Reynosa (540,207), Matamoros (460,598), Nuevo Laredo (367,504), Tampico (313,409), Madero (196,544)
- Size/Area: 30,650 square miles
- Population: 3,024,238 (2005 Census)
- Year of Statehood: 1824
- Tamaulipas gets its name from a Huastec word, Tamaholipa, which can be translated as either "place where people pray" or "place of the high mountains."
- The state's coat of arms depicts Tamaulipas' agricultural and livestock prosperity, the mechanization of its countryside, the region's industrial development, and its abundant fishing resources. Bernal de Horcasitas Hill, a notable landmark in the region, and the crest of José Escandón y Helguera, who colonized the state, are also portrayed on Tamaulipas' coat of arms.
- Tamaulipas is renowned for its theater groups, including one that performs entirely in mime.
- La Picota, a favorite dance of the region, features dancers who jump, leap, and swirl in spirited choreography. The rhythmical motions of the dance, which are thought to be derived from Scottish folk dancing, are accompanied by a clarinet and drum.
- Geographically, Tamaulipas is one-third the size of the state of Chihuahua and fifteen times larger than Morelos.
- The Festival Internacional Tamaulipas (Tamaulipas International Festival), held each October, features cultural and artistic events that include exhibits, plays, concerts, and cinema. The event attracts throngs of people annually and stirs the population's passion for Mexican culture and heritage.
- Tampico Port in Tamaulipas is one of Mexico's first exporting ports. While oil is the primary commodity exported from Tampico, it also ships silver, copper, lumber, wool, hemp, and other agricultural products all over the world.
- Jimmy Buffett's best-selling 1977 album, Changes in Latitudes, Changes in Attitudes, features the song "Tampico Trauma," a humorous tale of his misadventures while visiting the Mexican city.

Strategically located at the Mexican-American border, the municipality of Matamoros features many historical buildings and attractions. The Main Square downtown is home to monuments honoring Miguel Hidalgo, founder of the Mexican War of Independence movement, and Benito Juárez, considered by many to be Mexico's greatest leader. The Puerta Mexico, or New Bridge, inaugurated in 1928, connects Matamoros with the city of Brownsville, Texas.
The Teatro de la Reforma (also called "The Opera Theatre") in Matamoros was originally built in 1865. In 1904, it witnessed a historic moment when Mexico's national anthem was performed there for the first time by its composer, Don Jaime Nuño. The Cathedral of Nuestra Señora del Refugio, which was built in 1832, is another point of interest in the state. In 1958, Pope John XXIII created the new Diocese of Matamoros and designated the church as a cathedral.

One of the most beautiful beaches along the Tamaulipas shoreline and an ideal spot for sport fishing is La Barra del Tordo. The Carrizal River meanders along the beach's shores, forming a complex ecosystem with rich and abundant vegetation and fauna, including the Lora turtles that come to the beach every year to reproduce. Among the state's more inviting beaches are Altamira and the Golden Dunes, located in Altamira, and Miramar, which draws countless visitors annually. Bagdad Beach, east of Matamoros, hosts the ever-popular El Festival del Mar (Festival of the Sea) each year.

Tampico's historical downtown features architectural landmarks, such as the Cathedral of Tampico, the Maritime Customs Building, and the Pirámide de las Flores (Flower's Pyramid), that showcase the city's culturally diverse past and attract tourists from all over the world.

How to Cite this Page: "Tamaulipas." The History Channel website, 2013, http://www.history.com/topics/tamaulipas (accessed June 19, 2013).
http://www.history.com/topics/print/tamaulipas
13
16
Ecosystems of the Rocky Mountains

High Peaks and Deep Canyons

The Rocky Mountains were formed, and are influenced today, by continuing interrelated processes of geology, climate, fire, and, more recently, human activity. The Rocky Mountains are young, geologically speaking (only about 65 million years old), yet there is evidence of every geological process known to build mountains. Multiple events uplifted, folded, carved, erupted, and eroded the landscape over millions of years. The most recent major event occurred between 70 million and 40 million years ago. Plates of the earth's crust collided repeatedly, folding and piling on top of each other. This uplift drained an ancient sea that once occupied the center of what is now North America, and formed what we know today as the Rocky Mountains. Volcanoes, glaciers, and erosion have further defined and reshaped these mountains and valleys; signs of glacial and volcanic activity are evident throughout the region.

[Photo: Evidence of glacial activity abounds. Here, an alpine lake fills a glacially carved cirque (steep-sided semicircular hollow) in Payette National Forest, Idaho. Photo by T. Demetriades-Josophene, U.S. Forest Service.]

Current geologic activity is most obvious in Yellowstone National Park, which is underlain by molten rock that fuels its famous geysers. But the process of mountain shaping can be subtle, too. Wind, rivers, and streams erode mountains, while glaciers continuously gouge and scrape the rock over which they move. Water seeps into rocks, breaking them as it freezes and expands. Mountains may look static, but looks can be deceiving.

The Way West

The shape of the Rocky Mountains determined the routes of American Indians and, later, early settlers, who sought the easiest way through the rugged terrain. This was through mountain passes, the same routes as many highways traverse today. Historically, the easiest pass was South Pass in Wyoming. South Pass is a gently sloping rise that crosses the Continental Divide in the Wyoming Basin, a large upland plateau. An estimated half million pioneers bound mostly for Oregon, California, and Utah traveled through this pass. Passes in the southern Rockies are steeper and more hazardous. And to the north, explorers Lewis and Clark ventured over treacherous Lolo Pass, an ancient Indian route crossing the rugged Bitterroot Mountains along the Idaho-Montana border.

Extreme wind and cold characterize the forest-tundra transition area between the dense subalpine forest and the treeless alpine tundra. Here, stunted trees grow in isolated clumps. "Flag trees" indicate the prevailing wind direction, with branches surviving only on their leeward sides. On the coldest and windiest sites, trees abandon their erect growth form and become shrubby mats. Limber and bristlecone pine trees grow into twisted forms as they cling to rocky knobs and ridges. Limber pines' flexible branches bend without breaking; the tree's scientific name, Pinus flexilis, refers to this adaptation. Bristlecone pines grow a limited number of new needles each year; each needle can perform photosynthesis for 10 to 15 years.

[Photo: The Gunnison River in Colorado carved the Gunnison Gorge over millions of years. Managed by the BLM and the National Park Service, this is a popular recreation area and also home to bighorn sheep, elk, deer, and bald eagles.]

Key alpine plant adaptations are low-growth forms and small sizes.
Many alpine plants also have extensive root systems to tap scarce moisture and nutrients, as well as to stay firmly anchored in windy conditions. Dense coverings of hairs protect sensitive leaves, shoots, and flowers from the drying effects of the constant wind. Only a few animals can withstand the harsh winters of the alpine zone. The yellow-bellied marmot is an alpine rodent that hibernates through the winter, while the pika, a small, round relative of the rabbit, stays active, living off stored grasses and herbs. White-tailed ptarmigan have densely feathered feet that function like snowshoes. The chickenlike bird changes plumage from summer dark to winter white. Weasels undergo a similar change in color.

[Photo: The low-growing dwarf clover is adapted to living at high altitudes.]

Adapting to Fire

Fire is a natural part of forest ecosystems in the Rocky Mountains, and many plants have adapted to fire.

[Photo: Students at the McCall Outdoor Science School (MOSS) in Idaho smell the sweet scent of the ponderosa pine in Ponderosa State Park. The tree is also known as yellow pine for the distinctive hue of mature trees, and by children as the "puzzle tree" for its bark structure, which resembles puzzle pieces. Photo by University of Idaho, MOSS.]

Ponderosa pine forests depend on fire to create open stands that let in sunlight. Low-intensity fires clear the forest floor and assist germination of ponderosa pine seeds. Seedlings thrive in the sunlit forest. The ponderosa pine's thick bark and deep roots protect it from fire, and the trees' large buds are protected by insulating scales. In addition, mature trees tend to "self-prune," losing their lower branches. This keeps a surface fire from becoming a "crown" fire, one that advances from treetop to treetop. Lodgepole pine trees produce both closed and open cones. Closed (serotinous) cones remain on the tree and are sealed with heavy resin until the heat from a high-intensity fire opens them and releases the seeds by the millions.

A Place to Roam

Mountains often provide the only places where large predators can find unfragmented habitat to roam and hunt. Wolves and grizzly bears, for example, historically existed in a variety of ecosystems, but today exist in the Rocky Mountain region mostly in small populations in or near parks and wilderness areas. Both species are at the top of the food chain, helping to control the populations of other animals such as deer and elk.

[Photo: Canada lynx. The southern part of its range extends into the Rocky Mountains. Photo by Milo Burcham, U.S. Forest Service.]

The presence of top predators within an ecosystem can benefit many species of plants and animals. As part of an effort to restore the Greater Yellowstone Ecosystem, wolves were reintroduced to the Rocky Mountains in 1995. As their numbers have increased, the behavior of elk has changed. To protect themselves, the elk now move around more frequently. This, coupled with a slight decrease in elk populations, has relieved pressure on the riparian plants that elk eat. Trees such as willows, cottonwoods, and even aspens are now flourishing, providing improved habitat for smaller animals and birds. A greater density and diversity of plants also assists in filtering water and combating erosion.

[Photo: Elk not only need large habitat areas, but also require habitat corridors that connect their summer and winter ranges. Photo by Betsy Wooster, BLM.]

The Canada lynx is one of several wildcat species found in the Rocky Mountains.
This predator follows a natural 7–10 year population cycle of "boom and bust" tied to the population of its favorite food, the snowshoe hare.

The Rocky Mountains are managed by a variety of public agencies and private landowners. This public-private "checkerboard" ownership pattern evolved during the West's settlement, which started in the fertile mountain valleys. Much of the high mountain land was eventually set aside as National Forests and National Parks, while odd-numbered sections of land were given to the railroads to encourage construction of rail and telegraph lines. Although much of the Rocky Mountain area has been consolidated into manageable blocks of public or private land, the West's distinctive checkerboard ownership pattern still exists in some areas today. Land that was neither privately settled nor designated as parks or forests was called "public land." Today BLM manages this public land for multiple uses (such as outdoor recreation and oil and gas leasing) while protecting the natural, historical, and other resources of these lands.

A major challenge for land managers is to ensure suitable habitat for large and migratory wildlife species. Public lands generally provide the largest habitat blocks, but wildlife do not necessarily stay within these boundaries. For example, the reintroduced wolves (previously mentioned) now number some 300. They sometimes stray onto private lands and prey on livestock. Defenders of Wildlife, a non-governmental organization, reimburses ranchers for their losses, and the U.S. Fish and Wildlife Service works with other agencies and groups to address problems involving wolves. But the presence of wolves remains controversial.

Grizzly bears also need large, remote habitat blocks. Their numbers were reduced to just several hundred in the continental United States during the last century, but have increased to an estimated 1,200 since federal protection was granted in 1975. Still numerous in Canada and Alaska, grizzly bears in the lower 48 states are concentrated in the Yellowstone and Glacier National Park/Bob Marshall Wilderness areas of Wyoming and Montana. Lack of suitable habitat has been a major limiting factor, but in some areas, grizzly bear populations are expanding with better habitat management and greater public support.

A Growing Thirst for Water

Most large cities are located at the foot of the mountains, with the largest population centers along the Front Range on the eastern side of the mountains. As the area's population grows, so does the need for water, much of which comes from mountain streams and rivers. The Denver metropolitan region gets much of its water through extensive transmountain diversions. These diversions use reservoirs, underground tunnels, and ditches to move water from the wetter western side of the mountains to the drier eastern side, where four out of five Coloradans live. As a result, some counties on the west side lose up to 65 percent of their streamflow.

Living With Fire

New homes in or adjacent to wildlands (undeveloped areas) create special challenges for land managers. Current federal policy seeks to reduce hazardous fuels that feed wildland fires. Millions of hectares of western forests contain high accumulations of flammable fuels, including "ladder fuels" (small trees and brush that carry surface fires up into tree crowns). To deal with hot crown fires, which pose risks to homes and communities, federal agencies are using thinning and prescribed (controlled) fires to reduce fire fuels.
These fuel-reduction techniques help protect not only homes, but also power grids, drinking water, critical habitat, soils, and air quality.

Hundreds of nonnative invasive plant species are affecting the economy and ecology of the Rocky Mountains. European cheatgrass has invaded significant portions of the region. Cheatgrass is highly flammable and sprouts quickly after a fire, increasing fire frequencies from every 60 years to every 2–4 years in some areas. Mountain shrub habitat is affected by invasive plants such as leafy spurge and a number of knapweed species. Some wetlands and streamsides at middle to lower elevations are covered with purple loosestrife, an invasive ornamental plant that crowds out the native wetland plants. This alteration of woodland and wetland habitat by nonnative invasive plants is believed to be a contributing factor in the general population decline of woodland bird species that spend the summer in the Rockies. Federal, state, county, and local land managers, and landowners, are mounting aggressive campaigns to stop the spread of these invasives into uninfested areas.

Residential development is on the increase throughout the Rocky Mountains. Several key factors are spurring the West's growing population, including new jobs in recreation and energy development, new technology that allows workers to live almost anywhere, and an influx of retirees. As more private land is developed, public land becomes even more important in ensuring open space for people and for wildlife. The Greater Yellowstone Ecosystem is an example. At 7.3 million hectares, it is the largest intact temperate ecosystem in the world, containing two national parks, seven national forests, and 12 wilderness areas, as well as private land, much of it under intense development pressure. Here, large private ranches and farms adjoining public land provide much-needed open space, as well as edge habitat (the zone between two different types of habitat), and transitional zones between wilderness and inhabited areas. But many of these private properties are being subdivided into new residential developments, bringing cultivated lawns, pets, and other impacts to wildlife and water quality.

Partnerships are Key

Some creative partnerships have emerged to slow this loss of open space. The Green River Valley Land Trust in Wyoming encourages ranchers to place conservation easements on their property. A conservation easement limits development to protect certain resources while the landowner retains ownership.

[Photo: High in the Rocky Mountains near Gunnison, Colorado, students from Western State College are studying the winter habitat of the Gunnison sage-grouse. Here, students are measuring sagebrush density, height, and percentage of leaf cover. Such research will help land managers plan for the birds' winter habitat needs in an area where development and associated activities are increasingly contributing to habitat loss. The Gunnison sage-grouse currently occupies only about 10 percent of the potential habitat available to it when settlers of European descent first began arriving in the West. Pictured are students John Stanek, Matt Vasquez, and Mary Oswald. Photo by Jessica Young, Ph.D., Western State College of Colorado.]

Along the South Fork and Henry's Fork of the Snake River, proposed residential and resort developments threaten to fragment the river corridor habitat and diminish recreational opportunities by blocking public access.
The BLM is working with partners to protect portions of the river corridor and public access to it. One recent conservation easement is preserving traditional land uses where a 60-lot subdivision, nine-hole golf course, and resort were proposed.

Because so many issues cut across jurisdictional boundaries, people are joining forces to tackle tough management challenges. The Blackfoot Challenge in western Montana, for example, involves 30 active partners, including private landowners, federal and state land managers, and local businesses. They have worked to restore and protect the Blackfoot River watershed in Montana through such efforts as planting native grasses, removing livestock feedlots from streamsides, and securing conservation easements. But a primary goal is to protect the rural lifestyle of the people who live in the area by keeping large-scale development at bay. Toward this end, the partnership provides tours and workshops on rural values, sustainable agriculture, and alternative income sources such as ecotourism and guest ranching.

[Photo: The BLM worked with The Nature Conservancy to acquire a conservation easement to protect Fisher Bottom on the South Fork of the Snake River, which flows west from the Rockies through one of the most diverse ecosystems in Idaho. Photo by Karen Rice, BLM.]

[Photo: Blackfoot River is born high on the Continental Divide and flows more than 200 km through several ecosystems in Montana. The area encompasses some of the most productive fish and wildlife habitat in the northern Rocky Mountains. Photo by Lee Walsh, BLM.]

Rocky Mountain Tourism

With their scenic views, abundant water, substantial snow, and diversity of wildlife, the Rocky Mountains offer a variety of outdoor recreation opportunities, such as skiing, hiking, white-water rafting, fishing, and all-terrain-vehicle (ATV) riding. The popularity of the Rocky Mountains requires land managers to enhance recreational and visitor services while raising public awareness about responsible ways to recreate on the public lands. The effects from such recreation are considerable. For example, food and trash left by visitors attract bears and other wildlife, which often become accustomed to humans. When hiking trails become crowded, some people may create their own trails and shortcuts. This can damage plants and compact the soil. Compacted soils do not absorb water, leading to erosion and the formation of gullies. Along streams, vehicle tires can damage fragile riparian and aquatic ecosystems or spread seeds of invasive plants to new areas. In response, land managers are working with a variety of recreation groups to implement education and outreach campaigns to reduce such impacts. These cooperative efforts involve local rider groups whose members help the government monitor vehicle activity on public land.

In fulfilling their multiple-use mission, BLM land managers seek to offer the widest array of recreational opportunities while minimizing impacts. Sometimes innovative approaches are necessary. For example, some river managers use a lottery system to keep river raft and boat traffic manageable during summer months. To maintain popular recreation sites and stay within budget, BLM sometimes collects special recreation fees. In addition, thousands of volunteers work in exchange for free campground sites and annual passes. Programs such as "Leave No Trace" and "Tread Lightly!" teach recreationists how to enjoy and respect the outdoors, fostering a personal outdoor ethic.
Going Underground: Mining and Energy Development

The Rocky Mountains contain valuable minerals and energy resources that contribute to America's economy, energy independence, and quality of life. Sources under the jurisdiction of the Interior Department, including BLM-managed lands, account for more than 30 percent of domestic energy production. Many Rocky Mountain towns, such as Silverton, Colorado, and Butte, Montana, owe their beginnings to mining. The discovery of gold and silver attracted thousands of miners in the 1800s, and successful prospectors obtained title to their claims through the federal government. Lead, zinc, and copper later spurred even more development, and today, copper, trona (soda ash), and coal are among the resources extracted from the Rocky Mountains.

[Photo: Bikers ride along the Gold Belt Tour National Scenic Byway in Colorado, past one of many historic mine sites. Photo by Ruth Zirkle.]

In a previous era, mining often damaged the landscape. Today's mining is carried out with better technology and under laws and regulations aimed at ensuring environmental protection.

Rich Natural Gas Fields

While most U.S. oil and natural gas production takes place offshore, public lands in the West supply a significant amount of natural gas, a clean-burning fossil fuel mostly made up of methane. An oil and gas inventory by the Interior and Energy Departments in 2003 found that federal lands in five key western geologic basins contain nearly 140 trillion cubic feet of natural gas, enough to supply the 56 million homes that use natural gas for the next 30 years. One area of interest for natural gas development is the Rocky Mountain Front. Here, the geological history set up unique conditions for the creation of rich supplies of natural gas. The key is an ancient seabed that provided organic debris as source material. Deep underground, heat turned this organic debris into oil and gas. Geologic movements of rock over time created underground spaces (reservoirs) that stored the oil and gas, with barriers on top that trapped them. As a result, experts believe there is a large reserve of natural gas trapped along the eastern mountain front in Wyoming and Montana. New technology makes extraction more feasible.

Balancing Multiple Uses

Because of an increasing demand for energy, the BLM is issuing more permits to drill on public land in and adjacent to the Rocky Mountains. Facilitating such energy development is a component of the BLM's multiple-use mission, though there is ongoing public discussion over how much energy development should occur in the Rocky Mountain region. On BLM-managed public land, a lease and a permit to drill are required to remove oil and gas, and removal must be consistent with the BLM's land use plans, which are developed through a public process. The BLM issues permits after completing environmental analyses required by law. These permits can include stipulations to protect other resources, including wildlife migration routes and nesting areas. Operators are required to minimize environmental impacts during operation, protect important cultural and historic sites, and restore the disturbed site when drilling is completed. By law, energy development is off-limits in National Parks and in congressionally designated Wilderness Areas. Public land managers seek to balance the national need for domestic oil and gas supplies with requirements to protect ecosystems while meeting all legal responsibilities.

The Rocky Mountains have always inspired awe and respect.
Once considered a formidable barrier to be crossed on the way to places farther west, the Rockies today are one of the most sought-after destinations in the world. As more mountain land becomes developed, the open space provided by the public lands becomes increasingly significant. The challenge for public land managers is to conserve the natural resources that attracted people to the mountains in the first place, while allowing continued use and enjoyment of these diverse mountain ecosystems for many years to come. This can be accomplished with the active participation of those who use and care about the public lands, which belong to all Americans.
http://www.blm.gov/wo/st/en/res/Education_in_BLM/Learning_Landscapes/For_Teachers/science_and_children/mountains_majesty/index/mountains1.html
13
24
A derivative, one of the fundamental concepts of calculus, measures how quickly a function changes as its input value changes. Given the graph of a real curve, the derivative at a specific point equals the slope of the line tangent to the curve at that point. For example, the derivative of y = x^2 at the point (1,1) tells how quickly the function is increasing at that point. If a function has a derivative at some point, it is said to be differentiable there. If a function has a derivative at every point where it is defined, we say it is a differentiable function. Differentiability implies continuity.

One of the main applications of differential calculus is differentiating a function, or calculating its derivative. The First Fundamental Theorem of Calculus explains that one can find the original function, given its derivative, by integrating, or taking the integral of, the derivative.

The derivative of the function f(x), denoted f'(x) or $\frac{dy}{dx}$, is defined as:

$f'(x) = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h}$

In other words, it is the limit of the slope of the secant line to f(x) as it becomes a tangent line. If the tangent line is increasing (which it is if the original function is increasing), the derivative is positive; if the function is decreasing, the derivative is negative. For example, if f(x) = mx + b is a line, then f'(x) = m; that is, the derivative of any line is equal to its slope.

Higher order derivatives

A higher order derivative is obtained by repeatedly differentiating a function. Thus, the second derivative of f(x), written $\frac{d^2y}{dx^2}$, is the derivative of the first derivative, and so forth. A common alternative notation is f''(x), f'''(x), and f^(n)(x) for the second, third, or nth derivative.

A partial derivative is obtained by differentiating a function of multiple variables with respect to one variable while holding the rest constant. For example, the partial derivative of F(x,y) with respect to x, written $\frac{\partial F}{\partial x}$, represents the rate of change of F with respect to x while y is constant. Thus, F could be windchill, which depends both on wind velocity and actual temperature; the partial derivative of F with respect to wind velocity represents how much windchill changes with wind velocity at a given temperature. Partial derivatives are calculated just like full derivatives, with the other variables being treated as constants.

For a function f(x1, x2) of two variables, there are two partial derivatives of first order, f1(x1, x2) and f2(x1, x2), taken with respect to x1 and x2 respectively. These partial derivatives are again functions of x1 and x2, so higher derivatives such as f12(x1, x2) and f21(x1, x2) can be calculated. For many functions, f12(x1, x2) equals f21(x1, x2), so that the order of taking the derivative doesn't matter; though this doesn't hold in general, it is true for a large class of important functions, specifically those whose second partial derivatives are continuous.

In mathematics, derivatives are helpful in determining the maximum and minimum of a function. For example, taking the derivative of a quadratic function will yield a linear function; the points at which this derivative equals zero are called critical points. Maxima and minima can occur at critical points and can be verified to be a maximum or minimum by the second derivative test. The second derivative is used to determine the concavity, or curved shape, of the graph. Where the concavity is positive, the graph curves upwards and could contain a relative minimum. Where the concavity is negative, the graph curves downwards and could contain a relative maximum. A point where the concavity equals zero may be a point of inflection, a point where the concavity is changing sign.

Derivatives are also useful in physics, under the "rate of change" concept.
For example, acceleration is the derivative of velocity with respect to time, and velocity is the derivative of distance with respect to time. Another important application of derivatives is in the Taylor series of a function, a way of writing certain functions like e^x as a power series.
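To make the limit definition and the equality of mixed partial derivatives concrete, here is a minimal numerical sketch. It is not from the source text: the helper names (`derivative`, `partial`) and the two-variable example F(x1, x2) = x1^2 * x2 + x2^3 are illustrative choices of my own, and central finite differences only approximate the limits described above.

```python
# Illustrative sketch only: the functions below are hypothetical choices,
# not taken from the text. Central finite differences approximate the
# limit definition of the derivative and check that mixed partial
# derivatives agree for a smooth function.

def derivative(f, x, h=1e-6):
    """Approximate f'(x) via the limit definition (central difference)."""
    return (f(x + h) - f(x - h)) / (2 * h)

def partial(f, i, point, h=1e-5):
    """Approximate the partial derivative of f with respect to argument i."""
    args_plus = list(point)
    args_minus = list(point)
    args_plus[i] += h
    args_minus[i] -= h
    return (f(*args_plus) - f(*args_minus)) / (2 * h)

# Example 1: f(x) = x**2 has derivative 2x, so f'(1) should be about 2,
# matching the slope of the tangent line at (1, 1).
f = lambda x: x ** 2
print(derivative(f, 1.0))            # ~2.0

# Example 2: any line g(x) = m*x + b has derivative m everywhere.
g = lambda x: 3.0 * x + 7.0
print(derivative(g, 10.0))           # ~3.0

# Example 3 (hypothetical function): mixed partials f12 and f21 agree
# for a function with continuous second partial derivatives.
F = lambda x1, x2: x1 ** 2 * x2 + x2 ** 3
f1 = lambda x1, x2: partial(F, 0, (x1, x2))   # dF/dx1
f2 = lambda x1, x2: partial(F, 1, (x1, x2))   # dF/dx2
f12 = partial(f1, 1, (1.5, 2.0))              # d/dx2 of dF/dx1
f21 = partial(f2, 0, (1.5, 2.0))              # d/dx1 of dF/dx2
print(f12, f21)                               # both ~3.0 (= 2*x1 at x1 = 1.5)
```

Shrinking h in these approximations mirrors the secant-line slope approaching the tangent-line slope in the definition of the derivative.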
http://www.conservapedia.com/Derivative
13
18
The Fate of Spilled Oil

Natural processes that may act to reduce the severity of an oil spill or accelerate the decomposition of spilled oil are always at work in the aquatic environment. These natural processes include weathering, evaporation, oxidation, biodegradation, and emulsification (a simple illustrative sketch of their very different timescales follows this list).

- Weathering is a series of chemical and physical changes that cause spilled oil to break down and become heavier than water. Winds, waves, and currents may result in natural dispersion, breaking a slick into droplets which are then distributed throughout the water. These droplets may also result in the creation of a secondary slick or thin film on the surface of the water.
- Evaporation occurs when the lighter substances within the oil mixture become vapors and leave the surface of the water. This process leaves behind the heavier components of the oil, which may undergo further weathering or may sink to the ocean floor. For example, spills of lighter refined petroleum-based products such as kerosene and gasoline contain a high proportion of flammable components known as light ends. These may evaporate completely within a few hours, thereby reducing the toxic effects on the environment. Heavier oils leave a thicker, more viscous residue, which may have serious physical and chemical impacts on the environment. Wind, waves, and currents increase both evaporation and natural dispersion.
- Oxidation occurs when oil contacts the water and oxygen combines with the oil to produce water-soluble compounds. This process affects oil slicks mostly around their edges. Thick slicks may only partially oxidize, forming tar balls. These dense, sticky, black spheres may linger in the environment, and can collect in the sediments of slow-moving streams or lakes or wash up on shorelines long after a spill.
- Biodegradation occurs when micro-organisms such as bacteria feed on oil. A wide range of micro-organisms is required for a significant reduction of the oil. To sustain biodegradation, nutrients such as nitrogen and phosphorus are sometimes added to the water to encourage the micro-organisms to grow and reproduce. Biodegradation tends to work best in warm-water environments.
- Emulsification is a process that forms emulsions consisting of a mixture of small droplets of oil and water. Emulsions are formed by wave action, and greatly hamper weathering and cleanup processes. Two types of emulsions exist: water-in-oil and oil-in-water. Water-in-oil emulsions are frequently called "chocolate mousse," and they are formed when strong currents or wave action causes water to become trapped inside viscous oil. Chocolate mousse emulsions may linger in the environment for months or even years. Oil-and-water emulsions cause oil to sink and disappear from the surface, which gives the false impression that it is gone and the threat to the environment has ended.
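The very different timescales of these processes can be illustrated with a toy mass-balance sketch. Everything numeric below is a hypothetical placeholder, not data from this page: the split between light ends and heavy residue and the first-order rate constants are assumptions chosen only to show that light ends can be gone within hours while heavier residues persist for weeks or months.

```python
# Toy mass-balance sketch of spilled oil weathering. All fractions and rate
# constants are hypothetical illustration values, NOT measured data: light
# ends are assumed to evaporate within hours, while heavier residues are
# assumed to biodegrade far more slowly.
import math

def remaining_fraction(k_per_hour, hours):
    """First-order decay: fraction of a component still present after `hours`."""
    return math.exp(-k_per_hour * hours)

light_ends_fraction = 0.4      # assumed share of a light refined product (e.g. gasoline)
heavy_residue_fraction = 0.6   # assumed share of heavier, more viscous components

k_evaporation = 0.5            # per hour (assumed) -> mostly gone within ~10 hours
k_biodegradation = 0.002       # per hour (assumed) -> persists for weeks to months

for hours in (1, 6, 24, 24 * 30):
    light = light_ends_fraction * remaining_fraction(k_evaporation, hours)
    heavy = heavy_residue_fraction * remaining_fraction(k_biodegradation, hours)
    print(f"after {hours:5d} h: light ends {light:.3f}, heavy residue {heavy:.3f}")
```

A real fate model would track many more components and processes (dispersion, oxidation, emulsification) with measured rates; this sketch only captures the broad pattern described in the list above.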
http://www.epa.gov/emergencies/content/learning/oilfate.htm
13
27
The Arctic is warming faster than the rest of the globe and is experiencing some of the most severe climate impacts on Earth. One of the most notable is the rapid decline in the thickness, age and extent of sea ice. Thinner and younger ice melts much faster, and scientists are predicting that by 2030 the Arctic Ocean will be entirely ice-free in the summer. Petermann Glacier, located in Northern Greenland, is retreating quickly. Why is that a problem? Sea ice underpins the entire Arctic marine ecosystem, and as it shrinks and thins, there are major repercussions for the Arctic peoples and wildlife. Many polar species depend on the ice to survive - polar bears are the most famous example, but ringed seals also spend most of their time on ice, travelling north to find thicker, more stable ice. The Arctic is still under-explored, and many more species could be affected by the melting ice cap than we even know of. The Arctic is also one of the regions where the feedback effects from global warming are the most dire. As the white ice cap is replaced with dark seas, less and less sunlight is reflected back into space, allowing more warming to happen on the planet. The quantities of methane contained in the permanently frozen ground (permafrost) of the tundra are also a major problem: as the ground melts, more methane (one of the most potent greenhouse gases) is released into the atmosphere, accelerating global warming. In particular, the ice lying on top of the land mass, as in Greenland, will contribute to sea level rise. The ice sheet has been melting faster than ever in recent years, as the summer 2009 Greenpeace expedition demonstrated. These are factors humans cannot control, and if we are to solve the climate crisis, we have to do it before these feedback effects get beyond recovery. As the sea ice recedes, oil companies are seeing an opportunity to move into the Arctic and try to exploit the oil reserves - both onshore and offshore. Cairn Energy has already started exploratory drilling, and while it failed to strike oil in 2010, the company is hoping to start again in 2011. Drilling for oil in the Arctic is, however, extremely dangerous. The sea ice, as well as extreme weather conditions, makes clean-up of possible oil spills very difficult. In addition, the particularities of the Arctic make oil drilling more dangerous than usual. For instance, collisions of icebergs with oil platforms are a very real possibility. The oil industry's way of dealing with icebergs seems careless in comparison with the threat: fire ships are used to hose down and melt icebergs that come too close to the platform. The irony is tragic: if we weren't burning so much fossil fuel like oil, those oil companies wouldn't have the opportunity to move into the Arctic. It is in their interest to keep the tragedy going for as long as possible - and keep the world addicted to oil. Drilling in the Arctic, instead of improving our transport system and forcing car companies to achieve better fuel efficiency, keeps us addicted to oil and only makes the problem worse. As oil companies move into the Arctic, fragile habitats are threatened. The map below highlights where they are going, and their history in the region - as well as the threats they are posing. As the sea ice melts, more exploitation is possible. Geologists believe that one third of the planet's crude oil is located under the sea-bed.
Unfortunately, the lessons that should have been learned from the disasters caused by current oil exploitation - from oil spills to the very global warming that is opening these oil fields for exploitation - are being ignored. Arctic states are trying to extend their sovereignty over the sea-bed as far as they can.

Photo: Close-up of the oil still present 15 years after the original Exxon Valdez spill.

Open seas also mean more shipping. More shipping means more risks of grounding, oil spills and chemical spills. Both poles are far more vulnerable than the rest of the planet to oil spills. The low temperatures, the lack of light, as well as the small number of search and rescue stations, mean that any accident is going to have a long-lasting impact on the environment. Look no further than the infamous Exxon Valdez grounding for this: 20 years after it happened, oil is still leaking from under rocks in Prince William Sound. Oil can irritate the skin of some Arctic species and reduce their defenses against the cold. Bird feathers can also become entangled in the oil, preventing the birds from flying. Most of the time, the animals also ingest the oil and inhale toxic fumes.

Photo: Sea otter at a rehabilitation centre in Valdez after the Exxon Valdez oil spill.

A lot of the pollution in the Arctic doesn't originate there. Air and marine currents bring in the toxic chemicals that are emitted in Eurasia and North America. These are then ingested by fauna and flora. The amount of mercury found in fauna hunted by the aboriginal population currently exceeds the commonly agreed food-safety levels. These levels also contribute to further endangering species already facing a number of threats. Persistent organic pollutants like DDT, DDE and other pesticides also find their way to the Arctic, where they are almost never used, since there is no agriculture there. The greatest tragedy of the Arctic is that the damage from global warming and transboundary pollution is caused thousands of kilometers away, and the native inhabitants are largely powerless to stop the destruction of their environment.
http://www.greenpeace.org/new-zealand/en/campaigns/climate-change/arctic-impacts/arctic-under-threat/
13
26
In December 1898, at the close of the Spanish-American War, Spain surrendered control of Cuba, Puerto Rico, and Guam to the United States. Though Cuba achieved nominal independence in 1902, in 1917 Puerto Rico assumed the status of an American territory, which afforded Puerto Ricans U.S. citizenship and the right to elect their own legislature, but not the full benefits of statehood. When American forces occupied the island in 1898, the Puerto Rican economy and politics underwent a shift that had implications for labor relations. For instance, the introduction of large-scale agriculture produced opportunities for some women to work as cigar strippers. Indeed, women's participation in this new economic order gave them the same economic opportunities as men. As changes in the economy took place, women joined their male partners in the struggle to improve working conditions. Thus, women were active participants in and key members of the labor movement from the very beginning. However, as their role in the economy became more prominent, working women became targets of gender and racial discrimination, and their struggle in many instances was interwoven with issues of race, gender, and class. Viewing women solely as workers in the agricultural economy, some industrial managers attempted to limit and control Puerto Rican women's reproductive choices in order to increase the efficiency of the economic system.

Industrialization and Women in the Workforce

Traditionally, agriculture formed the base of the Puerto Rican economy. Workers from the tobacco and sugar plantations formed gremios, or guilds, which are considered the first attempts at labor organizations. American control brought large corporations and new modes of factory production, which displaced traditional workshop settings and artisanal apprenticeships. A focus on mass production undermined the quality-oriented mode of production of the artisans. In 1929, the Wall Street stock market crash precipitated what came to be known as the Great Depression in the United States. Not isolated to the United States, the stock market crash was part and parcel of a worldwide economic downturn. The depression had devastating effects on the island, creating widespread hunger and unemployment that lasted for over a decade. Many banks could not continue to operate. Farmers fell into bankruptcy. As part of his New Deal efforts to restore economic stability, President Roosevelt created the Puerto Rican Reconstruction Administration (PRRA), which provided for agricultural development, public works, and electrification of the island. This improved infrastructure helped to bolster the Puerto Rican economic situation and relieve some of the devastation from the depression. In 1938 the Partido Popular Democrático (Democratic Popular Party), which adopted the slogan "Bread, Land, and Liberty," was founded under the leadership of Luis Muñoz Marín. In the insular government, Muñoz Marín had served as a member of the local Congress, as the President of the Puerto Rican senate, and eventually as the first elected Governor of Puerto Rico. In its beginnings the Partido Popular Democrático favored independence for the island. In addition, Muñoz Marín both supported the increased industrialization that American companies were bringing to Puerto Rico and was an advocate for workers' rights. During this increasing industrialization, women took on a more prominent role in the new economy.
The demands in the needle industry forced women to leave their homes and work in factories. They worked as seamstresses for low wages. In 1934 Eleanor Roosevelt visited the island and wrote about women's work in the needle industry in her column "Mrs. Roosevelt's Page" for the magazine Woman's Home Companion. Roosevelt also criticized the employment system of those factories. She observed that seamstresses were paid "two dollars a dozen" for making handkerchiefs that took them "two weeks." These demands and this labor exploitation made women realize that they were as oppressed as men. Thus, it is not surprising that women joined the labor movement along with their male partners as a way to resist economic exploitation. Changes in the Puerto Rican economy altered the relationship between the worker and the economy. The result was that the artisan class developed a more defensive attitude, not only toward industrial capitalism, but also toward the political influences that American companies exercised on the island. The labor movement in Puerto Rico organized as a political party and adopted socialist ideology to balance the power of U.S. corporate capitalism. In addition, after the United States took control of the island, workers saw an opportunity to join labor organizations such as the American Federation of Labor. Workers' attempts to combat socioeconomic oppression were facilitated by their socialist critique of the working environment. Organized workers used newsletters and newspapers as tools of information and empowerment. Headlines and announcements from union newspapers demonstrate that the local labor movement considered women's issues important. Collectively, this focus on women's issues allowed female workers from around the island to feel united and to feel that they had a stake in the labor movement and in the political party that represented them. It is important to point out here that the union recognized women not only as factory workers, but also as equal partners in the struggle for fair treatment, a struggle that occasionally brought them into conflict with the police. Women strikers, however, did not always behave within the bounds of traditional gender norms. There were instances in which some women strikers "went out of control" and were put in jail. Though female workers were active participants in the labor movement alongside male workers, it was primarily women who bore the brunt of the coercive and discriminatory reproductive restrictions championed by American industrialists and social workers. From their initial arrival on the island, the Americans were concerned about "public order." Often this alarm was articulated in terms of a concern about "overpopulation" (the average Puerto Rican family included five to six persons) and a perceived lack of self-control on the part of working-class and poor Puerto Ricans. In 1917, with the support of American industrialists, scientists, social workers, and middle- and upper-class Puerto Ricans influenced by neo-Malthusian arguments supporting widespread birth control, public health officials decided to put into effect a plan to control the birth rate on the island. This policy, though presented as scientifically grounded, was based on a set of stereotypes about Puerto Ricans that characterized them as racially inferior and unable to make their own decisions about their fertility.
It is in this way that the insular government developed public policy to control what they labeled a "culture of poverty." In this regard, the fate of Puerto Rican women was in the hands of American scientists and demographers and local government officials. By distinguishing between superior and inferior persons in their policy of population control, these officials implemented policies based on eugenic assumptions that served the needs of U.S. business interests by disciplining the reproductive habits of their workforce. Americans' views about the connection between Puerto Rican racial inferiority and what they saw as an out-of-control birth rate reinforced the assumptions that justified the Americans' presence on the island. One might agree with Nancy Stepan's book The Hour of Eugenics, in which she observes that, for an imperial power like the United States, "Eugenics was more than a set of national programs embedded in national debates; it was also part of international relations." Thus, the attempt to discipline the reproductive habits of Puerto Rican women was not unusual, since they were colonial subjects and the population policy was part of the colonial experiment. In 1948, Puerto Rico elected its first governor. Luis Muñoz Marín had campaigned for economic reforms and structural changes in the political relationship between the United States and the islanders. Muñoz and other political leaders considered agricultural countries to be underdeveloped and industrial countries developed; manufacturing was seen as the means by which Puerto Rico could develop economically. As a result, the government launched an industrialization program known as "Operation Bootstrap," which focused primarily on inviting American companies to invest on the island. These companies would receive incentives, such as tax exemptions and infrastructural assistance, in return for providing jobs for the local population. Under "Operation Bootstrap," the island was to become industrialized by providing labor locally, inviting investment of external capital, importing raw materials, and exporting the finished products to the United States market. Due to the nature of the American companies that participated in the plan, women were recruited to work in these new jobs, such as those in the garment industry. In these jobs, women often functioned as the main provider or co-provider in their households and continued to confound the myth of the male breadwinner. Additionally, women continued to participate in the labor movement, protesting for equal wages and better treatment. During "Operation Bootstrap," the question of the Puerto Rican birthrate remained a public policy issue. Governor Muñoz feared that the plan for industrial modernization might be in jeopardy if he did not take steps to deal with the "overpopulation" problem. Thus, the administration set about educating the population about birth control and encouraging surgical sterilization. In other instances, the local government fostered the migration of Puerto Ricans to the U.S. mainland and overseas possessions such as Hawaii. These measures were highly criticized by civil rights groups and the Catholic Church, who perceived this campaign as an unwarranted attempt to restrict individuals' reproductive rights. In addition, candidates who were challenging the sitting government denounced the discriminatory nature of these public policies.
Governor Muñoz Marín and his cabinet were concerned about the possible electoral repercussions of coercive sterilization policies. Luis A. Ferré, who was Muñoz's political opponent, alleged that some women were being hired on the condition that they undergo surgical sterilization. Ferré maintained that his allegations were informed by women from one of the factories in the town of Cayey. Muñoz's advisors suggested that discrimination against women in the work environment on the grounds that they were not sterilized would be a political blow to the Governor's reelection efforts. As a result, Muñoz ordered a complete investigation, after which he was forced to intervene and reevaluate the role of the government in birth control policies. Official documents, census data, newspaper articles, and photographs from this time period in Puerto Rico's history shed light on the complicated roles women have played in Puerto Rican society. American companies and government officials recognized that working women were necessary for increased industrialization. Women's participation in these new industries opened up the opportunity for them to become household breadwinners and participate in the labor movement alongside men. This participation in industry and in the labor movement, however, also brought with it a slew of government regulations about women's health, primarily birth control and forced sterilization, often based on eugenic assumptions about the racial inferiority of Puerto Rican women. Thus, it is important to continue to reflect upon the profound ways in which gender influenced the relationship between these workers and the economic system.
http://chnm.gmu.edu/wwh/modules/lesson16/lesson16.php?s=0&c=
13