During the SEVEN YEARS' WAR (1756-63) the French abandoned most of the region to the British, and upon the surrender of Montréal in Sept 1760, Britain effectively took over the territory which would later become Upper Canada. After the Treaty of PARIS (1763), the borders of Britain's new Province of Quebec were extended S into the Ohio Valley. When the AMERICAN REVOLUTION began, the permanent European population of western Québec consisted of a few French-speaking settlers around Detroit. By 1783 - the end of the American revolt - what had been a trickle of wartime LOYALIST refugees became a stream; the 5000-6000 who arrived set a tone and fashioned an ideology that would influence much of Upper Canada's future.
The 2-century-old Loyalist myth has these sturdy people overcoming hardship and deprivation but, in fact, few refugees anywhere have been so privileged. Gov Sir Frederick HALDIMAND began Loyalist settlement initiatives, establishing disbanded army regiments in ranges of quickly surveyed townships stretched along the American frontier; in the event of war, these veterans were intended to form a defensive barrier. Three main areas were selected: along the St Lawrence, around Kingston and the Bay of Quinte, and in the NIAGARA PENINSULA. A fourth, near Detroit, was considered, but its scheduled surrender to the US postponed development. Land was granted in lots, with heads of families receiving 100 acres (40.5 ha) and field officers up to and eventually more than 1000 acres (405 ha). Clothing, tools and provisions were supplied for 3 years. Although there were difficulties, these favoured displaced persons did well, and many disgruntled Americans - some simply "land-hungry" - moved N to join them. By 1790 western Québec had a population of nearly 10 000.
The Loyalists who came to Upper Canada, mostly American frontiersmen, were well able to cope with the rigours of new settlements; moreover, they were not politically docile. Many had been in the forefront of political protest in the old American colonies, and although they had not been ready to take up arms for colonial rights, they were prepared to use every legal and constitutional means at their disposal to better their lives. It was their constitutional complaints that caused Britain in 1791 to modify the inadequate QUEBEC ACT of 1774.
The Constitutional Act was a clear response by London to the American Revolution. The excess of democracy that had permeated the colonies to the south would not be allowed in the 2 new provinces of Upper and Lower Canada. A lieutenant-governor was established in each province, with an executive council to advise him, a legislative council to act as an upper house, and a representative assembly. Policy was to be directed by the executive, which was responsible not to the assembly but to the Crown. The Church of England (see ANGLICANISM) was to tie the colonies more firmly to Britain: in Upper Canada, a permanent appropriation of funds "for the Support and Maintenance of a Protestant clergy" was formally guaranteed by the establishment of one-seventh of all lands in the province as reserves, with the proceeds from sale or rental going to the church (see CLERGY RESERVES). Subsequent instructions established crown reserves, another seventh of the land, the revenue from which would be used to pay the costs of the provincial administration. Land ownership, the question that concerned most settlers, was to be on the British pattern of freehold tenure. The SEIGNEURIAL SYSTEM was permanently eradicated in the upper province. The franchise was fairly wide, and the assembly numbered no fewer than 16 members while the Legislative Council was made up of 7.
The first leader of this new wilderness society was Lt-Gov John Graves SIMCOE, whose avowed purpose was to create in Upper Canada a "superior, more happy, and more polished form of government," not merely to attract immigrants but to renew the empire and by example to win Americans back into the British camp. Governmental institutions were established, first at Newark [Niagara-on-the-Lake] and then at the new capital at York [Toronto]. Simcoe used troops to build a series of primary roads, got the land boards and land distribution under way, established the judiciary, grandly abolished SLAVERY and showed a keen interest in promoting Anglican affairs. When he left the province in 1796 he could take pride in his achievements, although he had failed to convert Americans from republicanism and to persuade Britain to turn Upper Canada into a military centre. To Britain, Canada still meant Québec, and Simcoe's elaborate plans for the defence of a western appendage beyond the sea-lanes were unrealistic.
Upper Canada did not flourish under Simcoe's successors, the timid Peter Russell, the busy martinet Gen Peter Hunter, the scarcely busy Alexander Grant and the lacklustre Francis GORE. It was still a remote frontier of fragmented settlement; and land, the only real source of prosperity, had been carelessly carved up in huge grants by lax administrators. Politics began to emerge in provincial life, bearing the mark of the Constitutional Act, which, by its very nature, had created a party of favourites. Lieutenant-governors chose their executive and legislative councils from among men they could trust and understand, who shared their solid, conservative values: Loyalists or newly arrived Britons. These men (later called the FAMILY COMPACT) quickly became a kind of Tory faction permanently in power. They could not conceive of any brand of loyalty to the Crown apart from their own; when opposition arose, as it did frequently over money bills, those advocating extension of the shackled assembly's powers were branded, in exchanges of fiery rhetoric, as Yankee Republicans. But the influence of political critics such as Robert Thorpe, Joseph Willcocks and William Weekes, who were not merely "smoke-makers" but true parliamentary whigs, was to be washed away in the vortex of the WAR OF 1812.
During the war, Upper Canada, whose inhabitants were predominantly American in origin, was invaded, violated and, in parts, occupied. American forces were repulsed by British regulars assisted by Canadian militia. The war strengthened the British link, rendered loyalism a hallowed creed, fashioned martyr-heroes Sir Isaac BROCK and TECUMSEH, brought a certain prosperity, and appeared to legitimize the political status quo. Later commentators would find in it a touchstone for Canadian NATIONALISM and the explanation for much of Canada's persistent public, if not personal, anti-Americanism.
The war ended Upper Canada's isolation. American immigration was formally halted, but Upper Canada received an increased number of British newcomers - some with capital. The economy continued to be tied to Britain's declining MERCANTILISM, and the wheat trade gained primacy among Upper Canadian farmers. Still, the province remained capital-poor: for example, the Welland Canal Co, a public works venture, had to look abroad for investors. The expense of administering the growing colony increased substantially in the early 1820s. Schemes to reunite the 2 Canadas were occasionally considered. In 1822 an effort was made to adjust the customs duties shared with Lower Canada to provide the upper province, which had no ocean port, with a larger share of revenue.
Revenues remained inadequate and the province was plunged into debt, unable to pay the interest on its own badly received debentures without further borrowings. The establishment of the BANK OF UPPER CANADA (1821) and other banks failed to bring real fiscal stability, and neither did the contributions of the massive British-based colonization venture, the CANADA COMPANY. In fact, Canada Co payments were used to defray the salaries of government officials (the civil list) and thus the assembly was sidestepped in its desire to control government revenues.
The War of 1812 consolidated the political control of the province's ruling oligarchy, whose leading light was Anglican Archdeacon (later bishop of Toronto) John STRACHAN. Many commentators have labelled the Family Compact corrupt, although recent evidence suggests that the group was rigorous and methodical in its administration and thorough in its investigation of irregularities. It had a strong sense of duty to development, as shown by its unswerving support of public works such as the WELLAND CANAL. But an oligarchy, enlightened or not, was an anachronism in an age in which democracy was becoming the fashion.
By 1820 opposition in the province was becoming sophisticated but had not yet taken the form of disciplined parties. Some agitators such as Robert GOURLAY, the celebrated "Banished Briton," had earlier dramatized popular grievances in martyrlike fashion. Until the mid-1830s the opposition was led chiefly by more moderate and whiggish politicians such as Dr William BALDWIN, Robert BALDWIN and Rev Egerton RYERSON. Reformer William Lyon MACKENZIE sometimes wanted Upper Canada to be a kind of Jeffersonian dream and envisaged a province composed of yeomen-farmers wedded to the soil, firmly patriotic and ready to become British-American minutemen. At the same time he never failed to laud technological advances. He, like the compact he so vigorously opposed, was actually a stranger to the forces and values that eventually dominated the 19th century: moderate liberalism and increasing industrialism. His REBELLION OF 1837 misfired because, like so many politicians after him, he failed to understand the basic, moderate political posture of Upper Canadians. The rebellion marked the nadir of Upper Canada's never buoyant fortunes. Political chaos was accompanied by economic disaster as the panic of 1837 swept Anglo-American finance and the province found itself over a million pounds in debt.
Mackenzie's violent posturing and his poorly supported rebellion turned out to be unnecessary, since gradual reforms were already under way in both the colony and Britain. The inadequacies of the rigid Constitutional Act were by now apparent. For battered post-rebellion Upper Canada the impetus for real political change could only come from Westminster, although it might be accelerated by advocates in the province, as was later shown by the brief but powerful government of Robert Baldwin and Louis LAFONTAINE. Some immediate change came through the efforts of the earl of DURHAM in 1838. As governor general he spent only a few days in Upper Canada, but he found time for a short, formal visit to Toronto and an interview with Baldwin. He also received sound counsel from his advisers, especially Charles Buller, all of which he placed in his report (see DURHAM REPORT).
Durham set in motion a scheme that had long been considered: the reunification of Upper and Lower Canada. By 1838 Upper Canada had a diverse population of more than 400,000 and stretched W from the Ottawa R to the head of the Great Lakes. It was still a rough-hewn and somewhat amorphous community, poorly equipped with schools, hospitals or local government. Durham, from his lofty imperial perch, argued that a reunion of the provinces would swamp the French of Lower Canada in an English sea and, more important, that the economic potential of both colonies would be enhanced and they would thus be less burdensome to Britain. All this, Durham insisted, would easily be advanced under RESPONSIBLE GOVERNMENT, whereby the Cabinet is rendered responsible to the assembly rather than to the Crown. The errors of the Constitutional Act could be exorcised and unruly politics tempered without fear of further revolts. Britain approved the union, although the granting of responsible government would take almost a decade more. On 10 Feb 1841 Upper Canada's short, unhappy history came to an end. The relationship with its French-speaking counterpart would remain to be worked out under the new legislative union. Meanwhile, Upper Canadians could make some claim to having a collective past and, with the prospects of a rapidly increasing population and improving agricultural opportunities, a collective future. See also PROVINCE OF CANADA.
Author ROGER HALL
Links to Other Sites
The website for the Historica-Dominion Institute, parent organization of The Canadian Encyclopedia and the Encyclopedia of Music in Canada. Check out their extensive online feature about the War of 1812, the "Heritage Minutes" video collection, and many other interactive resources concerning Canadian history, culture, and heritage.
Library and Archives Canada
The website for Library and Archives Canada. Offers searchable online collections of textual documents, photographs, audio recordings, and other digitized resources. Also includes virtual exhibits about Canadian history and culture, and research aids that assist in locating material in the physical collections.
Canada: A People's History
This CBC feature program highlights significant events, issues, and personalities in Canadian history.
French Canada and the Early Decades of British Rule (1760 - 1791)
A digitized copy of a booklet that examines the issues and policies that defined Britain's administration of its North American colonies in the decades preceding the implementation of the Quebec Act and the Constitutional Act. From the Canadian Historical Association and Library and Archives Canada.
The Medical Profession in Upper Canada
View a digitized copy of an 1894 book that chronicles the early years of medical practice in Canada. Click on the pages to advance through the book. From Early Canadiana Online.
This UNB website provides access to extensive references and resources about the United Empire Loyalists and their descendants.
Canadian Geographic: Historical Maps
Take a walk through the history of Canada. Select a year to see the maps and the history related to that era. From the "Canadian Geographic" website.
This nicely illustrated website is dedicated to the history of Fort Frontenac. From the Cataraqui Archaeological Research Foundation.
The website for the Galafilm documentary series "CHIEFS," which is devoted to the life stories of First Nations leaders, including Sitting Bull, Pontiac, Joseph Brant, Black Hawk, and Poundmaker.
Sir John Johnson House National Historic Site
This Parks Canada website features a profile of Sir John Johnson and an illustrated tour of the national historic site in Williamstown, Ontario.
Settlement of Adolphustown
This RootsWeb.com website focuses on the early Loyalist settlements in the Napanee region of Ontario.
John Graves Simcoe
This Archives of Ontario website profiles John Graves Simcoe, leader of the Queen's Rangers during the American Revolution and the first Lieutenant-Governor of Upper Canada.
A historical overview of the political turmoil and military action that engulfed Lower and Upper Canada during the Rebellions of 1837 – 1838. Many illustrations and interesting historical minutiae.
Archives of Ontario: Black History
A selection of archival documents that relate to Black history in Ontario. An Archives of Ontario website.
Sir Isaac Brock
A biography of Sir Isaac Brock, a colonial administrator and British officer who was lauded as a hero of the War of 1812. From the Dictionary of Canadian Biography Online.
A Collector's Passion - The Peter Winkworth Collection
View an extensive collection of distinctive paintings that document more than four centuries of Canadian history. Also features artists' biographies and notes about specific paintings. From the Peter Winkworth Collection of Canadiana at Library and Archives Canada.
Fathers of Confederation
Biographies of the Fathers of Confederation are part of the "Canadian Confederation" website from Library and Archives Canada. Includes historical photographs and other archival resources.
Toronto Public Library
The website for the Toronto Public Library. Check out the library's many collections on music, history, science fiction and fantasy, genealogy, and many other themes that may be of interest to you.
Invasion Repulsed, 1812
This capsule history of the War of 1812 documents the primary issues that determined the course of the war and some of its outcomes. From the website for The Canadian Atlas Online.
The Rebellions of 1837-1838
Learn about the simmering political and social issues that set off the insurrections in Lower and Upper Canada from 1837 to 1838. Features biographies of leading figures, great illustrations, maps and snippets of some of the fiery oratory of the time. Part of the Histori.ca “Peace and Conflict” educational website.
Search for historical maps of specific locations in Canada at this website from Research Collections, McMaster University Library.
This website describes artifacts retrieved from the apparent remains of the H.M.S. Speedy, which sank in a violent storm in 1804. Scroll down the page for more information about the historical significance of this shipwreck.
The Constitutional Act
Read an online digitized copy of the landmark "Constitutional Act," a decree signed by King George III of England on June 10, 1791, that created the provinces of Upper and Lower Canada. On its pages are details pertaining to the establishment of effective government institutions, the responsibilities of the lieutenant governor, the role of the church, and more. From Canadiana Online.
Sir John Harvey
View an illustrated biography of military officer and colonial administrator Sir John Harvey. From the website "The New Brunswick Land Company & The Settlement of Stanley and Harvey."
Sir Roger Hale Sheaffe
A biography of Sir Roger Hale Sheaffe, British army officer in the War of 1812 and colonial administrator. From the Dictionary of Canadian Biography Online.
A biography of John Strachan, teacher, clergyman, officeholder, and bishop. Also provides much detail about the history of Upper Canada. From the Dictionary of Canadian Biography Online.
Choosing Canada's capital: conflict resolution in a parliamentary system
Read excerpts from a book that details the political considerations and negotiations concerning the location of Canada's capital in the 19th century. From Google Books.
View a series of video clips that chronicles the attack by US warships under the command of Captain Isaac Chauncey on HMS Royal George and Kingston, Ontario, in the War of 1812.
Drummond to Bathurst
See a digitized copy of a letter from Sir Gordon Drummond to Henry Bathurst regarding his reaction to Major General de Rottenburg's proclamation of martial law in Upper Canada in 1813. See page 441 for Bathurst's reply to Drummond and subsequent documents on this topic. From "Documents Relating to the Constitutional History of Canada, 1791-1818" at canadiana.org.
This overview of the political history of Upper Canada is part of the "Canadian Confederation" website at Library and Archives Canada. Also features historical maps.
Facebook: Canada's History Magazine
Join the conversation about noteworthy events and personalities in Canadian history.
The Early Political and Military History of Burford
See the full text of an illustrated 1913 book about the early history of southern Ontario. From the Brantford Library.
A biography of Charles Duncombe, physician, politician, and a leader of the rebellion of 1837 in Upper Canada. From the Dictionary of Canadian Biography Online.
Queen Victoria's journals
See brief comments about the rebellions in Upper Canada in the entry for Tuesday, 16 January 1838, in a digitized copy of Queen Victoria's journals. Search or browse this site for other references to Canada and political figures involved in Canadian affairs during the reign of Queen Victoria. From the website "Queen Victoria's Journals."
http://www.thecanadianencyclopedia.com/articles/upper-canada
Questions that drive interest, applications that illustrate concepts, and the tools to test and solidify comprehension.
Students come into their first Economics course thinking they will gain a better understanding of the economy around them. Unfortunately, they often leave with many unanswered questions. To ensure students actively internalize economics, O'Sullivan/Sheffrin/Perez use chapter-opening questions to spark interest on important economic concepts, applications that vividly illustrate those concepts, and chapter-ending tools that test and solidify understanding.
Table of Contents
PART 1 Introduction and Key Principles
1 Introduction: What Is Economics?
2 The Key Principles of Economics
3 Exchange and Markets
4 Demand, Supply, and Market Equilibrium
PART 2 The Basic Concepts in Macroeconomics
5 Measuring a Nation’s Production and Income
6 Unemployment and Inflation
PART 3 The Economy in the Long Run
7 The Economy at Full Employment
8 Why Do Economies Grow?
PART 4 Economic Fluctuations and Fiscal Policy
9 Aggregate Demand and Aggregate Supply
10 Fiscal Policy
11 The Income-Expenditure Model
12 Investment and Financial Markets
PART 5 Money, Banking, and Monetary Policy
13 Money and the Banking System
14 The Federal Reserve and Monetary Policy
PART 6 Inflation, Unemployment, and Economic Policy
15 Modern Macroeconomics: From the Short Run to the Long Run
16 The Dynamics of Inflation and Unemployment
17 Macroeconomic Policy Debates
PART 7 The International Economy
18 International Trade and Public Policy
19 The World of International Finance
With CourseSmart eTextbooks and eResources, you save up to 60% off the price of new print textbooks, and can switch between studying online or offline to suit your needs.
Once you have purchased your eTextbooks and added them to your CourseSmart bookshelf, you can access them anytime, anywhere.
Macroeconomics: Principles, Applications and Tools, Coursesmart eTextbook, 7th Edition
Format: Electronic Book
$81.99 | ISBN-13: 978-0-13-255598-2 | http://www.mypearsonstore.com/bookstore/macroeconomics-principles-applications-and-tools-coursesmart-0132555980
A Reference Resource
Theodore Roosevelt, who came into office in 1901 and served until 1909, is considered the first modern President because he significantly expanded the influence and power of the executive office. From the Civil War to the turn of the twentieth century, the seat of power in the national government resided in the U.S. Congress. Beginning in the 1880s, the executive branch gradually increased its power. Roosevelt seized on this trend, believing that the President had the right to use all powers except those that were specifically denied him to accomplish his goals. As a result, the President, rather than Congress or the political parties, became the center of the American political arena.
As President, Roosevelt challenged the ideas of limited government and individualism. In their stead, he advocated government regulation to achieve social and economic justice. He used executive orders to accomplish his goals, especially in conservation, and waged an aggressive foreign policy. He was also an extremely popular President and the first to use the media to appeal directly to the people, bypassing the political parties and career politicians.
Frail and sickly as a boy, "Teedie" Roosevelt developed a rugged physique as a teenager and became a lifelong advocate of exercise and the "strenuous life." After graduating from Harvard, Roosevelt married Alice Hathaway Lee and studied law at Columbia University. He dropped out after a year to pursue politics, winning a seat in the New York Assembly in 1882.
A double tragedy struck Roosevelt in 1884, when his mother and his wife died in the same house on the same day. Roosevelt spent two years out West in an attempt to recover, tending cows as a rancher and busting outlaws as a frontier sheriff. In 1886, he returned to New York and married his childhood sweetheart, Edith Kermit Carow. They raised six children, including Roosevelt's daughter from his first marriage. After losing a campaign for mayor, he served as Civil Service commissioner, president of the New York City Police Board, and assistant secretary of the Navy. All the while, he demonstrated honesty in office, upsetting the party bosses who expected him to ignore the law in favor of partisan politics.
War Hero and Vice President
When the Spanish-American War broke out in 1898, Roosevelt volunteered as commander of the 1st U.S. Volunteer Cavalry, known as the Rough Riders, leading a daring charge on San Juan Hill. Returning as a war hero, he became governor of New York and began to exhibit an independence that upset the state's political machine. To stop Roosevelt's reforms, party bosses "kicked him upstairs" to the vice presidency under William McKinley, believing that in this position he would be unable to continue his progressive policies. Roosevelt campaigned vigorously for McKinley in 1900—one commentator remarked, "Tis Teddy alone that's running, an' he ain't a runnin', he's a gallopin'." Roosevelt's efforts helped ensure victory for McKinley. But his time as vice president was brief; McKinley was assassinated in 1901, making Roosevelt the President of the United States.
By the 1904 election, Roosevelt was eager to be elected President in his own right. To achieve this, he knew that he needed to work with Republican Party leaders. He promised to hold back on parts of his progressive agenda in exchange for a free hand in foreign affairs. He also got the reluctant support of wealthy capitalists, who feared his progressive measures, but feared a Democratic victory even more. TR won in a landslide, becoming the first President to be elected after gaining office due to the death of his predecessor. Upon victory, he vowed not to run for another term in 1908, a promise he came to regret.
As President, Roosevelt worked to ensure that the government improved the lives of American citizens. His "Square Deal" domestic program reflected the progressive call to reform the American workplace, initiating welfare legislation and government regulation of industry. He was also the nation's first environmentalist President, setting aside nearly 200 million acres for national forests, reserves, and wildlife refuges.
In foreign policy, Roosevelt wanted to make the United States a global power by increasing its influence worldwide. He led the effort to secure rights to build the Panama Canal, one of the greatest engineering feats at that time. He also issued his "corollary" to the Monroe Doctrine, which established the United States as the "policeman" of the Western Hemisphere. In addition, he used his position as President to help negotiate peace agreements between belligerent nations, believing that the world should settle international disputes through diplomacy rather than war.
Roosevelt is considered the first modern U.S. President because he greatly strengthened the power of the executive branch. He was also an extremely popular President—so popular after leaving office in 1909 that he was able to mount a serious run for the presidency again in 1912. Believing that his successor, William Howard Taft, had failed to continue his program of reform, TR threw his hat into the ring as a candidate for the Progressive Party. Although Roosevelt was defeated by Democrat Woodrow Wilson, his efforts resulted in the creation of one of the most significant third parties in U.S. history.
With the onset of World War I in 1914, Roosevelt advocated that the United States prepare itself for war. Accordingly, he was highly critical of Wilson's pledge of neutrality. Once the United States entered the war in 1917, all four of Roosevelt's sons volunteered to serve, which greatly pleased the former President. The death of his youngest son, Quentin, left him deeply distraught. Theodore Roosevelt died less than a year later.
Theodore Roosevelt was born on October 27, 1858, and grew up in New York City, the second of four children. His father, Theodore, Sr., was a well-to-do businessman and philanthropist. His mother, Martha "Mittie" Roosevelt, was a Southerner, raised on a plantation in Georgia.
"Teedie" grew up surrounded by the love of his parents and siblings. But he was always a sickly child afflicted with asthma. As a teenager, he decided that he would "make his body," and he undertook a program of gymnastics and weight-lifting, which helped him develop a rugged physique. Thereafter, Roosevelt became a lifelong advocate of exercise and the "strenuous life." He always found time for physical exertions including hiking, riding horses, and swimming.
As a young boy, Roosevelt was tutored at home by private teachers. He traveled widely through Europe and the Middle East with his family during the late 1860s and early 1870s, once living with a host family in Germany for five months. In 1876, he entered Harvard College, where he studied a variety of subjects, including German, natural history, zoology, forensics, and composition. He also continued his physical endeavors, taking on boxing and wrestling as new pursuits.
During college, Roosevelt fell in love with Alice Hathaway Lee, a young woman from a prominent New England banking family he met through a friend at Harvard. They were married in October 1880. Roosevelt then enrolled in Columbia Law School, but dropped out after one year to begin a career in public service. He was elected to the New York Assembly and served two terms from 1882 to 1884.
A double tragedy struck Roosevelt in 1884. On February 12th, Alice gave birth to a daughter, Alice Lee. Two days later, Roosevelt's mother died of typhoid fever and his wife died of kidney disease within a few hours of each other—and in the same house. For the next few months, a devastated Roosevelt threw himself into political work to escape his grief. Finally, he left his daughter in the care of his sister and fled to the Dakota Badlands.
Once out West, Roosevelt soaked in the frontier lifestyle. He bought two ranches and a thousand head of cattle. He flourished in the hardships of the western frontier, riding for days, hunting grizzly bears, herding cows as a rancher, and chasing outlaws as a frontier sheriff. Roosevelt headed back East in 1886; a devastating winter the following year wiped out most of his cattle. Although he would frequent the Dakota Badlands in subsequent years to hunt, he was ready to leave the West and return to his former life.
One of the reasons he did so was because of a rediscovered love with his childhood sweetheart, Edith Kermit Carow. The two were married in England in 1886 and moved to Oyster Bay, New York, into a house known as Sagamore Hill. In addition to raising Roosevelt's first child, Alice, he and Edith had five children: Theodore, Kermit, Ethel, Archibald, and Quentin.
Renewed Political Spirit
After returning to New York, Roosevelt continued his writing career, which began with the publication of his book, The Naval War of 1812, in 1882. He wrote a number of books during this period, including The Life of Thomas Hart Benton (1887), The Life of Gouverneur Morris (1888), and The Winning of the West (four volumes, 1889-1896).
Roosevelt also resumed his political career by running unsuccessfully for mayor of New York City in 1886. In 1888, he campaigned for Republican presidential nominee Benjamin Harrison. When Harrison won the election, he appointed Roosevelt to the U.S. Civil Service Commission. Roosevelt was re-appointed to the Commission by Democratic President Grover Cleveland in 1893. As commissioner, he worked hard to enforce the civil service laws, although he regularly clashed with party regulars and politicians who wanted him to ignore the law in favor of patronage.
Roosevelt served dutifully as a commissioner until he accepted the presidency of the New York City Police Board in 1895. He demonstrated honesty in office, much to the displeasure of party bosses. He also cleaned up the corrupt Police Board and strictly enforced laws banning the sale of liquor on the Sabbath.
In 1897, the newly elected Republican President, William McKinley, appointed Roosevelt assistant secretary of the Navy. Roosevelt had long believed in the importance of the Navy and the role it played in national defense. As acting secretary of the Navy, he responded to the explosion of the U.S. battleship Maine in Havana Harbor in 1898 by putting the Navy on full alert. (See McKinley biography, Foreign Affairs section, for details.) Roosevelt instructed Commodore George Dewey to make ready for war with Spain by taking the necessary steps for bottling up the Spanish squadron in Asian waters. He also asked Dewey to prepare for the probable invasion of the Philippines.
The Rough Riders
When the Spanish-American War began, Roosevelt resigned as assistant secretary of the Navy and volunteered for service as commander of the 1st U.S. Volunteer Cavalry, a unit known as the Rough Riders—an elite company composed of Ivy League gentlemen, western cowboys, sheriffs, prospectors, police officers, and Native Americans. Once in Cuba, Roosevelt distinguished himself by leading them on a charge—on foot—up San Juan Hill (actually Kettle Hill) on the outskirts of Santiago. The contingent suffered heavy casualties.
The Rough Riders returned to the United States as war heroes. Their varied backgrounds, colorful leader, and bravery on the battlefield brought them considerable attention. Roosevelt personally reveled in his time in the military. He later wrote about his military exploits: "I would rather have led that charge and earned my colonelcy than served three terms in the United States Senate. It makes me feel as though I could now leave something to my children which will serve as an apology for my having existed."
Roosevelt returned home a war hero and caught the eye of Republican leaders in New York who were looking for a gubernatorial candidate. He agreed to run for governor against a popular Democrat, Judge Augustus van Wyck, the candidate of Tammany Hall. Roosevelt carried the election by just a few thousand votes; his victory stemmed largely from the work of the state's Republican Party boss, Thomas C. Platt, who threw the full support of his political machine behind the hero of San Juan Hill. Although Platt and Roosevelt had agreed to consult each other on matters of policy and patronage, the new governor was his own man. TR steadfastly refused to appoint party regulars as State Insurance Commissioner or Public Works Commissioner—the two most important patronage jobs in the state.
When Governor Roosevelt supported a bill for the taxation of the value and assets of public services (gas, water, electric, and streetcars), his actions led to an explosive break with Platt. Almost overnight the insurance companies, the construction contractors, and the privately owned public service corporations realized that all the money they were contributing to Platt's political machine brought them little if any influence with Governor Roosevelt.
Boss Platt knew that something had to be done with the governor before he completely destroyed the Republican state machine. Consulting with Mark Hanna, the top Republican political boss in the nation, Platt conspired to "kick [Roosevelt] upstairs" to the vice presidency in 1900. (Vice President Garret Hobart had just died in office.) This would keep Roosevelt from running for a second term in New York (the governorship was a two-year term in those days). Roosevelt reluctantly agreed, persuaded that the vice presidency might lead to a shot at the White House in 1904. He also knew that the party bosses had rigged the convention, making it nearly impossible for him to avoid being nominated.
1900 Vice Presidential Campaign
The Republican convention nominated TR by acclamation. Thereafter, Roosevelt campaigned furiously for the Republican presidential candidate, William McKinley, matching his Democratic opponents, William Jennings Bryan and Adlai E. Stevenson, move for move. Roosevelt traveled more than 21,000 miles on a special campaign train, making hundreds of speeches, and more than three million people saw him in person. He spoke in 567 cities in twenty-four states. "Tis Tiddy alone that's running," observed Mr. Dooley (a press columnist who used an exaggerated Irish accent to make political observations) "an' he ain't a runnin', he's gallopin'."
The Republican ticket overwhelmed the Democrats, racking up an 861,757 vote plurality, the largest Republican victory in years. McKinley won the popular vote of 7.2 million (292 Electoral College votes) to Bryan's 6.3 million (155 Electoral College votes). McKinley won his bid for reelection over Bryan by an even larger margin than he had garnered in 1896.
In September 1901, however, an assassin's bullet killed President McKinley (see McKinley biography, Death of the President section). This tragedy put Theodore Roosevelt ("that damned cowboy," according to Mark Hanna) in the White House as the nation's twenty-sixth President. He was the youngest person ever to serve in that capacity. Neither the nation nor the presidency would ever be the same again.
The Campaign and Election of 1904
After Roosevelt acceded to the presidency in 1901, he soon began to think about how to win election as President in his own right. He realized that although he did not always agree with conservative Republicans in Congress, he needed their support in order to win the nomination in 1904. To that end, he worked out an understanding with legislators, especially Senator Nelson W. Aldrich of Rhode Island, which gave him a free hand in foreign affairs in return for holding back the more progressive items of his domestic agenda. But TR did not refrain from using the executive office to break up monopolies, such as the Northern Securities Company, to mediate in labor disputes between unions and management, as he did in the coal miners' strike in 1902, and to use the White House as a "bully pulpit," from which he lectured the nation on how government should regulate big business.
Fearful that his anti-corporate sentiments had soured party bosses, Roosevelt toned down his rhetoric in 1903. Most importantly, he was able to place his people in key party positions and maneuvered Mark Hanna, now the Chairman of the Republican National Committee, to endorse his candidacy several months prior to the 1904 convention. Then TR turned to the public, holding press conferences, launching a national tour of western states that lasted for thirty days, and boldly issuing an executive order that provided pensions for all veterans between the ages of sixty-two and sixty-seven.
With Mark Hanna's untimely death prior to the Republican convention in Chicago, one of Roosevelt's main competitors was gone, making TR's nomination a foregone conclusion. He was nominated unanimously on the first ballot. He picked Senator Charles W. Fairbanks of Indiana—a conservative Republican with close ties to the railroad industry—as his running mate. When the Democrats met in St. Louis, they picked two conservatives, Judge Alton B. Parker, from New York, and eighty-one-year-old Henry G. Davis, a wealthy ex-senator from West Virginia and the oldest man ever to run for the vice presidency.
The Democrats, showcasing themselves as the "sane and safe choice," attacked the Roosevelt administration as "spasmodic, erratic, sensational, spectacular, and arbitrary." Republicans touted Roosevelt's record in foreign policy and promised more of the same. Neither Roosevelt nor Parker actively campaigned for the presidency, as was the custom. Over the summer of 1904, Roosevelt directed the campaign from his front porch at Oyster Bay, issuing lofty statements to his supporters and instructions on strategy to Republican state parties.
Roosevelt received a large amount of money for the campaign from wealthy capitalists, such as Edward H. Harriman (the railroad tycoon), Henry C. Frick (the steel baron), and J.P. Morgan (the financial potentate of Wall Street). The wealthy capitalists and their friends contributed more than $2 million to Roosevelt's campaign. They supported Roosevelt because they preferred an "unpredictable head of a predictable party" in power than the "predictable head of an unpredictable party." They might have favored Parker as a person, but the Democrats were simply too populist in their constituency and potentially too radical in their ideas for the conservative business leaders ever to trust.
The election, however, had never been in doubt. TR won 336 electoral votes to Parker's 140. He took every state outside of the South, including Missouri. Roosevelt was immensely popular and rode to a second term on a huge wave of public support, unlike anything the nation had ever seen.
After the victory, Roosevelt vowed not to run again for the presidency, believing it was wise to follow the precedent of serving only two terms in office. He came to regret that promise in advance of the 1908 election, believing he still had much of his agenda to accomplish. Nevertheless, he held true to his pledge and supported his chosen successor, William Howard Taft, in 1908.
The Campaign and Election of 1912
Before he left office in 1909, Roosevelt hand-picked William Howard Taft as his successor and worked to get him elected. Taft had served in the Roosevelt administration as governor of the Philippines and secretary of war. During the election, Taft vowed to run the country just as Roosevelt had. But the new administration was off to a rocky start with the outgoing President. After apparently indicating that he would retain most of the existing cabinet members, Taft soon discovered that he would be better served by his own hand-picked secretaries. Roosevelt was miffed at having his cabinet members dismissed and at not being consulted on the new appointments.
After Taft's inauguration, Roosevelt traveled in Africa and Europe for more than a year. He went on safari with his son Kermit, where he acquired more than 3,000 animal trophies, including eight elephants, seven hippos, nine lions, and thirteen rhinos. He then met up with Edith in Egypt, and the two of them journeyed throughout Europe, encountering constant demands to meet and greet royalty and politicians. When the Roosevelts returned to New York in June 1910, they were greeted by one of the largest mass receptions ever given in New York City.
When he first arrived back in the United States, Roosevelt remained noncommittal on the Taft presidency. He wanted time to assess Taft's performance before making any judgments. However, some of his old friends had already brought him negative reports. Gifford Pinchot was so angry with Taft regarding conservation that he had earlier traveled to Italy to meet Roosevelt and discuss the situation. Once TR returned home, he was frequently visited by old friends who decried Taft's supposed efforts to undo his work.
During this period, progressivism was gradually rising from the local and state level to the national level. Increasing numbers of people across the nation supported expanding the role of the federal government to ensure the welfare of the people. Pressured by the progressive wing of the Republican Party to challenge Taft in 1912, Roosevelt weighed his options. Eventually he decided to throw "his hat into the ring" and run against his former protege.
The Republicans met in Chicago in June 1912, hopelessly split between the Roosevelt progressives and the supporters of President Taft. Roosevelt came to the convention having won a series of preferential primaries that put him ahead of the President in the race for party delegates. Taft, however, controlled the convention floor, and his backers managed to exclude most of the Roosevelt delegates by not recognizing their credentials. These tactics enraged TR, who then refused to allow himself to be nominated, paving the way for Taft to win on the first ballot.
Roosevelt and his supporters abandoned the G.O.P. and reconvened in Chicago two weeks later to form the Progressive Party. They then nominated TR as their presidential candidate with Governor Hiram Johnson of California as his running mate. Roosevelt electrified the convention with a dramatic speech in which he announced that "we stand at Armageddon, and we battle for the Lord."
Declaring that he felt "as strong as a Bull Moose," Roosevelt gave the new party its popular name—the Bull Moose Party—and described its party platform as "New Nationalism." Its tenets included political justice and economic opportunity, and it sought a minimum wage for women; an eight-hour workday; a social security system; a national health service; a federal securities commission; and direct election of U.S. senators. The platform also supported the initiative, referendum, and recall as means for the people to exert more direct control over government. TR worried about the power of the minority—often politicians—over the majority and thought these changes would make government more accountable to the people.
The Democrats nominated the reform governor of New Jersey, Woodrow Wilson, for President and Thomas R. Marshall, the governor of Indiana, as vice president. Wilson's platform, known as "New Freedom," called for limits on campaign contributions by corporations, tariff reductions, new and stronger antitrust laws, banking and currency reform, a federal income tax, direct election of senators, and a single-term presidency.
Although Roosevelt and Wilson were both progressives, they differed over the means and extent to which government should intervene or regulate the states and the economy. Differences between New Nationalism and New Freedom over trusts and the tariff became a central issue of the campaign. Roosevelt believed the federal government should act as a "trustee" for the American people, controlling and supervising the economy in the public interest. Wilson had greater reservations about a large federal government and sought a return to a more decentralized republic. He argued that if big business were deprived of artificial advantages, such as the protective tariff and monopolies, the natural forces of competition would assure everyone an equal chance at success—thus minimizing the role of government. Whereas Roosevelt differentiated between "good" and "bad" trusts, Wilson suggested that all monopolies were harmful to the nation.
Roosevelt's colorful personality helped him overcome the disadvantage of running as a third-party candidate, and he and Wilson contended fiercely for the support of voters interested in reform. Near the end of the campaign, TR dramatized his vitality by insisting on finishing a campaign speech even with an assailant's bullet lodged in his chest. Fortunately, the bullet had been slowed down by the pages of a thick speech he had in his coat pocket, but Roosevelt's courageous—perhaps foolhardy—act reminded Americans of what they loved about him.
Wilson captured 41.9 percent of the vote to Roosevelt's 27.4 percent and Taft's 23.1 percent. Socialist Party candidate Eugene Debs won 6 percent of the vote. Despite the divided popular vote, Wilson compiled 435 electoral votes compared to Roosevelt's 88 and Taft's 8. Roosevelt won in six states—California, Michigan, Minnesota, Pennsylvania, South Dakota, and Washington.
Despite its loss, the strong showing of the Progressive Party signaled the emergence of a significant force in U.S. political history. It also reflected a rising progressive spirit in the United States. Together with Wilson and Debs, Roosevelt had challenged the conservative wing of the Republican Party and left it discredited. In addition, although TR lost the election, much of his New Nationalism program was enacted during Wilson's presidency.
When Theodore Roosevelt took the oath of office in September 1901, he presided over a country that had changed significantly in recent decades. The population of the United States had almost doubled from 1870 to 1900 as immigrants came to U.S. cities to work in the country's burgeoning factories. As the United States became increasingly urban and industrial, it acquired many of the attributes common to industrial nations—overcrowded cities, poor working conditions, great economic disparity, and the political dominance of big business. At the turn of the twentieth century, Americans had begun to look for ways to address some of these problems.
As chief executive, Roosevelt felt empowered by the people to help ensure social justice and economic opportunity through government regulation. He was not a radical, however; TR believed that big business was a natural part of a maturing economy and, therefore, saw no reason to abolish it. He never suggested fundamentally altering American society or the economy to address various economic and social ills. In fact, he often stated that there must be reform in order to stave off socialism; if government did not act, the people would turn to more extreme measures to seek remedies.
In addition, TR was a politician who understood the need to compromise in order to implement his ideas. Coming into office following William McKinley's assassination, Roosevelt pledged to maintain the fallen President's policies so as not to upset the nation in a time of mourning. And even when he began to chart his own course, Roosevelt knew that he had to work with congressional Republicans to get the G.O.P. nomination for President in 1904.
The Great Regulator
One of Roosevelt's central beliefs was that the government had the right to regulate big business to protect the welfare of society. However, this idea was relatively untested. Although Congress had passed the Sherman Antitrust Act in 1890, former Presidents had only used it sparingly.
So when the Department of Justice filed suit in early 1902 against the Northern Securities Company, it sent shockwaves through the business community, which had hoped that Roosevelt would follow precedent and maintain a "hands-off" approach to the market economy. At issue was the claim that the Northern Securities Company—a giant railroad combination created by a syndicate of wealthy industrialists and financiers led by J. P. Morgan—violated the Sherman Antitrust Act because it was a monopoly. In 1904, the U.S. Supreme Court ruled in favor of the government and ordered the company dismantled. The high court's action was a major victory for the administration and put the business community on notice that although this was a Republican administration, it would not give business free rein to operate without regard for the public welfare.
Roosevelt then turned his attention to the nation's railroads, in part because the Interstate Commerce Commission (ICC) had notified the administration about abuses within the industry. In addition, a large segment of the population supported efforts to regulate the railroads because so many people and businesses were dependent on them. Roosevelt's first achievement in this area was the Elkins Act of 1903, which ended the practice of railroad companies granting shipping rebates to certain companies. The rebates allowed big companies to ship goods for much lower rates than smaller companies could obtain. However, the railroads and big companies were able to undermine the act.
Recognizing that the Elkins Act was not effective, Roosevelt pursued further railroad regulation and undertook one of his greatest domestic reform efforts. The legislation, which became known as the Hepburn Act, proposed enhancing the powers of the Interstate Commerce Commission to include the ability to regulate shipping rates on railroads. One of the main sticking points of the bill was what role the courts would play in reviewing the rates. Conservative senators who opposed the legislation, acting on behalf of the railroad industry, tried to use judicial review to render the ICC essentially powerless. Giving the courts, which were considered friendly to the railroads, the right to rule on individual cases would have left the ICC with little power to remedy inequitable rates.
When Roosevelt encountered this resistance in Congress, he took his case to the people, making a direct appeal on a speaking tour through the West. He succeeded in pressuring the Senate to approve the legislation. The Hepburn Act marked one of the first times a President appealed directly to the people, using the press to help him make his case. The passage of the act was considered a major victory for Roosevelt and highlighted his ability to balance competing interests to achieve his goals.
Roosevelt believed that the government should use its resources to help achieve economic and social justice. When the country faced an anthracite coal shortage in the fall of 1902 because of a strike in Pennsylvania, the President thought he should intervene. As winter approached and heating shortages were imminent, he started to formulate ideas about how he could use the executive office to play a role—even though he did not have any official authority to negotiate an end to the strike. Roosevelt called both the mine owners and the representatives of labor together at the White House. When management refused to negotiate, he hatched a plan to force the two sides to talk: instead of sending federal troops to break the strike and force the miners back to work, TR threatened to use troops to seize the mines and run them as a federal operation. Faced with Roosevelt's plan, the owners and labor unions agreed to submit their cases to a commission and abide by its recommendations.
Roosevelt called the settlement of the coal strike a "square deal," implying that everyone gained fairly from the agreement. That term soon became synonymous with Roosevelt's domestic program. The Square Deal worked to balance competing interests to create a fair deal for all sides: labor and management, consumer and business, developer and conservationist. TR recognized that his program was not perfectly neutral because the government needed to intervene more actively on behalf of the general public to ensure economic opportunity for all. Roosevelt was the first President to name his domestic program, and the practice soon became commonplace, with Woodrow Wilson's New Freedom, Franklin D. Roosevelt's New Deal, and Harry S. Truman's Fair Deal.
Roosevelt was the nation's first conservationist President. Everywhere he went, he preached the need to preserve woodlands and mountain ranges as places of refuge and retreat. He identified the American character with the nation's wilderness regions, believing that our western and frontier heritage had shaped American values, behavior, and culture. The President wanted the United States to change from exploiting natural resources to carefully managing them. He worked with Gifford Pinchot, head of the Forestry Bureau, and Frederick Newell, head of the Reclamation Service, to revolutionize this area of the U.S. government. In 1902, Roosevelt signed the Newlands Reclamation Bill, which used money from federal land sales to build reservoirs and irrigation works to promote agriculture in the arid West.
After he won reelection in his own right in 1904, Roosevelt felt more empowered to make significant changes in this domain. Working with Pinchot, he moved the Forest Service from the Department of the Interior to the Department of Agriculture. This gave the Forest Service, and Pinchot as head of it, more power to achieve its goals. Together, Roosevelt and Pinchot reduced the role of state and local government in the management of natural resources, a policy that met with considerable resistance. Only the federal government, they argued, had the resources to oversee these efforts. Roosevelt used his presidential authority to issue executive orders to create 150 new national forests, increasing the amount of protected land from 42 million acres to 172 million acres. The President also created five national parks, eighteen national monuments, and fifty-one wildlife refuges.
Roosevelt and the Muckrakers
The emergence of a mass-circulation independent press at around the turn of the century changed the nature of print media in the United States. Instead of partisan publications that touted a party line, the national media was becoming more independent and more likely to expose scandals and abuses. This era marked the beginning of investigative journalism, and the reporters who led the effort were known as "muckrakers," a term first used by Roosevelt in a 1906 speech.
One of the best examples of Roosevelt's relationship with the muckrakers came after he read Upton Sinclair's The Jungle, which described in lurid detail the filthy conditions in the meat packing industry—where rats, putrid meat, and poisoned rat bait were routinely ground up into sausages. Roosevelt responded by pushing for the Meat Inspection Act and the Pure Food and Drug Act of 1906. Both pieces of legislation endeared him to the public and to those corporations that favored government regulation as a means of achieving national consumer standards.
Roosevelt was the first President to use the power of the media to appeal directly to the American people. He understood that his forceful personality, his rambunctious family, and his many opinions made good copy for the press. He also knew that the media was a good way for him to reach out to the people, bypassing political parties and political machines. He used the media as a "bully pulpit" to influence public opinion.
On Race and Civil Rights
Theodore Roosevelt reflected the racial attitudes of his time, and his domestic record on race and civil rights was a mixed bag. He did little to preserve black suffrage in the South as those states increasingly disenfranchised blacks. He believed that African Americans as a race were inferior to whites, but he thought many black individuals were superior to white individuals and should be able to prove their merit. He caused a major controversy early in his presidency when he invited Booker T. Washington to dine with him at the White House in October 1901. Roosevelt wanted to talk to Washington about patronage appointments in the South, and he was surprised by the vilification he received in the Southern press; he did not apologize for his actions. Although he appointed blacks to some patronage positions in the South, he was generally unwilling to fight the political battles necessary to win their appointment.
One incident in particular taints Roosevelt's reputation on racial issues. In 1906, a small group of black soldiers was accused of going on a shooting spree in Brownsville, Texas, killing one white man and wounding another. Despite conflicting accounts and the lack of physical evidence, the Army assumed the guilt of the black soldiers. When not one of them admitted responsibility, an irritated Roosevelt ordered the dishonorable discharge of three companies of black soldiers (160 men) without a trial. Roosevelt and the white establishment had assumed the soldiers were guilty without affording them the opportunity for a trial to confront their accusers or prove their innocence.
Theodore Roosevelt inherited an empire-in-the-making when he assumed office in 1901. After the Spanish-American War in 1898, Spain ceded the Philippines, Puerto Rico, and Guam to the United States. In addition, the United States established a protectorate over Cuba and annexed Hawaii. For the first time in its history, the United States had acquired an overseas empire.
As President, Roosevelt wanted to increase the influence and prestige of the United States on the world stage and make the country a global power. He also believed that the exportation of American values and ideals would have an ennobling effect on the world. TR's diplomatic maxim was to "speak softly and carry a big stick," and he maintained that a chief executive must be willing to use force when necessary while practicing the art of persuasion. He therefore sought to assemble a powerful and reliable defense for the United States to avoid conflicts with enemies who might prey on weakness. Roosevelt followed McKinley in ending the relative isolationism that had dominated the country since the mid-1800s, acting aggressively in foreign affairs, often without the support or consent of Congress.
One of the situations that Roosevelt inherited upon taking office was governance of the Philippines, an archipelago in Southeast Asia. During the Spanish-American War, the United States had taken control of the islands from Spain. When Roosevelt appointed William Howard Taft as the first civilian governor of the islands in 1901, Taft recommended the creation of a civil government with an elected legislative assembly. Taft negotiated with Congress for a bill that included a governor general, an independent judiciary, and the legislative assembly.
The most spectacular of Roosevelt's foreign policy initiatives was the establishment of the Panama Canal. For years, U.S. naval leaders had dreamed of building a passage between the Atlantic and Pacific oceans through Central America. During the war with Spain, American ships in the Pacific had to steam around the tip of South America in two-month voyages to join the U.S. fleet off the coast of Cuba.
In 1901, the United States negotiated with Britain for the support of an American-controlled canal that would be constructed either in Nicaragua or through a strip of land—Panama—owned by Colombia. In a flourish of closed-door maneuvers, the Senate approved a route through Panama, contingent upon Colombian approval. When Colombia balked at the terms of the agreement, the United States supported a Panamanian revolution with money and a naval blockade, the latter of which prevented Colombian troops from landing in Panama. In 1903, the Hay-Bunau-Varilla Treaty with Panama gave the United States perpetual control of the canal zone for a price of $10 million and an annual payment of $250,000.
When he visited Panama in 1906 to observe the building of the canal, Roosevelt became the first U.S. President to leave the country during his term of office. He wanted to see the spectacle, which became known as one of the world's greatest engineering feats. Nearly 30,000 workers labored ten-hour days for ten years to build the $400-million canal, during which time American officials were able to counteract the scourge of yellow fever that had ravaged large numbers of canal workers. The Panama Canal was finally completed in 1914; by 1925, more than 5,000 merchant ships a year were passing through its locks. Once operational, it shortened the voyage from San Francisco to New York by more than 8,000 miles. The process of building the canal generated advances in U.S. technology and engineering skills. This project also converted the Panama Canal Zone into a major staging area for American military forces, making the United States the dominant military power in Central America.
Latin America consumed a fair amount of Roosevelt's time and energy during his first term as President. Venezuela became a focus of his attention in 1902 when Germany and Britain sent ships to blockade that country's coastline. The European nations had given loans to Venezuela that the Venezuelan dictator refused to repay. Although both Germany and Britain assured the Americans that they did not have any territorial designs on Venezuela, Roosevelt felt aggrieved by their actions and demanded that they agree to arbitration to resolve the dispute.
Santo Domingo (now the Dominican Republic) also encountered problems with European countries. Again, European investors had appealed to their governments to collect money from a debt-ridden Latin American nation. After the Dominican government appealed to the United States, Roosevelt ordered an American collector to assume control of the customs houses and collect duties to avoid possible European military action.
During the Santo Domingo crisis, Roosevelt formulated what became known as the Roosevelt Corollary to the Monroe Doctrine. The Monroe Doctrine, issued in 1823, stated that the United States would not accept European intervention in the Americas. Roosevelt realized that if nations in the Western Hemisphere continued to have chronic problems, such as the inability to repay foreign debt, they would become targets of European intervention. To preempt such action and to maintain regional stability, the President drafted his corollary: the United States would intervene in any Latin American country that manifested serious economic problems. The corollary announced that the United States would serve as the "policeman" of the Western Hemisphere, a policy which eventually created much resentment in Latin America.
Though often recognized for the aggressiveness of his foreign policy, Roosevelt was also a peacemaker. His most successful effort at bringing belligerent powers to the negotiating table involved a crisis that had broken out in East Asia. Fighting had erupted between Russia and Japan in 1904, following Japan's attack on the Russian fleet at Port Arthur.
As the Russo-Japanese War raged on with many Japanese victories, Roosevelt approached both nations about mediating peace negotiations. The President longed for a world in which countries would turn to arbitration instead of war to settle international disputes, and he offered his services to this end. Although Russia and Japan initially refused his offer, they eventually accepted his "good offices" to help negotiate a peace, meeting with Roosevelt in 1905 in Portsmouth, New Hampshire. For his role as mediator, Roosevelt won the Nobel Prize for Peace, the first U.S. President to do so.
Roosevelt also arbitrated a dispute between France and Germany over the division of Morocco. Britain had recognized French control over Morocco in return for French recognition of British control in Egypt. Germany felt excluded by this agreement and challenged France's role in Morocco. Although the French had a weak claim to Morocco, the United States could not reject it without rejecting Britain's claim as well. The settlement reached in 1906 at Algeciras, Spain, saved face for Germany but gave France undisputed control over Morocco; it also paved the way for British control over Egypt. Some historians think that Roosevelt's intervention in these two hot spots averted fighting that might have engulfed all of Europe and Asia in a world war. In any case, Roosevelt's actions greatly strengthened Anglo-French ties with the United States.
Great White Fleet
Roosevelt believed that a large and powerful Navy was an essential component of national defense because it served as a strong deterrent to America's enemies. During his tenure as President, he built the U.S. Navy into one of the largest in the world by convincing Congress to add battleships to the fleet and to increase its number of enlisted men. In 1907, he proposed sending the fleet out on a world tour. His reasons were many: to show off the "Great White Fleet" and impress other countries around the world with U.S. naval power; to allow the Navy to gain the experience of worldwide travel; and to drum up domestic support for his naval program. In December 1907, a fleet of sixteen battleships left Hampton Roads, Virginia, and traveled around the world, returning home fourteen months later in February 1909.
After losing the 1912 election to Woodrow Wilson (see "Campaigns and Elections" for details), Roosevelt and his son Kermit embarked on a voyage into the jungles of Brazil to explore the River of Doubt in the Amazon region. During the seven-month, 15,000-mile expedition, Roosevelt contracted malaria and suffered a serious infection after injuring his leg in a boat accident. Following his return to the United States, he spent his days writing scientific essays and history books.
When World War I broke out in Europe, the former President led the cause for military preparedness, convinced that the nation should join the war effort. He was greatly disappointed in President Wilson's call for neutrality and denounced his country's inactivity. When the United States finally entered the war in 1917, he offered to organize a volunteer division, but the War Department turned him down. However, all four of his sons volunteered to fight in the war. When his youngest son, Quentin, was shot down and killed while flying a mission over France, Roosevelt became despondent. Thereafter, although he continued to tour the nation making speeches in favor of war bonds and the war, his mood and voice were less enthusiastic. For the first time in his life, sadness overtook the once unconquerable warrior.
Theodore Roosevelt died in his sleep on January 6, 1919, in his beloved house at Sagamore Hill in Oyster Bay, New York. One commentator said that death had to take him while he slept, for otherwise it would have had a fight on its hands.
The nation had never known a family in the White House quite like the Roosevelts. The public loved to follow the adventures of the Roosevelt clan; the President understood that his family was a political asset and made it available, to some degree, to the media.
When Roosevelt married Edith Kermit Carow in 1886, he already had a daughter, Alice, from his first marriage. He and Edith had five more children—Theodore, Kermit, Ethel, Archibald, and Quentin. For TR, his family was like having his own private circus. His children were everywhere, having the complete run of the place. They took their favorite pony, Algonquin, into the White House elevator, frightened visiting officials with a four-foot king snake, and dropped water balloons on the heads of White House guards.
The grand romp continued at the summer White House, Sagamore Hill, the family's home in Oyster Bay, New York. There, the President led the children and anyone who happened to be visiting on hours-long obstacle hikes, picnics, and swims in the ocean. Roosevelt also loved to engage family, friends, and visitors in grand story-telling sessions about ghosts and the cowboys whom Roosevelt had known out West. He taught the boys to box and the girls to run. He never held back in his affections or in his praise for courage and aggressiveness. He almost drove his wife, Edith, to distraction with his antics, and she often told her best friends that the President was just an ornery little boy at heart.
The nation's population numbered 76 million people in 1900. Eight years later, by the end of Roosevelt's second term, it had increased to 88 million. At the same time, the United States was becoming an urban nation, with wider segments of the population joining the workforce. The percentage of Americans living on farms had declined from 60 to 54 percent, while the share of women holding jobs rose from 18 to 21 percent of the total labor force. More and more of these working women were married: 25 percent of working women in 1910, compared with 15 percent in 1900. The only new state to enter the Union during the Roosevelt years was Oklahoma (1907).
Limiting the Franchise
Several important procedural changes in the American franchise took place during the Roosevelt years. First, the Progressive Movement undermined old party structures and thus seriously reduced overall voter participation in elections. Angry that traditional partisanship had resulted in political offices being staffed by "boodlers," crooks, and party hacks, progressives supported reforms aimed at destroying the power of party bosses. Such measures as the direct primary, the initiative, the referendum, the recall, and the direct election of senators were aimed at returning power to an informed and responsible electorate unaffected by party machines or boss politics.
Second, the number of registered voters fell greatly in cities and towns where immigrants dominated the population. Nearly every state in the Union passed personal registration laws between 1890 and 1920, which required identification certificates and personal appearances at designated government offices. Most laws required residency for a certain length of time prior to registration, as well as between registration and voting. These laws reduced the participation of working people who failed to register because of work schedules or who, in the case of recent immigrants, were intimidated by the complex regulations written in English. Some states, such as New York, required that those registering demonstrate literacy in the English language, a barrier many immigrants could not overcome.
Numerous states that had allowed non-citizens to vote in the nineteenth century reversed themselves in the twentieth. By 1920, only seven states still allowed non-citizens to vote—and these were states with few immigrants. The newly formed Bureau of Immigration and Naturalization (1906), moreover, greatly increased the obstacles to citizenship. Applicants were forced to appear before a judge (accompanied by two witnesses to vouch for their moral character and good citizenship) who tested them in English on American history and civics. All applicants had to show proof that they had resided continuously in the United States for five years. They also had to swear, and sometimes prove, that they were not anarchists or polygamists.
Finally, a staggering drop in African-American voters in the South occurred during the first decade of the twentieth century. Every ex-Confederate state stripped blacks of their right to vote through literacy tests, property qualifications, and poll taxes. During the 1870s, more than 130,000 blacks had voted in Mississippi. That number fell to 1,300 in 1900. During the Roosevelt years, whites used terror and lynching to intimidate black males throughout the South. These actions reduced the number of black voters who might have qualified to register under the new laws. From 1900 to 1910, more than 1,300 black men were lynched and burned alive in southern and midwestern states. Once blacks were dropped from the voting rolls, registrars stripped many poor and illiterate whites from the rolls in the southern states, reducing the size of the electorate still further.
As a result of these developments, voter participation rates fell from 79 percent in 1896 to 65 percent in 1904. Participation never again reached the high levels of the late nineteenth century.
The most important exception to this trend toward disfranchisement was the growing woman suffrage movement. Although no states extended the vote to women during the Roosevelt presidency, the suffrage victories won in Wyoming, Colorado, Idaho, and Utah during the 1890s were followed by renewed gains in the post-Roosevelt years. Washington, California, Kansas, Oregon, and Arizona enfranchised women between 1910 and 1912, creating momentum that eventually produced the Nineteenth Amendment (suffrage for women) in time for the election of 1920.
Theodore Roosevelt is widely regarded as the first modern President of the United States. The stature and influence that the office has today began to develop with TR. Throughout the second half of the 1800s, Congress had been the most powerful branch of government. And although the presidency began to amass more power during the 1880s, Roosevelt completed the transition to a strong, effective executive. He made the President, rather than the political parties or Congress, the center of American politics.
Roosevelt did this through the force of his personality and through aggressive executive action. He thought that the President had the right to use any and all powers unless they were specifically denied to him. He believed that as President, he had a unique relationship with and responsibility to the people, and therefore wanted to challenge prevailing notions of limited government and individualism; government, he maintained, should serve as an agent of reform for the people. His presidency endowed the progressive movement with credibility, lending the prestige of the White House to welfare legislation, government regulation, and the conservation movement. The desire to make society more fair and equitable, with economic possibilities for all Americans, lay behind much of Roosevelt's program.
The President also changed the government's relationship to big business. Prior to his presidency, the government had generally given the titans of industry carte blanche to accomplish their goals. Roosevelt believed that the government had the right and the responsibility to regulate big business so that its actions did not negatively affect the general public. However, he never fundamentally challenged the status of big business, believing that its existence marked a naturally occurring phase of the country's economic evolution.
Roosevelt also revolutionized foreign affairs, believing that the United States had a global responsibility and that a strong foreign policy served the country's national interest. He became involved in Latin America with little hesitation: he oversaw the Panama Canal negotiations to advocate for U.S. interests and intervened in Venezuela and Santo Domingo to preserve stability in the region. He also worked with Congress to strengthen the U.S. Navy, which he believed would deter potential enemies from targeting the country, and he applied his energies to negotiating peace agreements, working to balance power throughout the world.
Even after he left office, Roosevelt continued to work for his ideals. The Progressive Party's New Nationalism in 1912 launched a drive for protective federal regulation that looked forward to the progressive movements of the 1930s and the 1960s. Indeed, Roosevelt's progressive platform encompassed nearly every progressive ideal later enshrined in the New Deal of Franklin D. Roosevelt, the Fair Deal of Harry S. Truman, the New Frontier of John F. Kennedy, and the Great Society of Lyndon B. Johnson.
In terms of presidential style, Roosevelt introduced "charisma" into the political equation. He had a strong rapport with the public and he understood how to use the media to shape public opinion. He was the first President whose election was based more on the individual than the political party. When people voted Republican in 1904, they were generally casting their vote for Roosevelt the man instead of for him as the standard-bearer of the Republican Party. The most popular President up to his time, Roosevelt used his enthusiasm to win votes, to shape issues, and to mold opinions. In the process, he changed the executive office forever.
Artist's rendition of an aurochs, ancestor of the modern cow
Aurochs, the wild ancestors of modern cows, once ranged over large areas of Asia, Europe and North Africa.
Aurochs were first domesticated 8,000 to 10,000 years ago in the Fertile Crescent area of the Near East and evolved into two types of domestic cattle, the humped Zebu (Bos indicus) and the humpless European Highland cattle (Bos taurus).
Some scientists believe that domesticated cattle from the Fertile Crescent spread throughout Eurasia, while others believe that a separate domestication event took place in the area of India and Pakistan.
Through analyzing degraded fats on unearthed potsherds, scientists have discovered that Neolithic farmers in Britain and Northern Europe may have been among the first to begin milking cattle for human consumption.
The dairying activities of these European farmers may have begun as early as 6,000 years ago. According to scientists, the ability to digest milk spread slowly between 5000 and 4000 B.C.E. through a genetic mutation called lactase persistence, which allowed humans to continue to digest milk after weaning.
If that date is correct, it may pre-date the rise of other major dairying civilizations in the Near East, India, and North Africa.
Discovery Channel"Early Brits Were Original Cheeseheads," (Discovery Channel Website; accessed Oct. 8, 2007) BBC"Early Man 'Couldn't Stomach Milk'," www.bbc.co.uk (accessed Oct. 30, 2007)
Temple of Ninhursag
Although there is evidence of cattle domestication in Mesopotamia as early as 8000 B.C.E., the milking of dairy cows did not become a major part of Sumerian civilization until approximately 3000 B.C.E.
Archaeological evidence shows that the Ancient Sumerians drank cow's milk and also made cow's milk into cheese and butter.
A carved dairy scene found in the temple of Ninhursag in the Sumerian city of Tell al-Ubaid shows typical dairy activities such as milking, straining, and making butter; it dates to the first half of the third millennium B.C.E.
At least as early as 3100 B.C.E., the domesticated cow had been introduced to, or had been separately domesticated in, Northern Africa.
In Ancient Egypt, the domesticated cow played a major role in Egyptian agriculture and spirituality.
Attesting to its central role in Egyptian life, the cow was deified. The Egyptians "held the cow sacred and dedicated her to Isis, goddess of agriculture; but more than that, the cow was a goddess in her own right, named Hathor, who guarded the fertility of the land."
"The ancient Hebrews...held milk in high favor; the earliest Hebrew scriptures contain abundant evidence of the widespread use of milk from very early times. The Old Testament refers to a 'land which floweth with milk and honey' some twenty times. The phrase describes Palestine as a land of extraordinary fertility, providing all the comforts and necessities of life. In all, the Bible contains some fifty references to milk and milk products."
"The first cattle to arrive in the New World landed in Vera Cruz, Mexico, in 1525. Soon afterword, some made their way across the Rio Grande to proliferate in the wild. They became known as 'Texas Cattle.' Soon after, some of the [Spanish] settlers transported cattle to South America from the Canary Islands and Europe. More followed, and cattle multiplied rapidly throughout New Spain, numbering in the thousands within a few years."
The first cows were brought to Plymouth Colony in 1624.
"The cattle present in 1627 in Plymouth included black, red, white-backed and white-bellied varieties. The black cattle may have been of a breed or similar to those today called Kerrys. Kerry cattle are descended from ancient Celtic cattle and were originally native to County Kerry Ireland..."
Craig S. Chatier"Livestock in Plymouth Colony," Plymouth Archaeological Rediscovery Project website (accessed Oct. 9, 2007)
Spanish California Missions
"The Jesuit Priest, Eusebio Kino, introduced cattle to Baja California in 1679 as part of the missionary effort to establish mission settlements... Milk became a blessing to missionaries in time of need."
During a food shortage in 1772, Junipero Serra stated that "...milk from the cows and some vegetables from the garden have been [our] chief subsistence."
In 1776, at the Mission San Gabriel, Father Font wrote that "The cows are very fat and they give much and rich milk, which they [Native American women at the mission] make cheese and very good butter."
Robert L. Santos"Dairying in California through 1910," Southern California Quarterly, Summer 1994
Milk Maids and the Smallpox Vaccine
Man receiving smallpox vaccination
In the 18th century it was common folk knowledge in Europe that milk maids (women who milked cows) seemed to be immune to smallpox when epidemics swept through Europe.
In 1796, English physician Edward Jenner developed a vaccine for smallpox based upon this folk knowledge.
"Recognizing that dairymaids infected with cowpox were immune to small-pox, Jenner deliberately infected James Phipps, an eight year old boy, with cowpox in 1796. He then exposed Phipps to smallpox-which Phipps failed to contract. After repeating the experiment on other children, including his own son, Jenner concluded that vaccination provided immunity to smallpox…"
In the United States, compulsory smallpox vaccination was introduced on a state by state basis, beginning in the early 1800s.
In the early 19th century, the alcohol distillery business in the United States began to grow. Large amounts of swill (spent grains) were produced as a byproduct of whisky and other alcohol production. Many distilleries opened dairies and began feeding their dairy cows with the waste swill. The low nutritional content of the swill led to sickness in the cows and in the humans who drank their milk.
"Confined to filthy, manure-filled pens, the unfortunate cows gave a pale, bluish milk so poor in quality, it couldn't even be used for making butter or cheese."
Raw-Milk-Facts.com"A Brief History of Raw Milk," www.raw-milk-facts.com (accessed Oct. 9, 2007)
Louis Pasteur (1822-1895)
French chemist and biologist Louis Pasteur, considered one of the fathers of microbiology, helped prove that infectious diseases and food-borne illnesses were caused by germs, a finding known as the "germ theory" of disease.
Pasteur's research demonstrated that harmful microbes in milk and wine caused sickness, and he invented a process - now called "pasteurization" - whereby the liquids were rapidly heated and cooled to kill most of the organisms.
In 1883 a struggle known as the "milk war" broke out between milk producers and milk distribution companies in New York.
Milk farmers demanded a higher price for their milk. When the distribution companies refused to pay more, the farmers organized "spilling committees" that blocked roads, seized shipments, and dumped out their own milk instead of selling it to the distributors.
These "spilling committees" created a "milk famine" in New York City and forced the milk distribution companies to pay the farmers higher prices for their milk.
"One of the first glass milk bottles was patented in 1884 by Dr. Henry Thatcher, after seeing a milkman making deliveries from an open bucket into which a child's filthy rag doll had accidentally fallen. By 1889, his Thatcher's Common Sense Milk Jar had become an industry standard. It was sealed with a waxed paper disc that was pressed into a groove inside the bottle's neck. The milk bottle, and the regular morning arrival of the milkman, remained a part of American life until the 1950s, when waxed paper cartons of milk began appearing in markets."
Distributing certified raw milk
Dr. Henry L. Coit's "Baby Keep Well" clinic, 1906
In the mid-to-late 1800s milk-borne illness was a major problem.
Milk produced at unhygienic production facilities (like distillery dairies) served as a medium to spread diseases like typhoid and tuberculosis. These diseases created a public health crisis that led to skyrocketing infant mortality in the cities.
As a result, "[i]n 1889, two years before the death of his son from contaminated milk, Newark, New Jersey doctor Henry Coit, MD urged the creation of a Medical Milk Commission to oversee or 'certify' production of milk for cleanliness, finally getting one formed in 1893."
Raw-Milk-Facts.com"A Brief History of Raw Milk" (www.raw-milk-facts.com; accessed Oct. 9, 2007)
Commercial Pasteurizing Begins
In 1895, commercial pasteurizing machines for milk were introduced in the United States.
"By 1917, pasteurization of all milk except that from cows proven to be free of tuberculosis was either required or officially encouraged in 46 of the country's 52 largest cities. The proportion of milk pasteurized in these cities ranged from 10 percent to 97 percent; in most it was well over 50 percent."
In 1922, Congress passed the Capper-Volstead Act, allowing producers of agricultural products, such as milk, to "act together in associations" to organize collective processing, preparation for market, handling, and marketing of milk and other agricultural goods.
The act was of historic significance, as it granted producers of milk and other agricultural products special exemptions from antitrust laws to help farmers raise the prices for their products.
"Milk marketing orders came into existence as a result of the Agricultural Marketing Agreement Act of 1937...The rationale for the legislation was to reduce disorderly marketing conditions, improve price stability in fluid milk markets, and ensure a sufficient quantity of pure and wholesome milk.
The orders are regulations approved by dairy farmers in individual fluid milk markets that require manufacturers to pay minimum monthly prices for milk purchases."
Dairy farmers in the countryside outside New York City were hit hard by the Great Depression.
Milk prices in New York City fell so low that the milk distributors were paying farmers less for their milk than it cost them to produce it.
As things got desperate, dairy farmers organized the Dairy Farmers Union (DFU). Led by Archie Wright, a former organizer for the radical Industrial Workers of the World, the DFU went on strike in 1939.
During the strike, DFU members blocked roads and halted market-bound trucks. They confiscated milk and spilled it out on the roadsides. In some cases they threw bottles of kerosene on trucks that did not stop. The picketers fought non-strikers who tried to cross their lines, and State troopers who intervened.
"Federal assistance in providing milk for school children has been in operation since June 4, 1940, when a federally subsidized program was begun in Chicago. It was limited to 15 elementary schools with a total enrollment of 13,256 children. The schools selected were located in low-income areas of the city. The price to the children was 1 cent per one-half pint, and children who could not pay were given milk free, the cost being paid through donations by interested persons."
The Works Progress Administration (WPA) was formed on May 6, 1935, as a part of President Franklin D. Roosevelt's New Deal plan to bring the United States out of the Great Depression. The WPA differed from other New Deal programs in that it focused on providing work for artists, educators, writers and musicians.
Among the works the WPA commissioned from artists were posters promoting milk. Like many WPA projects, these paintings served a dual purpose: to employ artists and to create increased demand for milk. As such, these paintings (and many others like them) were a form of federally subsidized dairy advertising.
At its height, the WPA employed over 3 million people.
Margaret Bing"A Brief Overview of the WPA," www.broward.org (accessed Oct. 16, 2007)
National School Lunch Act Passed
In 1946, President Harry Truman signed the National School Lunch Act into law. The act was designed to provide nutritious lunches to the nation's children. The reasoning behind the act was laid out in its text: "It is hereby declared to be the policy of Congress, as a measure of national security, to safeguard the health and well-being of the Nation's children and to encourage the domestic consumption of nutritious agricultural commodities and other food, by assisting the States, through grants-in-aid and other means, in providing an adequate supply of food and other facilities for the establishment, maintenance, operation and expansion of nonprofit school lunch programs."
The Secretary of Agriculture prescribed three types of lunches which would be acceptable under the act, designated as Type A, Type B, and Type C.
It was mandated that each lunch include between one-half and two pints of whole milk.
"The SMP provides milk free of charge or at a low cost to children in schools and child care institutions that do not participate in other Federal child nutrition meal service programs. The federally assisted program reimburses schools for the milk they serve."
Dairy Act of 1983 & Creation of the National Dairy Board
"The Dairy Production Stabilization Act of 1983 (Dairy Act) authorized a national producer program for dairy product promotion, research, and nutrition education to increase human consumption of milk and dairy products and reduce milk surpluses. This self-help program is funded by a mandatory 15-cent-per-hundredweight assessment on all milk produced in the contiguous 48 States and marketed commercially by dairy farmers. It is administered by the National Dairy Promotion and Research Board (Dairy Board). The Dairy Act provides that dairy farmers can direct up to 10 cents per hundredweight of the assessment for contributions to qualified regional, State, or local dairy product promotion, research, or nutrition education programs."
In 1990, the U.S. Congress passed the Fluid Milk Promotion Act to promote the sale of milk and to allow collective, producer financed, generic milk advertising.
The act stated that "fluid milk products are basic foods and are a primary source of required nutrients such as calcium, and otherwise are a valuable part of the human diet," and mandated that "fluid milk products must be readily available and marketed efficiently to ensure that the people of the United States receive adequate nourishment."
"The Food Guide Pyramid was introduced in 1992 to illustrate a food guide developed by the U.S. Department of Agriculture (USDA) to help healthy Americans use the Dietary Guidelines to choose foods for a healthy diet.
The Food Guide Pyramid is a graphic tool that conveys 'at a glance' important dietary guidance concepts of variety, proportion, and moderation. These concepts are not new—with varying emphasis, they have been part of USDA food guides for almost 100 years."
The 1992 Food Pyramid recommended that 2-3 servings of milk and other dairy products be consumed daily.
In 1993, the California Milk Processor Board was formed to increase milk consumption. Their first major public success was the creation of the "Got Milk?" advertisement campaign.
In 1995, the "Got Milk?" slogan was registered as a federal trademark by the National Dairy Boards and the "Got Milk?" campaign went national.
"Awareness of GOT MILK? is over 90% nationally and it is considered one of the most important and successful campaigns in history…The Dairy industry spends $150-million annually to support GOT MILK?, including use on those Milk Mustache ads. In addition, the 'brand' has become a hot property with over 100 product licensees."
MilkPEP"About the CMPB," www.gotmilk.com (accessed Oct. 16, 2007)
Nov. 5, 1993
Artificial Bovine Growth Hormone Approved by FDA
On November 5, 1993, the Food and Drug Administration (FDA) approved genetically engineered Artificial Bovine Growth Hormone (rBST, rBGH, BGH) for commercial use in the United States.
"In March 1993, before rbST was approved, an FDA advisory committee concluded that the use of rbST -- and any increased risk of mastitis and resulting increased use of antibiotics in treated cattle -- would not pose a risk to human health.
Monsanto Co.'s Posilac, the only rbST product approved for increasing milk production in dairy cattle, was first marketed in February 1994."
In response to the FDA approval of Artificial Bovine Growth Hormone (rBST, rBGH, BGH), the Pure Food Campaign launched a series of protests around the country where milk was spilled in symbolic protest.
Jeremy Rifkin, an organizer of the Pure Food Campaign, stated that there was widespread public concern over the safety of rBST and that "We believe this product is a hazard to health."
New York Times"Grocers Challenge Use of New Drug for Milk Output," Feb. 4, 1994
FDA Issues rBST Labeling Guidelines
In 1994, the FDA issued labeling guidelines for milk (and dairy products made with milk) produced by cows that have not been treated with rBST. In its guidelines the FDA stated: "Because of the presence of natural bST in milk, no milk is 'bST-free,' and a 'bST-free' labeling statement would be false."
The FDA advised that the following statement should be included on all products labeled as being made with milk from cows that are not treated with rBST: "No significant difference has been shown between milk derived from rbST-treated and non-rbST-treated cows."
"Dairy producer board members of the National Dairy Board (NDB) and the United Dairy Industry Association (UDIA) create Dairy Management Inc.™ (DMI) as the organization responsible for increasing demand for U.S.-produced dairy products on behalf of America’s dairy producers; direct coordination between national and local dairy promotion programs begins.
DMI forms the U.S. Dairy Export Council® (USDEC) to leverage investments of dairy processors, exporters, dairy producers, and industry suppliers to enhance the U.S. dairy industry’s ability to serve international markets. Both dairy checkoff dollars [funds collected from farmers for collective generic advertisements] and USDEC membership dues fund the organization."
In December 2001, Suiza Foods Corporation acquired Dean Foods Company and formed the "new" Dean Foods Corporation. The new Dean Foods Corporation became the nation's largest dairy processor and distributor with more than 25,000 employees and $10 billion in revenues.
Dean Foods Company "A Brief History of the New Dean Foods Company," www.deanfoods.com (accessed Oct. 22, 2007)
PETA sued the California Milk Advisory Board (CMAB), claiming that the board's "Happy Cows" advertising campaign constituted false advertising. PETA charged that the idyllic living conditions depicted in the ads stood in stark contrast to the large factory farm reality of most dairy cows in California.
The suit was thrown out by the California Superior Court in 2002. PETA appealed the decision to the California Supreme Court, which refused to review the case in 2005.
PETA "PETA Sues the California Milk Board for False Advertising," www.unhappycows.com (accessed Oct. 17, 2007)
Jan. 5, 2004
Dean Foods Acquires Horizon Organic
On January 5, 2004, Dean Foods, the nation's largest dairy processor and distributor, acquired Horizon Organic, the nation's leading organic milk and dairy product processor.
Dean Foods Company"A Brief History of the New Dean Foods Company," www.deanfoods.com (accessed Oct. 22, 2007)
Milk and Weight Loss Ad Campaign Initiated
In 2004, Dairy Management Inc. and the National Dairy Promotion and Research Board initiated a nationwide advertising campaign with the slogan "3-A-Day. Burn More Fat, Lose Weight."
The advertising campaign ran television, print and internet advertising claiming that the consumption of 3 servings of milk or other dairy products each day could help with weight loss.
Physicians Group Files Lawsuit Demanding Lactose Intolerance Warnings on Milk
In October 2005, the Physicians Committee for Responsible Medicine (PCRM) filed a class-action lawsuit on behalf of all residents of Washington, DC, against a number of large milk companies demanding lactose intolerance warnings on milk.
PCRM filed the lawsuit "To help raise public awareness about lactose intolerance...on behalf of all residents in Washington, D.C., who may purchase milk without realizing the serious digestive distress it can cause. Filed in the Superior Court of the District of Columbia on October 6, the suit calls for all milk cartons sold in D.C. to carry labels warning of milk's possible side effects."
For many years, milk consumption in Japan had been on the decline, creating a milk surplus problem. The Japanese island of Hokkaido alone had to dispose of nearly 900 tons of surplus milk in a single month.
Sensing an opportunity, Hokkaido liquor store owner Chitoshi Nakahara decided to see if he could ferment this excess milk into beer.
The experiment worked, and Nakahara began selling "Bilk" in local liquor stores in 2007.
In response to a 2005 complaint from the Physicians Committee for Responsible Medicine (PCRM), the Federal Trade Commission (FTC) published a letter regarding advertisements by the National Fluid Milk Processor Promotion Board (and others) that claimed drinking milk helps with weight loss.
The letter stated that the FTC had been "advised by USDA staff that the Dairy Board, the Fluid Milk Board, and other affiliated entities that engage in advertising and promotional activities on behalf of the two boards, have determined that the best course of action at this time is to discontinue all advertising and other marketing activities involving weight loss claims until further research provides stronger, more conclusive evidence of an association between dairy consumption and weight loss..."
A lawsuit (still in appeals as of Oct. 31, 2007) was also filed by the PCRM against a number of milk retail companies, including Kraft Foods and General Mills, to prevent them from making milk weight-loss claims.
"The Kroger Co. announced today [Aug. 1, 2007] it will complete the transition of milk it processes and sells in its stores to a certified rBST-free supply by February 2008.
The Company said its decision was based on customer feedback in the markets it serves.
Headquartered in Cincinnati, Ohio, Kroger is one of the nation's largest retail grocery chains...At the end of the first quarter of fiscal 2007, the Company operated (either directly or through its subsidiaries) 2,458 supermarkets and multi-department stores in 31 states..."
CNN"Kroger to Complete Transition to Certified rBST-Free Milk by Early 2008," Aug. 1, 2007
Apr. 16, 2007
Nation's Largest Organic Dairy Violates Organic Rules
On April 16, 2007, Aurora Organic Dairy, the largest organic milk producer in the country, and supplier of organic milk to Wal-Mart, Target, Costco, Safeway and many other large stores, received a notice of proposed revocation from the USDA for willful violations of the 1990 Organic Foods Production Act.
The revocation letter from the USDA described 14 violations committed by Aurora Organic Dairy and stated: "Due to the nature and extent of these violations, the NOP proposes to revoke Aurora Organic Dairy's production and handling certifications under the NOP."
According to the Cornucopia Institute, a farm policy research group, Aurora's practices are a "horrible aberration," and the vast majority of all organic dairy products are produced with high integrity.
FTC Affirms the Legality of 'rBST Free' Labels on Milk
In Feb. 2007, the Monsanto Corporation (producers of rBST) filed a complaint with the Federal Trade Commission alleging that a number of milk processors were engaging in "false and deceptive" advertising by labeling their products as being free of the artificial growth hormone rBST, thereby implying that milk from cows injected with the growth hormone is inferior.
In its response to the complaint filed by the Monsanto Corporation, the FTC wrote that its "staff agrees with FDA that food companies may inform consumers in advertising, as in labeling, that they do not use rBST."
The United States Food and Drug Administration (FDA) released its 968-page report "Animal Cloning: A Risk Assessment," and announced to the public that milk from cloned cows had been approved for human consumption.
In its Jan. 15, 2008 press release announcing the report and its conclusions, the FDA wrote that "meat and milk from clones of cattle, swine, and goats, and the offspring of clones from any species traditionally consumed as food, are as safe to eat as food from conventionally bred animals."
Market in Venice, CA Raided by Police for Selling Raw Milk; Three Arrested
"The owner of a Venice health food market and two other people were arrested on charges related to the allegedly unlawful production and sale of unpasteurized dairy products...
The arrests of James Cecil Stewart, Sharon Ann Palmer and Eugenie Bloch on Wednesday marked the latest effort in a government crackdown on the sale of so-called raw dairy products.
Prosecutors in Los Angeles alleged that Stewart, 64, operates a Venice market called Rawesome Foods through which he illegally sold dairy products that did not meet health standards because they were unpasteurized...
Palmer, 51, has operated Healthy Family Farms in Santa Paula since 2007 without the required licensing for milk production, prosecutors allege. She and her company face nine charges related to the production of unpasteurized [raw] milk products.
Bloch, a Healthy Family Farms employee, is charged with three counts of conspiracy."
NMSA Research Summary
Vocabulary Teaching and Learning Across Disciplines
In support of This We Believe characteristics:
- A shared vision that guides decisions
- Students and teachers engaged in active learning
- Curriculum that is relevant, challenging, integrative, and exploratory
Middle level educators understand that vocabulary is at the heart of general language development and conceptual learning and is, therefore, a critical aspect of curricular programs in all disciplines at the middle school level. The extensive research base on vocabulary learning and teaching provides us with important guidelines that inform instruction (Harmon, Wood, & Hedrick, in press). In this research summary, we highlight relevant studies that support several key understandings of vocabulary learning and teaching. The following are six key understandings for all teachers across age levels and content areas.
- Word knowledge is important for learning.
- Word knowledge is complex.
- Metacognition is an important aspect of vocabulary learning.
- Effective vocabulary instruction moves beyond the definitional level of word meanings.
- Vocabulary learning occurs implicitly in classrooms across disciplines.
- Vocabulary learning occurs through direct instruction.
Word knowledge is important for learning
Educators understand the importance of vocabulary, and few, if any, would omit vocabulary from their instruction. We know that a large vocabulary is an asset to readers; those who know many words are more likely to comprehend what they read. In fact, we have known for many decades that vocabulary size is a strong predictor of reading comprehension (Anderson & Freebody, 1981; Davis, 1944; Singer, 1965). However, the relationship between word knowledge and reading comprehension is complex and not easily described as one causing the other (Pearson, Heibert, & Kamil, 2007). Teaching unfamiliar words before students encounter them in a passage does not necessarily guarantee comprehension. Nonetheless, research indicates that there is a strong, positive, reciprocal relationship between word knowledge and reading comprehension (Baumann, Kame'enui, & Ash, 2003; National Reading Panel, 2000; RAND Reading Study Group, 2002). That is, vocabulary knowledge enables students to comprehend what they read, and the act of reading itself provides the opportunity for students to encounter and learn new words. Furthermore, the more words students know, the more likely they are to learn new words easily (Shefelbine, 1990). Conversely, students with limited vocabularies tend to read less and, therefore, have fewer exposures to new words in running text (Stanovich, 1986). Tremendous differences in word knowledge exist among students—differences that begin to appear at very young ages (Hart & Risley, 1995) and continue to impact learning as students move through school.
Word knowledge is complex
The nature of vocabulary learning and acquisition is complex and involves several processes that can inform instruction. Nagy and Scott (2000) described five noteworthy components of word knowledge. First, they pointed out that word learning is incremental—that is, we learn word meanings gradually and internalize deeper meanings through successive encounters in a variety of contexts and through active engagement with the words. For example, the average tenth grader is likely to have a deeper and more sophisticated understanding of the term atom compared to the knowledge of an average fourth grader, who still has a more simplistic understanding of the term. We also know words at varying levels of familiarity from no knowledge to some knowledge to a complete and thorough knowledge, which serves us especially well in speaking and writing (Beck, Perfetti, & McKeown, 1982; Dale, 1965). It may be that, for some words, students may only need to have a general understanding of a term to keep comprehension intact. For other words, a deeper understanding may be necessary for students to successfully comprehend a passage.
Another aspect of word knowledge is the presence of polysemous or multiple meaning words. Many words have different meanings depending upon the context in which they are used. This is especially evident in the various content areas such as mathematics, where polysemous word meanings differ greatly from the common usage of words (Durkin & Shire, 1991; Wood & Harmon, 2008; Rubenstein & Thompson, 2002). For example, a common word such as table represents an entirely different meaning in science texts when authors discuss the Periodic Table.
A third aspect of word knowledge described by Nagy and Scott (2000) is the different types of knowledge involved in knowing a word. The types of knowledge include the use of words in oral and written language, correct grammar usage of words or syntactical knowledge, semantic understandings such as appropriate synonyms and antonyms, and even morphological understandings that involve correct usage of prefixes and suffixes. Surprisingly, more than 60% of words encountered in academic texts can be taught morphologically (Nagy & Anderson, 1984). In particular, Milligan and Ruff (1990), in their analysis of social studies textbooks used from elementary through high school, found that approximately 71% of the glossary terms contained affixes and roots that could be directly taught.
A fourth aspect of word knowledge is the notion that learning a word meaning is inextricably related to knowledge of other related words. We do not learn word meanings in isolation; we learn word meanings in relation to other words and concepts. For example, knowing the concept of rectangle involves knowing about polygons, quadrilaterals, right angles, squares, and other related concepts. Finally, Nagy and Scott (2000) noted that word knowledge differs according to the type of word. Knowing the meaning of prepositions (e.g., if, under, around) differs greatly from knowing the meaning of specific science terminology, such as nucleus, proton, and neutron.
Metacognition is an important aspect of vocabulary learning
Middle level students need to engage in metacognitive thinking about what they do and do not understand as they encounter unfamiliar vocabulary. With regard to word learning, metacognition goes beyond encounters with unknown words to include a more expanded awareness of vocabulary that enables learners to continually build and increase their vocabularies (Stahl & Nagy, 2006). According to Stahl and Nagy, word awareness is a critical aspect of a comprehensive vocabulary program and consists of two components: (1) the "generative" aspect of word learning that involves developing word consciousness, and (2) the acquisition of sufficient independent word learning strategies that are useful in learning words across a variety of texts and disciplines.
Described by Anderson and Nagy (1992) as an awareness and interest in word meanings, word consciousness allows learners to develop an appreciation of the power of words, an understanding of the importance of word choice, and an awareness of the differences between spoken and written language (Graves, 2006). Word consciousness is especially important for English language learners, who must be critically aware of figurative language, such as idioms, which makes word learning more challenging.
Teaching students independent word learning strategies is critical for supporting vocabulary growth and development. Given the thousands of words students must learn to handle academic demands (Nagy & Anderson, 1984), direct instruction of vocabulary alone cannot shoulder the responsibility for increasing vocabulary knowledge. In fact, in their study of students in grades six through nine, Nagy and Anderson estimated that students in these grades may be exposed to 3,000 to 4,000 unfamiliar words while reading close to one million words in context during an academic school year (roughly 20 minutes per day). These numbers indicate that students also need to acquire word learning strategies for helping themselves figure out the meanings of words on their own (Graves, 2006). Two major independent word learning strategies are the use of context and morphology clues. While studies on the use of context clues as an independent and versatile strategy for word learning have been somewhat limited, and some even cautionary about the limitations of naturally occurring contexts (Baldwin & Schatz, 1985; Schatz & Baldwin, 1986), there is sufficient evidence to support instruction in context clues for helping middle grades students infer word meanings (Buikema & Graves, 1993; Jenkins, Matlock, & Slocum, 1989; Kuhn & Stahl, 1998; Patberg, Graves, & Stibbe, 1984). Other studies provide evidence that fourth, sixth, seventh, and eighth grade students can be taught to use morphological elements (i.e., prefixes, suffixes, roots) to infer word meanings in running text (Graves & Hammond, 1980; Wysocki & Jenkins, 1987).
Effective vocabulary instruction moves beyond the definitional level of word meanings
While the use of a dictionary for word learning is actually another independent word learning strategy, the ubiquitous practice of using dictionary definitions as an instructional technique has received much attention by researchers. The findings clearly indicate the limitations of this practice. Because definitions provide only a superficial level of word knowledge and rarely show students how to use the words, vocabulary instruction must move beyond the definitional level of word meanings. Miller and Gildea (1987) discussed the difficulties students have with using dictionary definitions to understand word meanings. They observed that their fifth and sixth grade participants searched for familiar ideas in the definitions and used that information to write their own sentences. For example, one student wrote, "I was meticulous about falling off the cliff" after reading the following definition for meticulous: "very careful or too particular about small details" (p. 99). The student focused on the phrase "very careful" and used that information for writing the sentence. Miller and Gildea found the same limitations when students were given an illustrative sentence containing a targeted word and were then asked to use that information to write a sentence. For example, for the illustrative sentence "The king's brother tried to usurp the throne," one student wrote, "The blue chair was usurped from the room" (p. 98). In this case, the student substituted the concept of "take" in the new sentence. From these observations, Miller and Gildea argued that students learn words in what they call "intelligible contexts" where students perceive a need to know a word meaning and are motivated to pursue understanding.
Scott and Nagy (1997) found that using dictionaries as a source of word meanings was problematic for the fourth and sixth grade students in the study, especially in terms of correct usage. Similar to Miller and Gildea's (1987) observation, students made what Scott and Nagy call "fragment selection errors," using only familiar parts of the definition to determine word meaning. In conclusion, instruction that uses definitions alone is not likely to impact comprehension (Baumann et al., 2003).
Vocabulary learning occurs implicitly in classrooms across disciplines
Vocabulary learning also occurs implicitly in both language arts classrooms and content area classrooms, especially with regard to incidental word learning through context. Research studies have shown that upper grade students across ability levels can acquire vocabulary incidentally through reading and listening (Nagy & Herman, 1987; Sternberg, 1987). Nagy and Herman found that new words representing known concepts were more easily learned incidentally during independent reading than words that were more conceptually difficult. In another study, Swanborn and de Glopper (1999) found that middle level and secondary readers acquire partial understanding of approximately 15% of the unfamiliar words they encounter while reading. These studies support wide reading as an important component of a comprehensive vocabulary program. Reading widely and frequently is related not only to school achievement but also to increased vocabulary acquisition. In their study of the amount of time students spend reading, Anderson, Wilson, and Fielding (1988) found a positive correlation between the amount of time fifth grade students spend reading and their reading achievement scores on a standardized reading test. Students with scores at the 98th percentile on the test read approximately 5 million words per year, while students scoring at the 50th percentile read approximately 600,000 words per year.
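These percentages can be combined into a rough estimate of yearly incidental vocabulary growth. In the sketch below, the unfamiliar-word density is an assumed figure for illustration; only the reading volumes (Anderson, Wilson, & Fielding, 1988) and the 15% partial-acquisition rate (Swanborn & de Glopper, 1999) come from the studies cited above.

# Sketch of yearly incidental word learning implied by the studies above.
# unfamiliar_density is an assumption; the volumes and 15% rate are from the text.
acquisition_rate = 0.15        # partial understanding of ~15% of unfamiliar words
unfamiliar_density = 0.02      # assumed: about 2 unknown words per 100 read

readers = {
    "98th percentile (about 5,000,000 words/year)": 5_000_000,
    "50th percentile (about 600,000 words/year)": 600_000,
}

for label, volume in readers.items():
    learned = volume * unfamiliar_density * acquisition_rate
    print(f"{label}: roughly {learned:,.0f} words partially learned per year")

Even under identical assumptions about text difficulty, the heaviest readers gain more than eight times as many partial word meanings per year as average readers, one illustration of why wide reading belongs in a comprehensive vocabulary program.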
Vocabulary learning occurs through direct instruction
A comprehensive, research-based program for supporting vocabulary learning includes the previously discussed topics of instruction in independent word learning strategies, an emphasis on word consciousness, and the importance of wide reading. Direct instruction of specifically targeted words is also a critical component of an effective vocabulary program and has a solid research base. The well-known and widely accepted research of Beck, McKeown, and their colleagues (Beck, Perfetti, & McKeown, 1982; McKeown, Beck, Omanson, & Perfetti, 1983; McKeown, Beck, Omanson, & Pople, 1985) with upper elementary and middle grades students has shown that effective vocabulary instruction places an emphasis on the semantic relationships among words. In these studies, instruction moved beyond the definitional level to include activities for presenting words in semantic categories, using words in meaningful sentence contexts, and applying words in new contexts. Beck, McKeown, and their colleagues concluded that both word learning and comprehension were positively impacted by instruction that focused on the semantic relatedness of words; highlighted words central to passage understanding; and provided students with frequent, meaningful encounters with the words.
Other studies on vocabulary instruction focus on specific techniques for supporting word learning with young adolescents. For example, the keyword method, a mnemonic device, has a solid research base documenting its effectiveness for helping students remember word meanings (Levin, Levin, Glasman, & Nordwall, 1992; Pressley, Levin, & McDaniel, 1987; Pressley, Ross, Levin, & Ghatala, 1984). Studies also demonstrate that semantic maps, which help students visualize the relationships among words, are effective in promoting word learning (Johnson, Toms-Bronowski, & Pittelman, 1982, and Johnson, Pittelman, Toms-Bronowski, & Levin, 1984, as cited in Baumann et al., 2003). In addition, categorizing techniques, such as the Concept of Definition Map (Schwartz & Raphael, 1985), as well as self-selection activities in which students select words to learn (Ruddell & Shearer, 2002), are worthwhile teaching strategies for supporting vocabulary learning.
This brief summary of vocabulary research highlights six key understandings that middle grades teachers in all content areas can use to inform their instruction. The research base on vocabulary is extensive and provides the direction we need to make critical decisions about how to help all students learn the vocabulary they need to acquire conceptual knowledge in the various subject matter disciplines.
Anderson, R. C., & Freebody, P. (1981). Vocabulary knowledge. In J. T. Guthrie (Ed.), Comprehension and teaching: Research reviews (pp. 77–117). Newark, DE: International Reading Association.
Anderson, R. C., & Nagy, W. E. (1992). The vocabulary conundrum. The American Educator, 16, 14–18, 44–47.
Anderson, R. C., Wilson, P. T., & Fielding, L. G. (1988). Growth in reading and how children spend their time outside of school. Reading Research Quarterly, 23, 285–303.
Baldwin, R. S., & Schatz, E. L. (1985). Context clues are ineffective with low frequency words in naturally occurring prose. In J. A. Niles & R. V. Lalik (Eds.), Issues in literacy: A research perspective: Thirty-fourth yearbook of the National Reading Conference (Vol. 34, pp. 132–135). Rochester, NY: National Reading Conference.
Baumann, J. F., Kame'enui, E. J., & Ash, G. E. (2003). Research on vocabulary instruction: Voltaire redux. In J. Flood, D. Lapp, J. R. Squire, & J. M. Jensen (Eds.), Handbook of research on teaching the English language arts (2nd ed., pp. 752–785). Mahwah, NJ: Erlbaum.
Beck, I. L., Perfetti, C. A., & McKeown, M. G. (1982). Effects of long-term vocabulary instruction on lexical access and reading comprehension. Journal of Educational Psychology, 74, 506–521.
Buikema, J., & Graves, M. (1993). Teaching students to use context cues to infer word meanings. Journal of Reading, 36, 450–457.
Dale, E. (1965). Vocabulary measurement: Techniques and major findings. Elementary English, 42, 82–88.
Davis, F. B. (1944). Fundamental factors of comprehension in reading. Psychometrika, 9, 185–197.
Durkin, K., & Shire, B. (1991). Primary school children's interpretations of lexical ambiguity in mathematical descriptions. Journal of Research in Reading, 14(1), 46–55.
Graves, M. F. (2006). The vocabulary book: Learning and instruction. Newark, DE: International Reading Association.
Graves, M. F., & Hammond, H. K. (1980). A validated procedure for teaching prefixes and its effect on students' ability to assign meaning to novel words. In M. L. Kamil & A. J. Moe (Eds.), Perspectives on reading research and instruction: Twenty-ninth yearbook of the National Reading Conference (Vol. 29, pp. 184–188). Washington, DC: National Reading Conference.
Harmon, J. M., Wood, K. D., & Hedrick, W. B. (in press). Vocabulary instruction in middle and secondary content classrooms: Understandings and directions from research. In A. Farstrup & J. Samuels (Eds.), What research has to say about vocabulary instruction. Newark, DE: International Reading Association.
Hart, B., & Risley, T. (1995). Meaningful differences in the everyday lives of young American children. Baltimore, MD: Paul H. Brookes.
Jenkins, J. R., Matlock, B., & Slocum, T. A. (1989). Two approaches to vocabulary instruction: The teaching of individual word meanings and practice in deriving word meanings from context. Reading Research Quarterly, 24, 215–235.
Kuhn, M., & Stahl, S. (1998). Teaching children to learn word meanings from context: A synthesis and some questions. Journal of Literacy Research, 30, 119–138.
Levin, J. R., Levin, M. E., Glasman, L. D., & Nordwall, M. B. (1992). Mnemonic vocabulary instruction: Additional effectiveness evidence. Contemporary Educational Psychology, 17, 156–174.
McKeown, M. G., Beck, I. L., Omanson, R. C., & Perfetti, C. A. (1983). The effects of long-term vocabulary instruction on reading comprehension: A replication. Journal of Reading Behavior, 15, 3–18.
McKeown, M. G., Beck, I. L., Omanson, R. C., & Pople, M. T. (1985). Some effects of the nature and frequency of vocabulary instruction on the knowledge and use of words. Reading Research Quarterly, 20, 522–535.
Miller, G. A., & Gildea, P. M. (1987). How children learn words. Scientific American, 257(3), 94–99.
Milligan, J. L., & Ruff, T. P. (1990). A linguistic approach to social studies vocabulary development. The Social Studies, 81, 218–220.
Nagy, W. E., & Anderson, R. C. (1984). How many words are there in printed school English? Reading Research Quarterly, 19, 304–330.
Nagy, W. E., & Herman, P. A. (1987). Breadth and depth of vocabulary knowledge: Implications for acquisition and instruction. In M. G. McKeown & M. E. Curtis (Eds.), The nature of vocabulary acquisition (pp. 19–35). Hillsdale, NJ: Erlbaum.
Nagy, W. E., & Scott, J. A. (2000). Vocabulary processes. In M. L. Kamil, P. B. Mosenthal, P. D. Pearson, & R. Barr (Eds.), Handbook of reading research (Vol. III, pp. 269–284). Mahwah, NJ: Erlbaum.
National Institute of Child Health and Human Development. (2000). Report of the National Reading Panel. Teaching children to read: An evidence-based assessment of the scientific research literature on reading and its implications for reading instruction. Washington, DC: U.S. Government Printing Office.
Nist, S. L., & Olejnik, S. (1995). The role of context and dictionary definitions on varying levels of word knowledge. Reading Research Quarterly, 30(2), 172–193.
Pany, D., Jenkins, J. R., & Schreck, J. (1982). Vocabulary instruction: Effects on word knowledge and reading comprehension. Learning Disabilities Quarterly, 5, 202–215.
Patberg, J. P., Graves, M. F., & Stibbe, M. A. (1984). Effects of active teaching and practice in facilitating students' use of context clues. In J. A. Niles & L. A. Harris (Eds.), Changing perspectives on research in reading/language processing and instruction: Thirty-third yearbook of the National Reading Conference (Vol. 33, pp. 146–151). Rochester, NY: National Reading Conference.
Pearson, P. D., Hiebert, E. H., & Kamil, M. L. (2007). Vocabulary assessment: What we know and what we need to learn. Reading Research Quarterly, 42(2), 282–296.
Pressley, M., Levin, J. R., & McDaniel, M. A. (1987). Remembering versus inferring what a word means: Mnemonic and contextual approaches. In M. G. McKeown & M. E. Curtis (Eds.), The nature of vocabulary acquisition (pp. 107–127). Hillsdale, NJ: Erlbaum.
Pressley, M., Ross, K. A., Levin, J. R., & Ghatala, E. S. (1984). The role of strategy utility knowledge in children's strategy decision making. Journal of Experimental Child Psychology, 38, 491–504.
RAND Reading Study Group. (2002). Reading for understanding: Toward a research and development program in reading comprehension. Washington, DC: U.S. Department of Education.
Rubenstein, R. N., & Thompson, D. R. (2002). Understanding and supporting children's mathematical vocabulary development. Teaching Children Mathematics, 9(2), 107–112.
Ruddell, M. R., & Shearer, B. A. (2002). "Extraordinary," "tremendous," "exhilarating," "magnificent": Middle school at-risk students become avid word learners with the Vocabulary Self-Collection Strategy (VSS). Journal of Adolescent and Adult Literacy, 45, 352–363.
Schatz, E. K., & Baldwin, R. S. (1986). Context clues are unreliable predictors of word meanings. Reading Research Quarterly, 21, 439–453.
Schwartz, R. M., & Raphael, T. E. (1985). Instruction in the concept of definition as a basis for vocabulary acquisition. In J. A. Niles & R. V. Lalik (Eds.), Issues in literacy: A research perspective: Thirty-fourth yearbook of the National Reading Conference (Vol. 34, pp. 116–123). Rochester, NY: National Reading Conference.
Scott, J. A., & Nagy, W. E. (1997). Understanding the definitions of unfamiliar words. Reading Research Quarterly, 32, 184–200.
Shefelbine, J. (1990). Student factors related to variability in learning word meanings from context. Journal of Reading Behavior, 22, 71–97.
Singer, H. A. (1965). A developmental model of speed of reading in grades 3 through 6. Reading Research Quarterly, 1, 29–49.
Stahl, S. A., & Nagy, W.E. (2006). Teaching word meanings. Mahwah, NJ: Erlbaum.
Stanovich, K. E. (1986). Matthew effects in reading: Some consequences of individual differences in the acquisition of literacy. Reading Research Quarterly, 21, 360–407.
Sternberg, R. J. (1987). Most vocabulary is learned in context. In M. G. McKeown & M. E. Curtis (Eds.), The nature of vocabulary acquisition (pp. 89–105). Hillsdale, NJ: Erlbaum.
Swanborn, M. S. L., & de Glopper, K. (1999). Incidental word learning while reading: A meta-analysis. Review of Educational Research, 69, 261–285.
Wood, K. D., & Harmon, J. M. (2008). The "absolutes" of vocabulary knowledge: What research says about vocabulary development. Middle Ground, 12(1), 29–31.
Wysocki, K., & Jenkins, J. R. (1987). Deriving word meanings through morphological generalization. Reading Research Quarterly, 22, 66–81.
Beck, I. L., McKeown, M. G., & Kucan, L. (2002). Bringing words to life: Robust vocabulary instruction. New York: Guilford.
This book expounds on the authors' concept of "robust" vocabulary instruction, advocating a teaching and learning process that provides students with meaningful, multiple exposures to words, encouraging them to think and talk about words and their uses, share their understandings with others, and relate their vocabulary knowledge to overall comprehension. The chapters are framed by the premise that word knowledge falls along a continuum from little or no knowledge to a rich, deep, often metaphorical level of understanding. The authors give numerous illustrations, through dialogue and examples, of problems with prevalent practices, such as providing only dictionary definitions or rote memorization, and instead explain how to develop student-friendly explanations of word meanings, use deductive questioning, paraphrase information to aid comprehension, and assess students' conceptual word knowledge. They recommend an instructional sequence appropriate for all grade levels (primary, intermediate, middle, and high school) and for all disciplines.
Blachowicz, C. L. Z., Fisher, P. J. L., Ogle, D., & Watts-Taffe, S. (2006). Vocabulary: Questions from the classroom. Reading Research Quarterly, 41(4), 524–539.
This article begins with a historical perspective on vocabulary knowledge and instruction and then goes to the field to pose questions teachers have about vocabulary teaching and learning. The article concludes with recommendations for moving ahead as a community of researchers and teachers interested in this most important topic. One of the many significant findings of this article, as well as of the Pearson article on assessment described next, is that there has been little change in classroom practice or in commercial programs related to vocabulary development over the past several years. The authors back up the questions teachers have about vocabulary with evidence from the professional literature. The questions asked by teachers include issues such as determining which words to teach, approaches for assisting ELL students, bridging the early learning vocabulary gap, how technology can be used effectively, and what we know about how to assess students' vocabulary knowledge.
Pearson, P. D., Hiebert, E. H., & Kamil, M. L. (2007). Vocabulary assessment: What we know and what we need to learn. Reading Research Quarterly, 42, 282–296.
The major premise of this "theory and research into practice" contribution is that educators' vocabulary measures are inadequate and not sufficiently sensitive to illustrate the relationship between vocabulary knowledge and general measures of comprehension. The authors address three questions: What do our current and past vocabulary assessments measure? What could they measure? What research needs to be done to ensure that our methods of teaching, learning, and assessing vocabulary knowledge are valid? They conclude that much research is yet to be undertaken in the area of vocabulary assessment. Among the many research questions identified is the need to differentiate the type of vocabulary instruction required by various text genres, given that we typically present vocabulary instructional principles holistically. Since our assessment of vocabulary knowledge has not changed dramatically through the decades, the authors also suggest that computerized assessment of vocabulary knowledge be implemented to determine students' understanding of specific domains of interest (from common morphemes to terms for a particular discipline).
Allen, J. (1999). Words, words, words: Teaching vocabulary in grades 4–12. York, ME: Stenhouse.
Allen, J. (2007). Inside words: Tools for teaching academic vocabulary, grades 4–12. Portland, ME: Stenhouse.
Blachowicz, C. L. Z., & Fisher, P. J. (2006). Teaching vocabulary in all classrooms (3rd ed.). Upper Saddle River, NJ: Pearson.
Diamond, L., & Gutlohn, L. (2006). Vocabulary handbook. Berkeley, CA: Consortium on Reading Excellence.
Fry, E. B. (2004). The vocabulary teacher's book of lists. San Francisco: Jossey-Bass.
Graves, M. F. (2006). The vocabulary book: Learning and instruction. New York: Teachers College Press.
Tompkins, G. E., & Blanchfield, C. (2004). Teaching vocabulary: 50 creative ways, Grades 4–12. Upper Saddle River, NJ: Pearson.
Janis M. Harmon is a professor in the Department of Interdisciplinary Learning and Teaching at the University of Texas at San Antonio. Her scholarly interests include vocabulary teaching and learning and adolescent literacy. She is currently a co-editor of Voices from the Middle, the middle level journal of the National Council of Teachers of English.
Karen D. Wood is a professor and graduate reading program coordinator at the University of North Carolina at Charlotte. Her scholarly interests are primarily in adolescent literacy research and instruction. She has been the editor of the Research into Practice column for Middle School Journal and is the author of numerous books and articles on comprehension, struggling readers, content area literacy, and vocabulary instruction.
Harmon, J. M., & Wood, K. D. (2008). Research Summary: Vocabulary teaching and learning across disciplines. Retrieved [date] from http://www.nmsa.org/Research/ResearchSummaries/
Explore Evolution mangles the tiny fraction of biogeography covered in this chapter. It is largely dedicated to an historical debate about whether early creationists disputed the fixity of species, a topic falling outside biogeography, and indeed the biology curriculum. The biological examples used are all instances of adaptive radiations on islands, an interesting topic, but not representative of the whole field.
Biogeography is an active field, exploring how the geographic distribution of life is affected by history, climate, geology, and behavior. As predicted by evolution, the geographical arrangement of related species repeats across different groups. This comparison of multiple lineages shows how the shared history of an area produces similar patterns of common ancestry, and allows us to test hypotheses about evolution. The rapid ecological and morphological diversification of organisms on islands shows how quickly evolution can produce novelty. Explore Evolution ignores or dances around these points.
p. 75: "Galápagos Islands, seen from Earth orbit"
Technically speaking, this image shows the Yucatán peninsula on Mexico's eastern shore. The Galápagos are roughly 1500 miles south, in a different ocean.
p. 76: "Darwin was using [biogeography] to challenge the fixity of species."
Biogeography yields clear evidence for evolution not only of new species, but of new genera, families, etc., and examples of rapid evolution of morphological novelty. This is exactly opposite to the erroneous conclusion that Explore Evolution presents.
p. 76: "The evidence is just as consistent with … the orchard picture … as with the monophyletic view.
The same techniques that allow us to reconstruct evolutionary trees for a single lineage apply equally well to the entire tree of life. Biogeographic studies of a small family of insects may allow us to look deep into the evolutionary history of other branches of the tree of life. The consistency of these trees cannot be explained without reference to common descent. The creationist "orchard" is scientifically vacuous.
p. 78: "a mechanism … that can transform one type of animal into a fundamentally different type of animal.
The adaptive radiations of honeycreepers in Hawaii (and many other groups) represent a range of variation that meets any fair definition of "fundamentally different." Explore Evolution never defines this term, and its use in a definition of "macroevolution" is scientifically inaccurate.
p. 78: "Marsupials are not restricted to … Australia and South America … the opossum live[s] in the northern hemisphere … the oldest marsupial fossil [was found in] China."
The best evidence is that marsupials originated in Asia, migrated across a land bridge to the Americas, and across Antarctica to Australia. The Asian and North American marsupials went extinct, while the Australian and South American populations speciated and radiated in behavior, ecology and morphology. Extinction, migration and diversification are important parts of biogeography and evolution, and Explore Evolution does students a disservice by ignoring or misrepresenting these processes. The total omission of plate tectonics from this discussion is inexcusable.
p. 79: "Scientists … disagree about how to interpret the … evidence we have examined."
Scientific inquiry takes disagreement as a basis for new research, not as a chance to declare that "there may not be much further debate" and "the issue is likely to remain exactly where it is." This is not inquiry; it is surrender. It not only misleads students about the actual state of scientific knowledge, it misinforms them about the way science works.
Fixity of species and common descent: Biogeography allows powerful tests of particular hypotheses about evolutionary histories. Explore Evolution wrongly claims that biogeography is only relevant to tests of species fixity, but offers no testable claims to justify arguing that the biogeographic evidence is equally consistent with common ancestry or with creationist orchards.
Evolution on Islands: Studying island biogeography is important, but biogeography encompasses much more than Explore Evolution offers readers. What the book covers omits crucial information about the motion of the continents over time, falsely claims that island biogeography does not show novel features evolving on islands, and introduces confusing and erroneous information about the few examples examined in detail.
Studying the biogeographical links between different parts of the world can deepen our understanding of the evolutionary relationships between different populations within a species, between different species, and among higher taxonomic groups. Because different groups diversify at different rates, the evolutionary history of a single species revealed by biogeography might help clarify the relationship between entire families of some different group. These testable predictions about biogeography are powerful tools for scientists.
By the account in Explore Evolution, this predictive power is nonexistent, and biogeography only allows us to reject the notion of species fixity, nothing more. Instead of examining the many ways that biogeographic studies can inform our understanding of evolution, Explore Evolution simply dismisses the field as irrelevant. In particular, the authors claim that biogeographic evidence cannot distinguish between common ancestry and creationist "orchard" models. Such "orchards" have no scientific basis. Despite the book's claim to be "inquiry-based," Explore Evolution never clarifies how readers might themselves investigate such an orchard.
As John Wilkins explains, "The idea that species were universally thought to be fixed prior to Darwin is simply wrong — many creationist thinkers of the classical period through to the 19th century thought that species could change." Linnaeus, the father of modern taxonomy, began his career committed to the fixity of species, but began accepting the evidence against such fixity approximately a century before Darwin's ideas were published. Nor was fixism widely accepted within the scientific community by the time Darwin wrote: "No sooner had natural history established a tradition of fixism of species [in the 1700s] than it was immediately under challenge, for example by Pierre Maupertuis in Vénus Physique in 1745," according to Species: A History of the Idea, by John Wilkins (p. 104). It was not Darwin's intention simply to refute the discredited notion of species fixity, but to describe the way that life on earth is related. Biogeography helps us see how that process works over longer time periods.
Despite that history, Explore Evolution claims:
Darwin was using this evidence [biogeography] to challenge a theory that was popular in his day but is almost unheard of now: the fixity of species. The fixity of species was the idea that each species is fixed in its physical form which it doesn't change (at least not enough to constitute a new species) and placed in its current habitat from which it doesn't move (at least not beyond significant geographic barriers such as mountain ranges or oceans). Nowadays, the idea of the fixity of species isn't even a blip on the radar.
Explore Evolution, p. 76
Regardless of this rejection of fixism, Explore Evolution cites unnamed "critics" who assert that biogeography does not demonstrate "macroevolution," idiosyncratically defined as "the origin of new large-scale features such as organs or body plans." This, like the later arguments about the limits on the evolution of finches (see discussion of chapter 8), is an argument for fixity of something. The authors are merely following the lead of creationists like George McCready Price in the 1930s, who replaced the notion of species fixity with fixity of Biblical "kinds."
In fact, biogeography is a powerful illustration of macroevolution (as it is conventionally defined: "Evolution on the grand scale. The term refers to events above the species level. The origin of a new higher group, such as vertebrates, would be an example of a macroevolutionary event." from Ridley's Evolution, 2nd ed., p. 669). Adaptive radiations of flies, finches or marsupials all demonstrate how rapidly a small population can speciate and diversify, producing the sort of diversity normally associated with "higher taxonomic groups" in geologically brief periods of time. The few genera of Darwin's finches occupy as many ecological niches as several families of birds; African cichlids exhibit morphologies and ecologies greater than can be found in the many orders of fish found on coral reefs, and all from an ancestor a few million years ago.
Deeper evolutionary insights can come from comparisons of multiple groups with a shared evolutionary history. For instance, rodent, bat, and insect populations in the Philippines all show similar evolutionary connections between certain island populations. One recent study summarized:
The Philippine archipelago is an exceptional theatre in which to investigate the roles of past history and current ecology in structuring geographic variation. The 7000 islands originated as a set of de novo oceanic islands … of varying ages and geological histories… It is an area of high biotic diversity … [A]t least 111 of the 170 native species of terrestrial mammals (64%) are endemic, it is still more striking that 24 of 84 genera (29%) are endemic, implying much in situ diversification, and phylogenetic studies suggest that several large endemic clades are present among fruit bats and murid rodents. Each oceanic island that has remained continuously isolated from its neighbouring islands is a unique centre of mammalian endemism, with 25–80% of the non-volant mammals [mammals other than bats] endemic, even on islands of only a few hundred square kilometres. Similar patterns are evident among butterflies (Holloway, 2003) and trichopteran insects.
Lawrence Heaney, Joseph Walsh, Jr., and A. Townsend Peterson (2005) "The roles of geological history and colonization abilities in genetic differentiation between mammalian populations in the Philippine archipelago," Journal of Biogeography 32(2):229-247
The similarity of these results allows researchers to make predictions about other groups, and where in one case a researcher might look at the biogeography of a single species, in other cases the same pattern may be seen in the biogeography of an entire taxonomic family. Thus, if biogeography undermines the fixity of species, it also undermines the fixity of higher taxonomic groups (as often advocated by proponents of a creationist "orchard") by showing that all taxonomic groupings have responded to the same evolutionary pressures.
Explore Evolution asserts "the evidence [from biogeography] is completely consistent with other views of the history of life, in which small-scale changes in form and features do occur within separate but disconnected groups of organisms" (p. 79). In order for a claim to be good science, it is not enough that it be "consistent" with the evidence, it must actually make testable predictions. In the terms stated, this "orchard" model offers no testable predictions, as it is infinitely malleable. It can be adjusted to fit any evidence, but Explore Evolution never offers enough details on the "orchard" to allow any predictions. Given the book's claim to be "inquiry-based," this is at best a sad oversight.
By contrast, the consistency of multiple lines of biogeographic evidence is exactly what would be predicted if all life were evolving in response to the same events throughout earth's history. As discussed above, in standard textbooks, and in the extensive (and uncited by Explore Evolution) scientific literature on biogeography, biogeography generates powerful testable predictions, predictions generated by comparing the biogeography of one group to that of another with shared geography. Biogeographers can predict where new species will be found, and can predict the diversity of communities in areas never before investigated because of the power of biogeography and evolution.
Such tests are impossible for the neo-creationist "orchard" model hinted at by Explore Evolution. The book simply gives too little detail of such a model to allow readers to make any prediction. Since the neo-creationist advocates of this model state forthrightly that they believe that God created each tree in the orchard of life, and since God can do anything, this is a model which is consistent with everything, and predicts nothing. The authors of Explore Evolution, perhaps in order to hide the book's creationist heritage, chose not to explain where they think the many trees of life come from (who planted the "orchard"?), how many trees there are, or why the trees are "separate but disconnected" (who prunes them?). This lack of specificity makes the model potentially consistent with anything, but only because they have chosen to specify nothing.
The "Further Debate" section of this chapter (pp. 79-80) repeats one of the great failings of this text. It highlights two sets of views, presents a weak explanation of one side, anonymous critics on the other, and then simply abandons the students to decide for themselves what to think. The authors might claim that this is consistent with an inquiry-based approach, but as discussed in the critique of chapter 1, this is false. An inquiry-based approach would present a real source of scientific uncertainty (and not a lightly repackaged creationist attack on science) and would provide the students with the tools to investigate the subject further. An inquiry-based textbook would not simply declare "scientists sometimes disagree about how to interpret the various classes of evidence we have examined" (p. 79). That is not inquiry, that is surrender. Scientific inquiry takes disagreement as a starting point for further research, not as a chance to declare that "there may not be much further debate" and "the issue is likely to remain exactly where it is."
Not only does this chapter (and the book as a whole) mislead students about the actual state of scientific knowledge, it misinforms them about the way science works.
Explore Evolution claims that some people who doubt common ancestry accept fixity of species, so biogeography doesn't prove anything to them. One such 19th century scientist:
accepted that migration and adaptation would alter the features of species. Nevertheless, he doubted that species could undergo unlimited change, and did not accept that all species shared a common ancestor. Many modern critics of neo-Darwinism share this view.
Explore Evolution, p. 78
Cuvier shares little with the modern creationists he is being compared with here. Cuvier wrote in the early 19th century, decades before Darwin and Wallace, and without the understanding of genetics and the fossil record which Darwin had, let alone which modern scientists enjoy. He can be excused for thinking species appeared from some unknown source and remained fixed in form thereafter. If this is the most recent genuine scientist Explore Evolution can cite who holds this view, it is hardly a ringing endorsement. Cuvier's claims had their day, but research in his day and since has falsified his views.
This small aside at the chapter's end takes back the concession to critics of creationism offered at the chapter's beginning. Earlier, the authors acknowledge that species fixity "isn't even a blip on the radar" today. But again, biogeography is not simply an exercise in disputing species fixity: it demonstrates that taxonomic ranks above the species level are not fixed, and shows the process by which species diversify, and by which the branching process of evolution has produced those higher taxonomic levels.
Nor is biogeography simply concerned with speciation and adaptive radiation (a process the authors denigrate in the following chapter). Biogeography shows a great deal more than that species can change. Even within the limited subset of biogeography that Explore Evolution chooses to address (adaptive radiation), there is clear evidence for evolution of new species, genera, families, etc., and illustrations of the power of evolution to produce morphological novelty with great speed, given the right conditions. This is exactly the opposite conclusion from the one Explore Evolution draws. That does not mean that a debate is underway, only that the selective use of evidence can produce misleading results. When this book declares it impossible for evolution to accomplish a task, and then ignores instances where evolution does explain how that impossible thing happened, it calls the book's credibility, not evolution's, into question.
Biogeography, contrary to what readers of Explore Evolution might think, encompasses more than just adaptive radiation on islands. Studying the biogeographic effects of rivers and mountain ranges also informs our understanding of evolution. Our understanding of relationships between distantly related groups is often informed by comparing the distributions of modern species and their fossil ancestors with our understanding of continental drift. Such comparisons allow scientists to predict the whereabouts of important fossils and to trace back the distant shared ancestry of modern groups.
Explore Evolution never discusses plate tectonics and its impact on biogeographic study, and in some cases erroneously dismisses common ancestry based on the current distribution of continents. For the most part, the book focuses on the rapid diversification seen in many isolated island populations, but wrongly claims that evolution in these adaptive radiations has produced no novelties and represents only a loss of genetic information. In fact, studies on islands show the evolution of novel anatomical structures and complex adaptations to new ecological niches.
The vision of biogeography in Explore Evolution is shockingly narrow. The only examples of biogeography discussed are the Galápagos Islands, the Hawaiian Islands, and the island continents of Australia and ancient South America. The discussion of marsupial biogeography across South America and Australia bizarrely omits any discussion of plate tectonics, a central theme in any discussion of biogeography over long time scales. No discussion at all is offered of many crucial biogeographic concepts that bear on evolutionary biology.
Biogeography generally focuses on finding repeated geographical patterns across multiple taxonomic groups. For instance, in the 1850s, Alfred Russel Wallace (who independently discovered natural selection) found in his travels through the East Indies that there was a sharp line between the species found in Southeast Asia and the islands as far east as Borneo and Bali, while islands only a few miles away had communities of species with closer affinities to the Australian fauna. The same pattern could be found in a range of groups, including mammals and birds, indicating that some common process acted to allow diversification of groups within regions. He summarized the significance of this result by stating "Every species has come into existence coincident both in space and time with a closely allied species" (Wallace, 1855, "On the Law Which Has Regulated the Introduction of New Species," Annals and Magazine of Natural History 16:184-196.)
By comparing macroevolutionary patterns between different groups, we find that the same patterns repeat. This strongly suggests that the same forces drove the diversification of those different groups. This also makes it possible to compare rates of evolution of those groups. For instance, a recent study of the global biogeography of mites allows unique insights into the processes driving diversity within and across various groups of mammals.
The authors of the mite research explain that "To date, few conclusive empirical studies of the worldwide historical biogeography of terrestrial organisms are available, because the members of clades [groups containing all descendants of a single common ancestor] with global distributions tend to present dispersal abilities that obscure historical biogeographical patterns. To find a group of land organisms with an ancient global distribution, and therefore suitable for a study of historical biogeography on a global scale, one needs to look among the earliest colonizers of terrestrial environments" (Boyer, et al. 2007, "Biogeography of the world: a case study from cyphophthalmid Opiliones, a globally distributed group of arachnids," Journal of Biogeography, OnlineEarly Articles).
These data establish a context within which other groups' diversity can be examined. While the mites those researchers were studying have a long history, and can be traced back nearly to the beginning of life on land, other groups evolved much later, and exist only in the subset of the world that was connected at the time they evolved. Not only is biogeography evidence against the fixity of species, it is evidence against the fixity of larger taxonomic categories, since groups of much different taxonomic rank follow the same biogeographic patterns. Marsupial biogeography, discussed below, fits well with part of the pattern seen in the mites. The biogeography of extant mammals matches a large portion of the mite data, and the inclusion of fossilized mammalian ancestors results in a biogeography that matches yet more of the mite data. This biogeographic pattern cannot be explained without reference to common descent.
Explore Evolution does not even mention major areas of biogeographic research such as gradients in species diversity found as one travels from the poles to the equator, or from sea level to the tops of mountains. Such studies are central to our understanding of the origins not just of individual species, but the evolutionary processes which generate species diversity, and the book's silence on these topics does students a disservice.
Explore Evolution states:
Critics note that the examples of mockingbirds in the Galápagos and fruit flies in the Hawaiian Islands show only small scale variations in existing traits. … Since critics of the argument from biogeography see no evidence of large-scale change, or of a mechanism that can produce the new genes needed to cause such change, they doubt that the biogeographical distribution of animals supports Universal Common Descent.
Explore Evolution, p. 77
The issue here of Universal Common Descent is a bit of a red herring. Biogeography is a powerful way to illustrate the power of evolutionary processes, but since all parts of the planet have been connected at one point or another, the earliest biogeographic evidence tends to be obscured by subsequent extinction and evolutionary change. Biogeography does reveal the speed with which evolution can operate, and by comparing the biogeographic histories of different groups, we can better understand the timing and processes by which various groups evolved. As with the research on mite biogeography discussed earlier, researchers can test predictions of common ancestry by examining overlapping biogeographic patterns, and research on island adaptive radiations provides powerful examples of evolution's capacity to generate novelties. Explore Evolution claims that these radiations do not demonstrate a mechanism which can "transform one type of animal into a fundamentally different type of animal" (p. 77), but never offers a definition of "fundamentally different." By any reasonable standard, though, island radiations do indeed show exactly such novelty.
Most of the examples that Explore Evolution discusses are poor illustrations of the breadth of biogeography, since they are really illustrations of adaptive radiation (and not the most striking examples of that phenomenon, either). For instance, rather than addressing the classic case of the adaptive radiation of Darwin's finches on the Galápagos, Explore Evolution focuses on the less diverse Galápagos mockingbirds. Among the 14 species of finches which evolved from an ancestral population blown to the islands several million years ago, the range in sizes is vast, and the ecologies range from vegetarianism to carnivory, and from tool-using insect hunters to species whose bills crush seeds or probe into tiny holes to draw out insects. The range of ecologies generated from a small starting population in a few million years is tremendous. Similar ecological diversity evolved among a population of finches blown onto the Hawaiian Islands between 5 and 10 million years ago. Their descendants, known as honeycreepers, show a range of variation in morphology and ecology which falsifies any claim that evolution on islands does not produce fundamental differences. Some authors consider that group to represent a separate family of birds which evolved in their short time in Hawaii; others regard it as a subfamily within the finches. All agree that their diversity is stunning.
Similarly, in three lakes of Africa's Rift Valley, members of a family of fish known as cichlids have evolved a range of ecologies and sizes unmatched anywhere else. Those lakes are known to have formed no more than 1.5-2 million years ago, and the hundreds of species of fish in those lakes occupy ecological niches, and exhibit biological forms, unheard of elsewhere. (One species specializes in eating the eyes of other fish.) The range is greater than what you might find at a coral reef, and all from a small number of evolutionary starting points.
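To see what "rapid" means here, a minimal back-of-the-envelope model is instructive. The numbers below are illustrative round figures, not data from any particular study, and the model assumes a single founding lineage and steady exponential branching:

import math

# Minimal sketch: how fast must lineages split to yield hundreds of
# species in about 2 million years? Round numbers are illustrative only.
elapsed_years = 2_000_000     # assumed age of the lake and its species flock
species_today = 500           # assumed count standing in for "hundreds of species"

# Under steady exponential branching from one founder, N = 2**(t / d),
# so the implied lineage doubling time d is:
doubling_time = elapsed_years / math.log2(species_today)
print(f"Implied lineage doubling time: about {doubling_time:,.0f} years")
# Roughly 220,000 years per doubling: unhurried on a geological timescale.

The point of the sketch is that even an apparently explosive radiation requires only that each lineage split, on average, once every couple of hundred thousand years; ordinary evolutionary processes sustained over geological time are more than sufficient.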
In Hawaii, there are at least a thousand species of flies (many still waiting to be described) in the genus Drosophila, and they all share a common ancestor that separated from the mainland Drosophila tens of millions of years ago. Islands known to be less than 500,000 years old harbor species found nowhere else, which must therefore have evolved in less than half a million years. Those flies represent roughly one third of the members of the genus in the world, and the species found in Hawaii exhibit evolutionary novelties: anatomical traits and behaviors seen nowhere else. This diverse group is not the only adaptive radiation on the islands.
In most of the world, damselfly larvae are aquatic hunters of other invertebrates, breathing through gills on their tails. In Hawaii, some have evolved to live on land, hunting through the leaf litter. In the course of their evolution from aquatic to terrestrial habitats, their gills evolved into air-breathing structures, and representatives of various intermediate stages in this transformation can be found on the islands. Others have adapted to live near the water that collects at the bottom of leaves, while actively avoiding being soaked in that water, which requires a different set of adaptations. Again, these are structures fundamentally different from anything found in other damselflies, and these adaptations have occurred in remarkably short amounts of time.
Though Explore Evolution confines itself to discussing animals, adaptive radiations can also be found among the plants of Hawaii. The silverswords, three genera in the sunflower family found in Hawaii, have evolved a range of structures, ranging from short plants with spiky leaves to trees, shrubs, vines and low ground-cover. As Futuyma explains, "They vary greatly in the form and anatomy of the leaves and in the size, color and structure of the flowers. In many features their range of variation exceeds that among families of plants, yet almost all of them can be crossed, and the hybrids are often fully fertile. They all appear to have been derived from a single ancestor that colonized the Hawaiian Islands from western America" (Futuyma, 1997, Evolution, Sinauer Associates, Sunderland, MA, p. 118).
The appearance of such diversity from a known starting population demonstrates the incoherence of Explore Evolution's "criticisms". Adaptive radiations have often generated variation exceeding that seen within whole families in a geological flash, yet the authors call these "only small-scale variations in existing traits." By making such a sweeping and imbalanced generalization, Explore Evolution misinforms students about the actual evidence at hand. By making the claims without explaining the basis for them, Explore Evolution makes it impossible for students to explore these ideas in any additional depth, once again hindering inquiry, rather than encouraging and supporting true scientific investigation. If such new structures can be generated in the space of a few million years, it is not hard to envision the same processes producing the diversity of all life in the space of billions of years.
The marsupial faunas of South America and Australia are at least as ecologically diverse as placental mammals worldwide (with some exceptions, see the discussion of developmental constraints in our response to chapter 8). The convergent evolution of Australian mammals and placentals found in comparable habitats elsewhere shows the power of evolution to adapt species to similar conditions. That they have similar adaptations to those found in placentals, but achieve such adaptations by different means, indicates how flexible evolutionary processes can be. Because of the ecological diversity of South American and Australian marsupials, and the biogeographic history which made such diversity possible, marsupials could serve as a useful exploration of the interplay of evolution and biogeography.
Unfortunately, the discussion of marsupial biogeography in Explore Evolution is laughably bad: too brief to educate, and so inaccurate as to be utterly useless. It begins with a mischaracterization of the evolution of marsupials:
The first mammals with the marsupial's distinctive mode of reproduction arose on the ancient southern super-continent of Gondwanaland. Later, after this great land mass broke up into separate continents, the ancestors of marsupials were separated from other mammals and evolved in isolation on the new continents of Australia and South America.
Explore Evolution, p. 75.
This is a straw man. Textbooks and researchers in the field do not claim that marsupials originated in Australia or South America. The best evidence is that marsupials originated in Asia, migrated to North America via a land bridge, and coexisted with placental mammals in the northern hemisphere for some time. Marsupials colonized South America first, and from there moved on to Antarctica and then Australia. The marsupial populations in Asia and North America went extinct, possibly as a result of competition with placental mammals among other factors, while the populations on the southern continents remained in those safe havens.
Explore Evolution's perfunctory and inaccurate coverage of this basic biogeography does students a disservice. Students cannot be assumed to know what Gondwanaland was, nor to appreciate that the supercontinent had already largely broken up by the time marsupials were crossing land bridges between the continents. If students do not have the background to appreciate the interplay of diversification and continental drift, the book's explanation will not help.
Having created the straw man, Explore Evolution proceeds to knock it down:
Critics of the marsupial argument insist that it, too, fails to establish Universal Common Descent or even the descent of all marsupials. At best, it shows that various groups of marsupials first originated in the same general area in the Southern Hemisphere and were then distributed more widely as the Southern continents separated from one another. But even this is questionable, some critics say. They point out that marsupials are not restricted to the southern continents of Australia and South America. Marsupials such as the opossum live in the northern hemisphere. And, in a recent development, paleontologists have unearthed the oldest marsupial fossil of all … in China.
Explore Evolution, pp. 77-78.
Marsupials exist in North America because of migration, a process described in this very chapter, but ignored now when the authors find it inconvenient. Around 3 million years ago, a combination of continental drift and the rising Andes brought South America into contact with North America. For the first time in 50 million years, North American species and South American species came into contact. Some marsupials (like the opossum) spread north. Some placentals from North America spread south. For reasons that scientists continue to investigate and discuss, the ability of North American placentals to persist and diversify in South America was much greater than the ability of South American marsupials to diversify in the north, or to outcompete the northern invaders (Stehli and Webb, eds. 1985. The Great American Biotic Interchange. Plenum Press: New York). Today, half of South American mammals are descended from North American ancestors. South American descendants represent no more than 20 percent of modern North American species, and most of those are in Central America, near the point of initial contact (Marshall, et al. 1982. "Mammalian evolution and the Great American Interchange," Science 215:1351-1357). This helps explain the confusion of the apparently biogeographically illiterate authors of Explore Evolution about why "marsupials such as the opossum live in the northern hemisphere." It is because of migration, a process they described as uncontroversial only a page earlier. It is not a mystery, and it is unclear why the authors would regard it as such.
The claim that North American opossums are evidence against the common ancestry of marsupials bears striking similarities to a critique of biogeography by young earth creationist Kurt Wise:
There are very few examples of macrobiogeographical evidences for macroevolution, and none of them is very strong. The best-known claim is the concentration of marsupials in Australia. But there are several reasons that marsupials in Australia are actually a poor example. First, all marsupials are not in Australia. The Virginia opossum of North America, for example, is a marsupial. It is thought to have come from South America, not Australia. Thus not all similar organisms are known from every continent. Third, marsupials are the oldest fossil mammals know from Africa, Antarctica and Australia—in that order. The fossil record seems to show a migration of marsupials from somewhere around the intersection of the Eurasian and African continents and then a survival in only the continents farthest from their point of origin (South America and Australia).
Wise, Kurt (1994) "The Origins of Life's Major Groups," ch. 6 in J. P. Moreland (ed.) The Creation Hypothesis: Scientific Evidence for an Intelligent Designer, Intervarsity Press: Downers Grove, IL, p. 223.
Wise's confusion over the status of the Virginia opossum perhaps reflects his confusion of the New World order Didelphimorphia — commonly called opossums — with the Australian order Diprotodontia — some of which are commonly called possums (without the first "o"). The groups are morphologically and molecularly distinct, with well-established paleontological histories. If the opossum truly had roots in Australia, it would indeed be a biogeographic conundrum. In fact, the only close link between opossums and Australia is Wise's typo.
Similar misunderstandings plague the discussion of the marsupial fossil record in Explore Evolution and its creationist source material. It is not surprising, the breathless tone of Explore Evolution notwithstanding, that "paleontologists have unearthed the oldest marsupial fossil of all … in China." The authors wonder, "If the ancestors of the marsupials originated in the Southern Hemisphere, why has the oldest known member of the group been discovered in the Northern Hemisphere?" The answer is simple: paleontologists do not claim that marsupials originated in the Southern Hemisphere, only that they migrated there.
Marsupials and placental mammals separated roughly 125 million years ago, according to the most recent fossil data. Several lines of evidence indicate that the marsupials originated in Asia and spread to North America over a land bridge. The ages and characteristics of fossils found in Europe, South America, Africa, Australia and Antarctica suggest that marsupials spread to Europe and South America from North America. South America, Africa, Australia and Antarctica still had linkages at that time, allowing species on the southern continents to spread easily.
Despite the feigned confusion of Explore Evolution's authors, the fossil record gives a very clear picture of the biogeographic history of marsupials, though there are many questions scientists continue to investigate. The earliest marsupial fossils (and the earliest placental fossils) are found in Asia. Fossilized marsupials are found in North America in rocks that are only a few million years younger than the Chinese fossils. During that period in geological history, plate tectonics had brought North America and Asia close enough together to forge a land bridge, allowing many species to migrate between those continents.
At that time, the southern continents were all connected, North America and Europe were still very close, and South America had not drifted far from North America, allowing dispersal during periods when ocean levels dropped. Marsupials related to North American species colonized Europe briefly, through a northern land bridge, and others colonized South America. Africa was in the process of separating from the supercontinent which also included Antarctica, Australia and South America, so the presence of marsupial fossils in Africa gives a good measure of how quickly they entered South America and dispersed across the supercontinent Gondwana. Fossilized marsupials in Antarctica also allow us to track their dispersal to Australia. This pattern is consistent with the fossil record of placental mammals, and with other lines of evidence (John P. Hunter and Christine M. Janis. 2006. "'Garden of Eden' or 'Fool’s Paradise'? Phylogeny, dispersal, and the southern continent hypothesis of placental mammal origins," Paleobiology, 32(3):339–344).
As the southern continents drifted apart, the marsupial fauna of each isolated continent followed its own path. As Antarctica drifted south toward its current polar position, it became colder and colder, ultimately driving its resident marsupials and palm trees extinct. South American and Australian marsupials produced diverse radiations that filled many of the same ecological niches occupied by placental mammals elsewhere. In the northern continents, which were periodically linked by land bridges, biotic interchanges resulted in periods of intense competition, which seem to have driven the native marsupials extinct.
When South America drifted north again and connected with North America around 3 million years ago, the Great American Biotic Interchange had the same devastating effect that biotic interchanges had on other marsupial faunas.
The same pattern of diversification and migration seen in marsupials can also be seen in other groups of plants and animals. That consistency between biogeographic and evolutionary patterns provides important evidence about the continuity of the processes driving the evolution and diversification of all life. This continuity is what would be expected of a pattern of common descent. The creationist orchard scheme gives us no reason to predict this pattern.
True biological novelty can be found in many of the adaptive radiations that Explore Evolution describes. Despite this, the authors insist "There are many examples of isolated islands that are home to flightless birds and insects that have clearly lost some of the genetic information necessary to produce the traits possessed by their ancestors. Large-scale macro-evolutionary change requires the addition of new genetic information, not the loss of genetic information" (p. 77). No evidence is offered of research on the genetics of flightlessness, and it is far from obvious that flightlessness must represent a loss of information. It is not generally true that loss of a structure involves loss of genes; eyeless cave fish lose their eyes because certain genes are over-expressed. Furthermore, there is no basis for their claim that macroevolution requires the addition of new genetic information, nor is new genetic information beyond the capacity of normal evolutionary processes. For a fuller discussion of the problems with Explore Evolution's treatment of "information," please see chapter 8.
We saw previously that the variation within Hawaiian Drosophila and other adaptive radiations is far greater than the variation found within some much broader taxonomic groups. It is difficult to say what genetic information the authors of Explore Evolution believe was lost. For instance, in the example of damselflies above, here is how one researcher put it in 1970:
"This change from aquatic to semi-aquatic to arboreal to terrestrial habit has demanded considerable morphological and physiological change in the gills, and there is a beautiful transition series displayed by the gills of the various species from the long, thin, delicate, highly tracheated gills of the aquatic forms to the short, thick, opaque, densely hairy gills of the terrestrial species. There must also be changes in the function of the spiracles." (Elwood C. Zimmerman (1970) "Adaptive Radiation in Hawaii with Special Reference to Insects," Biotropica, 2(1):32-38.)
"It would appear," he concludes, "that it is from such extraordinary adaptive radiation that new major taxa might be produced, and the phenomenon is here demonstrated most lucidly before our eyes."
While the full set of genetic changes underlying this evolution is not fully known, there is no reason to believe it required any new genes, or that any existing genes were lost along the way. Like many cases of the evolution of new structures (for instance, those discussed by Armin Moczek. 2008. "On the origins of novelty in development and evolution," Bioessays, 30(5):432-447), the evolutionary process most likely operated by rearranging and reusing existing genes and regulatory systems, making changes to the places where genes were expressed, or the times when they turned on or off. Such subtle changes can produce dramatic effects on the final form of an organism.
Understanding the precise genetic basis for those sorts of changes has been an important area of research over the last few decades, as new technology made it possible to examine the ways that genes control development. Explore Evolution presents such open areas of research as a reason to abandon all hope of resolving the underlying issues, but this is not how science works. Scientists are actively investigating the ways in which evolution actually works, and students who hope to participate in the active research under way as researchers, doctors, or patients need to understand the process by which scientists produce and evaluate new knowledge. A competent textbook would use these areas of active research to invite true exploration of novel ideas. The fact that Explore Evolution despairs of finding explanations for unresolved issues in science is a damning indictment of the book's inadequacies.
We all know the income tax can be complicated, burdensome, even infuriating. But how does it square with the principles of the Constitution?
The founders of the United States were profound students of politics and history. They saw the protection of property rights, in the words of the most famous of the Federalist papers, as "the first object of government." Yet history had shown all known democracies to be "incompatible with personal security or the rights of property." Because the poor everywhere outnumber the rich, political philosophy had held that a government based on majority rule was likely to lead to the misappropriation of the property of the few rich by the many poor.
The founders therefore included numerous provisions in the Constitution and Bill of Rights to protect the property rights of citizens. The Constitution also empowered the federal government to impose indirect taxes on commerce, such as tariffs, duties, and excise taxes, so long as such taxes were imposed uniformly throughout the United States. Direct taxes, such as "capitation" or "head" taxes, had to be apportioned among the states according to their population.
The Constitution seems to have discouraged the adoption of any federal income tax until the Civil War. In 1861, Congress for the first time adopted a federal income tax to finance the war, but allowed it to lapse in 1872.
In 1894, Congress again adopted an income tax--a two percent flat tax on incomes over $4,000. The following year, however, the Supreme Court held the tax to be unconstitutional because it was an unapportioned direct tax.
This decision holding the federal income tax to be unconstitutional is one of the few Supreme Court cases ever to have been overturned by constitutional amendment. In 1913, the adoption of the sixteenth amendment removed all limitations on the imposition of federal income taxes.
Congress immediately enacted a federal income tax with low rates that affected only a few people with relatively high incomes. Over time, however, the federal income tax system has given rise to a situation much like that described by the Federalist, in which a majority appropriates the assets of the minority.
Every year for roughly the past 25 years, the Internal Revenue Service has compiled data regarding the share of all income taxes paid by tax filers from the highest to the lowest income earning families and individuals. Despite the perennial rhetoric of class warfare that accompanies every political discussion of cutting income taxes, the IRS data show that the highest income earners pay a strikingly disproportionate share of all income taxes. The data also show this state of affairs has worsened over the past 20 years.
The most recent available tax data cover the year 1999. That year, the top one percent of taxpaying families and individuals earned over $293,415; the top 10 percent earned over $87,682; the top 25 percent earned over $52,965; and the top 50 percent earned over $26,415.
Any one or all of these income categories are variously referred to as "the rich" in political debates regarding income tax levels. It is nevertheless unlikely that many families who work hard (frequently at multiple jobs) to earn these incomes think of themselves as rich.
The data for 1999 show:
The top 1 percent of taxpayers earned 19.5 percent of all adjusted gross income, but paid 36.2 percent of all federal personal income taxes.
The top 10 percent of taxpayers earned 44.9 percent of all adjusted gross income, but paid 66.5 percent of income taxes.
On the other hand, the bottom 50 percent of taxpayers earned 13.2 percent of all adjusted gross income, but paid only 4 percent of income taxes.
In other words, the top 10 percent of tax filers were responsible for two of every three dollars paid in income taxes in 1999, while the bottom half of all those who file tax returns paid essentially no income taxes.
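One way to make the disproportion concrete (an illustrative calculation using the figures above, not part of the original data release) is the ratio of a group's share of taxes to its share of income:

\[
\text{ratio} = \frac{\text{share of income taxes paid}}{\text{share of adjusted gross income earned}}, \qquad \frac{36.2}{19.5} \approx 1.86 \ \text{(top 1 percent)}, \qquad \frac{4.0}{13.2} \approx 0.30 \ \text{(bottom 50 percent)}.
\]

A ratio above 1 means a group pays a larger share of taxes than its share of income; a ratio below 1 means the opposite.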
For the bottom half of tax filers who receive hundreds of billions of dollars in government benefits but pay essentially no income taxes, political debate about taxation has little personal meaning except insofar as they may aspire to earn higher incomes in the future.
Many Americans would consider the present system of federal income taxes unfair if they knew the facts. These facts, however, are almost never reported in the mainstream media. The so-called progressivity of the federal income tax system is both fundamentally unfair and inconsistent with the principle of equal rights that underlies the Constitution.
The Panic of 1837 was a financial crisis that had damaging effects on the Ohio and national economies.
Following the War of 1812, the United States government recognized the need for a national bank to regulate the printing of currency and the issuance of government bonds. Many in the U.S. public opposed the Bank of the United States, believing that it limited their ability to make land purchases and to pay off other debts. Andrew Jackson had opposed banks since the 1790s, when he lost a sizable amount of money that he had invested in a bank.
In 1832, Nicholas Biddle, the head of the Bank of the United States, asked to have the institution re-chartered. In 1816, the United States government had authorized the bank to operate for twenty years. Biddle, at the urging of Henry Clay, applied for re-chartering four years early. Congress agreed with the necessity for a national bank, but President Jackson vetoed the bill. His action, in essence, prevented the continued existence of the Bank of the United States after 1836.
Jackson was not happy about waiting until 1836 for the Bank of the United States to end. In 1832, Jackson ordered the withdrawal of federal government funds, approximately ten million dollars, from the Bank of the United States. The president deposited these funds in state banks and privately-owned financial institutions known as "pet banks." Ohio had nine of these banks. Biddle tried to keep the national bank operational by calling in loans, yet many businesses did not have the funds available to pay off their debts. As a result of Biddle's actions, numerous businesses had to close their doors due to the lack of funds during 1833 and 1834.
After this brief economic downturn, the United States' economy boomed. State banks began loaning money to industrialists and farmers. The banks also began printing exorbitant amounts of currency. This action led to high inflation. At the same time that banks were printing currency and loaning out large sums of money, foreign governments and businesses, hoping to benefit from the United States' burgeoning economy, loaned large sums of money to U.S. businessmen.
All of these factors combined to produce high inflation, and the currency quickly depreciated. In July 1836, Jackson issued the Specie Circular. Under this act, the government would only accept gold or silver in payment for federal land. Foreign investors also did not want to accept U.S. currency as payment, and they began to call in their loans to U.S. businessmen before the currency depreciated further. U.S. citizens rushed the banks to withdraw the necessary funds to pay off their debts. Unfortunately, many banks had loaned out too much money and did not have sufficient reserves on hand to meet the demands of their customers. Approximately eight hundred banks closed their doors in 1837, stifling economic growth and bankrupting numerous businesses, including many of the banks.
During the Panic of 1837, approximately ten percent of U.S. workers were unemployed at any one time. Mobs in New York City raided warehouses to secure food to eat. Prominent businessmen, like Arthur Tappan, lost everything. Churches and other charitable organizations established soup kitchens and breadlines. In Ohio, many people lost their entire life savings as banks closed. Stores refused to accept currency in payment of debts, as numerous banks printed unsecured (backed by neither gold nor silver) money. Some Ohioans printed their own money, hoping business owners would accept it. Thousands of workers lost their jobs, and many businesses reduced other workers' wages. It took until 1843 before the United States' economy truly began to recover. The federal government's failure to assist the U.S. public led voters to turn against the Democratic Party, the party in control of government at the start of the Panic of 1837. In 1840, voters elected William Henry Harrison, a member of the Whig Party and an Ohioan, over the Democratic candidate.
Unfamiliar with a term? Confused by sustainability jargon? So are we! We have compiled this list to ensure common understanding.
Have a suggestion for an addition? Help us improve this resource by emailing [email protected] with subject “Glossary Addition.”
Aerial spray (noun): a liquid or matter driven through the air in the form of tiny drops or particles by an aerial device. Usually the term refers to crop dusting, in which airplanes are used to spread pesticides, fungicides, or fertilizers over agricultural crops, but it can also refer to instances where flame retardants are dropped to combat fires.
Aerosol (noun): the suspension of solid particles and/or liquid droplets within a gas. The word can refer both to aerosol sprays, which release these substances by means of a propellant gas, and to solid or liquid particles suspended in a solvent or medium. Aerosols can be particulate or biological. Pollen, bacteria, spores, and volatile organic compounds are biological aerosols. Soot, smog and ash are particulate aerosols. Airborne particulate and biological matter are aerosols of the atmosphere.
Agricultural waste (noun): byproducts or material eliminated or discarded by agricultural production, including but not limited to biological, hazardous, solid, and water waste. Some examples are nutrient-depleted soil, excrement, pesticide runoff water, and usable and unusable biomass.
Air pollution (noun): the introduction of chemicals, particulate matter, or biological materials into the atmosphere from either man-made or natural sources. Greenhouse gases are a form of air pollution. Tropospheric ozone is the result of air pollution and is believed to cause respiratory health problems such as asthma.
Air quality (noun): the degree to which the ambient air is free of contaminants, typically assessed by measuring indicators of pollution.
Alternative energy (noun): energy generated from sources or practices that substitute for traditional energy sources. Modern alternatives to conventional energy include solar, wind, geothermal and tidal energy.
Annual consumption (noun): the quantity of resources consumed per year by a population.
Anthropogenic (adj): originated, made or resulting from human activity, as opposed to a natural origin. Contrast Biogenic.
Appreciative Inquiry (noun): a philosophy of organizational assessment and change that seeks examples of success to emulate and organizational or personal strengths to build upon, rather than focusing upon fixing negative or ineffective organizational processes.
Bamboo (noun): a fast growing, hollow, giant woody grass that grows chiefly in the tropics, where it is widely cultivated. Fields of application for bamboo include culinary uses, medicine, construction, furniture, paper, musical instruments, and landscaping.
Bio-Based product/Bioproduct (noun): a product (other than food or feed) that is composed of biological products or renewable agricultural materials (including plant, animal and marine materials), or forestry materials.
Bioaccumulation (noun): the increase in concentration of a chemical in organisms that reside in environments contaminated with various organic compounds. Chemicals that bioaccumulate are unlikely to be decomposed, broken down, or degraded faster than they are absorbed, in either the environment or the organism. See also BIOMAGNIFICATION.
Biodegradable (adj): capable of being disposed of by bacteria or other biological agents. Often time constraints are not noted when biodegradable is used to describe the lifespan of products. The two main classifications of biodegradable plastics are hydro-biodegradable (HBP) and oxo-biodegradable (OBP).
Biodiesel (noun): refers to a vegetable oil- or animal fat-based diesel fuel. Biodiesel is typically made by chemically reacting specified lipids with an alcohol. Biodiesel is meant to be used in standard diesel engines and is thus distinct from the vegetable and waste oils used to fuel converted diesel engines. Combustion engines can be modified to run on biodiesel, but may not meet emission standards in certain states. Due to government subsidies on nonrenewable petroleum, foreign gas is often as expensive as domestically made biodiesel in the United States.
Biodiversity (noun): the variety of organisms found within a specified geographic region. The degree to which living organisms vary, as within and between species or within and between ecosystems. Maintaining biodiversity is necessary to preserve the health and survival of an ecosystem.
Biodynamic (adj): a method of organic crop cultivation that takes into account such factors as lunar phases, planetary cycles, and the interrelationship of soil, plants, and animals.
Biofuel (noun): gaseous or liquid materials produced from plant material or biomass with an energy content/density high enough to be used as an energy source. Seen as an alternative to conventional gasoline in motor vehicles. An example is bioethanol.
Biogenic (adj): changes in the environment resulting from the activities of living organisms. Contrast anthropogenic.
Biomagnification (noun): the sequence of processes in an ecosystem by which higher concentrations of a particular chemical, such as the pesticide DDT, accumulate in organisms higher up the food chain, generally through a series of prey-predator relationships. The highest members of the food chain are most likely to suffer toxic levels of pollutants from successively consuming lower-level members that have bioaccumulated toxins. Humans living near mercury-contaminated fishing areas are at the greatest risk of mercury poisoning because they are the highest-order consumers in the seafood biomagnification scheme. Biomagnification is also why it is hardest to produce the meat of higher-order consumers organically.
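As a purely illustrative sketch (the tenfold-per-level factor is a common textbook rule of thumb, not a figure from this glossary): if a persistent toxin is concentrated roughly ten times at each trophic transfer, then after $n$ transfers the concentration is

\[
C_n = C_0 \times 10^{n},
\]

so a contaminant at 1 part per billion in water could reach on the order of 10 parts per million in a fourth-level consumer.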
Biomass (noun): living or recently-dead organic material that can be used as an energy source; the term excludes organic material that has been transformed by geological processes (such as coal or petroleum).
Biome (noun): a major regional or global biotic community characterized chiefly by the dominant forms of plant life and prevailing climate. For example, the northern biome of South America is characterized as tropical rainforest, whereas most of the North African biome is characterized as xeric shrubland.
Biomimicry (noun): a design discipline that studies nature's elements, processes and designs and uses artificial practices to emulate or reproduce these naturally occurring products, which can serve as solutions to human problems. Chitin, found naturally in the exoskeletons of cicadas, is one example: it is artificially produced as a biodegradable surgical thread. Nature's 100 Best is a list of biomimicked inventions.
Biosphere (noun): the part of the earth and its atmosphere capable of supporting life.
Bioswales (noun): landscape elements designed to remove silt and pollution from surface runoff water. They consist of a hollowed or lowered drainage course with gently sloped sides filled with vegetation, compost, and/or rubble. The water's flow path, along with the wide and shallow ditch, is designed to maximize the time water spends in the swale, which aids the trapping of pollutants and silt. Biological factors also contribute to the breakdown of certain pollutants. A common application is around parking lots, where substantial automotive pollution is collected by the paving and then flushed by rain. The bioswale, or other type of biofilter, wraps around the parking lot and treats the runoff before releasing it to the watershed or storm sewer.
Black water (noun): waste water from toilets. Compare with gray water.
BPA (Bisphenol A) (noun): bisphenol A is an organic compound used primarily to make plastics. It is a key monomer in production of epoxy resins and in the most common form of polycarbonate plastic. Polycarbonate plastic, which is clear and nearly shatter-proof, is used to make a variety of common products including baby and water bottles, sports equipment, medical and dental devices, dental fillings and sealants, eyeglass lenses, CDs and DVDs, and household electronics. BPA is also used in the synthesis of polysulfones and polyether ketones, as an antioxidant in some plasticizers, and as a polymerization inhibitor in PVC. Epoxy resins containing bisphenol A are used as coatings on the inside of almost all food and beverage cans. At least 8 billion pounds of BPA are used by manufacturers annually.
Carbon Footprint (noun): the total amount of greenhouse gases emitted directly or indirectly through an activity, or from a product, company or person, typically expressed in equivalent tons of either carbon or carbon dioxide. You can calculate your carbon foot print here http://www.carbonfootprint.com/carbonfootprint.html. Google has a map application that calculates the amount of CO2 you emit from using different types of transportation; options include public transportation, personal driving, and the most benign option, walking.
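A small worked conversion (standard chemistry, not specific to this glossary) shows why footprints quoted in tons of carbon differ from those quoted in tons of carbon dioxide: the molar masses of C and CO2 are 12 and 44, so

\[
m_{\mathrm{CO_2}} = \frac{44}{12}\, m_{\mathrm{C}} \approx 3.67\, m_{\mathrm{C}},
\]

meaning a footprint of 1 tonne of carbon corresponds to roughly 3.67 tonnes of CO2.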
Carbon neutral (noun): an entity or person who manages resources to counteract the carbon produced by certain actions. Achieving carbon neutrality means measuring the carbon emissions for an identified product, service or company, and offsetting those emissions through green restoration or enhancement. It is theorized that combustible fuels produced by algae will be carbon neutral, since the algae take in CO2 while growing, offsetting the CO2 emitted when the fuel is combusted.
Carbon offsets (noun): credits private companies can purchase to help the environment or counter degradation caused by human activity. See EMISSIONS TAX.
Carbon Sequestration (noun): removing carbon dioxide from the atmosphere through natural or artificial processes. Natural sequestration occurs in the carbon cycle when plants or other carbon dioxide consumers (algae, phytoplankton), also known as carbon sinks, use carbon dioxide in their growth processes. In the example of trees as natural sequesters, carbon dioxide removal can be promoted by afforestation. An example of artificial sequestration is the use of carbon scrubbers to prevent carbon from entering the atmosphere.
Carcinogen (noun): a chemical substance or type of radiation that can cause cancer in humans or animals exposed to it.
Carpool (noun): an arrangement between people to make a regular journey in a single vehicle, typically with each person taking turns to drive the others.
Certified organic (adj): produced without the use of chemical fertilizers, pesticides, or other artificial agents. Often certified by governments and non-profit agencies in the attempt to maintain adequate standards. In the United States the acting regulatory body is the United States Department of Agriculture (USDA). Note that the term, when used in marketing, advertising, packaging, or labeling, is almost entirely unregulated without the certification of a reputable organization.
CFL (compact fluorescent light bulb) (noun): a fluorescent light with a longer life span and more efficient energy consumption than incandescent light. Most designs radiate a light described popularly as 'soft white.' The fact that these lights contain mercury complicates their clean disposal.
Climate Change (noun): refers to a statistically significant variation in either the mean state of the climate or in its variability, persisting for an extended period. Climate change is a change in the 'average weather' that a given region experiences. When we speak of climate change on a global scale, we are referring to changes in the climate of the Earth as a whole, including temperature increases (global warming) or decreases, and shifts in wind patterns.
Closed-loop Recycling (noun): the process of utilizing a recycled product in the manufacturing of a similar product or the remanufacturing of the same product. This is the improvement of used products to meet new standards. See REMANUFACTURING.
CO2 (carbon dioxide) (noun): a colorless, odorless gas produced through both natural and unnatural processes, including the burning of carbon and organic compounds, the combustion of carbon-based fuels, decomposition and respiration. It is absorbed by plants in the process of photosynthesis. It is a normal gaseous constituent of the atmosphere.
Compostable (adj): capable of chemical dissolution by bacteria or other biological means. Often products consisting of biodegradable and nonbiodegradable elements are marketed as compostable.
Consignment (noun): placing any material in the hands of another, but retaining ownership until the goods are sold. Consignment stores are easy to find and a great way to buy pre-owned goods.
Conscious capitalism (noun): is a term used to describe a capitalist system that seeks to be aware of the effects of its actions, and to consciously affect human beings and the environment in a beneficial way. Conscious capitalism also refers to a movement towards values-based economic value, where values represent social and environmental concerns globally as well as locally.
Conventional (adj): agreed, stipulated or traditionally accepted standards, habits, beliefs, social norms or criteria, often taking the form of a custom. In a social context, a convention may retain the character of an "unwritten" law of custom. Concerned with what is generally held to be acceptable at the expense of individuality and sincerity.
Cost-benefit analysis (noun): relating to or denoting a process that assesses the relation between the cost of an undertaking and the value of the resulting benefits. Cost-benefit analyses are undertaken to reduce wasted costs, and in this way they are naturally oriented toward conservation of resources. For example, before the green movement the cafeteria had already made many cost-benefit analyses to reduce input costs; now those actions are seen as frugal attempts to avoid food waste.
Cradle-to-cradle (noun): a design philosophy put forth by architect William McDonough that considers the life-cycle of a material or product. Cradle-to-cradle design models human industry on nature's processes, in which materials are viewed as nutrients circulating in healthy metabolisms, able to be recycled or reused in the real world. Basic principle behind William McDonough and Michael Braungart's book of the same name.
CSR (Corporate Social Responsibility) (noun): corporate self-regulation integrated into a business model. CSR policy functions as a built-in, self-regulating mechanism whereby a business monitors and ensures its active compliance with the spirit of the law, ethical standards, and international norms. The goal of CSR is to embrace responsibility for the company's actions and encourage a positive impact through its activities on the environment, consumers, employees, communities, stakeholders and all other members of the public sphere.
Daylighting (noun): a low-tech way to make a home greener; daylighting uses strategically placed windows, skylights and light tubes in order to maximize natural daylight and minimize the need for artificial lighting.
DDT (noun): a synthetic organic compound introduced in the 1940s and used as an insecticide. Like other chlorinated aromatic hydrocarbons (organochlorines), DDT tends to persist in the environment and become concentrated in animals at the head of the food chain through the process of biomagnification. In her 1962 book Silent Spring, Rachel Carson challenged the use of pesticides whose environmental impacts had not been fully evaluated. DDT's use is now banned in most developed countries; the US banned it in 1972, and the ban is cited as a major factor in the comeback of the bald eagle under the Endangered Species Act.
Deforestation (noun): the conversion of forested land to other non-forested uses by the removal and destruction of trees and habitat. Deforestation is cited as one of the major contributors to global warming, as the destruction of carbon-sequestering plants reduces the environment's ability to cycle carbon dioxide.
Degradation (noun): the separation of a chemical compound into elements or simpler compounds. The stability that a chemical compound ordinarily has is eventually limited when exposed to extreme environmental conditions like heat, radiation, humidity or the acidity of a solvent. The details of decomposition processes are generally not well defined, as a molecule may break up into a host of smaller fragments.
Dematerialize (noun): the reduction of mass in a product without diminishing quality or intended service for the consumer; the absolute or relative reduction in the quantity of materials required to serve economic functions in society. In common terms, dematerialization means doing more with less. Dematerialization serves as a counterargument to the economic idea that 'more is better.'
Design for the Environment (DFE) (noun): a philosophy applied to the design process that advocates the reduction of environmental and human health impacts through materials selection and design strategies.
Dirty 30: List of 30 common synthetic chemical ingredients to avoid when choosing products because many have been linked to cancer and many have already been banned in the EU [Dirty 30]
Downcycle (verb): to convert or recycle waste materials or useless products into new materials or products of lesser quality and reduced functionality. The goal of downcycling is to prevent wasting potentially useful materials, reduce consumption of fresh raw materials, reduce energy usage, reduce air pollution and water pollution, and lower greenhouse gas emissions as compared to virgin production. Downcycling is a consequence of recycling in which the materials a product is made of cannot be used to remake the same product. A clear example is plastic recycling, which turns the material into lower grade plastics.
Eco-friendly (adj): a term used to refer to goods and services, laws, guidelines and policies claimed to inflict minimal or no harm on the environment. Companies sometimes use such terms to promote goods and services by making environmental marketing claims and with eco-labels. The term is vague, and its use as a marketing promotion is discouraged by the International Organization for Standardization.
Ecometrics (noun): the quantification of a company's environmental performance over time. Ecometrics defines economic and environmental issues and seeks to find solutions through developed study. Ecometrics measures materials and energy inputs and outputs for use in benchmarking and monitoring environmental progress. The development of mathematical models is central to the discipline, major advances having been made in model formulation, data gathering, estimation, hypothesis testing and computation.
EcoRenaissance (noun): the transition from a conventional age of limited environmental activism to a modern age, where there are environmentally conscious pursuits in all intellectual arenas including politics, society, economics and art.
Ecoroof (noun): a roof of a building that is partially or completely covered with vegetation and a growing medium, planted over a waterproofing membrane. It may also include additional layers such as a root barrier and drainage and irrigation systems. Ecoroofs serve several purposes for a building, such as absorbing rainwater, providing insulation, creating a habitat for wildlife, and helping to lower urban air temperatures and combat the heat island effect. There are two types of ecoroofs: intensive roofs, which are thicker and can support a wider variety of plants but are heavier and require more maintenance, and extensive roofs, which are covered in a light layer of vegetation and are lighter than an intensive green roof.
Ecosystem (noun): the natural interacting biotic and abiotic members of a habitat. Biotic factors can be animals, microorganisms, plants, fungi, and other living factors. Abiotic factors can be chemical-physical factors such as soil content, solar irradiation, and other non-living factors. Each ecosystem is defined by biome and habitat; examples of ecosystems are marine, subterranean, and forest.
Efficient (adj): achieving maximum productivity with minimum wasted effort or expense; working in a well-organized and competent way; preventing the wasteful use of a particular resource.
Elasticity of Demand (noun): a measure used in economics to show the responsiveness, or elasticity, of the quantity demanded of a good or service to a change in its price. More precisely, it gives the percentage change in quantity demanded in response to a one percent change in price (holding constant all the other determinants of demand, such as income). Inelastic demand is the instance where demand is maintained in response to a significant change in price, whereas elastic demand is the instance where demand is responsive to price.
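The standard formula for price elasticity of demand (general economics, not specific to this glossary) makes the definition concrete:

\[
E_d = \frac{\%\,\Delta Q_d}{\%\,\Delta P} = \frac{\Delta Q / Q}{\Delta P / P},
\]

where demand is called elastic when $|E_d| > 1$ and inelastic when $|E_d| < 1$.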
Emission Reduction Credit (ERC)/Carbon Offset: an emission reduction credit represents avoided or reduced emissions, often measured in tons. ERCs are generated from projects or activities that reduce or avoid emissions. A carbon offset refers to a specific type of ERC that represents an activity that avoids or reduces greenhouse gas (GHG) emissions or sequesters carbon from the atmosphere.
Emissions (noun): the discharge of pollutant gases, liquids or particles into the general atmosphere.
Emissions cap/standard (noun): the maximal legal amount of a particular pollutant that may be released into the air from a pollutant source. The cap/standard depends upon the type of pollutant, the location of the source, the quality of the surrounding area, and the emissions standards imposed by the regulatory government.
Endocrine disruptor (noun): chemical pollutants that have the ability to substitute for, or interfere with, natural endocrine (hormone) systems within organisms. Exposure can lead to reproductive and developmental malfunctions. Contrast neurotoxin.
Energy Efficiency (noun): using less energy to fulfill the same function or purpose; usually attributed to a technological fix rather than a change in behavior. Examples include using building insulation to reduce heating/cooling demand, compact fluorescent bulbs to replace incandescent, or proper tire inflation to improve gas mileage.
Energy efficient (adj): describes the goal of efforts to reduce the amount of energy required to provide products and services. For example, insulating a home allows a building to use less heating and cooling energy to achieve and maintain a comfortable temperature. Installing fluorescent lights or natural skylights reduces the amount of energy required to attain the same level of illumination compared to using traditional incandescent light bulbs. Compact fluorescent lights use two-thirds less energy and may last 6 to 10 times longer than incandescent lights. Improvements in energy efficiency are most often achieved by adopting a more efficient technology or production process.
Energy Star (noun): a government-backed initiative to promote energy efficiency. Household products can earn the Energy Star label for saving energy and reducing greenhouse gas emissions, as defined by the Environmental Protection Agency and US Department of Energy.
Environmentally friendly (eco-friendly): terms used to refer to goods and services, laws, guidelines and policies claimed to inflict minimal or no harm to the environment.
Environmental Impact Statement (EIS): a report required by the National Environmental Policy Act detailing the consequences associated with a proposed major federal action significantly affecting the environment.
Environmental justice/equity (noun): a merger of social forces that combines the civil rights movement with environmental protection to demand a safe and healthy environment for all, regardless of economic status, gender, or ethnicity. The extent to which all people from any region receive equal treatment under environmental statutes, regulations, and practices.
Environmental stewardship: to look after, maintain, or manage environmental issues.
Environmentally Preferred Products(EPP): products or services that have a lesser or reduced effect on human health and the environment when compared with competing products or services that serve the same purpose. This comparison may consider raw materials acquisition, production, manufacturing, packaging, distribution, reuse, operation, maintenance and/or disposal of the product or service.
EPA (Environmental Protection Agency) (noun): a federal agency created by the US government in 1970 with the purpose of protecting the environment to the fullest extent possible under the laws enacted by the US Congress. Its mandate was to mount an integrated, coordinated attack on environmental pollution and degradation in conjunction with state and local governments. It at once became responsible for the original federal programs that were created to combat air and water pollution, solid waste disposal, pesticide registration, toxic substances, radiation standards, noise control, and EIS procedures.
EPP Certification (noun): the process by which products or services are certified as Environmentally Preferred Products (EPPs). The certification addresses all stages of the product's or service's life-cycle, incorporates key environmental and human health issues relevant to the category (much like an EIA), and undergoes outside stakeholder review.
Fair trade (noun): an organized social movement and market-based approach that aims to help producers in developing countries create better trading and social conditions and promote sustainability. The movement advocates higher social and environmental standards. It focuses in particular on exports from developing countries to developed countries, most notably agricultural products such as coffee.
Food and Drug Administration (FDA): a federal agency within the United States Department of Health and Human Services that is responsible for protecting and promoting public health through regulating the quality and safety of foods, food colors and additives, cosmetics, tobacco products, dietary supplements, prescription and over-the-counter pharmaceutical drugs, vaccines, biopharmaceuticals, blood transfusions, medical devices, electromagnetic radiation emitting devices, and veterinary products. Established in 1938.
Food pyramid/plate (noun): a triangular or pyramid-shaped nutrition guide divided into sections to show the recommended intake for each food group. The most widely known food pyramid was introduced by the United States Department of Agriculture in 1992, updated in 2005 as MyPyramid, and then replaced in 2011 by MyPlate.
Food web (noun): the linking and interlinking of food chains into an ecosystem with several trophic levels.
Formaldehyde (noun): a chemical found in the glue used in particleboard, fiberboard and plywood, in coatings on fabrics and draperies, and in some paints and foams. In view of its widespread use, toxicity and volatility, exposure to formaldehyde is a significant consideration for human health. In 2011, the US National Toxicology Program described formaldehyde as "known to be a human carcinogen."
Fossil Fuel (noun): a naturally occurring fuel such as coal, oil, or natural gas (the major three) in the form of an organic sedimentary deposit that contains carbon or hydrocarbons. Fossil fuels are produced from the decomposition of the fossilized remains of plants and animals and can be combusted for use as energy. They are non-renewable resources that are finite in quantity and inelastic in the consumer economy. Substitute forms of energy are being sought.
Fossil water (noun): the same as Paleowater
Forest Stewardship Council (FSC): is an international not-for-profit, multi-stakeholder organization established in 1993 to promote responsible management of the world's forests. Its main tools for achieving this are standard setting, independent certification and labeling of forest products. This offers customers around the world the ability to choose products from socially and environmentally responsible forestry.
Fragrance: also known as parfum, a common chemical mixture used in conventional products to mask or add scent. A label may say simply "fragrance," yet the product could contain between 10 and 300 different chemicals that may not have been tested for safety and which could be carcinogenic, linked to the disruption of hormones, or highly toxic to the immune system. [Campaign for Safe Cosmetics]
Genetic Engineering: a novel technology that is fundamentally changing the genetic makeup of numerous crops, animals, trees, and insects, forever altering our food supply. Scientists have found that the introduction of genetically engineered and modified plants, insects, fish and other animals could lead to widespread species extinctions. (Truth in Labeling Coalition)
Global Warming (noun): refers to a specific type of climate change, an increased warming of the Earth's atmosphere caused by the buildup of man-made greenhouse gases that trap the sun's heat, causing changes in weather patterns and other effects on a global scale. These effects include global sea level rise, changes in rainfall patterns and frequency, habitat loss and droughts.
Glocal (adj): refers to the individual, group, division, unit, organisation, and community which is willing and able to think globally and act locally. It describes people, places, and things that consider the health of the entire planet and take action in their own communities and cities.
Genetically Modified Organism (GMO): An organism whose genetic makeup has been deliberately altered by inserting a modified gene or gene from another variety or species, in order to create or enhance desirable characteristics from the same or another species.
Gray Water (noun): the relatively clean wastewater generated from domestic activities such as laundry washing, dishwashing, and bathing, which can be recycled on-site for uses such as landscape irrigation and constructed wetlands.
Green (noun): is a broad philosophy, ideology and social movement regarding concerns for environmental conservation and improvement of the health of the environment, particularly as the measure for this health seeks to incorporate the concerns of non-human elements. Environmentalism advocates the preservation, restoration and/or improvement of the natural environment, and may be referred to as a movement to control pollution. Note that the term when used in marketing, advertising, packaging, or labeling is almost entirely unregulated.
Green Building (noun): a comprehensive process of design and construction that employs techniques to minimize adverse environmental impacts and reduce the energy consumption of a building, while contributing to the health and productivity of its occupants; common metrics for evaluating green buildings include the LEED (Leadership in Energy and Environmental Design) certification and Australia’s Green Star program. Facets of a green building can include passive or active solar heating systems, grey water reuse systems, or ecoroofs.
Green business (noun): business that employs eco-friendly processes to reduce its carbon footprint, like alternative power sourcing, paper reduction, recycling, use of recycled materials, water and energy efficiency practices, and reuse of gray water. These businesses might not work towards environmental advancements but practice in an environmentally conscious manner.
Green Collar (adj): descriptive of a class of jobs or careers that are relative or contribute beneficially to the green movement.
Greenhouse Effect (noun): the trapping of heat within the Earth's atmosphere by greenhouse gases, such as carbon dioxide, which accumulate in the atmosphere and act as a blanket keeping heat in.
Greenhouse Gases (GHG) (noun): gases so named because they contribute to the greenhouse effect when high concentrations of them remain in the atmosphere. The GHGs of most concern include carbon dioxide (CO2), methane (CH4), and nitrous oxide (N2O).
Green Revolution: Beginning in the 1940s, the Green Revolution refers to the increase in agricultural production worldwide due to new technologies such as synthetic fertilizers, pesticides, herbicides, and hybridized seeds. The Green Revolution decreased biodiversity significantly, and cancer rates increased dramatically in rural farmlands.
Greenwashing (noun): the process by which a company publicly and misleadingly exaggerates, embellishes, or labels the environmental attributes of itself or its products, while participating in environmentally- or socially-irresponsible practices.
Habitat (noun): is an ecological or environmental area that is inhabited by a particular species of animal, plant or other type of organism. It is the natural or physical environment in which an organism or species population lives.
Hazardous waste (noun): any waste that is classified as hazardous under the Resource Conservation and Recovery Act regulations. Hazardous wastes result from industrial processes or from unused chemical resources. To qualify as hazardous, waste must pose a significant threat to human health or safety, usually because the waste is toxic, ignitable, corrosive, or reactive.
Hemp (noun): the cannabis plant, especially when grown for fiber. The fiber of this plant, extracted from the stem and used for industrial purposes including paper, textiles, biodegradable plastics, construction, health food and fuel. Recently hemp has been employed in the economy with considerable commercial success.
Herbicide (noun): a chemical agent(often synthetic) capable of killing or causing damage to certain plants(usually directed at weeds) without significantly disrupting other animal or plant communities.
Herbivore (noun): an animal that feeds on plants.
Hybrid (noun): a thing made by combining two different elements; a mixture. (Biology) the offspring of two plants or animals of different species or varieties, such as a mule (a hybrid of a donkey and a horse). (Vehicle) a vehicle that uses two or more distinct power sources to move the vehicle. The term most commonly refers to hybrid electric vehicles (HEVs), which combine an internal combustion engine and one or more electric motors.
Hypoallergenic (adj): relatively unlikely to cause an allergic reaction.
Indoor Air Quality (noun): refers to the contents of interior air that could affect the health and comfort of occupants. Acceptable IAQ is air in which there are no known contaminants at harmful concentrations.
Industrial Ecology (noun): an interdisciplinary field that focuses on the sustainable combination of environment, economy, and technology.
Insecticide (noun): a chemical agent, either natural or synthetic, used to kill or inhibit the growth or development of insects.
Intensive (adj): concentrated on a single area or subject or into a short time; very thorough or vigorous; aiming to achieve the highest possible level of production within a limited area, especially by using chemical and technological aids; concentrating on or making much use of a specified thing; expressing intensity, giving force or emphasis; denoting a property that is measured in terms of intensity (e.g., concentration) rather than of extent (e.g., volume), and so is not simply increased by addition of one thing to another.
International Organization for Standardization (ISO) (noun): an international standard-setting body composed of representatives from various national standards organizations. Founded in 1947, the organization promulgates worldwide proprietary, industrial and commercial standards. It has its headquarters in Geneva, Switzerland.
Karmic capitalism (inclusive capitalism) (noun): a philosophy that the sum of an organization's actions, in this and previous states of existence, decides its fate in future existences.
Landfill (noun): an area where solid or solidified waste materials from municipal or industrial sources are buried or collected.
Leaching (noun): the process by which soluble materials are washed out and removed from the soil, ore, or buried waste.
Leadership in Energy and Environmental Design (LEED): an internationally recognized green building certification system, providing third-party verification that a building or community was designed and built using strategies intended to improve performance in metrics and performance criteria such as energy savings, water efficiency, CO2 emissions reduction, improved indoor environmental quality, and stewardship of resources and sensitivity to their impacts.
Life Cycle Assessment/Environmental Impact Assessment (LCA/EIA) (noun): the study of the effects of a product or activity through each stage of its life, including inception, manufacture, distribution, use, and disposal; often, however, the direct and indirect impacts of a product or activity are difficult to fully quantify. A science-based tool for comparing the environmental performance of two or more scenarios, LCA quantifies the potential environmental impacts of products or systems throughout their life cycles, and can highlight a product's impact areas to target strategic improvements.
Lifecycle (noun): the series of changes or stages that products, activities, and organisms undergo as they pass from creation to disposal. Conventionally the stages in a product's lifecycle are manufacture, transport, stockpile, purchase, consumption, and disposal.
Local (adj): belonging or relating to a particular area or neighborhood, typically exclusively so; relating to a particular region or part, or to each of any number of these.
Locavore (noun): one who chooses to eat locally produced food, as to lessen the environmental impact and conserve resources
Low emissions (noun): producing a low amount of discharged substances or emissions. See EMISSIONS.
Low flow (adj): plumbing fixtures, including faucets, toilets and showerheads, that are efficient in the amount of water used to satisfy a task. Also a phenomenon in which river depth falls below average, as during dry periods or droughts.
Mercury (noun): a heavy metal element that is liquid at room temperature. It is released into the environment both naturally and anthropogenically, and its vapor is toxic at room temperature. It does not biodegrade naturally and can remain in the environment indefinitely, and therefore can easily bioaccumulate. It can be biotransformed into various states, and can be highly toxic if swallowed or inhaled. It is used in compact fluorescent lights and thermometers.
Mitigate (verb): to reduce the severity, seriousness, or painfulness of something; to minimize or avoid negative effects. This might be achieved by rectifying the impact by repairing the affected area, reducing the impact by taking protective steps, or compensating for the impact.
Mutagen (noun): an agent, such as radiation or a chemical substance, that has the ability to cause a genetic mutation. See also CARCINOGEN.
Nanoparticle (noun): particles used in cosmetics because they penetrate the skin easily and may accumulate in body tissue. This penetration into skin can possibly cause cell damage and gene damage. The European Union is demanding label of products where nanoparticles are present but the US is not. Read more at NPR and the EDF.
National Organic Program (NOP): the federal regulatory framework governing organic food. It is also the name of the organization in the Department of Agriculture (USDA) responsible for administering and enforcing the regulatory framework. The Organic Food Production Act of 1990 required that the USDA develop national standards for organic products. NOP regulations cover in detail all aspects of food production, processing, delivery and retail sale. Under the NOP, farmers and food processors who wish to use the word "organic" in reference to their businesses and products must be certified organic. A USDA Organic seal identifies products with at least 95% organic ingredients.
Natural Capital (noun): the extension of the economic notion of capital to goods and services relating to the natural environment. Natural capital is thus the stock of natural ecosystems that yields a flow of valuable ecosystem goods or services into the future. For example, a stock of trees or fish provides a flow of new trees or fish, a flow which can be indefinitely sustainable. Natural capital may also provide services like recycling wastes or water catchment and erosion control. Since the flow of services from ecosystems requires that they function as whole systems, the structure and diversity of the system are important components of natural capital. The idea of natural capital expands economic models to include natural resources that have value to humanity but no inherent price.
Natural gas (noun): a mixture of gaseous hydrocarbons (a fossil fuel), chiefly methane, ethane, propane, and butane, which is trapped in porous rocks beneath the ground and is often found with reserves of oil. A clean-burning fuel (producing no smoke or soot) with a high heat value, it provides about a third of the energy consumed in America. It is transported across water in liquefied form and municipally by pipeline. It is one of the major fossil fuels.
Natural resource (noun): any portion or feature of the natural environment such as the atmosphere, water, soil, forest, wildlife, land, minerals, or other environmental asset that is of value in meeting human demand. Natural resources can be renewable or non-renewable.
Natural (adj): existing in or caused by nature or natural processes. Note that the term when used in marketing, advertising, packaging, or labeling is almost entirely unregulated.
Neurotoxin (noun): a substance that can damage, poison, or destroy nerve tissues or cells. Botulism, mercury, and lead are common examples of neurotoxins.
Neurotoxicity: occurs when the exposure to natural or artificial toxic substances, which are called neurotoxins, alters the normal activity of the nervous system in such a way as to cause damage to nervous tissue. This can eventually disrupt or even kill neurons, key cells that transmit and process signals in the brain and other parts of the nervous system.
Non-degradable (adj): describes an organic compound or substance that is not decomposed or metabolized by the routine mechanisms that break down materials in the natural environment. Some materials described as nondegradable will eventually be degraded by long-term biological, chemical, or geological factors; however, the rate of decomposition is very slow.
Nonrenewable Resource (noun): a natural resource, such as a fossil fuel, that cannot be replaced after its extraction, removal, or consumption, or that is replaced only very slowly. The exhaustion process is usually accompanied by a considerable increase in price, as most nonrenewable resources are finite and their supply is inelastic.
Obesity (noun): a medical condition in which excess body fat has accumulated to the extent that it may have an adverse effect on health, leading to reduced life expectancy and/or increased health problems.
Off-the-grid (adj): describes a self-sufficient home or building that does not depend on one or more public utilities (for example, electricity, gas, or water). On-site solar or wind power and an independent freshwater supply are examples of such setups.
Off-gas (noun): gases exhausted from any process vessel or piece of equipment.
Omnivore (noun): An animal that consumes both plants and animals, such as a human being. Contrast HERBIVORE, PESCETARIAN.
Organochloride (noun): an organic compound containing at least one covalently bonded chlorine atom. The wide structural variety and divergent chemical properties of organochlorides lead to a broad range of applications. Many derivatives are controversial because of their effects on the environment and on human and animal health. Examples are DDT, used in pesticides, and PCBs, used in wiring insulation.
Organic Cotton (noun): generally understood as cotton, from plants that have not been genetically modified, that is certified to be grown without the use of any synthetic agricultural chemicals, such as fertilizers or pesticides. Its production also promotes and enhances biodiversity and biological cycles. United States growers must comply with the National Organic Program (NOP), which determines the allowed practices for pest control, growing, fertilizing, and handling of organic crops.
Organic (adj): relating to or containing compounds of carbon, chiefly of biological origin. Also describes the form of agriculture that relies on techniques such as crop rotation, green manure, compost, and biological pest control to maintain soil productivity and control pests on a farm. Organic farming excludes or strictly limits the use of manufactured fertilizers, pesticides (including herbicides, insecticides, and fungicides), plant growth regulators such as hormones, livestock antibiotics, food additives, and genetically modified organisms. Note: the term is often used loosely in marketing, advertising, packaging, and labeling even when organic measures are improperly practiced; however, regulatory organizations such as the USDA serve as third parties that can certify an agricultural practice or product as organic.
Overpopulation (noun): a situation in which an existing population is too large to be adequately supported by available resources on a sustainable level of consumption. A population size that exceeds the carrying capacity of the environment and that is likely to lead to a population crash.
Ozone Layer (noun): a layer in the stratosphere that houses the gas ozone (O3). This layer prevents about 97% of the sun's ultraviolet radiation from reaching the lower atmosphere and surface. The stratospheric ozone layer is not uniform around the earth. Ozone also accumulates near ground level in the troposphere, largely from vehicle exhaust, where it causes respiratory problems and agricultural damage; this ground-level ozone is a constituent of smog.
Ozone Layer Depletion (noun): a phenomenon caused by the breakdown of certain compounds containing chlorine and/or bromine, which destroy ozone molecules in the stratosphere. Significant ozone thinning was noticed over the Antarctic in 1985 and confirmed by satellite surveillance. The main causes of the thinning were believed to be an increase in atmospheric CFCs, along with natural volcanic eruptions. CFC phase-out protocols were adopted by many developed countries and by 1996 had succeeded in halting the production of most CFCs. If the ozone layer were severely depleted, agricultural crops would be scorched, marine plankton would be harmed, and rates of human skin cancer, eye cataracts, and immune system damage would increase.
Paraffin (noun): a flammable, whitish, translucent, waxy solid consisting of a mixture of saturated hydrocarbons, obtained by distillation from petroleum or shale and used in candles, cosmetics, polishes, and sealing and waterproofing compounds. In describing chemical makeup, "paraffin" is used synonymously with "alkane" for any saturated hydrocarbon.
Paleowater (noun): groundwater that has remained sealed in an aquifer for a long period of time. Water can rest underground in "fossil aquifers" for thousands or even millions of years. When changes in the surrounding geology seal an aquifer off from further replenishment by precipitation, the water trapped within is known as fossil water. Paleowater is a non-renewable resource.
PBDEs (polybrominated diphenyl ethers) (noun): a group of widely used flame-retardant chemicals that may have negative effects on health and the environment, commonly used in electronics as well as textiles. It's a good idea to minimize exposure to them, especially as they are all around us in furniture, wire insulation, draperies, and computers.
Pesticide (noun): a collective name for a variety of insecticides, fungicides, herbicides, fumigants, rodenticides, and other chemical agents used to kill unwanted organisms. Pesticides have greatly contributed to human welfare through increased productivity. Excessive use, misuse, and environmental contamination, however, have allowed these persistent chemicals and toxins to spread through the food chain. See also BIOACCUMULATION, BIOMAGNIFICATION.
Pescetarianism (noun): the practice of a diet that includes seafood but excludes other animals. In addition to fish and/or shellfish, a pescetarian diet typically includes vegetables, fruit, nuts, grains, beans, eggs, and dairy.
Petroleum (noun): a liquid mixture of hydrocarbons that is present in certain rock strata and can be extracted and refined to produce fuels including gasoline, kerosene, and diesel oil.
Petrochemical (noun): a chemical derived from the refining and processing of crude oil or natural gas, used in the manufacture of many industrial chemicals, fertilizers, pesticides, plastics, synthetic fibers, paints, and medicines. Petrochemicals are based chiefly on ethylene, propylene, and butylene; the unsaturated compounds are used as feedstocks for other chemicals, and the saturated ones as combustible gases.
Photovoltaic Cells (PV cells) (noun): also called solar cells, they convert sunlight directly into electricity. PV cells are made of semiconducting materials (primarily silicon) similar to those used in computer chips. When sunlight is absorbed by these materials, the solar energy knocks electrons loose from their atoms, allowing the electrons to flow in a circuit through conductive material to produce direct-current electricity. Photovoltaic cells are grouped with others into solar modules; modules, in turn, are combined with a few others to make a panel, or with many others to form an array.
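As a rough illustration of how a cell's conversion of sunlight scales with sunlight intensity, surface area, and efficiency, the short Python sketch below estimates a panel's direct-current output. The irradiance, area, and efficiency figures are illustrative assumptions chosen for the sketch, not values from this glossary.

    # Illustrative sketch only: estimate the DC output of a photovoltaic panel.
    # All numeric values below are assumptions for demonstration.

    def pv_power_watts(irradiance_w_m2, area_m2, efficiency):
        """DC power = sunlight power striking the panel x conversion efficiency."""
        return irradiance_w_m2 * area_m2 * efficiency

    # Roughly full midday sun (~1000 W/m^2) on a 1.6 m^2 panel at 18% efficiency.
    print(pv_power_watts(1000, 1.6, 0.18))  # about 288 W of direct current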
Phthalates (noun): esters of phthalic acid, mainly used as plasticizers (substances added to plastics to increase their flexibility, transparency, durability, and longevity). They are used primarily to soften polyvinyl chloride. Phthalates are being phased out of many products in the United States, Canada, and the European Union over health concerns. They are found in toys, shower curtains, window blinds, furniture covers, artificial leather, personal care products, and other common products.
Plasticizers (noun): additives (usually solvents) that increase the plasticity or fluidity of the material to which they are added; these materials include plastics, cement, concrete, wallboard, and clay.
Polylactic Acid (PLA) (noun): a biopolymer made from renewable resources. It is thermoplastic and can be used to make fibers, packaging, and other products as an alternative to petroleum-based plastics. It is derived from bacterial fermentation of agricultural by-products such as corn, sugar, or wheat. PLA is not only made from renewable resources but is also biodegradable. PLA is currently manufactured by Cargill, PURAC, Hycail, and several other companies.
Pollutants (noun): contaminant substances that are harmful or poisonous.
Polycarbonate Plastic (noun): a particular group of thermoplastic polymers. Polycarbonates are easily worked, molded, and thermoformed, and because of these properties they find many applications. They do not have a unique plastic identification code and are identified as Other (code 7). Their hydrolysis (degradation by water, often referred to as leaching) releases BPA. Applications of polycarbonates include electrical components, construction materials, data storage devices, water bottles, DVDs, CDs, sound walls, and bullet-resistant glass.
Post-Consumer recycled content (noun): material that is recovered after its intended use as a consumer product, then reused as a component of another product. Examples of post-consumer waste that are recycled include carpet tiles (for new yarn and tile backing), aluminum cans, PET soda bottles, and office paper.
Post-Industrial Recycled Content (noun): also known as Pre-Consumer Recycled Content, it is waste material from manufacturing processes that is reused as a component of another product. Post-industrial recycled content comes from material that would otherwise have been waste and has undergone some physical recycling process. Examples of post-industrial waste that are recycled include yarn extrusion waste, metal scrap, and fiber in paper manufacturing.
Potable (adj): safe to drink, drinkable, pure enough to be consumed or used with low risk of immediate or long term harm. Developing countries often have little access to potable water and attaining a potable water resource for these areas has been a major humanitarian goal.
Pre-Owned (adj): previously owned or used. Vintage, used, pre-owned — it's all new to you. Made in the 1920s or 2005, a pre-owned find has history, and when you buy green and bring it home, you extend its useful life. From purses to jewelry to lamps and beyond, really good things get better with time. (source: eBay)
Preconsumer Recycling (noun): occurs when materials do not reach the intended use or user (consumer) and are either discarded or recycled. Pre-consumer recycled materials can be broken down and remade into similar or different materials, or can be sold "as is" to third-party buyers who then utilize those materials for consumer products. One of the largest contributors to pre-consumer recycling is the textile industry, whose pre-consumer waste by-products include fibers, fabrics, trims, and unsold "new" garments sold to third-party buyers.
Precycle / Waste Minimization (noun): the process and policy of reducing the amount of waste produced by a person or a society. Waste minimization involves efforts to minimize resource and energy use during manufacture: for the same commercial output, the fewer materials used, the less waste is usually produced. Waste minimization generally requires knowledge of the production process, cradle-to-grave analysis (the tracking of materials from their extraction to their return to earth) or environmental impact assessments (EIAs), and detailed knowledge of the composition of the waste.
PVC (polyvinyl chloride) (noun): a chemical found in vinyl that emits toxins and contains phthalates. It is a vinyl polymer constructed of repeating vinyl groups (ethenyls) with one hydrogen replaced by chloride. Polyvinyl chloride is the third most widely produced plastic, after polyethylene and polypropylene. PVC is a controversial material because during its production, useful life, and incineration, especially in accidental and uncontrolled circumstances, it may liberate persistent toxins; suitable alternative plastics such as polypropylene do not.
Quest (Quality Utilizing Employee Suggestions and Teamwork) (noun): Interface’s initiative designed to eliminate measurable waste by establishing focused and innovative teams throughout the world to identify, measure, and then eliminate waste streams.
Radiation (noun): the emission of energy as electromagnetic waves or as moving subatomic particles, especially high-energy particles that cause ionization; the transfer of heat and other energy by means of electromagnetic waves. The earth is warmed by shortwave radiant energy from the sun, and it warms the overlying atmosphere by longwave radiation.
Rapidly renewable (adj): materials that replenish faster than hardwoods, like bamboo and cork.
rBGH (rBST) (noun): Recombinant Bovine Growth Hormone; an engineered hormone injected into cows to increase their milk output by about 15 percent. This puts extra strain on cows, as they are forced to overproduce. Cows injected with rBGH suffer increased rates of cystic ovaries and uterine disorders, decreased calf birth weights, greater risk of clinical mastitis (which produces abnormal milk and causes the cows pain), periods of increased body temperature, more digestive disorders, more hock lesions, and depletion of calcium from their bones. Canada has banned rBGH because it threatens the health of dairy cows.
Recyclable (adj): a designation for products or materials that are capable of being recovered from, or otherwise diverted from, waste streams into an established recycling program; able to be reused or converted into reusable material.
Recycled Content (noun): refers to the amount of recycled materials in a product – typically expressed as a percentage.
Recycled (adj): converted from waste into reusable material; returned to a previous stage in a cyclic process; used again.
Recycling (noun): the series of activities, including collection, separation, and processing, by which materials are recovered from the waste stream for use as raw materials in the manufacture of new products.
Reduce (verb): to lessen the amount of waste produced by a producer, consumer, or society. Reduction involves efforts to minimize resource and energy use during manufacture: for the same commercial output, the fewer materials used, the less waste is usually produced. It generally requires knowledge of the production process, cradle-to-grave analysis (the tracking of materials from their extraction to their return to earth), and detailed knowledge of the composition of the waste. In the waste hierarchy, the most effective approaches to managing waste are at the top (prevention). In contrast to waste minimization, waste management focuses on processing waste after it is created, concentrating on re-use, recycling, and waste-to-energy conversion.
Reentry Program (noun): Interface’s reclamation program through which carpet is taken back at the end of its useful life.
Reforestation (noun): the restocking of existing forests and woodlands that have been depleted, an effect of deforestation. Reforestation can be used to improve the quality of human life by soaking up pollution and dust from the air, to rebuild natural habitats and ecosystems, to mitigate global warming (since forests facilitate biosequestration of atmospheric carbon dioxide), and to provide a harvest of resources, particularly timber.
Renewable (adj): describes a resource or form of energy that is replaced by natural processes and is replenished with the passage of time.
Renewable Energy Credits (RECs), Green Tags, Green Energy Certificates, Tradable Renewable Certificates (noun): these commodities represent the technology and environmental attributes of electricity generated from renewable sources.
Renewable Resources (noun): a resource that can be replenished at a rate equal to or greater than its rate of depletion. Examples of renewable resources include corn, trees, and soy-based products.
Reproductive toxin (Genitotoxin) (noun): a substance that can damage, poison, or destroy reproductive tissues, organs or cells.
Repurposing (noun): cleaning or refurbishing that allows a product to be reused again in its current form, thereby extending its useful life.
Reuse (verb): to use an item more than once. This includes conventional reuse, where the item is used again for the same function, and new-life reuse, where it is used for a different function. Contrast RECYCLING. By taking useful products and exchanging them without reprocessing, reuse helps save time, money, energy, and resources. In broader economic terms, reuse offers quality products to people and organizations with limited means, while generating jobs and business activity that contribute to the economy.
Salvage (verb): to rescue from loss; to retrieve or preserve something from potential loss or adverse circumstances.
Single use (disposable) (adj): a product designed for cheapness and short-term convenience rather than medium to long-term durability, with most products only intended for single use. The term is also sometimes used for products that may last several months (ex. disposable air filters) to distinguish from similar products that last indefinitely (ex. washable air filters).
Social business (noun): a non-loss, non-dividend company designed to address a social objective within the highly regulated marketplace of today. It is distinct from a non-profit in that the business should seek to generate a modest profit, but this profit is used to expand the company's reach, improve the product or service, or otherwise subsidize the social mission. The term broadly describes commercial activity by socially minded organizations. Charities may engage in social enterprise in order to generate funds, as per the 'op-shop' model; a social enterprise model may also be used to provide supported employment to those with barriers to work. Kickstarter and Kiva are renowned examples.
Social entrepreneur (noun): a person who recognizes a social problem and uses entrepreneurial principles to organize, create, and manage a venture to achieve social change (a social venture). While a business entrepreneur typically measures performance in profit and return, a social entrepreneur focuses on creating social capital; thus, the main aim of social entrepreneurship is to further social and environmental goals. Social entrepreneurs are most commonly associated with the voluntary and not-for-profit sectors, but this need not preclude making a profit.
Solar Energy (noun): the energy of the sun, which reaches the surface of the Earth in the form of visible light, short-wave radiation, ultraviolet light, and other wavelengths. After penetrating the atmosphere, the energy heats the surface of the Earth, while part of it is re-radiated into the atmosphere as long-wave radiation and absorbed by carbon dioxide and water vapor. The utilization of solar energy for the generation of electricity using photovoltaic cells has been developed in recent years, supplying energy utilities and satellites (which are able to absorb extraterrestrial radiation such as ultraviolet waves). Biological systems use sunlit algae to convert carbon dioxide and water into oxygen and protein-rich carbohydrates.
Stakeholder (noun): an individual or group potentially affected by the activities of a company or organization; in sustainable business models the term includes financial shareholders as well as those affected by environmental or social factors such as suppliers, consumers, employees, the local community, and the natural environment.
Standards (noun): governmental or privately-created criteria used to regulate or evaluate products, consumers, organizations and/or producers. Standards can play a critical role in stimulating the market and giving companies information to create better products or change corporate behavior. An example is the LEED green building rating system for buildings, or the Take Back laws imposed in the European Union. See INTERNATIONAL ORGANIZATION FOR STANDARDIZATION
Sustainability (noun): the aspiration to ensure that meeting the needs of the present does not compromise the ability of future generations to meet their own needs. The most widely accepted definition comes from "Our Common Future", the report of the World Commission on Environment and Development, commonly called the Brundtland Report.
Sustainable (adj): able to be maintained or upheld while conserving an ecological balance and avoiding the depletion of natural resources; can apply to other fields.
Sustainable fashion (noun): also called eco fashion; part of the growing design philosophy and trend of sustainability, the goal of which is to create a system that can be supported indefinitely in terms of environmental impact and responsibility.
Synthetic Chemical (noun): an artificially produced chemical.
Synthetic Organic Chemicals (noun): artificial organic chemicals, some of which are volatile and others of which tend to stay dissolved in water without undergoing evaporation.
Textile (noun): a flexible material consisting of a network of natural or artificial fibers, often referred to as thread. Textiles are usually used to make clothing, bedding, and more.
Three Rs (noun): Reduce, Reuse, Recycle. Throughout the green movement these three words have been used to describe processes concerned with waste minimization. Additions have been made to the list, for example The Story of Stuff's Reject (describing short-life products) and the composting-oriented Rot; there are also Rethink and Repurpose.
Thriftcycle (noun): the series of stages in a product's (usually clothing's) lifecycle beginning at purchase and ending before the garment is discarded permanently, provided the product is exchanged at least once in a market transaction between a consumer and a vendor. Usually the stages are purchase, usage, and resale. Thrift stores sell such clothing and either accept donations or buy it.
Toxicity (noun): a physiological or biological property that enables a chemical to do harm, or create injury, to a living organism by other than mechanical means; the ability of a chemical to cause poisoning when administered to a living organism in an appropriate form and manner. Some chemicals have a low toxicity potential, whereas others have a high one.
Transition (noun): the process or a period of changing from one state or condition to another; a passage in a piece of writing that smoothly connects two topics or sections. (Physics) a change of an atom, nucleus, electron, etc., from one quantum state to another, with emission or absorption of radiation. (verb) to undergo, or cause to undergo, a process or period of transition.
Triclosan (noun): an antibacterial and antifungal agent. Despite its use in many consumer products, there is no evidence, according to the US Food and Drug Administration (FDA), that triclosan provides an extra health benefit in consumer products other than toothpaste, where it helps prevent gingivitis.
Triple bottom line (sometimes quadruple) (noun): captures an expanded spectrum of values and criteria for measuring organizational (and societal) success: economic, ecological, and social (profit, planet, and people). With the ratification of the United Nations and ICLEI TBL standard for urban and community accounting in early 2007, this became the dominant approach to public sector full cost accounting. Similar UN standards apply to natural capital and human capital measurement to assist in measurements required by TBL, e.g. the ecoBudget standard for reporting ecological footprint.
Upcycle (verb): to convert waste materials or useless products into new materials or products of better quality or higher environmental value.
USDA (United States Department of Agriculture): the United States federal executive department responsible for developing and executing U.S. federal government policy on farming, agriculture, and food. It aims to meet the needs of farmers and ranchers, promote agricultural trade and production, work to assure food safety, protect natural resources, foster rural communities, and end hunger in the United States and abroad. It also certifies organic agricultural products.
USDA Certified Organic: Label given to food products that meet the requirements set by the National Organic Program (NOP). To receive the certifying label, at least 95% of the ingredients must be organic.
Vegan (noun): a vegetarian who does not consume or use any products derived from animals, including eggs, milk, and all other animal-derived products.
Vegetarian (noun): a person who does not eat meat, and sometimes other animal products, esp. for moral, religious, or health reasons.
Vintage (noun): the year or place in which a wine, esp. one of high quality, was produced; the grapes or wine produced in a particular season; the time that something of quality was produced. (adj) of, relating to, or denoting wine of high quality; denoting something of high quality, esp. something from the past or characteristic of the best period of a person's work.
Virgin (adj): not yet used, exploited, or touched; (of olive oil) made from the first pressing of olives.
VOCs (Volatile Organic Compounds): organic chemicals that have a high vapor pressure at ordinary room-temperature conditions. Their high vapor pressure results from a low boiling point, which causes large numbers of molecules to evaporate from the liquid or solid form of the compound and enter the surrounding air. An example is formaldehyde, which slowly exits paint and enters the air. Many VOCs are dangerous to human health or cause harm to the environment. VOCs are numerous, varied, and ubiquitous; they include both man-made and naturally occurring chemical compounds. Anthropogenic VOCs are regulated by law, especially indoors, where concentrations are highest. VOCs are typically not acutely toxic but instead have compounding long-term health effects. Because the concentrations are usually low and the symptoms slow to develop, research into VOCs and their effects is difficult. They are found in paint, fabrics, finishes, foams, stains, and other industrial products.
Waste-to-Energy (noun): the combustion of waste in a controlled-environment incinerator to generate a usable form of steam, heat, or electricity.
Wind power (noun): the conversion of wind energy into a useful form of energy, such as using wind turbines to make electricity, windmills for mechanical power, windpumps for water pumping or drainage, or sails to propel ships. Wind power is renewable, widely distributed, and clean, and produces no greenhouse gas emissions during operation. A large wind farm may consist of several hundred individual wind turbines connected to the electric power transmission network. At the end of 2010, worldwide nameplate capacity of wind-powered generators was 197 gigawatts (GW); energy production was 430 TWh, about 2.5% of worldwide electricity usage. Several countries have achieved relatively high levels of wind power penetration, such as 21% of stationary electricity production in Denmark, 18% in Portugal, 16% in Spain, 14% in Ireland, and 9% in Germany in 2010. As of 2011, 83 countries around the world are using wind power on a commercial basis.
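The capacity and production figures quoted above imply an average "capacity factor" for the world wind fleet, since turbines rarely run at full rated output. The short Python sketch below derives it; the derivation is our own illustration, using only the 2010 figures already given in this entry.

    # Derive the implied average capacity factor of worldwide wind power in 2010
    # from the figures quoted above: 197 GW nameplate capacity, 430 TWh produced.

    nameplate_gw = 197        # worldwide nameplate capacity, end of 2010
    energy_twh = 430          # energy produced over the year, in TWh
    hours_per_year = 8760     # 365 days x 24 hours

    max_possible_twh = nameplate_gw * hours_per_year / 1000  # GWh -> TWh
    capacity_factor = energy_twh / max_possible_twh

    # Prints "25%": on average, turbines delivered about a quarter of their
    # rated output, because the wind does not always blow at full strength.
    print(f"{capacity_factor:.0%}")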
Xeric (adj): containing little moisture; very dry.
Xeriscaping (noun): landscaping that incorporates native species and plants that are not water intensive.
African-American Civil Rights Movement (1896–1954)
The Civil Rights Movement in the United States was a long, primarily nonviolent struggle to bring full civil rights and equality under the law to all Americans. The movement has had a lasting impact on United States society, in its tactics, the increased social and legal acceptance of civil rights, and in its exposure of the prevalence and cost of racism.
The American Civil Rights movement has been made up of many movements. The term usually refers to the political struggles and reform movements between 1945 and 1970 to end discrimination against African Americans and other disadvantaged groups and to end legal racial segregation, especially in the US South.
This article focuses on an earlier phase of the struggle. Two United States Supreme Court decisions—Plessy v. Ferguson, 163 U.S. 537 (1896), which upheld "separate but equal" racial segregation as constitutional doctrine, and Brown v. Board of Education, 347 U.S. 483 (1954) which overturned Plessy — serve as milestones. This was an era of stops and starts, in which some movements, such as Marcus Garvey's Universal Negro Improvement Association, were very successful but left little lasting legacy, while others, such as the NAACP's painstaking legal assault on state-sponsored segregation, achieved modest results in its early years but made steady progress on voter rights and gradually built to a key victory in Brown v. Board of Education (1954).
After the Civil War, the US expanded the legal rights of African Americans. Congress passed, and enough states ratified, an amendment ending slavery in 1865—the 13th Amendment to the United States Constitution. This amendment only outlawed slavery; it provided neither citizenship nor equal rights. In 1868, the 14th Amendment was ratified by the states, granting African Americans citizenship. All persons born in the US were extended equal protection under the laws of the Constitution. The 15th Amendment (ratified in 1870) stated that race could not be used as a condition to deprive men of the ability to vote. During Reconstruction (1865–1877), Northern troops occupied the South. Together with the Freedmen's Bureau, they tried to administer and enforce the new constitutional amendments. Many black leaders were elected to local and state offices, and many others organized community groups, especially to support education.
Reconstruction ended following the Compromise of 1877 between Northern and Southern white elites. In exchange for deciding the contentious Presidential election in favor of Rutherford B. Hayes, supported by Northern states, over his opponent, Samuel J. Tilden, the compromise called for the withdrawal of Northern troops from the South. This followed violence and fraud in southern elections from 1868–1876, which had reduced black voter turnout and enabled Southern white Democrats to regain power in state legislatures across the South. The compromise and withdrawal of Federal troops meant that white Democrats had more freedom to impose and enforce discriminatory practices. Many African Americans responded to the withdrawal of federal troops by leaving the South in what is known as the Kansas Exodus of 1879.
The Radical Republicans, who spearheaded Reconstruction, had attempted to eliminate both governmental and private discrimination by legislation. That effort was largely ended by the Supreme Court's decision in the Civil Rights Cases, 109 U.S. 3 (1883), in which the Court held that the Fourteenth Amendment did not give Congress power to outlaw racial discrimination by private individuals or businesses.
Key events
The Supreme Court's decision in Plessy v. Ferguson (1896) upheld state-mandated discrimination in public transportation under the "separate but equal" doctrine. As Justice Harlan, the only member of the Court to dissent from the decision, predicted:
- If a state can prescribe, as a rule of civil conduct, that whites and blacks shall not travel as passengers in the same railroad coach, why may it not so regulate the use of the streets of its cities and towns as to compel white citizens to keep on one side of a street, and black citizens to keep on the other? Why may it not, upon like grounds, punish whites and blacks who ride together in street cars or in open vehicles on a public road or street? . . . .
The Plessy decision did not address an earlier Supreme Court case, Yick Wo v. Hopkins, 118 U.S. 356 (1886), involving discrimination against Chinese immigrants, that held that a law that is race-neutral on its face, but is administered in a prejudicial manner, is an infringement of the Equal Protection Clause in the Fourteenth Amendment to the US Constitution. While in the 20th century, the Supreme Court began to overturn state statutes that disfranchised African Americans, as in Guinn v. United States (1915), with Plessy, it upheld segregation that Southern states enforced in nearly every other sphere of public and private life. The Court soon extended Plessy to uphold segregated schools. In Berea College v. Kentucky, 211 U.S. 45 (1908), the Court upheld a Kentucky statute that barred Berea College, a private institution, from teaching both black and white students in an integrated setting. Many states, particularly in the South, took Plessy and Berea as blanket approval for restrictive laws, generally known as Jim Crow laws, that created second-class status for African Americans.
In many cities and towns, African Americans were not allowed to share a taxi with whites or enter a building through the same entrance. They had to drink from separate water fountains, use separate restrooms, attend separate schools, be buried in separate cemeteries and swear on separate Bibles. They were excluded from restaurants and public libraries. Many parks barred them with signs that read "Negroes and dogs not allowed." One municipal zoo listed separate visiting hours.
The etiquette of racial segregation was harsher, particularly in the South. African Americans were expected to step aside to let a white person pass, and black men dared not look any white woman in the eye. Black men and women were addressed as "Tom" or "Jane", but rarely as "Mr." or "Miss" or "Mrs," titles then widely in use for adults. Whites referred to black men of any age as "boy" and a black woman as "girl"; both often were called by labels such as "nigger" or "colored."
The less formal social segregation of the North began to yield to change. In 1941, however, the United States Naval Academy, based in segregated Maryland, refused to play a lacrosse game against Harvard University because Harvard's team included a black player.
Paul Robeson addresses segregation in Major League Baseball, 1943
In December 1943, the singer and activist Paul Robeson became the first black man to address baseball team owners on the subject of integration. At the owners' annual winter meeting, Robeson argued that baseball, as a national game, had an obligation to ensure segregation did not become a national pattern. The owners gave Robeson a round of applause. Although Baseball Commissioner Kenesaw Mountain Landis remarked after the meeting that there was no rule on the books denying blacks entry into the league, he had stood in the way of integration for more than 20 years. His death in 1944 removed a significant obstacle to integrating Major League Baseball. Still, Robeson is credited with helping to pave the way for Jackie Robinson's entry into major league baseball four years later.
Jackie Robinson’s Major League Baseball debut, 1947
Jackie Robinson was a sports pioneer of the Civil Rights Movement, best known for becoming the first African American to play professional sports in the major leagues. Robinson debuted with the Brooklyn Dodgers of Major League Baseball on April 15, 1947. His first major league game came one year before the US Army was integrated, seven years before Brown v. Board of Education, eight years before Rosa Parks, and before Martin Luther King Jr. was leading the Civil Rights Movement.
Political opposition
Lily-White Movement
Following the Civil War black leaders made substantial progress in establishing representation in the Republican Party. Among the most stunning was the rise of Norris Wright Cuney to the chairmanship of the Texas Republican Party. These gains led to substantial discomfort among many white voters, some of whom left the party to join the Democrats.
During the 1888 Texas Republican Convention, Cuney coined the term Lily-White Movement to describe efforts by white conservatives to oust blacks from positions of party leadership and incite riots to divide the party. Increasingly organized efforts by this movement gradually eliminated black leaders from the party. The writer Michael Fauntroy contends that the effort was coordinated with Democrats as part of a larger movement toward disfranchisement of blacks in the South, but by the late 19th century, the Democratic Party had retaken most state legislatures in the South and accomplished disfranchisement of blacks without Republican assistance.
Nationally, the Republican Party responded to black demands. For instance, opposition to lynching was part of the Republican platform at the 1920 Republican National Convention. Lynchings, primarily of black men in the South, had increased in the decades around the turn of the 20th century. Leonidas C. Dyer, a white Republican Representative from St. Louis, Missouri, worked with the NAACP to introduce an anti-lynching bill into the House, where it passed by a strong margin in 1922. His effort was defeated by the Southern Democratic bloc in the Senate, which filibustered the bill that year, and again in 1923 and 1924.
Opponents of black civil rights used economic reprisals and sometimes violence in the 1870s and 1880s to discourage blacks from registering to vote. By the turn of the 20th century, white Democratic-dominated Southern legislatures had disfranchised nearly all age-eligible African-American voters through a combination of statutes and constitutional provisions. While the requirements applied to all citizens, in practice they were targeted at blacks and poor whites (and Mexican Americans in Texas) and were subjectively administered. The feature "Turnout in Presidential and Midterm Elections" at the University of Texas politics website shows the drastic drop in voting as these provisions took effect in Southern states compared to the rest of the US, and the longevity of the measures.
Mississippi was the first state to have such constitutional provisions (poll taxes, literacy tests that depended on the arbitrary decisions of white registrars, and complicated record keeping to establish residency) litigated before the Supreme Court. In 1898, in Williams v. Mississippi, the Court upheld the state's provisions. Other Southern states quickly adopted the "Mississippi plan", and from 1890 to 1908, ten states adopted new constitutions with provisions to disfranchise most blacks and many poor whites. States continued to disfranchise these groups for decades, until federal legislation in the mid-1960s provided for oversight and enforcement of voting rights. Blacks were most adversely affected, and in many states black voter turnout dropped to zero.
Poor whites were also disfranchised. In Alabama, for instance, by 1941, 600,000 poor whites had been disfranchised, as well as 520,000 blacks.
It was not until the 20th century that litigation by African Americans on such provisions began to meet some success before the Supreme Court. In 1915 in Guinn v. United States, the Court declared Oklahoma’s ‘grandfather clause’ to be unconstitutional. Although the decision affected all states that used the grandfather clause, state legislatures quickly employed new devices to continue disfranchisement. Each provision or statute had to be litigated separately. The NAACP litigated against many such provisions.
One device which the Democratic Party began to use more widely in Southern states in the early 20th century was the white primary, which served for decades to disfranchise the few blacks who managed to get past barriers of voter registration. Barring blacks from voting in the Democratic Party primaries meant they had no chance to vote in the only competitive contests. White primaries were not struck down by the Supreme Court until Smith v. Allwright in 1944.
Criminal law and lynching
In 1880, the United States Supreme Court ruled in Strauder v. West Virginia, 100 U.S. 303 (1880) that African Americans could not be excluded from juries. But, beginning in 1890 with new state constitutions and electoral laws, Southern states effectively disfranchised blacks, which routinely disqualified them from jury duty, since juries were drawn from the rolls of voters. This left them at the mercy of a white justice system arrayed against them. In some states, particularly Alabama, the state used the criminal justice system to reestablish a form of peonage through the convict-lease system. The state sentenced black males to years of imprisonment, which they spent working without pay. The state leased prisoners to private employers, such as Tennessee Coal, Iron and Railroad Company, a subsidiary of United States Steel Corporation, which paid the state for their labor. Because the state made money, the system created incentives for the jailing of more men, who were disproportionately black. It also created a system in which treatment of prisoners received little oversight.
Extrajudicial punishment was more brutal. During the last decade of the 19th century and the first decades of the 20th century, white vigilante mobs lynched thousands of black males, sometimes with the overt assistance of state officials, mostly within the South. No whites were charged with crimes in any of those murders. Whites were so confident of their immunity from prosecution for lynching that they not only photographed the victims, but made postcards out of the pictures.
The Ku Klux Klan, which had largely disappeared after a brief violent career in the early years of Reconstruction, reappeared in 1915. It grew mostly in industrializing cities of the South and Midwest that underwent the most rapid growth from 1910 to 1930. Social instability contributed to racial tensions amid severe competition for jobs and housing. People who were anxious about their place in American society joined KKK groups as cities were rapidly changed by a combination of industrialization, migration of blacks and whites from the rural South, and waves of increased immigration from mostly rural southern and eastern Europe.
Initially the KKK presented itself as another fraternal organization devoted to betterment of its members. The KKK's revival was inspired in part by the movie Birth of a Nation, which glorified the earlier Klan and dramatized the racist stereotypes concerning blacks of that era. The Klan focused on political mobilization, which allowed it to gain power in states such as Indiana, on a platform that combined racism with anti-immigrant, anti-Semitic, anti-Catholic and anti-union rhetoric, but also supported lynching. It reached its peak of membership and influence about 1925, declining rapidly afterward as opponents mobilized.
Republicans repeatedly introduced bills in the House to make lynching a federal crime, but they were defeated by the Southern bloc. In 1920 the Republicans made an anti-lynching bill part of their platform and achieved passage in the House by a wide margin. Southern Democrats in the Senate repeatedly filibustered the bill to prevent a vote, defeating it in the 1922, 1923, and 1924 sessions as they held the rest of the legislative program hostage.
Segregated economic life and education
Besides excluding blacks from equal participation in many areas of public life, white society also kept blacks in a position of economic subservience or marginality. After widespread losses of crops to disease and of land to financial failures in the late 19th century, black farmers in the South by the early 20th century worked in virtual economic bondage as sharecroppers or tenant farmers. In Mississippi particularly, many blacks had become landowners before the financial failures of the late 19th century. Employers and labor unions generally restricted African Americans to the worst-paid and least desirable jobs. Because of the lack of steady, well-paid work, relatively undistinguished positions, such as Pullman porter or hotel doorman, became prestigious in black communities in the North. As the railroads expanded, they recruited laborers in the South, and tens of thousands of blacks moved North to work with the Pennsylvania Railroad, for example, during the period of the Great Migration.
The Jim Crow system that excluded African Americans from many areas of economic life led to creation of a vigorous, but stunted economic life within the segregated sphere. Black newspapers sprang up throughout the North, while black owners of insurance and funeral establishments, and other services for blacks, acquired disproportionate influence as both economic and political leaders.
This period saw the maturing of independent black churches, whose leaders were usually also strong community leaders. Blacks had left white churches and the Southern Baptist Convention to set up their own churches, free of white supervision, during and immediately after the American Civil War. With the help of northern associations, they quickly began to set up state conventions and, by 1895, joined several associations into the black National Baptist Convention, the first of that denomination among blacks. In addition, independent black denominations, such as the African Methodist Episcopal Church and AME Zion Church, had made hundreds of thousands of converts in the South, founding AME churches across the region. The churches were centers of community activity, especially organizing for education.
Continuing to see education as the primary route of advancement and critical for the race, many talented blacks went into teaching, which was highly respected as a profession. Segregated schools for blacks were underfunded in the South and ran on shortened schedules in rural areas. In Washington, DC, by contrast, black and white teachers, as Federal employees, were paid on the same scale. Outstanding black teachers in the North received advanced degrees and taught in highly regarded schools, which trained the next generation of leaders in cities such as Chicago, Washington, and New York, whose black populations had increased in the 20th century due to the Great Migration.
Education was one of the major achievements of the black community in the 19th century. Blacks in Reconstruction governments had supported the establishment of public education in every Southern state. Despite the difficulties, with the enormous eagerness of freedmen for education, by 1900 the African-American community had trained and put to work 30,000 African-American teachers in the South. In addition, a majority of the black population had achieved literacy. Not all the teachers had a full 4-year college degree in those years, but the shorter terms of normal schools were part of the system of teacher training in both the North and the South to serve the many new communities across the frontier. African-American teachers got many children and adults started on education.
Northern alliances had helped fund normal schools and colleges to teach African-American teachers, as well as create other professional classes. The American Missionary Association, supported largely by the Congregational and Presbyterian churches, had helped fund and staff numerous private schools and colleges in the South, who collaborated with black communities to train generations of teachers and other leaders. Major 20th-century industrialists, such as George Eastman of Rochester, New York, acted as philanthropists and made substantial donations to black educational institutions such as Tuskegee Institute.
In 1862, the US Congress passed the Morrill Act, which established federal funding of a land grant college in each state, but 17 states refused to admit black students to their land grant colleges. In response, Congress enacted the second Morrill Act of 1890, which required states that excluded blacks from their existing land grant colleges to open separate institutions and to equitably divide the funds between the schools. The colleges founded in response to the second Morrill Act became today's public historically black colleges and universities (HBCUs) and, together with the private HBCUs and the unsegregated colleges in the North and West, provided higher educational opportunities to African Americans. Federally funded extension agents from the land grant colleges spread knowledge about scientific agriculture and home economics to rural communities, with agents from the HBCUs focusing on black farmers and families.
In the 19th century, blacks formed fraternal organizations across the South and the North, including an increasing number of women's clubs. They created and supported institutions that increased education, health and welfare for black communities. After the turn of the 20th century, black men and women also began to found their own college fraternities and sororities to create additional networks for lifelong service and collaboration. These were part of the new organizations that strengthened independent community life under segregation.
The Black church
As the center of community life, Black churches were integral leaders and organizers in the Civil Rights Movement. Their history as a focal point for the Black community and as a link between the Black and White worlds made them a natural fit for this role. Rev. Martin Luther King, Jr. was but one of many notable Black ministers involved in the movement; Ralph David Abernathy, Bernard Lee, Fred Shuttlesworth, and C.T. Vivian are among the other notable minister-activists. They were especially important during the later years of the movement, in the 1950s and 1960s.
The Niagara Movement and the founding of the NAACP
At the turn of the 20th century, Booker T. Washington was regarded, particularly by the white community, as the foremost spokesman for African Americans in the US. Washington, who led the Tuskegee Institute, preached a message of self-reliance. He urged blacks to concentrate on improving their economic position rather than demanding social equality until they had proved that they "deserved" it. Publicly, he accepted the continuation of Jim Crow and segregation in the short term, but privately helped to fund national court cases that challenged the laws.
W. E. B. Du Bois and others in the black community rejected Washington's apologia for segregation. One of Du Bois's close associates, William Monroe Trotter, was arrested after challenging Washington when he came to deliver a speech in Boston in 1903. In 1905, Du Bois and Trotter convened a meeting of black activists on the Canadian side of Niagara Falls. They issued a manifesto calling for universal manhood suffrage, elimination of all forms of racial segregation, and extension of education—not limited to the vocational education that Washington emphasized—on a nondiscriminatory basis. The Niagara Movement was actively opposed by Washington and had effectively collapsed due to internal divisions by 1908.
Du Bois joined with other black leaders and white activists, such as Mary White Ovington, Oswald Garrison Villard, William English Walling, Henry Moskowitz, Julius Rosenthal, Lillian Wald, Rabbi Emil G. Hirsch, and Stephen Wise to create the National Association for the Advancement of Colored People (NAACP) in 1909. Du Bois also became editor of its magazine The Crisis. In its early years, the NAACP concentrated on using the courts to attack Jim Crow laws and disfranchising constitutional provisions. It successfully challenged the Louisville, Kentucky ordinance that required residential segregation in Buchanan v. Warley, 245 U.S. 60 (1917). It also gained a Supreme Court ruling striking down Oklahoma's grandfather clause that exempted most illiterate white voters from a law that disfranchised African-American citizens in Guinn v. United States (1915).
The NAACP lobbied against President Woodrow Wilson's introduction of racial segregation into Federal government employment and offices in 1913. They lobbied for commissioning of African Americans as officers in World War I. In 1915 the NAACP organized public education and protests in cities across the nation against D.W. Griffith's silent film Birth of a Nation, a film that glamorized the Ku Klux Klan. Some cities refused to allow the film to open.
The American Jewish community and the civil rights movement
Many from the American Jewish community tacitly or actively supported the civil rights movement. Several of the co-founders of the NAACP were Jewish. Many of its white members and leading activists came from within the Jewish community. The great majority of American Jews who were active in promoting civil rights were secular Jews, Reform Jews and Conservative Jews, especially during the later years.
Jewish philanthropists actively supported the NAACP, various civil rights groups, and schools for African Americans. The Jewish philanthropist Julius Rosenwald funded the creation of dozens of primary schools, secondary schools, and colleges for segregated black youth. In partnership with Booker T. Washington and Tuskegee University, Rosenwald created a fund which provided seed money for building 5,000 schools for black Americans, mostly in the rural South, with Tuskegee architects creating model school plans. Remarkably, black communities essentially taxed themselves twice to pay for such schools, which required community matching funds: public funds were committed for the schools, and blacks, often the majority of residents in these rural areas, raised additional money through community events and sometimes by taking second mortgages on their homes. At one time some forty percent of rural southern blacks were learning at Rosenwald elementary schools.
The PBS television show From Swastika to Jim Crow discussed Jewish involvement in the civil rights movement. It recounted that Jewish scholars fleeing from or surviving the Holocaust of World War II came to teach at many Southern schools, where they reached out to black students:
- Thus, in the 1930s and 1940s when Jewish refugee professors arrived at Southern Black Colleges, there was a history of overt empathy between Blacks and Jews, and the possibility of truly effective collaboration. Professor Ernst Borinski organized dinners at which Blacks and Whites would have to sit next to each other — a simple yet revolutionary act. Black students empathized with the cruelty these scholars had endured in Europe and trusted them more than other Whites. In fact, often Black students — as well as members of the Southern White community — saw these refugees as "some kind of colored folk."
"The New Negro"
The experience of fighting in World War I, along with exposure to different racial attitudes in Europe, influenced black veterans by creating a widespread demand for the freedoms and equality for which they had fought. Those veterans found conditions at home as bad as ever. Some were assaulted for having the impertinence to wear their uniforms in public. This generation responded with a far more militant spirit than the generation before, urging blacks to fight back when whites attacked them. A. Philip Randolph introduced the term the "New Negro" in 1917; it became the catchphrase for the new spirit of militancy and impatience of the post-war era.
A group known as the African Blood Brotherhood, a socialist group with a large number of Caribbean émigrés in its leadership, organized around 1920 to demand the same sort of self-determination for black Americans that the Wilson administration was promising to Eastern European peoples at the Versailles conference in the aftermath of World War I. The leaders of the Brotherhood, many of whom joined the Communist Party in the years to come, were also inspired by the anti-imperialist program of the new Soviet Union.
In addition, during the Great Migration, hundreds of thousands of African Americans moved to northern industrial cities, starting before World War I and continuing through 1940. Another wave of migration during and after World War II led many to West Coast cities, as well as to more cities in the North and Midwest. They were both fleeing violence and segregation and seeking jobs, as manpower shortages in war industries promised steady work. Continued depressed conditions in the farm economy of the South in the 1920s made the North look more appealing. Those expanding northern communities confronted familiar problems—racism, poverty, police abuse and official hostility—but these arose in a new setting, where the men could vote (and women, too, after 1920), and possibilities for political action were far broader than in the South.
Marcus Garvey and the UNIA
Marcus Garvey's Universal Negro Improvement Association (UNIA) made great strides in organizing in these new communities in the North, and among the internationalist-minded "New Negro" movement of the early 1920s. Garvey's program pointed in the opposite direction from mainstream civil rights organizations such as the NAACP: instead of striving for integration into white-dominated society, his program of Pan-Africanism, which became known as Garveyism, encouraged economic independence within the system of racial segregation in the United States, an African Orthodox Church with a black Jesus and black Virgin Mother as an alternative to the white Jesus of the black church, and a campaign urging African Americans to "return to Africa", if not physically, at least in spirit. Garvey attracted thousands of supporters, both in the United States and in the African diaspora in the Caribbean, and claimed eleven million members for the UNIA, which was broadly popular in Northern black communities.
Garvey's movement was a contradictory mix of defeatism, accommodation and separatism: he married themes of self-reliance that Booker T. Washington could have endorsed and the "gospel of success" so popular in white America in the 1920s with a rejection of white colonialism abroad and any hope of reform of white society at home. The movement at first attracted many of the foreign-born radicals also associated with the Socialist and Communist parties, but drove many of them away when Garvey began to suspect them of challenging his control.
The movement collapsed nearly as quickly as it had blossomed after the federal government convicted Garvey of mail fraud in connection with the movement's financially troubled "Black Star Line". The government commuted Garvey's sentence and deported him to his native Jamaica in 1927. Although the movement foundered without him, it inspired later self-help and separatist movements, including Father Divine and the Nation of Islam.
The Labor movement and civil rights
The labor movement, with some exceptions, had historically excluded African Americans. The radical labor organizers who led organizing drives among packinghouse workers in Chicago and Kansas City during World War I, and in the steel industry in 1919, made determined efforts to appeal to black workers, but they were not able to overcome the widespread distrust of the labor movement among black workers in the North. With the ultimate defeat of both of those organizing drives, the black community and the labor movement largely returned to their traditional mutual mistrust.
Left-wing political activists in the labor movement made some progress in the 1920s and 1930s, however, in bridging that gap. A. Philip Randolph, a long-time member of the Socialist Party of America, took the leadership of the fledgling Brotherhood of Sleeping Car Porters at its founding in 1925. Randolph and the union faced opposition not only from the Pullman Company but also from the press and churches within the black community, many of which were beneficiaries of financial support from the company. The union eventually won over many of its critics in the black community by wedding its organizing program to the larger goal of black empowerment. It won recognition from the Pullman Company in 1935, after a ten-year campaign, and a union contract in 1937.
The BSCP became the only black-led union within the American Federation of Labor in 1935. Randolph chose to remain within the AFL when the Congress of Industrial Organizations split from it. The CIO was much more committed to organizing African-American workers and made strenuous efforts to persuade the BSCP to join it, but Randolph believed more could be done to advance black workers' rights, particularly in the railway industry, by remaining in the AFL, to which the other railway brotherhoods belonged. Randolph remained the voice for black workers within the labor movement, raising demands for elimination of Jim Crow unions within the AFL at every opportunity. BSCP members such as Edgar Nixon played a significant role in the civil rights struggles of the following decades.
Many of the CIO unions, in particular the Packinghouse Workers, the United Auto Workers, and the Mine, Mill and Smelter Workers, made advocacy of civil rights part of their organizing strategy and bargaining priorities: they gained improvements for workers in meatpacking in Chicago and Omaha, and in steel and related industries throughout the Midwest. The Transport Workers Union of America, which had strong ties with the Communist Party at the time, entered into coalitions with Adam Clayton Powell, Jr., the NAACP and the National Negro Congress to attack employment discrimination in public transit in New York City in the early 1940s.
The CIO was particularly vocal in calling for the elimination of racial discrimination by defense industries during World War II; it was also forced to combat racism within its own membership, putting down strikes by white workers who refused to work with black co-workers. Most of these "hate strikes" were short-lived, but a wildcat strike launched in Philadelphia in 1944, after the federal government ordered the private transit company to desegregate its workforce, lasted two weeks and ended only when the Roosevelt administration sent troops to guard the system and arrested the strike's ringleaders.
Randolph and the BSCP took the battle against employment discrimination even further, threatening a March on Washington in 1942 if the government did not take steps to outlaw racial discrimination by defense contractors. Randolph limited the March on Washington Movement to black organizations to maintain black leadership; he endured harsh criticism from others on the left for his insistence on black workers' rights in the middle of a war. Randolph only dropped the plan to march after winning substantial concessions from the Roosevelt administration.
The Left and civil rights
The Scottsboro Boys
In 1931, the NAACP and the Communist Party USA organized support for the "Scottsboro Boys", nine black youths arrested after a fight with some white men who were also riding the rails, then convicted and sentenced to death for allegedly raping two white women, dressed in men's clothes, who were found on the same train. The NAACP and the CP fought over control of the cases and the strategy to be pursued; the CP and its arm, the International Labor Defense (ILD), largely prevailed. The ILD's legal campaign produced two significant Supreme Court decisions (Powell v. Alabama and Norris v. Alabama) extending the rights of defendants; its political campaign saved all the defendants from the death sentence and ultimately led to freedom for most of them.
The Scottsboro defense was only one of the ILD's many cases in the South; for a period in the early and mid-1930s, the ILD was the most active defender of blacks' civil rights, and the Communist Party attracted many members among activist African Americans. Its campaigns for black defendants' rights did much to focus national attention on the extreme conditions which black defendants faced in the criminal justice system throughout the South.
The NAACP
The NAACP devoted much of its energy between the first and second world wars to fighting the lynching of blacks and investigating the serious race riots that broke out in numerous cities throughout the United States during the "Red Summer" of 1919, the product of postwar economic and social tensions. The organization sent Walter F. White, who later became its executive secretary, to Phillips County, Arkansas, in October 1919 to investigate the Elaine Race Riot, which was unusual for that year in being rural: more than 200 black tenant farmers were killed by roving white vigilantes and federal troops after a deputy sheriff's attack on a union meeting of sharecroppers left one white man dead. A month later the NAACP organized the appeals of twelve men sentenced to death, on the grounds that their convictions rested on testimony obtained by beating and electric shocks. It obtained a groundbreaking Supreme Court decision in Moore v. Dempsey, 261 U.S. 86 (1923), which significantly expanded the federal courts' oversight of the states' criminal justice systems in the years to come.
The NAACP also spent more than a decade seeking federal legislation barring lynching. It regularly displayed a black flag reading "A Man Was Lynched Yesterday" from the window of its offices in New York to mark each outrage. Efforts to pass an anti-lynching law foundered on Southern Democratic power in Congress: although Republicans achieved passage of an anti-lynching bill in the House in 1922, Southern Democratic senators filibustered it and defeated it in the 1922, 1923 and 1924 legislative sessions. The Southern Democratic bloc controlled important chairmanships in both houses of Congress and defeated every anti-lynching proposal.
The NAACP, in alliance with the American Federation of Labor, led the successful fight to block the confirmation of John Johnston Parker's nomination to the Supreme Court, opposing him because of his opposition to black suffrage and his anti-labor rulings. This alliance and lobbying campaign were important for the NAACP, both in demonstrating its ability to mobilize widespread opposition to racism and as a first step toward building political alliances with the labor movement.
After World War II, returning African-American veterans were spurred by their sacrifices and experiences to renew demands for the protection and exercise of their constitutional rights as citizens in US society. One serviceman reportedly said, "I spent four years in the Army to free a bunch of Dutchmen and Frenchmen, and I'm hanged if I'm going to let the Alabama version of the Germans kick me around when I get home. No sirree-bob! I went into the Army a nigger; I'm comin' out a man." From 1940 to 1946, the NAACP's membership grew from 50,000 to 450,000.
The NAACP's legal department, headed by Charles Hamilton Houston and Thurgood Marshall, undertook a campaign spanning several decades to bring about the reversal of the "separate but equal" doctrine announced by the Supreme Court's decision in Plessy v. Ferguson. Instead of appealing to the legislative or executive branches of government, they focused on the judiciary, reasoning that Congress was dominated by Southern segregationists, while the Presidency could not afford to lose the Southern vote. The NAACP's first cases did not challenge the principle directly, but sought instead to show that the state's segregated facilities were not equal.
Even those more modest goals helped lay the foundation for the ultimate reversal of the doctrine in Plessy v. Ferguson by showing the irrational nature of the distinctions that the states drew to preserve segregation and the humiliating impact it had on the black subjects of "separate but equal" treatment. The Supreme Court's unanimous decision in Brown v. Board of Education (1954), holding that state-sponsored segregation of elementary schools was unconstitutional, was a first step in dismantling segregation in the South. It was a historic milestone in reframing the national debate over segregation by putting state-sponsored discrimination beyond constitutional defense.
Marshall eventually decided to go beyond the initial aims of the NAACP, thinking that the time had come to do away with "separate but equal". The NAACP issued a directive stating that its goal was now "obtaining education on a nonsegregated basis and that no relief other than that will be acceptable." The first case Marshall argued on this basis was Briggs v. Elliott, but cases were also filed in other states. In Topeka, Kansas, the local NAACP branch recruited Oliver Brown, an assistant pastor and the father of three girls, as an ideal plaintiff, instructing him to attempt to enroll his daughters at a local white school; after the expected rejection, Brown v. Board of Education was filed. This and several other cases later made their way to the Supreme Court, where they were all consolidated under the title of Brown, a name apparently chosen "so that the whole question would not smack of being a purely southern one."
Some in the NAACP thought Marshall was being too enthusiastic, fearing that Chief Justice Fred M. Vinson, a Southerner who would almost certainly oppose overruling Plessy, could destroy their case. One historian stated: "There was a sense that if you do this and you lose, you're going to enshrine Plessy for a generation." A government lawyer involved in the case agreed that it was "a mistake to push for the overruling of segregation per se so long as Vinson was chief justice — it was too early." In December 1952, the Supreme Court heard the case but could not come to a decision. Unusually, the case was pushed back a year to allow the lawyers involved to research the intentions of the framers who drafted the equal protection clause of the 14th Amendment. In September 1953, Vinson died of a heart attack, leading Justice Felix Frankfurter to proclaim: "This is the first indication I have ever had that there is a God." Vinson was replaced by Earl Warren, who was known for his moderate views on civil rights.
After the case was reheard in December, Warren set about persuading his colleagues to reach a unanimous decision overruling Plessy. Five of the other eight justices were firmly on his side, while another two were persuaded by Warren's promise that the decision would not touch greatly on the question of Plessy's legality, focusing instead on the principle of equality. The remaining holdout, Justice Stanley Reed, was swayed after it was suggested that a Southerner's lone dissent could be more dangerous and incendiary than a unanimous decision. In May 1954, Warren announced the Court's decision, which he had authored, declaring that "segregation of children in public schools solely on the basis of race" deprived "the children of the minority group of equal educational opportunities".
The decision was strongly resisted by a number of Southerners; the Governor of Virginia, Thomas B. Stanley, insisted he would "use every legal means at my command to continue segregated schools in Virginia." One survey suggested that only 13% of Florida policemen were willing to enforce the decision in Brown. Nineteen Senators and 77 members of the House of Representatives, including the entire congressional delegations of Alabama, Arkansas, Georgia, Louisiana, Mississippi, South Carolina and Virginia, signed "The Southern Manifesto"; all but two of the signatories were Southern Democrats, and the two Republicans, Joel Broyhill and Richard Poff of Virginia, likewise promised to resist the decision by "lawful means". By the fall of 1955, Cheryl Brown had started first grade at an integrated school, a first step on the long road to eventual equality for African Americans.
The Regional Council of Negro Leadership: Laying a Civil Rights Foundation
On December 28, 1951, T.R.M. Howard, an entrepreneur, surgeon, fraternal leader and planter in Mississippi, founded the Regional Council of Negro Leadership (RCNL) along with other key blacks in the state. At first the RCNL, which was based in the all-black town of Mound Bayou, did not directly challenge "separate but equal" (much like the initial stance of the Montgomery Improvement Association), but worked to guarantee the "equal." It often identified inadequate schools as the primary factor responsible for the black exodus to the North. Instead of demanding immediate integration, however, it called for equal school terms for both races. From the beginning, the RCNL also pledged an "all-out fight for unrestricted voting rights."
The RCNL's board represented some of the key black business, fraternal, agricultural, educational, and governmental leaders in the state. Sixteen relatively autonomous committees, each headed by a respected leader in business, education, the church, or the professions, formed the backbone of the RCNL. The committees, in turn, reported to an executive board and a board of directors headed by Howard.
The RCNL's most famous member was Medgar Evers. Fresh from graduation at Alcorn State University in 1952, he had moved to Mound Bayou to sell insurance for Howard. Evers soon became the RCNL's program director and helped organize a boycott of service stations that failed to provide restrooms for blacks. As part of this campaign, the RCNL distributed an estimated 20,000 bumper stickers with the slogan "Don't Buy Gas Where You Can't Use the Rest Room." Beginning in 1953, the RCNL directly challenged "separate but equal" and demanded the integration of schools.
The RCNL's annual meetings in Mound Bayou between 1952 and 1955 attracted crowds of ten thousand or more. They featured speeches by Rep. William L. Dawson of Chicago, Rep. Charles Diggs of Michigan, Alderman Archibald Carey, Jr. of Chicago, and NAACP attorney Thurgood Marshall. Each of these events, in the words of Myrlie Evers (later Myrlie Evers-Williams), the wife of Medgar, constituted "a huge all-day camp meeting: a combination of pep rally, old-time revival, and Sunday church picnic." The conferences also included panels and workshops on voting rights, business ownership, and other issues. Attendance was a life-transforming experience for many future black civil rights leaders who became prominent in the 1960s, such as Fannie Lou Hamer, Amzie Moore, Aaron Henry and George W. Lee.
The RCNL also played a key role in the search for witnesses and evidence in the Emmett Till murder case in late 1955, and Howard spoke at many rallies throughout the country in the aftermath of the trial.
On November 27, 1955, Rosa Parks attended one of these speeches at Dexter Avenue Church in Montgomery. The host for this event was a then relatively unknown Rev. Martin Luther King Jr. Parks later said that she was thinking of Till when she refused to give up her seat four days later.
See also
- African-American Civil Rights Movement (1955–1968)
- List of 19th-century African-American civil rights activists
- Nadir of American race relations
- African-American history
- Timeline of the African American Civil Rights Movement
- Timeline of racial tension in Omaha, Nebraska
- America: A Narrative History, Chapter 18-19
- Text of Yick Wo v. Hopkins, 118 U.S. 356 (1886) is available from Findlaw.
- West, Jean. "Branch Rickey and Jackie Robinson, Interview Essay". Retrieved 2009-04-15.
- Tygiel, Jules (2002). Extra Bases: Reflections on Jackie Robinson, Race, and Baseball History. Lincoln, Neb.: University of Nebraska Press. pp. 69–70. ISBN 0-8032-9447-6.
- Foner, Henry (2002). "Foreword". In Dorinson, Joseph; Pencak, William. Paul Robeson: Essays on His Life and Legacy. Jefferson, N.C.: McFarland. p. 1. ISBN 0-7864-2163-0.
- Myrdal, Gunnar; Bok, Sissela (1995). An American dilemma: the Negro problem and modern democracy. Transaction Publishers. p. 478.
- Fauntroy, Michael K. (2007). Republicans and the Black vote. Lynne Rienner Publishers. p. 43. "... lily whites worked with Democrats to disenfranchise African Americans."
- Glenn Feldman, The Disfranchisement Myth: Poor Whites and Suffrage Restriction in Alabama, Athens: University of Georgia Press, 2004, p. 136
- Kenneth T. Jackson, The Ku Klux Klan in the City, 1915–1930, New York: Oxford University Press, 1967; reprint, Chicago: Elephant Paperback, 1992, pp.242–243
- James D. Anderson, Black Education in the South, 1860–1935, Chapel Hill: University of North Carolina Press, 1988, pp.244–245
- For example, Alpha Phi Alpha, the first black intercollegiate fraternity, was founded at Cornell University in 1906. Wesley, Charles H. (1950). The History of Alpha Phi Alpha: A Development in Negro College Life (6th ed.). Chicago, IL: Foundation.
- "We Shall Overcome: The Players". Archived from the original on 7 June 2007. Retrieved 2007-05-29.
- Source — PBS website From Swastika to Jim Crow
- Rawn James, Jr. (22 January 2013). The Double V: How Wars, Protest, and Harry Truman Desegregated America's Military. Bloomsbury Publishing. pp. 77–80. ISBN 978-1-60819-617-3. Retrieved 16 May 2013.
- Ewers, Justin (March 22, 2004). "'Separate but equal' was the law of the land, until one decision brought it crashing down" (page 2). US News & World Report.
- Ewers, Justin (March 22, 2004). "'Separate but equal' was the law of the land, until one decision brought it crashing down" (page 3). US News & World Report.
- Ewers, Justin (March 22, 2004). "'Separate but equal' was the law of the land, until one decision brought it crashing down" (page 4). US News & World Report.
- David T. Beito and Linda Royster Beito, Black Maverick: T.R.M. Howard's Fight for Civil Rights and Economic Power, Urbana: University of Illinois Press, 2009, pp.72-89.
Further reading
- Bates, Beth Tompkins, Pullman Porters and the Rise of Protest Politics in Black America, 1929–1945, 2001 ISBN 0-8078-2614-6.
- Carson, Clayborne; Garrow, David J.; Kovach, Bill; Polsgrove, Carol, eds. Reporting Civil Rights: American Journalism 1941–1963 and Reporting Civil Rights: American Journalism 1963–1973. New York: Library of America, 2003. ISBN 1-931082-28-6 and ISBN 1-931082-29-4.
- Danenhower Wilson, Ruth, "Jim Crow Joins Up: A Study of Negroes in the Armed Forces of the United States" (W.J. Clark, revised edition, 1945).
- Dagbovie, Pero Gaglo, “Exploring a Century of Historical Scholarship on Booker T. Washington,” Journal of African American History, 92 (Spring 2007), 239–64.
- Egerton, John, Speak Now Against the Day: The Generation Before the Civil Rights Movement in the South (New York: Alfred A. Knopf, 1994). ISBN 0-679-40808-8.
- Kluger, Richard, Simple Justice: The History of Brown v. Board of Education and Black America's Struggle for Equality (1975; New York, Vintage Books, 1976). ISBN 0-394-72255-8.
- Nahal, Anita, and Lopez D. Matthews Jr., “African American Women and the Niagara Movement, 1905–1909,” Afro-Americans in New York Life and History, 32 (July 2008), 65–85.
- Parker, Christopher S., “When Politics Becomes Protest: Black Veterans and Political Activism in the Postwar South,” Journal of Politics, 71 (January 2009), 113–31.
- Sitkoff, Harvard. "Harry Truman and the Election of 1948: The Coming of Age of Civil Rights in American Politics," Journal of Southern History Vol. 37, No. 4 (Nov., 1971), pp. 597–616 in JSTOR
- Civil Rights Resource Guide, from the Library of Congress
- What Was Jim Crow? (The racial caste system that precipitated the Civil Rights Movement)
- Civil Rights – Religious Action Center of Reform Judaism
- Seattle Civil Rights and Labor History Project
- Texas Politics – Historical Barriers to Voting, University of Texas
- Integrating with All Deliberate Speed—contains video history interviews with African American Civil Rights pioneers, a timeline of the Civil Rights Movement and primary source materials (photographs, speeches, historical documents).
- African-American History: The Modern Freedom Struggle – course lecture videos from Stanford University | http://en.wikipedia.org/wiki/African-American_Civil_Rights_Movement_(1896%E2%80%931954) | 13 |
29 | In 1860, the North and South were divided by more than ideology and the issue of slavery: Northern and Southern elites viewed each other as stereotypes. Today, the divide between the Publicrats approaches the level of ideological division that produced secession, the true cause of the war.
The conflict between the sections concerned the extension of slavery into the territories: both sides were wrong.
(Letter to Alexander H. Stephens)
Slavery was not viable in the territories; it was doomed in the South. Ideology prevailed…civil war ensued.
If the war had ended in 1862 with the North victorious, slavery would have survived. The planter class succumbed to emotional logic: fearful that their way of life would end because of Lincoln's election, which meant slavery would not be extended into the territories, they agitated for secession. The slavocracy persuaded farmers and laborers that it would be in their self-interest to leave the Union because blacks would otherwise become socially equal to whites: racism. Abolitionists furnished emotional canister, underscored by John Brown's raid on Harpers Ferry. Republicans, however, assiduously attempted to assure Southerners they would not interfere with slavery in the South: the Fugitive Slave Law would not be repealed and the interstate slave trade would not be touched. The Republican Party was more concerned with developing a national economy than with ending slavery. After the war, Republicans abandoned the former slaves in favor of developing the industrial state, the economy.
“The quality of the Second Corps was evident at every level, but no more so than in the individual soldiers. Countless letters and diaries speak to individual soldiers’ bravery through an unremitting series of brutal battles. But also evident is a deep commitment to the Union. Soldiers in the Second Corps knew that the United States would endure even if the Confederacy established itself as an independent political entity, but they believed the freedoms guaranteed white Americans by a republican form of government would suffer a fatal blow. They were, one wrote, fighting to protect ‘our great and free government’ and the ‘best government that ever was instituted.’
Jonathan Stowe, a farm laborer, enlisted in the 15th Massachusetts — one of the hardest-fighting regiments in the Second Corps — during the autumn of 1861. Stowe wrote that Southerners who took up arms against a freely elected government were not only ‘my country’s enemy,’ but ‘base traitors to humanity and the world.’ He made the ultimate sacrifice one year later, when he was mortally wounded at Antietam.
By the spring of 1864, Lt. Josiah Favill admitted that he and many other soldiers were homesick. Yet maintaining the Union came above all else, and ‘until the work is done this army will never lay down its arms.’
At first glance, not all of the soldiers of the Second Corps seemed destined to become among the most redoubtable fighters in the Union Army. Many of them came from Democratic homes and ethnic communities that gave little support to the expansion of Federal war aims to include emancipation. Pvt. William Smith of the 116th Pennsylvania, a largely Irish regiment recruited in Philadelphia, argued that fighting to free the slaves was a betrayal of why he had gone to war. ‘To hell with the Niggers,’ he concluded. After the Second Corps was badly bloodied in the fighting around Fredericksburg, Va., during late 1862, a New York private fumed that the loss of life was because of the ‘accursed Nigger. It is all fudge and I am mad.’
And yet what modern-day Americans must grasp, amid the harshness of such racial attitudes, is that the soldiers of the Second Corps saw the fighting through. Many of these men reenlisted during the winter of 1863 and 1864, when their three-year term of service was about to expire. That fall, in fact, they voted overwhelmingly for Abraham Lincoln and the continuation of the war. Without such resolve, whether the Union even would have won the war, let alone destroyed the institution of slavery, remains open to question.
The emphasis on Union continued into the postwar era… Reflecting the move toward national reconciliation by the late 19th century, veterans of the Second Corps praised their former gray-clad adversaries as Americans ‘as brave as ourselves’ and ‘foemen “worthy of our steel.”’ African-Americans and their plight in the postwar South rarely intruded upon the good feelings, even as Second Corps veterans often patted themselves on the back for helping to bring an end to the institution of slavery. Talk flowed freely about slavery as a ‘foul blot wiped out forever’ and America as a land ‘where all mankind are free.’ Yet the transition of millions of blacks from slavery to freedom simply was not these veterans’ concern.”
Ideology is a form of ignorance which delimits thinking. Ideologues place boundaries around their minds that preclude contemplating or allowing new ideas to enter a calculus.
In the early nineteenth century, the Republican Party hated taxes and feared the rise of a despot who could gain control of the government through a standing army. This ideology precluded maintenance of the United States Navy, even though numerous warships had already been built.
Result: the British navy impeded American sea commerce with impunity. British frigates boarded brigs from Boston and New York and frequently impounded the vessels, which caused insurance rates to quadruple. One London paper printed: "The sea is ours, and we must maintain the doctrine that no nation, no fleet, no cock-boat shall sail upon it without our permission."
More importantly, British captains impressed unwilling American crewmen, forcing them to serve on their warships. Crown policy covered natural-born Americans as well as citizens who had been born in Great Britain, because "once British, always British." According to their own records, the British impressed at a minimum 4,000 American sailors. When the Constitution returned to Boston in October 1807, there were no American warships at sea.
President Thomas Jefferson's solution to defending American harbors was to build small gunboats. Dozens were built, but these vessels proved unsuitable for the task — they were crap. The cost of these feeble "warships," $1.5 million, could have bought 8 frigates, the destroyers of the day. Finally, in January 1809, Federalists and Republicans overcame their ideological differences for the common good and passed a bill to revive the United States Navy.
Neo-liberalism is an ideology tied to the concept of free trade, which is a myth. Result: American jobs lost.
Perhaps the greatest issue facing our country is the inevitable bankruptcy of the federal government. Anyone with common sense discerns that there must be massive spending cuts accompanied by modest, fair tax increases. Yet…the Publicrats are unable to find common ground to address the salient issue that will affect US All, eventually, down the road.
The Simpson-Bowles Commission provided a non-ideological path toward fiscal responsibility; it has been rejected.
If Columbus had accepted the ideology that the world is flat, he never would have sailed West to discover the New World.
Guest column: Ideologies are closing American minds
8:49 AM, Apr. 11, 2012
A disconcerting thought has been gnawing at me for a while now. I am now certain of its truth and it truly saddens me. As a society, we have slowly become incapable of civil discourse and would rather shield ourselves in ignorance than listen to an opposing point of view. I have not gleaned this from watching MSNBC or listening to Rush Limbaugh. That is too easy. No, I slowly found this by watching and listening to regular folks.
Two recent snippets of conversations really drove this point home. One was a colleague who expressed utter contempt for President George W. Bush. Another was an acquaintance who explained that he would rather have bamboo shoved under his fingernails (or something akin to that) than watch President Obama address the nation.
Now I know these people well enough to know they are educated and well-intentioned. And this is not about two isolated comments. This is about a national trend of intolerance I have witnessed in recent years.
There have always been people on the fringes who assaulted “the other side” and made it their mission to lambaste or lampoon any idea generated from across the aisle. But this has somehow become the new normal. We have grown so distrustful of our perceived adversaries that we cannot even stop to listen (even critically) to their ideas. Compromise is gone; leadership is a lost art.
I am a Republican who often votes for Democrats. Yes – that is absolutely possible and the way it should be. I listen. I think. I vote. And if I cannot get enough education on a particular issue, I abstain. There are no party lines that mandate my voting record. And here is a real bombshell – I am perfectly willing to have my opinions on political and other “taboo” issues swayed by persuasive arguments from the other side. There are few, if any, absolutes in this world, and I have no fear of modifying my beliefs if a compelling argument is made.
This nation was founded on democratic principles of debate and discourse – not distrust and contempt. One of the greatest framers, Thomas Jefferson, said the following: “I never considered a difference of opinion in politics, in religion, in philosophy, as cause for withdrawing from a friend.” Amen to that.
So why do we now look so hard to isolate ourselves in various ideological camps and refuse to even give grudging respect to the other point of view? I for one actually respect a politician whose voting record or other past acts show independent thought. Give me a “flip-flop” over a party-line automaton any day. In my opinion, this often shows an actual ability to think – to hear the other side, work to a resolution, and perhaps to do what is best in the end after serious soul-searching. It may also show personal growth over a long career – something laudable indeed.
My wife and I hate it when people assume they know the way we think because of some perceived stereotype. We are both free thinkers and are working diligently to raise two kids who will do the same.
I don’t do this for the sake of unpredictability. I do this because I seek personal growth which can only come through hearing and studying other points of view on important topics.
I may stay right where I am, but I will listen, consider, and weigh the points offered by my friend or adversary. And though I may disagree, I will not disrespect. This applies to my friends, adversaries, and certainly the current or former president of the United States of America.
Paul Haffner of Mariemont is an attorney and insurance executive who also teaches a class in business ethics at a local university.
March 13, 2012 at 00:02:47
Republicans Court the “White Trash” Vote in Alabama and Mississippi
By Walter Uhler
In 2011, Alabama executed more prisoners than any other state in the United States except Texas, which was far ahead of the rest. Citizens of Alabama live in more mobile homes per capita than those of all but three other states. Alabama has the third worst infant mortality rate in the United States, the third lowest life expectancy, and the second highest obesity rate. Only four states have more single-parent households per capita than the great state of Alabama.
Mississippi is worse. It has the lowest per capita income in the United States, the highest percentage of persons living below the poverty level, the fourth highest unemployment rate, the third highest percentage of mobile homes per capita, the highest infant mortality rate, the highest percentage of out-of-wedlock births, the highest percentage of single-parent households, the highest obesity rate, the second worst high school graduation rate and the lowest life expectancy.
In short, Alabama and Mississippi are two of the nation's worst performers when it comes to providing a decent quality of life for their citizens. But they hardly stand out in the American South, the region that embarrasses the rest of the country with its egregious social pathologies. (For a chart demonstrating just how poorly the South performs, see
Students of the American South have provided a few plausible explanations for the South's interminable failure to bring about widespread social and economic improvement. W. J. Cash, in his classic study, The Mind of the South, concluded that the Southern plantations, which thrived by exploiting Negro slave labor, proved doubly beneficial to virtually every yeoman farmer and the landless poor white in the region. As Mr. Cash saw it, “Not only was he not exploited directly, he was himself made by extension a member of the dominant class — was lodged solidly on a tremendous superiority, which, however much the blacks in the ‘big house’ might sneer at him, and however much the masters might privately agree with them, he could never publicly lose.”
However, “The grand outcome was the almost complete disappearance of economic and social focus on the part of the masses. One simply did not have to get on in this world in order to achieve security, independence, or value in one’s own estimation and in that of one’s fellows.” [p. 39] Unfortunately, when the economic focus disappeared, so too did the work ethic.
Like Mr. Cash, Professor Grady McWhiney also noted the absence of a work ethic in the American South, but he attributed it to the Scotch-Irish culture dominating the region, as well as to the many Southerners who “earned” their living by herding hogs and cattle. “Aside from marketing or branding their animals, Southerners had little more to do than round them up in the fall and either sell them to a local buyer or drive them to market. One could even raise livestock without owning land.” [Cracker Culture, p. 67]
That aversion to work also undermined efforts to educate the masses, leaving untouched the region’s infamous anti-intellectualism, which was largely a product of religious enthusiasm and Scotch-Irish culture. Henry Adams — who observed Robert E. Lee’s son and other Southerners at Harvard — would note: “Strictly, the Southerner had no mind; he had temperament. He was not a scholar; he had no intellectual training; he could not analyze an idea, and he could not even conceive of admitting two.” [p. 99]
Nevertheless, being uneducated and emotional hasn’t prevented the common white Southerner from believing himself intelligent enough to spot a liberal, socialist, communist, traitor, moral degenerate or godless atheist whenever somebody challenges, questions or seeks to improve upon his culturally impoverished status quo. And it doesn’t prevent 45 percent of the good folks in Alabama — and 52 percent of the good folks in Mississippi — from expressing the absolutely shameful, asinine opinion that President Obama is a Muslim.
How many of them are racist white trash? In 2008, Obama received but 10 percent of the white vote in Alabama, compared with the 19 percent that John Kerry received there in 2004. Similarly, Obama received only 11 percent of the white vote in Mississippi in 2008, whereas Kerry received 14 percent.
(Readers of The South and America Since World War II, by James C. Cobb might recall that the author briefly discusses the work of “white-trash” writers like Rick Bragg, Dorothy Allison, Larry Brown, Harry Crews and Tim McLaurin. According to Professor Cobb, “the characters created by the white-trash writers seethe with class resentment while clinging tenaciously to a fierce sense of independence and pride that is often their undoing.” See pp. 253-257)
More recently, Glenn Feldman has updated W. J. Cash’s insight about the enervating superiority that, for centuries, immobilized the poor whites who luxuriated in their racial dominance. For Feldman, however, the issue now is not the South’s cheap racial superiority — which has been demolished by the Civil Rights movement — but its equally cheap religious/moral superiority.
According to Professor Feldman: “People tend to lose sight of issues that have a relevance for their day-to-day lives in the rush to feel part of a majority that carries with it a sense of emotional well-being, even superiority. For so long in the South, that issue was race and white supremacy. Now, it is increasingly morality and religion, accompanied by a sense of moral superiority and righteousness.” [Politics and Religion in the White South, ed. by Glenn Feldman, 2005, p. 332]
Why “cheap?” Because it often is based on only one or two issues — especially abortion — that allow many Southerners to feel complacently superior enough to ignore what would be, for any decent person, an obligation to engage in self-improvement, good works and the promotion of social justice. As Professor Feldman notes: Catholics in the Deep South litter their churches with anti-abortion literature that “instruct the faithful on how to clear all other issues from their conscience save abortion when going to the polls.” [Ibid. p. 314]
Thus, we obtain a profound insight into how, historically, the South can teem with openly self-righteous Christians and, yet, make almost no progress in ameliorating the many pathologies that plague their society. It’s a place where big-talking social conservatives like Rick Santorum and Newt Gingrich should do well.
Rep. Allen West: 80 Communist Party Members In U.S.House
April 11, 2012 5:15 PM
MIAMI (CBSMiami) – It wasn’t quite Joseph McCarthy waving around papers claiming to know of communists in the federal government but, according to the Palm Beach Post, Congressman Allen West claimed he “has heard” there are communists in the House of Representatives.
According to the Post, West was talking to Jensen Beach voters this week when he said President Barack Obama was “scared” to have a discussion with him.
West then said that “he’s heard” up to 80 members of the U.S. House of Representatives are Communist Party members, but declined to give the names of any of the alleged members, according to the Post.
Here’s the question and answer from the meeting provided by Congressman West’s office.
Moderator: What percentage of the American legislature do you think are card-carrying Marxists or International Socialist?
West: It’s a good question. I believe there’s about 78 to 81 members of the Democrat Party who are members of the Communist Party. It’s called the Congressional Progressive Caucus.
Congressman West’s office responded to questions from CBSMiami.com with the following statement:
“The Congressman was referring to the 76 members of the Congressional Progressive Caucus. The Communist Party has publicly referred to the Progressive Caucus as its allies. The Progressive Caucus speaks for itself. These individuals certainly aren’t proponents of free markets or individual economic freedom.”
When pressed on the issue, Congressman West’s office refused to back down and said because the Communist Party considers the Progressive Caucus as its allies, that speaks for itself.
CBSMiami reached out to Co-Chairman of the Congressional Progressive Caucus, Representative Raul Grijalva for their comments on the issue.
“Allen West is denigrating the millions of Americans who voted to elect Congressional Progressive Caucus (CPC) members, and he is ignoring the oath they took to protect and defend the U.S. Constitution—just like he did. Calling fellow Members of Congress ‘communists’ is reminiscent of the days when Joe McCarthy divided Americans with name-calling and modern-day witch hunts that don’t advance policies to benefit people’s lives.
“We hope the people of Florida’s 22nd Congressional District will note that he repeatedly polarizes the American people instead of focusing on their interests. When people like Rep. West have no ideals or principles, they rely on personal attacks. The CPC is proud to stand up for economic equality and civil and human rights for all Americans. Congress is having, and will continue to have, an ongoing debate about job creation, home foreclosures and the issues that concern working families. But we will not engage in base and childish conversations that lower the high level of discourse Americans rightly expect from their representatives.”
The CPC also directed people to a finding from Politifact on the issue West was referring to in his comments.
“Just because you are a member of the Progressive Caucus does not mean you are a socialist,” Politifact said in September 2011.
Roll Call reported Wednesday that it contacted Communist Party USA and asked about West’s comments and got the reply, “That’s the most ludicrous things I’ve ever heard.”
West’s opponent in November, Democrat Patrick Murphy told the Post that he wasn’t surprised by what West said.
“The bottom line is, Allen West is trying to make it in the press with comments that don’t even make sense,” Murphy said. “He’s trying to make headlines, get a rise out of people and not get anything done.”
It’s not the first time West has made comments that have ignited a firestorm. In February, West was talking about how he doesn’t believe Israel has support from Barack Obama when he warned of a “second Holocaust” if Israel doesn’t have the backing of the United States.
West told a crowd in January that Senate Majority Leader Harry Reid, Speaker of the House Nancy Pelosi, and Congresswoman Debbie Wasserman Schultz should “get the hell out of the United States of America.”
Last December, West said that “If Joseph Goebbels was around, he’d be very proud of the Democrat Party.”
| http://napoleonlive.info/what-i-think/ideology-is-ignorance-2/ | 13
20 | New Deal
New Deal, the domestic program of the administration of U.S. President Franklin D. Roosevelt between 1933 and 1939, which took action to bring about immediate economic relief as well as reforms in industry, agriculture, finance, waterpower, labour, and housing, vastly increasing the scope of the federal government’s activities. The term was taken from Roosevelt’s speech accepting the Democratic nomination for the presidency on July 2, 1932. Reacting to the ineffectiveness of the administration of President Herbert Hoover in meeting the ravages of the Great Depression, American voters the following November overwhelmingly voted in favour of the Democratic promise of a “new deal” for the “forgotten man.” Opposed to the traditional American political philosophy of laissez-faire, the New Deal generally embraced the concept of a government-regulated economy aimed at achieving a balance between conflicting economic interests.
Much of the New Deal legislation was enacted within the first three months of Roosevelt’s presidency, which became known as the Hundred Days. The new administration’s first objective was to alleviate the suffering of the nation’s huge number of unemployed workers. Such agencies as the Works Progress Administration (WPA) and the Civilian Conservation Corps (CCC) were established to dispense emergency and short-term governmental aid and to provide temporary jobs, employment on construction projects, and youth work in the national forests. Before 1935 the New Deal focused on revitalizing the country’s stricken business and agricultural communities. To revive industrial activity, the National Recovery Administration (NRA) was granted authority to help shape industrial codes governing trade practices, wages, hours, child labour, and collective bargaining. The New Deal also tried to regulate the nation’s financial hierarchy in order to avoid a repetition of the stock market crash of 1929 and the massive bank failures that followed. The Federal Deposit Insurance Corporation (FDIC) granted government insurance for bank deposits in member banks of the Federal Reserve System, and the Securities and Exchange Commission (SEC) was formed to protect the investing public from fraudulent stock-market practices. The farm program was centred in the Agricultural Adjustment Administration (AAA), which attempted to raise prices by controlling the production of staple crops through cash subsidies to farmers. In addition, the arm of the federal government reached into the area of electric power, establishing in 1933 the Tennessee Valley Authority (TVA), which was to cover a seven-state area and supply cheap electricity, prevent floods, improve navigation, and produce nitrates.
In 1935 the New Deal emphasis shifted to measures designed to assist labour and other urban groups. The Wagner Act of 1935 greatly increased the authority of the federal government in industrial relations and strengthened the organizing power of labour unions, establishing the National Labor Relations Board (NLRB) to execute this program. To aid the “forgotten” homeowner, legislation was passed to refinance shaky mortgages and guarantee bank loans for both modernization and mortgage payments. Perhaps the most far-reaching programs of the entire New Deal were the Social Security measures enacted in 1935 and 1939, providing old-age and widows’ benefits, unemployment compensation, and disability insurance. Maximum work hours and minimum wages were also set in certain industries in 1938.
Certain New Deal laws were declared unconstitutional by the U.S. Supreme Court on the grounds that neither the commerce nor the taxing provisions of the Constitution granted the federal government authority to regulate industry or to undertake social and economic reform. Roosevelt, confident of the legality of all the measures, proposed early in 1937 a reorganization of the court. This proposal met with vehement opposition and ultimate defeat, but the court meanwhile ruled in favour of the remaining contested legislation. Despite resistance from business and other segments of the community to “socialistic” tendencies of the New Deal, many of its reforms gradually achieved national acceptance. Roosevelt’s domestic programs were largely followed in the Fair Deal of President Harry S. Truman (1945–53), and both major U.S. parties came to accept most New Deal reforms as a permanent part of the national life.
The century before the American Revolution was marked by a series of destructive wars between Natives and Europeans that kept Maine – the frontier between New France, New England, and the Abenaki homelands – in constant turmoil.
The tensions were local – disputes over control of land and resources – and international. France, Spain, and Great Britain engaged in numerous wars, most of them economic in nature, with the European powers seeking to control territory and resources to expand their economic power.
Religion also played a part in these struggles. The Europeans viewed control of North America as crucial to their economic and political success and fought for territory and colonies in the New World. Even the wars that were largely centered in Europe often spilled over into North America.
Tensions between the native population and Europeans began as early as the first European arrivals. In 1525 Estevan Gomez raided Nova Scotia and Maine and took some 58 surviving Indians back to Spain, and subsequent explorers, whalers, fishers, and traders continued this practice into the 18th century.
Early fishing settlements and trading posts further poisoned relations between natives and newcomers. Walter Bagnall was killed on Richmond Island in 1631, for instance, for repeatedly cheating his clients, and when John Winter arrived in 1632 he found the Indians so unfriendly that he abandoned hope of trade.
Indians, on the other hand, suspected that English colonials brought on the terrible recurring epidemics, and they found it difficult, under their own political system, to rein in those who wished vengeance for trading abuses, land grabs, murders, and enslavements.
Fluctuations in the price of furs left the impression that all whites cheated them, and as the Wabanaki became more dependent on European guns, ammunition, and commodities, fur-trading – and its abuses – became an increasingly desperate matter. A heritage of mutual suspicion soured relations between Indians and whites in Maine.
Effects of European Rivalries
Rivalries between France and England in the New World further strained Indian-white relations. Most of the wars in colonial North America followed upon conflicts in Europe, and although Maine's Wabanaki did their best to remain aloof from these foreign quarrels, they were inevitably drawn into the maelstrom. Still, they entered these wars for their own reasons, maintaining a political independence that both French and English officials refused to respect.
French or English alliances with various tribes exacerbated ancient feuds and created new conflicts, and as the devastating plagues swept through the villages, these alliances were again disrupted; those who survived regrouped and exacted tribute from more debilitated or less powerful neighbors.
Was the outcome of these wars inevitable? European advantages included a technology based on metal and gunpowder and expertise with capitalist relations, while Indians clung to a culture disordered by plague and constant demographic movement. Indians, however, enjoyed an advantage in logistics and tactics.
Most of Maine's 6,000 English settlers were dispersed in "ribbon" settlements strung out along the coast or lower rivers, almost impossible to defend militarily. The English clung to what early historian William Hubbard called the "sea-border," considering the unfamiliar woods behind them "a great Chaos, the lair of wild beasts and wilder men."
This, of course, was familiar territory to the Abenaki, who could traverse the woods and waters, wait for an opportune moment, raid, and scatter. Indian tactics – sudden attack and withdrawal – prevailed against a people with little wilderness experience and a history of open-field combat.
However, these tactics were designed for short wars or raids to avenge particular wrongs or insults. Given their subsistence regimes and their limited capacity for storage, Indians simply did not have time to wage a protracted war, and when English militia began destroying their corn fields and blocking access to traditional hunting, fishing, and foraging grounds, Indians were powerless to resist.
English victories also depended on alliances with other Indians, particularly the Iroquois-Mohawk, while the Wabanaki's French allies were relatively weak south of the St. Lawrence. By the 1670s New England contained about 50,000 inhabitants and New France about 10,000, with fewer than a thousand French inhabitants in Acadia.
Most important were England's pathogenic allies – the plagues that swept through the Indian villages beginning in 1616, killing more than 75 percent of the inhabitants and leaving the rest weakened culturally, spiritually, economically, and militarily.
The Wabanaki made alliances with the French through the fur trade, and here the French had a decided advantage over the English. Fur-trading relationships were based on mutual respect nurtured carefully over years. In their 1604-1605 voyage to the Gulf of Maine, Sieur de Monts and Samuel Champlain mastered the tricky diplomacy of ritual gift exchanges, speeches, banquets, dances, songs, and tribal alliances, and by the early 1600s French adventurers had the upper hand in relationships with the Wabanaki north and east of the Kennebec.
Since no New England river offered the trading advantages of the St. Lawrence, and since southern New England Indians grew crops more than they hunted, English colonists were less interested in the fur trade. For the French, Indians were the essence of empire; for the English they were obstacles to an agricultural empire fashioned after the English countryside.
French missionaries also were more successful than their English counterparts. They lived in the Indians' villages, knew their spiritual needs, and benefited from the cultural disruptions brought on by war and plague.
By the middle of the 17th century the Abenaki were living in a nightmarish landscape shaped by conflict, disease, and alcohol, and they turned to the missionaries for help and reassurance. Catholicism was something of a compromise with traditional religion, just as European trade was a compromise with native material culture. English missionaries were less interested in compromise and generally lacked the ability to use religion to cement military alliances.
Despite the inadequacies of English diplomacy, Indians became increasingly dependent on their trade goods. The epidemics disrupted oral communication and accelerated the loss of traditional hunting, fishing, and gathering skills.
As Indians narrowed their economic focus, their involvement in the fur-trade took on a desperate tone. Tensions increased in the mid-1640s when truck houses began selling hard liquor. As beaver populations declined, Abenaki interjected themselves as intermediaries in the trade with tribes further west, resulting in a series of violent clashes known as the "Beaver Wars."
These conflicts, involving tribes from Cape Breton Island to the Chesapeake and as far west as the Great Lakes, eventually yielded new alliances that turned the Abenaki against the English.
King Philip's War
By 1670 Indian frustration with trade abuses, land encroachments, rum dealing, and free-roaming English livestock in their cornfields was mounting. Sensing these tensions, in fall 1674 English officials banned trade of shot and powder to Indians. The Abenaki suffered severe food shortages during the following winter, and some fled to Canada seeking French aid.
In summer 1675 war broke out in southern New England between the Pilgrims and the Wampanoags led by King Philip, or Metacomet, and the conflict strained relations all through New England. Ties between French "Papists" and Indian "heathens" fueled English fears that all Indians were conspirators with King Philip, and with war raging to the south, the General Court sent commissioners to Maine trading posts to enforce the ban on arms. English scalp hunters, given a bounty to hunt Indians south of the Piscataqua, no doubt crossed the river into Maine as well.
Madockawando, the chief sagamore on the Maine coast, withdrew to the Penobscot, where French traders at Pentagoet and Port Royal provided muskets and shot.
In July magistrates met with local Indians to encourage neutrality, but later that summer British sailors accosted the wife and child of Squando, a sagamore among the Saco River Abenaki, and overset their canoe to test the theory that Indian babies could swim from birth. The baby died, and as native law required, Squando sought revenge on white settlers.
In September a party of 20 Indians robbed a trading house belonging to Thomas Purchase at Brunswick. Purchase's neighbors pursued the raiders up the New Meadows River, surprising and killing one, and the resulting skirmish was the first battle of King Philip's War in Maine.
In Falmouth members of the Wakely family were tomahawked and two children carried away as captives, and throughout the fall Indian bands continued raiding English settlements from Saco to Casco Bay. With no knowledge of the interior, militia and settlers alike were forced to "huddle together, in danger of being shot down," until winter snows and lack of ammunition restricted Indian military movement.
At a conference in Pemaquid in 1676 English officials gained an uneasy armistice that lasted until several Indians were kidnapped nearby and carried off as slaves. Indians insisted on powder and shot, and English negotiators refused, demanding that the Abenaki admit blame for the war and join in attacking other hostile tribes.
That summer the Abenaki and their allies, including Micmacs and remnants of King Philip's forces, attacked settlements eastward to Cushnoc on the Kennebec, moving from cabin to cabin in swift raid-and-retreat maneuvers. In August the well-established trading post at Arrowsic fell in hand-to-hand combat, and the fort, mills, mansion house, and outbuildings were burned.
Later that fall the Pemaquid settlement was destroyed when Indians cut off access to the neck of land separating the men in the fields and fishing boats from the women and children in the village. With no alternative but to fight their way back, the men regained the fortress, but many were killed or taken prisoner.
Settlers fled to the nearby islands and watched as the "whole circle of the horizon landward was darkened and illuminated by the columns of smoke and fire rising from the burning houses of the neighboring Main." After a month, they sailed south. In the course of five weeks, 60 miles of coast east of Casco Bay had been wiped clean of English settlements.
Hardships were equally severe on the Abenaki side. Families fled their villages, leaving fields unharvested. Denied access to their guns, ammunition and fishing grounds, many starved.
Despite overtures for peace on both sides, seafaring slavers continued to murder and kidnap along the coast, and in September 1676 Major Richard Waldron invited 400 Indians to a conference at Dover, New Hampshire, and used the occasion to enslave around 200.
In February Waldron led an expedition eastward to ransom English captives and capture Madockawando. Although he failed on both counts, he managed to kill eight peace-seeking Indians at Pemaquid.
In 1678 the provincial government of New York, which controlled Maine between 1677 and 1686, signed the Treaty of Casco. According to its terms, the Abenaki recognized English property rights but retained sovereignty over Maine, symbolized by an annual land use tax for every English family. The treaty also stipulated closer government regulation of the fur trade.
In 1686 Sir Edmund Andros, appointed governor of the Dominion of New England, took charge of Indian relations. Although widely resented as a representative of the Catholic King James II, Andros acted decisively to regulate the fur trade in a manner that would ensure fair prices and protect native clients from abuses. Pemaquid was designated the sole trading post between the Kennebec and Penobscot rivers, and ammunition was traded only in amounts deemed necessary for hunting.
Despite fresh memories of a horrible conflict, settlers refused to abide by the terms of the Treaty of Casco. Traders continued unfair practices, settlers placed nets across the Saco River, preventing fish from migrating upriver to the Wabanaki villages, and livestock ruined Indian corn. Negotiations and further treaty attempts were not successful and confrontations continued.
King William's War
During King William's War (1689-1699), Comte de Frontenac, the aggressive governor general of New France, launched a campaign to conquer all of North America. A large force of French and Indians drove the English from the settlements east of Falmouth. Baron de St. Castin, who lived with his family in a village of 160 Etchemin Indians on the Bagadauce River near present-day Castine, became a target for militia raids, and he helped launch a series of attacks on Maine settlements in the summer of 1689.
The major event of the war came in September 1689, when 200 Norridgewock, Penobscot, and Canada Indians converged on Peaks Island in Casco Bay and, on September 20, attacked the Back Cove settlements. Major Benjamin Church arrived by sloop at sunrise at Fort Loyal and, after a "fierce fight," drove the Indians from the area.
Exhausted by war and discouraged by French ambivalence, in 1693 the Abenaki sued for peace, but the English refused to negotiate on realistic terms. This brought another round of attacks on English settlements in 1694.
Fort William Henry, built under the authority of Governor Sir William Phips in 1692 at huge cost, fell to a force of Canadian-based Abenaki in August 1696, and the English once again abandoned the lower Kennebec. Massachusetts counterattacks against Port Royal and Quebec were largely ineffectual, as were several raids up the Kennebec and Penobscot rivers.
France and England concluded a peace in 1697, and in 1699 the Wabanaki agreed to a treaty. In 1698 Father Sebastien Rasle (also spelled Rale or Rasles) built a mission at the Indian village in Norridgewock on the upper Kennebec River, and this became a center for French-Indian interaction. With the coast east of Wells nearly devoid of English settlers, Rasle's mission became the southern boundary of New France.
The Latter Wars
In the quarter century after King William's War, Falmouth, once the center of a vigorous trade in fish, masts, spars, timber, and sawed lumber, slowly revived. Sawmills, gristmills, and boatworks again dotted the rivers and inlets between the Piscataqua and Kennebec, and farms sent hay, dairy products, cattle, sheep, swine, cordwood, and fish to Massachusetts ports for the local and coastwise trade.
Returning settlers took up a quasi-military life. Garrison houses, usually under a militia command, provided nuclei for small settlements either just outside or within a stockade. During daylight, men and women worked in their fields under protection of scouts and guards. For most of the period, English Maine lived in a state of virtual siege. Only the larger seaports – Boston, Salem, Portsmouth, Kittery – enjoyed sufficient security to benefit from the military expenditures from Great Britain.
By 1702 France and England were engaged in what came to be known as Queen Anne's War. When France proved less willing to supply arms, the Penobscots ratified a series of neutrality agreements with Massachusetts. But in August 1703 an expedition of about 500 French and Micmac Indians from the St. Lawrence devastated the coastal towns and forts from Wells to Falmouth, and Massachusetts declared war on all Maine Indians. Militia raids in the upper Saco kept villagers from their fields and from critical foraging areas.
After the cessation of hostilities in Europe, the 1713 Treaty of Portsmouth quickly brought peace to the Maine frontier. By this time it was apparent that English population expansion would engulf southern Maine, and most Indians in the area withdrew to the St. Lawrence settlements under the command of Governor Vaudreuil.
Indian military successes were significant, and the upper Kennebec remained a contested territory. With English settlers pushing upriver, the Massachusetts militia rebuilt the fort at Brunswick, giving the English power to prevent Indians from reaching the coast for foraging and fishing activities. Indian security in central Maine was becoming more tenuous.
Despite the peace treaty, wars involving Indians and Europeans and between Europeans were not over. Dummer's War (1721-1727) began as a series of skirmishes in Maine and Vermont in territory claimed by both the French and the English.
By this time the Muscongus Company had pushed the English frontier eastward to the St. Georges in Thomaston. Responding to Indian raids in March 1723, acting Governor William Dummer sent militia under Colonel Thomas Westbrook into the Kennebec region to burn Indian villages and fields, and in August 1724 a combined force of English militia and Massachusetts and Mohawk Indians destroyed the village at Norridgewock, killing as many as 100 Indians and Father Rasle.
Another desperate encounter took place in April 1725 in the upper Saco valley, when a party of bounty hunters under John Lovewell encountered an Indian troop near the Pigwacket village. Lovewell and 11 other English were killed, along with an equal number of Indians.
During this war the French offered only limited aid, leaving Massachusetts free to focus its attention on the Wabanaki. With the destruction of Norridgewock, the Penobscots emerged as leaders of a new intertribal alliance, and after consulting with Vaudreuil, leaders ratified a treaty with Massachusetts in summer 1727.
Seventeen years of peace followed Dummer's War, and during that time English settlers returned to the St. Georges River. Hostilities resumed in 1744, during King George's War, after a group of English scalp hunters killed or wounded several Penobscot Indians. In 1745 Canadian Indians attacked Pemaquid and Fort St. Georges, and despite minimal Penobscot and Kennebec participation, Massachusetts again declared war on the Wabanaki in August 1745.
In this war, colonial forces, including those from Maine, prevailed against the French stronghold at Louisbourg on Cape Breton Island, but in Maine military action was limited to occasional skirmishes. The war ended with the Treaty of Falmouth in October 1749.
The sixth and final Anglo-Abenaki war, known as the Seven Years' War, or French and Indian War (1754-1760), was largely fought in the Ohio Valley. In Maine, Governor William Shirley used rumors of French maneuvers on the Kennebec to construct Fort Halifax above Norridgewock at Winslow. Many Penobscots withdrew from the St. Georges area when both Massachusetts and the French demanded that the Indians take up arms against the other.
In 1759, English forces defeated the French at Quebec, ending the long struggle for control of North America. During the next few years Indian family bands re-occupied tribal grounds on the upper Penobscot, Kennebec, and Saco rivers.
Governor Bernard banned white hunters and trappers from the upper Penobscot and sent surveyor Joseph Chadwick to mark the limits of English settlement at the falls above the Kenduskeag, but theft, murder, poaching, land encroachment, and an explosion of white settlement up the river valleys made a return to the old ways all but impossible.
Between the late 17th century and the early 19th century, Great Britain, France, and others in Europe engaged in nearly constant warfare. The battles for economic and political power spilled into North America, catching the native populations in the middle. By the time a lasting peace came between France and Britain, European descendants had permanent settlements in North America and the native populations were relegated to the fringes.
Culture
The language, norms, values, habits, and material goods that constitute the life-way of a society.
The totality of institutions and practices (including the forms of discourse) developed and sustained by some specific group of human beings. Ethnology, the branch of anthropology devoted to the study of culture, is a field from which a number of semioticians have come.
Patterns of learned behaviour and values which are shared among members of a group, are transmitted to group members over time, and distinguish the members of a group from those of another group. Culture can include: ethnicity, language, religion and spiritual beliefs, race, gender, socio-economic class, age, sexual orientation, geographic origin, group history, education, upbringing and life experiences.
a manner of life involving learned and shared behavior, experiences and material artifacts. The concept is commonly applied as a general term signifying social behavior, values, ideas and material objects of a human society. Culture is socially rather than biologically acquired, and is maintained through the use of symbols.
Way of life including language, food, clothing etc.
The social practices of a particular people or group, including shared beliefs, values, knowledge, customs and lifestyle.
beliefs, feelings and customs shared by people from a certain area or group (Children growing up in the Pinelands share a culture which differs from the culture found in cities.)
All of the beliefs and customs that we learn as members of society and that bind members of any given society together. Archaeology attempts to study culture by examining the artifacts and sites of people of the past.
the beliefs, customs and art that are produced or shared by a particular society
A system of ideas and beliefs that can be seen in peoples’ creations and activities, which, over time, comes to characterize the people who share in the system.
The beliefs, values, rules, and customs that exist within a group of people who share a common language and environment, that are transmitted through learning from one generation to the next.
understandings, patterns of behaviour, practices, and values shared by a group of people.
the sum of ways of living built up by a group of human beings, which is transmitted from one generation to another.
a common way of life of a group of people
A person's attitudes arising out of their professional, religious, class, educational, gender, age and other backgrounds. [D02628] The integrated pattern of human knowledge, belief, and behavior that depends upon people's capacity for learning and transmitting knowledge to succeeding generations. Editor's Note: See also Social Factors. [D00465] The framework that provides people with their identity. [D04957]
Social system that is taught and learned by successive generations.
Static human institutions and mores at any given period of time. Compare: Civilization.
The sum total of knowledge passed on from generation to generation within any given society. This body of knowledge includes language, forms of art and expression, religion, social and political structures, economic systems, legal systems, norms of behavior, ideas about illness and healing, and so on.
The shared values, norms, traditions, customs, arts, history, folklore, and institutions of a group of people.
Culture refers to the learned values, beliefs, norms and ways of life of an individual that influence thinking, actions, and decisions.
a people's whole way of life. This includes their ideas, their beliefs, language, values, knowledge, customs, and the things they make.
a broad and relatively indistinct term that implies a commonality of history and some cohesiveness of purpose within a group. One can speak of southern culture, for example, or urban culture, or American culture, or rock culture; at any one time, each of us belongs to a number of these cultures.
Aspects of a social environment that are learned and used to communicate values such as what is considered good and desirable, right and wrong, normal, different, appropriate or attractive. The means through which society creates a context from which individuals derive meaning and prescriptions for successful living within that culture (language, speech patterns, orientation toward time, standards of beauty, holidays that are celebrated, images of a "normal" family).
Shared beliefs, values, goals, norms, traditions, arts, history, religion, folklore, experience, and institutions of a group of people. [Adapted from SAMHSA definition.]
the pattern of daily life learned by a group of people. These patterns can be seen in language, governing practices, arts, customs, holiday celebrations, food, religion, dating, rituals and clothing, to name a few examples.
The process by which information about the world, and how to deal with it, is stored, retrieved, and transmitted. It is learned in social settings and shared by a social community. It is the principal means by which humans adapt to their environments. It is symbolic.
(cul•ture) n. – the customs, beliefs, laws and ways of living that belong to a people.
learned, nonrandom, systematic behavior and knowledge that can be transmitted from generation to generation.
The learned patterns of thought and behavior characteristic of a population or society. The main components of a culture include its economic, social, and belief systems.
behavior patterns, acts, beliefs, manners, and characterizations of a society; the customary beliefs, social forms, and material traits of a racial, religious, or social group.
a system of beliefs, values and practices which distinguishes a particular nation of people from other nations.
The learned behavior of people, such as belief systems and languages, social relations, institutions, organizations, and material goods such as food, clothing, buildings, technology.
A group of individuals or a society sharing common characteristics, patterns of behaviour, beliefs, or values. Cultures may be ethnic, national, religious, workplace-centered, or social.
the way of life built up by a group of human beings and passed on from one generation to another.
the customs, ideas, tastes, and beliefs acquired from a person's background; the sum total of one's lifestyle
Used by foreign ideologies. American Way of Life is a better, more American, word to signify the embodiment of our people. Better to use American Heritage, American Way or American System.
The set of Perceiver facts, Mercy experiences, Mercy feelings, and Server actions held in common by a group of people, and integrated around their Perceiver beliefs. Culture can either be the basis for mental thought, or an expression of internal thought.
the full range of learned behavior patterns that are acquired by people as members of a society. A culture is a complex, largely interconnected whole that consists of the knowledge, belief, art, law, morals, customs, skills, and habits learned from parents and others in a society. Culture is the primary adaptive mechanism for humans.
The distinctive customs, religious beliefs, habits, languages and technologies that are shared commonly by people in various parts of the world.
the learned attitudes, beliefs or values that are shared by individuals within a social group.
The development of criminology to some degree can be told as the story of a deepening understanding of culture. For early sociological criminologists—and for many today—'culture' is primarily understood as the values and goals that orient individual actors. Many subcultural and labeling theorists deepen this understanding, seeing a 'culture' as the understandings and behaviors that arise, in the words of Howard Becker, ". . . in response to a problem faced in common by a group of people . . ." (Outsiders, 81). Finally, recent criminologists—especially feminist and critical criminologists—view culture very broadly, as the beliefs and values, tastes and interests, knowledge, behavior, and even the very ways that individuals conceive of their 'selves'. Culture, in short, has come to be seen as the fabric out of which the social is made.
people's customs, clothing, food, houses, language, dancing, music, drama, literature and religion
the sum total of learned beliefs, values and customs that serve to guide the behaviour of members of a particular society. It covers all languages, traditions, customs, values, beliefs, rules of conduct and institutions. It also includes those things in which cultural achievements are embodied such as buildings, tools, machines, communication devices, art objects, dress and food.
The learned values, beliefs, perceptions, and behaviors of specific groups of people. Nurses or therapists value cultural differences and recognize mental disorders within the context of their individual cultures.
The accumulated habits, attitudes, and beliefs of a group of people that define for them their general behavior and way of life; the total set of learned activities of a people.
Behaviors, customs, ideas, and skills of a distinct group of people.
Pattern of human behaviour and its products that includes thought, speech, action, institutions, and artefacts and that is taught to or adopted by successive generations; the total of the inherited ideas, beliefs, values, and knowledge, which constitute the shared bases of social action.
A set of beliefs, values, and practices that sustains a particular people; also, the products those people produce.
The complex set of beliefs, customs, traditions, and experiences that assist in forming and sustaining individual character.
the ideas, customs, skills, arts, etc. of a people or group, that are transferred, communicated, or passed along, as in or to succeeding generations.
The patterns of daily life learned consciously and unconsciously by a group of people. These patterns can be seen in language, governing practices, arts, customs, holiday celebrations, food, religion, dating rituals, and clothing.
a particular society at a particular time and place; "early Mayan civilization"
the tastes in art and manners that are favored by a social group
all the knowledge and values shared by a society
the attitudes and behavior that are characteristic of a particular social group or organization; "the developing drug culture"; "the reason that the agency is doomed to inaction has something to do with the FBI culture"
a body of customary beliefs, mutual goals, rituals, social forms, language and artifacts that unify and provide distinction for a group
a collection of shared beliefs about how things are
a collection of traits and characteristics that a group of people have in common and are able to pass down to successive generations
a collection of values and the behaviors required to achieve those values
a combination of languages, rituals, activities, values, and pastimes that creates a common environment and allows people to interact with and relate to each other
a common way of life -- a particular adjustment of man to his natural surroundings and his economic needs
a comprehensive expression of a way of life for certain groups of people
a configuration of learned behaviors and results of behavior whose component elements are shared and transmitted by the members of a particular society
a delineated group of people who because of group boundaries hold to consistent common understandings and ways of doing things
an elaborate, interconnected network of actions, beliefs, and symbols that shape an organization and, in turn, are shaped by an organization
a group, a society or even a country which shares common ideas about the way the world is and how to behave there
a group of people in society with each other, society being defined as day-to-day social intercourse
a group of people who share a background because of their common language, knowledge, beliefs, views, values, and behaviors
a group of people who share the same values, beliefs, and norms of behavior
a group of people with rather similar grids
a growing, changing, dynamic thing consisting most significantly of shared perceptions in the minds of its members
a learned process
a mode of being human, and is always particular
an agreement by a group of people on individual behavior within that group
an agreement by a group of people that establishes personal behavioral standards for that group
a network of conversations that define a way of living, a way of being oriented in existence in the human domain, and involves a manner of acting, a manner of emotioning, and a manner of growing in acting and emotioning
an identifier for a particular locale
an informal set of operating rules, unlike a society which formalizes them
an interwoven system of beliefs, values, history, mythology, rituals and ceremonies
a particular complex of habits, understandings and loyalties that are normative although mostly unstated among a particular group of people
a pattern or traits shown by a particular population
a set of attitudes, values, beliefs, and behaviors that characterize a particular society
a set of behavior traits or social rules that a specific group of a species follows
a set of beliefs that a group of people accepts as true
a set of habits, rules and regulations which a group of people follow as part of their lives
a set of rules, conventions, traditions, of generating behavior and relationships
a status-quo way of operating
a system of beliefs and behaviors that include symbols, values and norms which characterize a certain society, and are understood to its members alone
a terrible thing to waste, as the Afghans have learned at the cost of considerable pain
a total way of life
a way of seeing the world, but we cannot see the way we ourselves "see"
a way to identify a particular setting pertinent to a location or country, as with software locale identifiers (see the code sketch after this list)
a way to see if the cryptococcus fungus can be grown from the sample of spinal fluid
Learned behavior of people, which includes their belief systems and languages, their social relationships, their institutions and organizations, and their material goods - food, clothing, buildings, tools, and machines.
The characteristic features of a group of people including its beliefs, its artistic and material products, and its social institutions. The structure of behaviors, ideas, attitudes, values, habits, beliefs, customs, language, rituals, ceremonies, and practices among a group of people, which defines for them their design for living and for life.
A set of learned ways of thinking and acting that characterizes a decision-making human group.
the beliefs, knowledge, and behaviors that characterize the life of a particular community.
The set of learned values, norms, and behaviors that are shared by a society and are designed to increase the probability of the society's survival. These include shared superstitions, myths, folkways, mores and behavior patterns that are rewarded or punished. For libraries, the understanding of different cultures, as new immigrant groups move into the market area is extremely important to take into consideration, in order to provide the needed materials and services.
The collective body of understanding, belief and behavior among a given group of people; depends on the human capacity for learning and transmitting knowledge from one generation to another.
Broad meaning: All that people have learned and shared, including skills, knowledge, language, values, perceptions, motives, symbols, etc. Narrow meaning: The dynamic patterns of learning behaviors, values, or beliefs exhibited by a group of people who share historical and geographic proximity.
I have found the most manageable definition of culture to be as recorded in an excellent book titled "Riding the Waves of Culture", fabulously written by Fons Trompenaars and Charles Hampden-Turner. "Culture is the way in which a group of people solves problems and reconciles dilemmas. Every culture distinguishes itself from others by the specific solutions it chooses to certain problems which reveal themselves as dilemmas". Another definition: "accumulated pattern of values, beliefs, and behaviours".
distinctively human process by which traditions and customs that govern behavior are passed down from generation to generation; body of learned behaviors common to a given human society that has patterned and predictable form and content to a degree.
a collective noun for the symbolic and learned, non-biological aspects of human society, including language, custom and convention.
The accepted and traditionally patterned ways of behaving and a set of common understandings shared by members of a group or community. Includes land, language, ways of living and working artistic expression, relationships and identity.
The material and non-material attributes (built environment, traditions, activities etc.) of a society.
A highly ambiguous notion, "culture" has directly opposed connotations, and it always best to consider carefully the context of its use by individual authors. For some it means high art and only high art. There is sometimes a tacit assumption that "culture" refers only to creative, non-utilitarian endeavours. "Culture" may also mean all things produced by human agency: decorative artifacts, high art, political ideologies, ritual beliefs, social customs, and so on. It is equally possible to reason that humanity and all its products exist within nature, however superficially different they appear to be.
The ideas, customs, skills, arts, etc. of a given people in a given period; civilization. Can also refer to archaeological objects of a culture.
The sum of attitudes, customs, and beliefs that distinguishes one group of people from another. Culture is transmitted through language, material objects, ritual, institutions, and art from one generation to the next.
Refers to a particular group's shared system of socially transmitted behavioral patterns and beliefs. It also encompasses the required needs shared by the group to have access to resources.
the beliefs, customs, language, and traits of a particular group
Can be defined as a "set of guidelines (both explicit and implicit) which individuals inherit as members of a particular society, and which tells them how to view the world, how to experience it emotionally, and how to behave in it in relation to other people, to supernatural forces or gods and to the natural environment" (Helman 1990).
shared knowledge, behavior, ideas, and customs of a group or groups of people
The rarely questioned system of beliefs, values and practices that form one's life. Cultures are often identified by national borders, ethnicity, and religion—while some cultures cross borders, ethnicities and organized faiths. A culture which involves a select portion of a population and which is organized around a particular interest (such as cars, graffiti, or music) is known as a subculture.
An integrated system of learned behavior patterns that are characteristic of the members of any particular group. Culture includes shared customs, experiences, beliefs, rituals, and practices in a group of people. The elements of culture range from visible factors-such as appearance or dress-to assumptions people make about themselves, their relationships with others, and their values and priorities.
The beliefs, expectations, ways of operating, and behaviors that characterize the interactions of people in an organization.
(see also Food culture). The systematic patterns of explicit and implicit concepts (ideas) about behavior and behavior settings (environments), learned and used by individuals and groups in understanding and adapting to their life situations. We have gradually realized that it is useful to define culture as that which is in the heads of individuals-ideas, concepts, recipes for behavior, values, attitudes, and expectations. Culture is most visible in the language-the words and expressions- that people use in talking and thinking about various domains of information.
Learned behavior of people, which includes their languages, belief systems, social relationships, institutions, and organizations as well as their material goods.
The knowledge, language, values, customs, and material objects that are passed from person to person and from one generation to the next in a human group or society.
The total lifestyle of a people from a particular social grouping, including all the ideas, symbols, preferences, and material objects that they share.
Non-physical traits, such as values, beliefs, attitudes, and customs that are shared by a group of people and passed from one generation to the next. A meta-communication system.
a set of learned beliefs, values and behaviors--the way of life--shared by the members of a society.
a group's way of life
The shared values, traditions, norms, customs, arts, history, institutions, and experience of a group of people. The group may be identified by race, age, ethnicity, language, national origin, religion, or other social categories or groupings.
Any patterned set of behaviors, knowledge, values, beliefs, experiences and traditions shared by a particular group of people.
a whole way of life, and the human expressions of individuals in a particular society.
Culture refers to the standards of social interaction, values, and beliefs of a given group of people. Cultural issues can affect team interactions through different understandings of communication and family, and can appear to be an excuse for preferential treatment.
a group of people who speak the same language and have the same customs and way of life from generation to generation
The culture of an organization is the way it works. It includes the shared assumptions of the organization’s members, often tacit rather than explicit, and the values, language and mental models they share. Raymond Williams writes that culture is one of the most complicated words in English (Williams 1976). It can mean growing things (e.g. agriculture) and hence human development. It can mean refinement or taste. And it can mean a particular way of life. The last meaning is the one most relevant for study of organizations.
A group of people that share a common language, religion, way of life, and beliefs about the world
A distinctive heritage shared by a group of people. It influences the importance of family, work, education, and other concepts by passing on a series of beliefs, norms, and customs.
The shared customs, traditions, and beliefs of a group of people. These shared values are learned by members of the group from each other, and members of a specific culture share, create, contribute to, and preserve their culture for future generations.
The assumed or shared set of values, beliefs, perceptions and behaviors within an organization.
a term used by social scientists for a people's whole way of life, including arts, beliefs, customs, inventions, language, technology and traditions
the collective term for the customs, traditions, beliefs, or values of a group of people, usually defined by demographic factors (geography, age, etc.). Includes the usual expectations for behavior as well as explicit and implicit rules that characterize a group of people. Alternately, a culture may be viewed as that group of people that is characterized by similar mores, traditions, beliefs, and so on.
Behaviour and belief patterns found within an organisation are called organisational culture. The Compact is a culture-changing process in which government departments (or local public bodies) and the voluntary and community sector together can improve both how each work and how they work in partnership.
the way of life or sum total of the behavior and beliefs shared by a particular group
The collective behavior patterns, communication styles, beliefs, concepts, values, institutions, standards, and other factors unique to a community that are socially transmitted to individuals and to which individuals are expected to conform.
the arts, beliefs, and traditions of a particular population of a region or country.
A set of learned behaviors, beliefs, attitudes, values, and ideals that are characteristic of a particular society of population.
The ideas, activities (art, foods, businesses), and ways of behaving that are special to a country, people, or region
understandings, patterns of behaviour, values and symbol systems that are acquired, preserved, and transmitted by a group of people and that can be embodied in art works
the customs, values, worldview, attitudes, expressive behaviors, and organization of a folk group, their way of life, which is learned through observation and imitation, not inherited genetically.
We're not talking about traditions, beliefs and expression here. With maps it means man-made features on Earth that are symbolized on maps. Some examples are roads, buildings and electricity transmission lines.
a people's ways of being, knowing, and doing
The way of life of a group of people who share a common historical experience as well as attitudes, values, traditions, and a language that identifies them as a specific group.
the customs, beliefs, art, music, and all the other products of human thought made by a particular group of people at a particular time
the way of life of a group of people. This includes what they wear, how they govern themselves, their religious belief, other rituals, etc.
A symbolic construct, shared by a group of people, which serves both to guide and interpret behavior. This is humans' primary means of adaptation.
How do you get people to share and use knowledge instinctively? How do you overcome hoarding and trust issues? These issues can mean the difference between success and failure.
A shared system of meanings. Cultural analyst Geert Hofstede calls culture ‘the collective programming of the mind’. The important words here are ‘shared’ and ‘collective’. Culture is about a common understanding, encompassing beliefs, assumptions, worldview and taken-for-granted meanings. It follows that there can be no such thing as a culture of one. The roots of every culture lie in its language. We all ‘think in our language’, whether we are English, Polish, Japanese or Swahili.
Common beliefs and practices of a group of people.
The total social behavior patterns, beliefs and traits passed within a specific group of people.
A group of people linked together by shared values, beliefs, and historical associations, together with the group's social institutions and physical objects necessary to the operation of the institution.
Socially transmitted (learned) behavior patterns (norms), arts, beliefs, and institutions that enable a society to survive for many generations.
The arts, beliefs, habits, institutions and other human endeavours considered together as being characteristic of a particular community, people or nation.
Culture has been described by critic Raymond Williams as “one of the two or three most complicated words in the English language.” The term has a wide and diverse range of meanings and associations that cannot easily be reduced to a single definition. In contemporary usage, the term carries three main significations: (1) a description of a whole way of social life (as in the idea that humanity is comprised of numerous, distinct cultures); (2) the name for “serious” works of literature, music, fine arts, film, and so on, and the activities involved in producing these kinds of works; and, finally, (3) as an extension of the latter definition, culture can be used to refer to a wide range of signifying and symbolic works and activities, whether these involve everyday social practices (e.g., folk culture) or the objects and practices of popular culture (e.g., detective novels as well as serious literature, television as well as film, etc.).
the sum of a group's intellectual achievements
The way of life of a people; for example, their attitudes toward each other and their moral and religious beliefs.
For the purposes of this unit, culture is the values, customs, language, history, and traditions of a group of people. This term includes, but is not exclusive to, ethnic origin.
The total way of life held in common by a group of people, including technology, traditions, language, and social roles. It is learned and handed down from one generation to the next by non-biological means. It includes the patterns of human behavior (i.e. ideas, beliefs, values, artifacts, and ways of making a living) which any society transmits to succeeding generations to meet its fundamental needs.
A specific set of social, educational, religious and professional behaviors, practices and values that individuals learn and adhere to while participating in or out of groups they usually interact with.
beliefs and customs of a society at a given time; a complex body or assemblage of human beliefs, art, morals, customs, religion, and laws, which has evolved historically and is handed down through the generations as a force that determines behavior and standard characteristics of a society
The way of life of a society, including beliefs and behaviors.
1. the customary beliefs, social forms, material traits, shared attitudes, values, and goals, of a racial, religious, or social group. 2. acquaintance with and taste in fine arts, humanities, and broad aspects of science as distinguished from vocational and technical skills.
The way of life of a group of people, including customs, beliefs, arts, institutions and worldview. Culture is acquired through many means and is always changing.
all the human creations that form the matrix within which it is possible for individuals to find shared meaning and to experience some sense of belonging, to communicate and cooperate. Culture comprises language, values, belief systems, the built environment and the objects with which we fill and adorn it, religious and spiritual observances, forms of political participation and action, customs, dietary practices, holidays and commemorations, work, kinship, friendship, games, spectacles, gatherings, costumes and personal adornments, art, and so on.
Culture is learned in families and communities, belongs to groups of people, and is a shared way of doing, believing and knowing. (Australian Early Childhood Association Inc.)
Integrated patterns of human behavior that includes thoughts, communications, actions, customs, beliefs, values, and institutions of racial, ethnic, religious, or social groups. The customary beliefs, social forms, and material traits, set of shared attitudes, values, goals, and practices that characterizes a racial, religious, or social group.
Values, ideas, and other symbolic meaningful systems that are transmitted and created by a group of people.
the set of learned beliefs, values, styles and behaviors, generally shared by members of a society or group.
Culture refers to a system of values, beliefs, attitudes, traditions, and standards of behavior that govern the organization of people into social groups and regulate both group and individual behavior. Culture is created by groups of individuals to assure the survival and well-being of group members. Culture is learned and is more complex than either ethnicity or race.
How we do things. Actual behaviors. Our personality and style.
An archaeological culture refers to the pattern of remains left behind by a distinct group of people. Culture in the anthropological, as opposed to the archaeological, sense can be defined as the sum total of socially-learned and transmitted behaviour and thought.
Combinations of the ideas, objects, and patterns of behavior that result from human social interaction. (p. 10)
One of the 5 goals of the National Foreign Language Standards. Students of language need to understand the perspectives (worldviews), practices (what to do when and where) and products (aesthetic expressions and everyday functional objects) of the places where the target language is spoken.
Set of values, guiding beliefs, understandings and ways of thinking that are shared by members of an organization and are taught to new members. Culture represents the unwritten, informal standards of an organization.
the concept of culture is vast and widely discussed: entire volumes have been published on the subject. A tentative brief definition could be that culture is the behaviour, ideas and technologies as a whole which are transmitted within some species, in particular ours (H. sapiens), and to successive generations not by inheritance but by acquisition (learning). Although inseparable from nature in which it originates, it has an apparently independent life, dividing into a variable number of subgroups which join the culture of the individual. Another hidden but fundamental characteristic of culture is that it is irreversible: it is unthinkable that all the knowledge acquired by man in millions of years could be completely eliminated all over the world. Culture together with genetic heritage is responsible for much of the malaise present in our society. Nevertheless within itself, culture contains all the probable and possible means to alleviate and annul all the destructiveness to which we are subjected.
The complete way of life of a people: the shared attitudes, values, goals, and practices that characterize a group; their customs, art, literature, religion, philosophy, etc.; the pattern of learned and shared behavior among the members of a group.
Characteristics of a particular group of people--their beliefs, customs, traditions, ceremonies.
The ideas, beliefs, values, activities, knowledge and traditions of a group of individuals who share a historical, geographic, religious, racial, linguistic, ethnic or social context, and who transmit, reinforce and modify those ideas, beliefs, etc. A culture is the total of everything an individual learns by being immersed in a particular context. It results in a set of expectations for appropriate behavior in seemingly similar contexts.
aspects of living developed by a group of people and passed from one generation to the next.
All the ideas, knowledge, traditions, beliefs, norms and values that are widely known and accepted by individuals in a society. Subculture – A group that shares some of the cultural elements of the larger society, but also has its own distinctive values, beliefs, norms, etc.
the integrated pattern of human behavior that includes thoughts, communication, actions, customs, beliefs, values, and institutions of a racial, ethnic, faith or social group.
The typical or expected behaviors, norms, and ideas that characterize a group of people.
The shared set of habits, customs, knowledge, beliefs, language, and behaviors that set one group of people apart from others. This grouping may range from very large to very small groups (such as an office or a business). Culture is invisible to people who are part of it, and often incomprehensible to people who are encountering a specific culture for the first time. The risk of culture for usability is that culture is a deep source of unstated assumptions. These assumptions need to be identified and stated explicitly before they can be incorporated into a usable design.
The societal forces affecting the values, beliefs, and actions of a distinct group of people.
A frame of reference that distinguishes one group from another, providing a unique set of formal and informal 'rules' and behaviours. These rules help people understand what is appropriate and inappropriate, and shape individuals' beliefs and assumptions.
Organizational culture is a system of shared values, assumptions, beliefs, and norms that unite the members of the organization. Individual leaders cannot easily create or change culture.
Culture was a Jamaican roots reggae group founded in 1976; originally they were known as the African Disciples. Critically regarded as one of the most authentic traditional reggae acts, at the time of the first Rolling Stone Record Guide publication they were the only band of any genre, among those with more than one recording in the guide, whose every recording received a five-star review.
History of slavery
The history of slavery covers, in historical perspective, slave systems in which one human being is legally the property of another, can be bought or sold, is not allowed to escape and must work for the owner without any choice involved. As Drescher (2009) argues, "The most crucial and frequently utilized aspect of the condition is a communally recognized right by some individuals to possess, buy, sell, discipline, transport, liberate, or otherwise dispose of the bodies and behavior of other individuals." An integral element is that children of a slave mother automatically become slaves. The term does not include historical forced labor by prisoners, labor camps, or other forms of unfree labor in which laborers are not considered property.
Slavery can be traced back to the earliest records, such as the Code of Hammurabi (c. 1760 BC), which refers to it as an established institution. Slavery is rare among hunter-gatherer populations as slavery depends on a system of social stratification. Slavery typically also requires a shortage of labor and a surplus of land to be viable. David P. Forsythe wrote: "The fact remained that at the beginning of the nineteenth century an estimated three-quarters of all people alive were trapped in bondage against their will either in some form of slavery or serfdom."
Evidence of slavery predates written records and has existed in many cultures. Because mass slavery requires economic surpluses and a high population density to be viable, the practice would only have proliferated after the invention of agriculture during the Neolithic Revolution, about 11,000 years ago.
Slavery was known in civilizations as old as Sumer, as well as almost every other ancient civilization, including Ancient Egypt, Ancient China, the Akkadian Empire, Assyria, Ancient India, Ancient Greece, the Roman Empire, the Islamic Caliphate, and the pre-Columbian civilizations of the Americas. Such institutions were a mixture of debt-slavery, punishment for crime, the enslavement of prisoners of war, child abandonment, and the birth of slave children to slaves.
Records of slavery in Ancient Greece go as far back as Mycenaean Greece. The origins are not known, but it appears that slavery became an important part of the economy and society only after the establishment of cities. Slavery was common practice and an integral component of ancient Greece throughout its rich history, as it was in other societies of the time, including ancient Israel and early Christian societies. It is estimated that in Athens the majority of citizens owned at least one slave. Most ancient writers considered slavery not only natural but necessary, though some isolated debate began to appear, notably in the Socratic dialogues, while the Stoics produced the first condemnation of slavery recorded in history.
During the 8th and the 7th centuries BC, in the course of the two Messenian Wars the Spartans reduced an entire population to a pseudo-slavery called helotry. According to Herodotus (IX, 28–29), helots were seven times as numerous as Spartans. Following several helot revolts around the year 600 BC, the Spartans restructured their city-state along authoritarian lines, for the leaders decided that only by turning their society into an armed camp could they hope to maintain control over the numerically dominant helot population. In some Ancient Greek city states about 30% of the population consisted of slaves, but paid and slave labor seem to have been equally important.
Romans inherited the institution of slavery from the Greeks and the Phoenicians. As the Roman Republic expanded outward, entire populations were enslaved, creating an ample supply to work in Rome's farms and households. The people subjected to Roman slavery came from all over Europe and the Mediterranean. Such oppression by an elite minority eventually led to slave revolts; the Third Servile War, led by Spartacus, was the most famous and severe. Greeks, Berbers, Germans, Britons, Slavs, Thracians, Gauls (or Celts), Jews, Arabs, and many more were slaves used not only for labor but also for amusement (e.g. gladiators and sex slaves). If a slave ran away, he was liable to be crucified. By the late Republican era, slavery had become a vital economic pillar of the wealth of Rome. In the Roman Empire, probably over 25% of the empire's population, and 30 to 40% of the population of Italy, were enslaved.
Celtic tribes of Europe are recorded by various Roman sources as owning slaves, though Romans such as Julius Caesar are known to have produced propaganda about the Celts to gain support for their wars. The extent of slavery in prehistoric Europe is, however, not well known.
In the Viking era beginning circa 793, the Norse raiders often captured and enslaved militarily weaker peoples they encountered. In the Nordic countries the slaves were called thralls (Old Norse: Þræll). The thralls were mostly from Western Europe, among them many Franks, Anglo-Saxons, and Celts. Many Irish slaves participated in the colonization of Iceland. There is evidence of German, Baltic, Slavic and Latin slaves as well. The slave trade was one of the pillars of Norse commerce during the 6th through 11th centuries. The Persian traveller Ibn Rustah described how Swedish Vikings, the Varangians or Rus, terrorized and enslaved the Slavs. The thrall system was finally abolished in the mid-14th century in Scandinavia.
Chaos and invasion made the taking of slaves habitual throughout Europe in the early Middle Ages. St. Patrick, himself captured and sold as a slave, protested against an attack that enslaved newly baptized Christians in his Letter to the Soldiers of Coroticus.
Slavery during the Early Middle Ages had several distinct sources. Jewish participation in the slave trade was recorded starting in the 5th century. After the Muslim conquests of North Africa and most of the Iberian peninsula, the Islamic world became a huge importer of Saqaliba (Slavic) slaves from central and eastern Europe. Olivia Remie Constable wrote: "Muslim and Jewish merchants brought slaves into al-Andalus from eastern Europe and Christian Spain, and then re-exported them to other regions of the Islamic world." This trade came to an end after the Christianisation of Slavic countries. The etymology of the word slave comes from this period, the word sklabos meaning Slav.
The Vikings raided across Europe, though their slave raids were the most destructive in the British Isles and Eastern Europe. While the Vikings kept some slaves for themselves as servants, known as thralls, most people captured by the Vikings would be sold on the Byzantine or Islamic markets. In the West the targets of Viking slavery were primarily English, Irish, and Scottish, while in the East they were mainly Slavs. The Viking slave trade slowly ended in the 11th century, as the Vikings settled in the European territories they had once raided, converted to Christianity, and merged with the local populace.
The Islamic world was a major factor in slavery. Islamic law forbade Muslims to enslave fellow Muslims or so-called People of the Book (Christians, Jews, and Zoroastrians), but an exception was made if they were captured in battle. If such slaves converted to Islam, their master was expected to free them as an act of piety, and if they did not convert, the master was expected to teach them. However, Muslims did not always treat slaves in accordance with Islamic law. The Muslim powers of Iberia both raided for slaves and purchased slaves from European merchants, often the Jewish Radhanites, one of the few groups that could easily move between the Christian and Islamic worlds. The Middle Ages from 1100 to 1500 saw a continuation of the European slave trade, though with a shift from the Western Mediterranean Islamic nations to the Eastern, as Venice and Genoa, in firm control of the Eastern Mediterranean from the 12th century and the Black Sea from the 13th century, sold both Slavic and Baltic slaves, as well as Georgians, Turks, and other ethnic groups of the Black Sea and Caucasus, to the Muslim nations of the Middle East. The sale of European slaves by Europeans slowly ended as the Slavic and Baltic ethnic groups Christianized by the Late Middle Ages. European slavery in the Islamic world would, however, continue into the modern period, as Muslim pirates, primarily Algerians, with the support of the Ottoman Empire, raided European coasts and shipping from the 16th to the 19th centuries; their attacks ended with the naval decline of the Ottoman Empire in the late 16th and 17th centuries and the European conquest of North Africa throughout the 19th century.
The Mongol invasions and conquests in the 13th century made the situation worse. The Mongols enslaved skilled individuals, women and children and marched them to Karakorum or Sarai, whence they were sold throughout Eurasia. Many of these slaves were shipped to the slave market in Novgorod.
Slave commerce during the Late Middle Ages was mainly in the hands of Venetian and Genoese merchants and cartels, who were involved in the slave trade with the Golden Horde. In 1382 the Golden Horde under Khan Tokhtamysh sacked Moscow, burning the city and carrying off thousands of inhabitants as slaves. Between 1414 and 1423, some 10,000 eastern European slaves were sold in Venice. Genoese merchants organized the slave trade from the Crimea to Mamluk Egypt. For years the Khanates of Kazan and Astrakhan routinely made raids on Russian principalities for slaves and to plunder towns. Russian chronicles record about 40 raids of Kazan Khans on the Russian territories in the first half of the 16th century. In 1521, the combined forces of Crimean Khan Mehmed Giray and his Kazan allies attacked Moscow and captured thousands of slaves.
In 1441, Haci I Giray declared independence from the Golden Horde and established the Crimean Khanate. For a long time, until the early 18th century, the khanate maintained a massive slave trade with the Ottoman Empire and the Middle East. In a process called the "harvesting of the steppe", they enslaved many Slavic peasants. About 30 major Tatar raids were recorded into Muscovite territories between 1558 and 1596. In 1571, the Crimean Tatars attacked and sacked Moscow, burning everything but the Kremlin and taking thousands of captives as slaves. In Crimea, about 75% of the population consisted of slaves.
Medieval Spain and Portugal were the scene of almost constant warfare between Muslims and Christians. Periodic raiding expeditions were sent from Al-Andalus to ravage the Iberian Christian kingdoms, bringing back booty and slaves. In a raid against Lisbon, Portugal in 1189, for example, the Almohad caliph Yaqub al-Mansur took 3,000 female and child captives, while his governor of Córdoba, in a subsequent attack upon Silves, Portugal in 1191, took 3,000 Christian slaves.
The Byzantine-Ottoman wars and the Ottoman wars in Europe brought large numbers of Christian slaves into the Islamic world too. After the Battle of Lepanto, approximately 12,000 Christian galley slaves were freed from the Ottoman fleet. Christians were also selling Muslim slaves captured in war. The Knights of Malta attacked pirates and Muslim shipping, and their base became a centre for slave trading, selling captured North Africans and Turks. Malta remained a slave market until well into the late 18th century; a thousand slaves were required to man the galleys of the Order alone.
Slavery in Poland was forbidden in the 15th century; in Lithuania, slavery was formally abolished in 1588; it was replaced by the second serfdom. Slavery remained a minor institution in Russia until 1723, when Peter the Great converted the household slaves into house serfs. Russian agricultural slaves had been formally converted into serfs earlier, in 1679. Runaway Polish and Russian serfs and kholops, known as Cossacks ("outlaws"), formed autonomous communities in the southern steppes.
The 15th-century Portuguese exploration of the African coast is commonly regarded as the harbinger of European colonialism. In 1452, Pope Nicholas V issued the papal bull Dum Diversas, granting Afonso V of Portugal the right to reduce any "Saracens, pagans and any other unbelievers" to hereditary slavery, which legitimized the slave trade under Catholic beliefs of that time. This approval of slavery was reaffirmed and extended in his Romanus Pontifex bull of 1455. These papal bulls came to serve as a justification for the subsequent era of slave trade and European colonialism, although for a short period, in 1462, Pius II declared slavery to be "a great crime". The followers of the Church of England and Protestants did not use the papal bulls as a justification. The position of the church was to condemn the slavery of Christians, but slavery was regarded as an old, established and necessary institution which supplied Europe with a needed workforce. By the 16th century African slaves had replaced almost all other ethnic and religious enslaved groups in Europe. Within the Portuguese territory of Brazil, and even beyond its original borders, the enslavement of native Americans was carried out by the Bandeirantes.
Genoa and Venice were among the best known of many European slave markets, their importance and demand growing after the great plague of the 14th century, which decimated much of the European workforce. The maritime town of Lagos, Portugal, was the first slave market created in Portugal for the sale of imported African slaves, the Mercado de Escravos, opened in 1444. In 1441, the first slaves had been brought to Portugal from northern Mauritania. Prince Henry the Navigator, the major sponsor of the Portuguese African expeditions, taxed one fifth of the selling price of the slaves imported to Portugal, as he did with any other merchandise. By the year 1552 African slaves made up 10 percent of the population of Lisbon. In the second half of the 16th century, the Crown gave up its monopoly on the slave trade, and the focus of the European trade in African slaves shifted from import to Europe to slave transports directly to tropical colonies in the Americas, in the case of Portugal especially Brazil. In the 15th century one third of the slaves were resold to the African market in exchange for gold.
As Portugal increased its presence along China's coast, it began trading in slaves, and many Chinese slaves were sold to Portugal. From the 16th century there were Chinese slaves in Portugal; most of them were children, and a large number were shipped to the Indies. Chinese prisoners were sent to Portugal, where they were sold as slaves; they were prized and regarded more highly than Moorish and black slaves. The first known visit of a Chinese person to Europe dates to 1540, when a Chinese scholar, enslaved during one of several Portuguese raids somewhere on the southern China coast, was brought to Portugal. Purchased by João de Barros, he worked with the Portuguese historian on translating Chinese texts into Portuguese. Dona Maria de Vilhena, a Portuguese noblewoman from Évora, owned a Chinese male slave in 1562. In the 16th century a small number of Chinese slaves, around 29-34 people, were in southern Portugal, where they were used in agricultural labor. Chinese boys were captured in China, brought to Portugal through Macau, and sold as slaves in Lisbon; some were then sold on in Brazil, a Portuguese colony. Due to hostility from the Chinese regarding the trafficking in Chinese slaves, a law banning the selling and buying of Chinese slaves was passed by Portugal in 1595. On 19 February 1624, the King of Portugal forbade the enslavement of Chinese of either sex.
The Spaniards were the first Europeans to use African slaves in the New World on islands such as Cuba and Hispaniola, where the native population starved themselves rather than work for the Spanish. Although the natives were used as forced labor (the Spanish employed the pre-Columbian draft system called the mita), the spread of disease caused a shortage of labor, and so the Spanish colonists gradually became involved in the Atlantic slave trade. The first African slaves arrived in Hispaniola in 1501; by 1517, the natives had been "virtually annihilated" by the settlers.
Although slavery was illegal inside the Netherlands, it flourished in the Dutch Empire and helped support the economy. By 1650 the Dutch had the pre-eminent slave trade in Europe; they were overtaken by Britain around 1700. Historians agree that, in all, the Dutch shipped about 550,000 African slaves across the Atlantic, about 75,000 of whom died on board before reaching their destinations. From 1596 to 1829, Dutch traders sold 250,000 slaves in the Dutch Guianas, 142,000 in the Dutch Caribbean islands, and 28,000 in Dutch Brazil. In addition, tens of thousands of slaves, mostly from India and some from Africa, were carried to the Dutch East Indies.
Great Britain and Ireland
Capture in war, voluntary servitude and debt slavery became common, and slaves were routinely bought and sold, though running away was common and slavery was never a major economic factor. Ireland and Denmark were markets for captured Anglo-Saxon and Celtic slaves. Pope Gregory I reputedly made the pun Non Angli, sed Angeli ("Not Angles, but Angels") on learning the identity of a group of fair-haired Angle slave children whom he had observed in the marketplace. After 1100, slavery faded away as uneconomical.
From the 16th to the 19th century, Barbary corsairs raided the coasts of Europe and attacked lone ships at sea. From 1609 to 1616, England lost 466 merchant ships to Barbary pirates, and 160 English ships were captured by Algerians between 1677 and 1680. Many of the captured sailors were made into slaves and held for ransom. The corsairs were no strangers to the South West of England, where raids were known in a number of coastal communities. In 1627, Barbary pirates under the command of the Dutch renegade Jan Janszoon, operating from the Moroccan port of Salé, occupied the island of Lundy. During this time there were reports of captured slaves being sent to Algiers.
Ireland, despite its northern position, was not immune from attacks by the corsairs. In June 1631 Murat Reis, with pirates from Algiers and armed troops of the Ottoman Empire, stormed ashore at the little harbor village of Baltimore, County Cork. They captured almost all the villagers and took them away to a life of slavery in North Africa. The prisoners were destined for a variety of fates—some lived out their days chained to the oars as galley slaves, while others would spend long years in the scented seclusion of the harem or within the walls of the sultan's palace. Only two of them ever saw Ireland again.
Atlantic slave trade
Britain played a prominent role in the Atlantic slave trade, especially after 1600. Slavery was a legal institution in all of the 13 American colonies and Canada (acquired by Britain in 1763). The profits of the slave trade and of West Indian plantations amounted to 5% of the British economy at the time of the Industrial Revolution. Somersett's case in 1772 was generally taken at the time to have decided that the condition of slavery did not exist under English law in England. In 1785, the English poet William Cowper wrote: "We have no slaves at home – Then why abroad? Slaves cannot breathe in England; if their lungs receive our air, that moment they are free. They touch our country, and their shackles fall. That's noble, and bespeaks a nation proud. And jealous of the blessing. Spread it then, And let it circulate through every vein." In 1807, following many years of lobbying by the Abolitionist movement, the British Parliament voted to make the slave trade illegal anywhere in the Empire with the Slave Trade Act 1807. Thereafter Britain took a prominent role in combating the trade, and slavery itself was abolished in the British Empire with the Slavery Abolition Act 1833. Between 1808 and 1860, the West Africa Squadron seized approximately 1,600 slave ships and freed 150,000 Africans who were aboard. Action was also taken against African leaders who refused to agree to British treaties to outlaw the trade, for example against "the usurping King of Lagos", deposed in 1851; anti-slavery treaties were signed with over 50 African rulers. In 1839, the world's oldest international human rights organization, Anti-Slavery International, was formed in Britain by Joseph Sturge; it worked to outlaw slavery in other countries.
In 1811, Arthur William Hodge was the first slave owner executed for the murder of a slave in the British West Indies. He was not, however, as some have claimed, the first white person to have been lawfully executed for the killing of a slave.
It became the custom among the Mediterranean powers to sentence condemned criminals to row in the war-galleys of the state (initially only in time of war). French Huguenots filled the galleys after the revocation of the Edict of Nantes in 1685 and the Camisard rebellion. Galley-slaves lived in unsavoury conditions, so even though some sentences prescribed a restricted number of years, most rowers would eventually die, even if they survived shipwreck and slaughter or torture at the hands of enemies or pirates. Naval forces often turned "infidel" prisoners-of-war into galley-slaves. Several well-known historical figures served time as galley slaves after being captured by the enemy, among them the Ottoman corsair and admiral Turgut Reis and the Knights Hospitaller Grand Master Jean Parisot de la Valette.
From the 1440s into the 18th century hundreds of thousands of Ukrainians were sold into slavery to the Turks. In 1575, the Tatars captured over 35,000 Ukrainians; a 1676 raid took almost 40,000. About 60,000 Ukrainians were captured in 1688; some were ransomed, but most were sold into slavery. Some of the Roma people were enslaved over five centuries in Romania until abolition in 1864 (see Slavery in Romania).
Denmark-Norway was the first European country to ban the slave trade. This happened with a decree issued by the king in 1792, to become fully effective by 1803. Slavery itself was not banned until 1848. At this time Iceland was a part of Denmark-Norway but slave trading had been abolished in Iceland in 1117 and had never been reestablished.
Slavery in the French Republic was abolished on 4 February 1794; however, it was re-established by Napoleon Bonaparte in 1802. Slavery was permanently abolished in the French empire during the French Revolution of 1848. The Haitian Revolution established Haiti as a free republic ruled by blacks, the first of its kind. At the time of the revolution, Haiti was known as Saint-Domingue and was a colony of France.
In most African societies, there was very little difference between the free peasants and the feudal vassal peasants. Vassals of the Songhay Muslim Empire were used primarily in agriculture; they paid tribute to their masters in crop and service but they were slightly restricted in custom and convenience. These people were more an occupational caste, as their bondage was relative. In the Kanem Bornu Empire, vassals were three classes beneath the nobles. Marriage between captor and captive was far from rare, blurring the anticipated roles.
French historian Fernand Braudel noted that slavery was endemic in Africa and part of the structure of everyday life. "Slavery came in different disguises in different societies: there were court slaves, slaves incorporated into princely armies, domestic and household slaves, slaves working on the land, in industry, as couriers and intermediaries, even as traders" (Braudel 1984 p. 435). During the 16th century, Europe began to outpace the Arab world in the export traffic, with its slave traffic from Africa to the Americas. The Dutch imported slaves from Asia into their colony in South Africa. In 1807 Britain, which held extensive, although mainly coastal colonial territories on the African continent (including southern Africa), made the international slave trade illegal, as did the United States in 1808. The end of the slave trade and the decline of slavery was imposed upon Africa by outside powers.
The nature of the slave societies differed greatly across the continent. There were large plantations worked by slaves in Egypt, the Sudan and Zanzibar, but this was not a typical use of slaves in Africa as a whole. In most African slave societies, slaves were protected and incorporated into the slave-owning family.
In Senegambia, between 1300 and 1900, close to one-third of the population was enslaved. In early Islamic states of the western Sudan, including Ghana (750–1076), Mali (1235–1645), Segou (1712–1861), and Songhai (1275–1591), about a third of the population were slaves. In Sierra Leone in the 19th century about half of the population consisted of slaves. In the 19th century at least half the population was enslaved among the Duala of the Cameroon, the Igbo and other peoples of the lower Niger, the Kongo, and the Kasanje kingdom and Chokwe of Angola. Among the Ashanti and Yoruba a third of the population consisted of slaves. The population of the Kanem was about a third-slave. It was perhaps 40% in Bornu (1396–1893). Between 1750 and 1900 from one- to two-thirds of the entire population of the Fulani jihad states consisted of slaves. The population of the Sokoto caliphate formed by Hausas in the northern Nigeria and Cameroon was half-slave in the 19th century. It is estimated that up to 90% of the population of Arab-Swahili Zanzibar was enslaved. Roughly half the population of Madagascar was enslaved.
The Anti-Slavery Society estimated that there were 2,000,000 slaves in early 1930s Ethiopia, out of an estimated population of between 8 and 16 million. Slavery continued in Ethiopia until the brief Second Italo-Abyssinian War in October 1935, when it was abolished by order of the Italian occupying forces. In response to pressure by the Western Allies of World War II, Ethiopia officially abolished slavery and serfdom after regaining its independence in 1942; on 26 August 1942, Haile Selassie issued a proclamation outlawing slavery.
When British rule was first imposed on the Sokoto Caliphate and the surrounding areas in northern Nigeria at the turn of the 20th century, approximately 2 million to 2.5 million people there were slaves. Slavery in northern Nigeria was finally outlawed in 1936.
As Elikia M'bokolo wrote in Le Monde diplomatique (April 1998): "The African continent was bled of its human resources via all possible routes. Across the Sahara, through the Red Sea, from the Indian Ocean ports and across the Atlantic. At least ten centuries of slavery for the benefit of the Muslim countries (from the ninth to the nineteenth)." He continues: "Four million slaves exported via the Red Sea, another four million through the Swahili ports of the Indian Ocean, perhaps as many as nine million along the trans-Saharan caravan route, and eleven to twenty million (depending on the author) across the Atlantic Ocean."
David Livingstone wrote of the slave trades:
"To overdraw its evils is a simple impossibility.... We passed a slave woman shot or stabbed through the body and lying on the path. [Onlookers] said an Arab who passed early that morning had done it in anger at losing the price he had given for her, because she was unable to walk any longer. We passed a woman tied by the neck to a tree and dead.... We came upon a man dead from starvation.... The strangest disease I have seen in this country seems really to be broken heartedness, and it attacks free men who have been captured and made slaves."
Livingstone estimated that 80,000 Africans died each year before ever reaching the slave markets of Zanzibar. Zanzibar was once East Africa's main slave-trading port, and under Omani Arabs in the 19th century as many as 50,000 slaves were passing through the city each year.
Prior to the 16th century, the bulk of slaves exported from Africa were shipped from East Africa to the Arabian peninsula. Zanzibar became a leading port in this trade. Arab slave traders differed from European ones in that they would often conduct raiding expeditions themselves, sometimes penetrating deep into the continent. They also differed in that their market greatly preferred the purchase of female slaves over male ones.
The increased presence of European rivals along the East coast led Arab traders to concentrate on the overland slave caravan routes across the Sahara from the Sahel to North Africa. The German explorer Gustav Nachtigal reported seeing slave caravans departing from Kukawa in Bornu bound for Tripoli and Egypt in 1870. The slave trade represented the major source of revenue for the state of Bornu as late as 1898. The eastern regions of the Central African Republic have never recovered demographically from the impact of 19th-century raids from the Sudan and still have a population density of less than 1 person/km². During the 1870s, European initiatives against the slave trade caused an economic crisis in northern Sudan, precipitating the rise of Mahdist forces. The Mahdi's victory created an Islamic state, one that quickly reinstituted slavery.
The Middle Passage, the crossing of the Atlantic to the Americas, endured by slaves laid out in rows in the holds of ships, was only one element of the well-known triangular trade engaged in by Portuguese, Dutch, French and British. Ships having landed slaves in Caribbean ports would take on sugar, indigo, raw cotton, and later coffee, and make for Liverpool, Nantes, Lisbon or Amsterdam. Ships leaving European ports for West Africa would carry printed cotton textiles, some originally from India, copper utensils and bangles, pewter plates and pots, iron bars more valued than gold, hats, trinkets, gunpowder and firearms and alcohol. Tropical shipworms were eliminated in the cold Atlantic waters, and at each unloading, a profit was made.
The Atlantic slave trade peaked in the late 18th century, when the largest number of slaves were captured on raiding expeditions into the interior of West Africa. These expeditions were typically carried out by African states, such as the Oyo empire (Yoruba), Kong Empire, Kingdom of Benin, Imamate of Futa Jallon, Imamate of Futa Toro, Kingdom of Koya, Kingdom of Khasso, Kingdom of Kaabu, Fante Confederacy, Ashanti Confederacy, Aro Confederacy and the kingdom of Dahomey. Europeans rarely entered the interior of Africa, due to fear of disease and, moreover, fierce African resistance. The slaves were brought to coastal outposts, where they were traded for goods. The people captured on these expeditions were shipped by European traders to the colonies of the New World. As a result of the War of the Spanish Succession, the United Kingdom obtained the monopoly (asiento de negros) on transporting captive Africans to Spanish America. It is estimated that over the centuries twelve to twenty million people were shipped as slaves from Africa by European traders, of whom some 15 percent died during the arduous Middle Passage voyage. The great majority were shipped to the Americas, but some also went to Europe and Southern Africa.
In Algiers, during the time of the Regency of Algiers in North Africa in the 19th century, captured Christians and Europeans were forced into slavery. This eventually led to the Bombardment of Algiers in 1816.
African participation in the slave trade
Some African states played a role in the slave trade, selling their captives or prisoners of war to European buyers. Selling captives or prisoners was common practice among Africans and Arabs during that era. However, as the Atlantic slave trade increased its demand, local systems which had primarily serviced indentured servitude became corrupted and started to supply the European slave traders, changing social dynamics. The trade also ultimately undermined local economies and political stability, as villages' vital labor forces were shipped overseas and slave raids and civil wars became commonplace. Crimes which had previously been punishable in some other way became punishable by enslavement.
The prisoners and captives who were sold were usually from neighboring or enemy ethnic groups. These captive slaves were not considered part of the ethnic group or "tribe", and kings did not have a particular loyalty to them. At times, kings and chiefs would sell criminals into slavery so that they could no longer commit crimes in that area. Most other slaves were obtained through kidnappings, or through raids carried out at gunpoint in cooperation with Europeans. Some African kings refused to sell any of their captives or criminals: King Jaja of Opobo, a former slave himself, completely refused to do business with slavers, and the Ashanti king Agyeman Prempeh (b. 1872) sacrificed his own freedom so that his people would not face collective slavery.
Before the arrival of the Portuguese, slavery had already existed in the Kingdom of Kongo. Despite its establishment within his kingdom, Afonso I of Kongo believed that the slave trade should be subject to Kongo law. When he suspected the Portuguese of receiving illegally enslaved persons to sell, he wrote letters to King João III of Portugal in 1526 imploring him to put a stop to the practice.
The kings of Dahomey sold into transatlantic slavery war captives who would otherwise have been killed in a ceremony known as the Annual Customs. As one of West Africa's principal slave states, Dahomey became extremely unpopular with neighbouring peoples. Like the Bambara Empire to the east, the Khasso kingdoms depended heavily on the slave trade for their economy. A family's status was indicated by the number of slaves it owned, leading to wars for the sole purpose of taking more captives; this trade led the Khasso into increasing contact with the European settlements of Africa's west coast, particularly the French. Benin grew increasingly rich during the 16th and 17th centuries on the slave trade with Europe; slaves from enemy states of the interior were sold and carried to the Americas in Dutch and Portuguese ships. The Bight of Benin's shore soon came to be known as the "Slave Coast".
"The slave trade is the ruling principle of my people. It is the source and the glory of their wealth…the mother lulls the child to sleep with notes of triumph over an enemy reduced to slavery…"
"We think this trade must go on. That is the verdict of our oracle and the priests. They say that your country, however great, can never stop a trade ordained by God himself."
Some historians conclude that the total loss in persons removed, those who died on the arduous march to coastal slave marts and those killed in slave raids, far exceeded the 65–75 million inhabitants remaining in Sub-Saharan Africa at the trade's end. Others believe that slavers had a vested interest in capturing rather than killing, and in keeping their captives alive; and that this coupled with the disproportionate removal of males and the introduction of new crops from the Americas (cassava, maize) would have limited general population decline to particular regions of western Africa around 1760–1810, and in Mozambique and neighbouring areas half a century later. There has also been speculation that within Africa, females were most often captured as brides, with their male protectors being a "bycatch" who would have been killed if there had not been an export market for them.
From the late 19th to the early 20th century, demand for the labor-intensive harvesting of rubber drove frontier expansion and slavery. The personal monarchy of Belgian King Leopold II in the Congo Free State saw mass killings and slavery used to extract rubber.
The trading of children has been reported in modern Nigeria and Benin. In parts of Ghana, a family may be punished for an offense by having to turn over a virgin female to serve as a sex slave within the offended family. In this instance, the woman does not gain the title or status of "wife". In parts of Ghana, Togo, and Benin, shrine slavery persists, despite being illegal in Ghana since 1998. In this system of ritual servitude, sometimes called trokosi (in Ghana) or voodoosi in Togo and Benin, young virgin girls are given as slaves to traditional shrines and are used sexually by the priests in addition to providing free labor for the shrine.
It is estimated that as many as 200,000 black south Sudanese children and women (mostly from the Dinka tribe, sold by the Sudanese Arabs of the north) have been taken into slavery in Sudan during the Second Sudanese Civil War. In Mauritania it is estimated that up to 600,000 men, women and children, or 20% of the population, are currently enslaved, many of them used as bonded labor. Slavery in Mauritania was criminalized in August 2007.
Among indigenous peoples
In Pre-Columbian Mesoamerica the most common forms of slavery were those of prisoners-of-war and debtors. People unable to pay back a debt could be sentenced to work as a slave to the person owed until the debt was worked off. Warfare was important to the Maya society, because raids on surrounding areas provided the victims required for human sacrifice, as well as slaves for the construction of temples. Most victims of human sacrifice were prisoners of war or slaves. According to Aztec writings, as many as 84,000 people were sacrificed at a temple inauguration in 1487. Slavery was not usually hereditary; children of slaves were born free. In the Inca Empire, workers were subject to a mita in lieu of taxes which they paid by working for the government. Each ayllu, or extended family, would decide which family member to send to do the work. It is unclear if this labor draft or corvée counts as slavery. The Spanish adopted this system, particularly for their silver mines in Bolivia.
Other slave-owning societies and tribes of the New World were, for example, the Tehuelche of Patagonia, the Comanche of Texas, the Caribs of Dominica, the Tupinambá of Brazil, the fishing societies, such as the Yurok, that lived along the coast from what is now Alaska to California, the Pawnee and Klamath. Many of the indigenous peoples of the Pacific Northwest Coast, such as the Haida and Tlingit, were traditionally known as fierce warriors and slave-traders, raiding as far as California. Slavery was hereditary, the slaves being prisoners of war. Among some Pacific Northwest tribes about a quarter of the population were slaves. One slave narrative was composed by an Englishman, John R. Jewitt, who had been taken alive when his ship was captured in 1802; his memoir provides a detailed look at life as a slave, and asserts that a large number were held.
Slavery was a mainstay of the Brazilian colonial economy, especially in mining and sugar cane production. Brazil received 38% of all African slaves traded; more than 3 million slaves were sent to this one country. Starting around 1550, the Portuguese began to trade African slaves to work the sugar plantations as the native Tupi population declined. Although Portuguese Prime Minister Marquês de Pombal abolished slavery in mainland Portugal on 12 February 1761, slavery continued in its overseas colonies. Slavery was practiced among all classes: slaves were owned by the upper and middle classes, by the poor, and even by other slaves.
From São Paulo, the Bandeirantes, adventurers mostly of mixed Portuguese and native ancestry, penetrated steadily westward in their search for Indian slaves. Along the Amazon river and its major tributaries, repeated slaving raids and punitive attacks left their mark. One French traveler in the 1740s described hundreds of miles of river banks with no sign of human life and once-thriving villages that were devastated and empty. In some areas of the Amazon Basin, and particularly among the Guarani of southern Brazil and Paraguay, the Jesuits had organized their Jesuit Reductions along military lines to fight the slavers. In the mid-to-late 19th century, many Amerindians were enslaved to work on rubber plantations.
Resistance and abolition
Escaped slaves formed Maroon communities which played an important role in the histories of Brazil and other countries such as Suriname, Puerto Rico, Cuba, and Jamaica. In Brazil the Maroon villages were called quilombos (in Spanish America, palenques). Maroons survived by growing vegetables and hunting. They also raided plantations; in these attacks the maroons would burn crops, steal livestock and tools, kill slavemasters, and invite other slaves to join their communities.
Jean-Baptiste Debret, a French painter active in Brazil in the first decades of the 19th century, began by painting portraits of members of the Brazilian Imperial family, but soon became concerned with the slavery of both blacks and indigenous inhabitants. His paintings on the subject helped bring attention to it in both Europe and Brazil itself.
The Clapham Sect, a group of evangelical reformers, campaigned during much of the 19th century for the United Kingdom to use its influence and power to stop the traffic of slaves to Brazil. Besides moral qualms, the low cost of slave-produced Brazilian sugar meant that British colonies in the West Indies were unable to match the market prices of Brazilian sugar, and each Briton was consuming 16 pounds (7 kg) of sugar a year by the 19th century. This combination led to intensive pressure from the British government for Brazil to end this practice, which it did by steps over several decades.
First, the foreign slave trade was banned in 1850. Then, in 1871, the sons of slaves were freed, and in 1885 slaves aged over 60 years were freed. The Paraguayan War also contributed to ending slavery, since many slaves enlisted in exchange for freedom. In Colonial Brazil, slavery was more a social than a racial condition; indeed, some of the greatest figures of the time, like the writer Machado de Assis and the engineer André Rebouças, had black ancestry.
Brazil's 1877–78 Grande Seca (Great Drought) in the cotton-growing northeast led to major turmoil, starvation, poverty and internal migration. As wealthy plantation holders rushed to sell their slaves south, popular resistance and resentment grew, inspiring numerous emancipation societies, which succeeded in banning slavery altogether in the province of Ceará by 1884. Slavery was legally ended nationwide on 13 May 1888 by the Lei Áurea ("Golden Law"). By then it was an institution in decline, as since the 1880s the country had begun to use European immigrant labor instead. Brazil was the last nation in the Western Hemisphere to abolish slavery.
Other South American countries
From the late 19th to the early 20th century, demand for the labor-intensive harvesting of rubber drove frontier expansion and slavery in Latin America and elsewhere. Indigenous people were enslaved as part of the rubber boom in Ecuador, Peru, Colombia, and Brazil. In Central America, rubber tappers participated in the enslavement of the indigenous Guatuso-Maleku people for domestic service.
British and French Caribbean
Slavery was commonly used in the parts of the Caribbean controlled by France and the British Empire. The Lesser Antilles islands of Barbados, St. Kitts, Antigua, Martinique and Guadeloupe, which were the first important slave societies of the Caribbean, began the widespread use of African slaves by the end of the 17th century, as their economies converted to sugar production.
By the middle of the 18th century, British Jamaica and French Saint-Domingue had become the largest slave societies of the region, and the Caribbean was rivaling Brazil as a destination for enslaved Africans. Due to overwork and tropical diseases, the death rates for Caribbean slaves were greater than birth rates. The conditions led to increasing numbers of slave revolts, escaped slaves forming Maroon communities and fighting guerrilla wars against the plantation owners.
To regularise slavery, in 1685 Louis XIV had enacted the Code Noir, which accorded certain human rights to slaves and responsibilities to the master, who was obliged to feed, clothe and provide for the general well-being of his slaves. Free blacks owned one-third of the plantation property and one-quarter of the slaves in Saint-Domingue (later Haiti). Slavery in the French Republic was abolished on 4 February 1794. When it became clear that Napoleon intended to re-establish slavery in Haiti, Dessalines and Pétion switched sides in October 1802. On 1 January 1804, Jean-Jacques Dessalines, the new leader under the dictatorial 1801 constitution, declared Haiti a free republic. Haiti thus became the second independent nation in the Western Hemisphere, after the United States, and the only nation born of a successful slave rebellion in world history.
Whitehall in England announced in 1833 that slaves in its territories would be totally freed by 1840. In the meantime, the government told slaves they had to remain on their plantations and would have the status of "apprentices" for the next six years.
In Port-of-Spain, Trinidad, on 1 August 1834, an unarmed group of mainly elderly black people being addressed by the Governor at Government House about the new laws began chanting: "Pas de six ans. Point de six ans" ("Not six years. No six years"), drowning out the voice of the Governor. Peaceful protests continued until a resolution to abolish apprenticeship was passed and de facto freedom was achieved. Full emancipation for all was legally granted ahead of schedule on 1 August 1838, making Trinidad the first British colony with slaves to completely abolish slavery.
After Great Britain abolished slavery, it began to pressure other nations to do the same. France, too, abolished slavery. By then Saint-Domingue had already won its independence and formed the independent Republic of Haiti. French-controlled islands were then limited to a few smaller islands in the Lesser Antilles.
Before the arrival of European settlers, each Maori tribe (iwi) considered itself a separate entity equivalent to a nation. During the intertribal Musket Wars of 1807 to 1843, large numbers of slaves were captured by northern tribes who had acquired muskets. About 20,000 Maori died in the wars, which were concentrated in the North Island, and an unknown number of slaves were captured. Northern tribes used slaves (called mokai) to grow large areas of potatoes for trade with visiting ships. Chiefs started an extensive sex trade in the Bay of Islands in the 1830s, using mainly slave girls; by 1835 about 70-80 ships per year called into the port. One French captain described the impossibility of getting rid of the girls who swarmed over his ship, outnumbering his crew of 70 by 3 to 1; all payments to the girls were stolen by the chief. By 1833 Christianity had become established in the north and large numbers of slaves were freed. However, two Taranaki tribes, Ngati Tama and Ngati Mutunga, displaced by the wars, carried out a carefully planned invasion of the Chatham Islands, some 800 km east of Christchurch, in 1835. About 10% of the Polynesian Moriori natives, who had migrated to the islands about 1500, were killed, with many women being tortured to death. The remaining population were enslaved for the purpose of growing food, especially potatoes. The Moriori were treated in an inhumane and degrading manner for many years; their culture was banned and they were forbidden to marry, though some Maori took Moriori partners. The enslavement of the Moriori lasted until the 1860s, although it had been banned under British law since 1809 and discouraged by CMS missionaries in northern New Zealand from the late 1820s. In 1870 Ngati Mutunga, one of the invading tribes, argued before the Native Land Court in New Zealand that their gross mistreatment of the Moriori was standard Maori practice or tikanga.
The first slaves used by Europeans in what later became United States territory arrived with Lucas Vásquez de Ayllón's attempted colonization of North Carolina in 1526. The attempt was a failure, lasting only one year; the slaves revolted and fled into the wilderness to live among the Cofitachiqui people.
The first historically significant slave in what would become the United States was Estevanico, a Moroccan slave and member of the Narváez expedition of 1528, who later acted as a guide on Fray Marcos de Niza's expedition to find the Seven Cities of Gold in 1539.
In 1619 twenty Africans were brought by a Dutch ship and sold to the English colony of Jamestown, Virginia as indentured servants. It is possible that Africans were brought to Virginia before this, both because neither John Rolfe, our source on the 1619 shipment, nor any contemporary of his ever says that this was the first contingent of Africans to come to Virginia, and because the 1625 Virginia census lists one black as arriving on a ship that appears only to have landed people in Virginia prior to 1619. The transformation from indentured servitude to racial slavery happened gradually. It was not until 1661 that a reference to slavery entered into Virginia law, directed at Caucasian servants who ran away with a black servant, and not until the Slave Codes of 1705 that the status of African Americans as slaves was sealed. This status would last for another 160 years, until after the end of the American Civil War with the ratification of the 13th Amendment in December 1865.
Only a fraction of the enslaved Africans brought to the New World ended up in British North America—perhaps 5%. The vast majority of slaves shipped across the Atlantic were sent to the Caribbean sugar colonies, Brazil, or Spanish America.
By the 1680s with the consolidation of England's Royal African Company, enslaved Africans were imported to English colonies in larger numbers, and the practice continued to be protected by the English Crown. Colonists began purchasing slaves in larger numbers.
Slavery in American colonial law
- 1642: Massachusetts becomes the first colony to legalize slavery.
- 1650: Connecticut legalizes slavery.
- 1661: Virginia officially recognizes slavery by statute.
- 1662: A Virginia statute declares that children born in the colony would have the same status as their mother.
- 1663: Maryland legalizes slavery.
- 1664: Slavery is legalized in New York and New Jersey.
Development of slavery
The shift from indentured servants to African slaves was prompted by a dwindling class of former servants who had worked through the terms of their indentures and thus became competitors to their former masters. These newly freed servants were rarely able to support themselves comfortably, and the tobacco industry was increasingly dominated by large planters. This caused domestic unrest culminating in Bacon's Rebellion. Eventually, chattel slavery became the norm in regions dominated by plantations.
Many slaves in British North America were owned by plantation owners who lived in Britain. The British courts had made a series of contradictory rulings on the legality of slavery, which encouraged several thousand slaves to flee the newly independent United States as refugees along with the retreating British in 1783. Because the British courts had ruled in 1772 that such slaves could not be forcibly returned to North America, the British government resettled them as free men in Sierra Leone.
Several slave rebellions took place during the 17th and 18th centuries.
Early United States law
The Republic of Vermont banned slavery in its constitution of 1777 and continued the ban when it entered the United States in 1791. Through the Northwest Ordinance of 1787 under the Congress of the Confederation, slavery was prohibited in the territories northwest of the Ohio River. By 1804, abolitionists had succeeded in passing legislation that would eventually (in conjunction with the 13th Amendment) emancipate the slaves in every state north of the Ohio River and the Mason-Dixon Line. However, emancipation in the free states was so gradual that both New York and Pennsylvania listed slaves in their 1840 census returns, and a small number of black slaves were still held in New Jersey in 1860. The importation and export of slaves were banned on 1 January 1808, but not the internal slave trade.
Despite the actions of abolitionists, free blacks were subject to racial segregation in the Northern states. Slavery was legal in most of Canada until 1833, but after that it offered a haven for hundreds of runaway slaves. Refugees from slavery fled the South across the Ohio River to the North via the Underground Railroad. Midwestern state governments asserted States Rights arguments to refuse federal jurisdiction over fugitives. Some juries exercised their right of jury nullification and refused to convict those indicted under the Fugitive Slave Act of 1850.
After the passage of the Kansas-Nebraska Act in 1854, armed conflict broke out in Kansas Territory, where the question of whether it would be admitted to the Union as a slave state or a free state had been left to the inhabitants. The radical abolitionist John Brown was active in the mayhem and killing in "Bleeding Kansas," but the true turning point in public opinion is better fixed at the Lecompton Constitution fraud. Pro-slavery elements in Kansas had arrived first from Missouri and quickly organized a territorial government that excluded abolitionists. Through the machinery of the territory and violence, the pro-slavery faction attempted to force an unpopular pro-slavery constitution on the state. This infuriated Northern Democrats, who supported popular sovereignty, and the situation was exacerbated by the Buchanan administration reneging on a promise to submit the constitution to a referendum, which it would surely have failed. Anti-slavery legislators took office under the banner of the newly formed Republican Party. The Supreme Court in the Dred Scott decision of 1857 asserted that one could take one's property anywhere, even if one's property was chattel and one crossed into a free territory; it also asserted that African Americans could not be federal citizens. Outraged critics across the North denounced these episodes as the latest efforts of the Slave Power (the politically organized slave owners) to take more control of the nation.
Approximately one Southern family in four held slaves prior to the war. According to the 1860 United States Census, about 385,000 individuals (i.e. 1.4% of White Americans in the country, or 4.8% of southern whites) owned one or more slaves, and the slave population in the United States stood at four million. 95% of blacks lived in the South, comprising one third of the population there, as opposed to 1% of the population of the North. Consequently, fears of eventual emancipation were much greater in the South than in the North.
In the election of 1860, the Republicans swept Abraham Lincoln into the Presidency (with only 39.8% of the popular vote) and legislators into Congress. Lincoln, however, did not appear on the ballots in most southern states, and his election split the nation along sectional lines. After decades of controlling the Federal Government, several of the southern states declared they had seceded from the U.S. (the Union) in an attempt to form the Confederate States of America.
Northern leaders like Lincoln viewed the prospect of a new Southern nation, with control over the Mississippi River and the West, as unacceptable. This led to the outbreak of the Civil War, which spelled the end for chattel slavery in America. However, in August 1862, Lincoln wrote to editor Horace Greeley that despite his own moral objection to slavery, the objective of the war was to save the Union, not either to save or to destroy slavery. Lincoln's Emancipation Proclamation of 1863 proclaimed freedom for slaves within the Confederacy as soon as the Union Army arrived; because Lincoln had no power to free slaves in the border states or the rest of the Union, he promoted the Thirteenth Amendment, which freed all the remaining slaves in December 1865. The proclamation made the abolition of slavery an official war goal, and it was implemented as the Union captured territory from the Confederacy. Slaves in many parts of the South were freed by Union armies or simply by leaving their former owners. Over 150,000 joined the Union Army and Navy as soldiers and sailors.
Those still enslaved within the United States did not gain their freedom until the final ratification of the Thirteenth Amendment to the Constitution on 6 December 1865 (with final recognition of the amendment on 18 December), eight months after the cessation of hostilities. Only in Kentucky did a significant slave population remain by that time, although there were some slaves in West Virginia and Delaware.
After the failure of Reconstruction, freed slaves in the United States were treated as second-class citizens. For decades after their emancipation, many former slaves living in the South sharecropped and had a low standard of living. In some states, it was only after the civil rights movement of the 1950s and 1960s that blacks obtained legal protection from racial discrimination (see segregation).
Oceania
In the first half of the 19th century, small-scale slave raids took place across Polynesia to supply labor and sex workers for the whaling and sealing trades, with examples from both the westerly and easterly extremes of the Polynesian triangle. By the 1860s this had grown into a larger-scale operation, with Peruvian slave raids in the South Sea Islands to collect labor for the guano industry.
Ancient Hawaii was a caste society: people were born into specific social classes. The kauwa were the outcast or slave class, believed to have been war captives or the descendants of war captives. Marriage between the higher castes and the kauwa was strictly forbidden. The kauwa worked for the chiefs and were often used as human sacrifices at the luakini heiau. (They were not the only sacrifices; law-breakers of all castes and defeated political opponents were also acceptable as victims.)
In the traditional Māori society of Aotearoa, prisoners of war became taurekareka, slaves, unless they were released, ransomed or tortured. With some exceptions, the child of a slave remained a slave. As far as it is possible to tell, slavery increased in the early 19th century as a result of three factors: the greater numbers of prisoners taken by Māori military leaders such as Hongi Hika and Te Rauparaha in the Musket Wars, the need for labor to supply whalers and traders with food, flax and timber in return for Western goods, and the missionary condemnation of cannibalism. Slavery was outlawed when the British annexed New Zealand in 1840, immediately prior to the signing of the Treaty of Waitangi, although it did not end completely until government was effectively extended over the whole of the country with the defeat of the King movement (Kīngitanga) in the Wars of the mid-1860s.
One group of Polynesians who migrated to the Chatham Islands became the Moriori, who developed a largely pacifist culture. It was originally speculated that they settled the Chathams directly from Polynesia, but it is now widely believed they were disaffected Māori who emigrated from the South Island of New Zealand. Their pacifism left the Moriori unable to defend themselves when the islands were invaded by mainland Māori in the 1830s: some 300 Moriori men, women and children were massacred, and the remaining 1,200 to 1,300 survivors were enslaved.
Rapa Nui / Easter Island
The isolated island of Rapa Nui/Easter Island was inhabited by the Rapanui, who suffered a series of slave raids from 1805 or earlier, culminating in a near-genocidal experience in the 1860s. The 1805 raid, by American sealers, was one of a series that changed the islanders' attitude toward outside visitors; reports in the 1820s and 1830s indicate that all visitors received a hostile reception. In December 1862, Peruvian slave raiders took between 1,400 and 2,000 islanders back to Peru to work in the guano industry; this was about a third of the island's population and included much of the island's leadership, the last ariki-mau and possibly the last who could read Rongorongo. After intervention by the French ambassador in Lima, the last 15 survivors were returned to the island, but they brought with them smallpox, which further devastated the island.
Slavery has existed, in one form or another, throughout the whole of human history. So, too, have movements to free large or distinct groups of slaves. However, abolitionism should be distinguished from efforts to help a particular group of slaves, or to restrict one practice, such as the slave trade.
Drescher (2009) provides a model for the history of the abolition of slavery, emphasizing its origins in Western Europe. Around the year 1500, slavery had virtually died out in Western Europe, but it remained a normal phenomenon practically everywhere else. The imperial powers, France, Spain, Britain, Portugal, the Netherlands and a few others, built worldwide empires based primarily on plantation agriculture using slaves imported from Africa; however, the powers took care to minimize the presence of slavery in their homelands. During the "Age of Revolutions" (c. 1770–1815), Britain abolished its international slave trade and imposed similar restrictions upon other Western nations; the U.S. followed suit in 1808. Although there were numerous slave revolts in the Caribbean, the only successful uprising came in the French colony of St. Domingue, where the slaves rose up, killed the mulattoes and whites, and established the independent Republic of Haiti. The continuing profitability of slave-based plantations and the threat of race war slowed the development of abolition movements during the first half of the 19th century. These movements were strongest in Britain, and after 1840 in the United States; in both instances they were based on an evangelical religious enthusiasm that stressed the horrible impact of slavery on the slaves themselves. The Northern states of the United States abolished slavery between 1777 and 1804, partly in response to the Declaration of Independence. Britain ended slavery in its empire in the 1830s. However, the plantation economies of the southern United States, based on cotton, and those of Brazil and Cuba, based on sugar, expanded and grew even more profitable. The bloody American Civil War ended slavery in the United States in the 1860s; the system ended in Cuba and Brazil in the 1880s because it was no longer profitable for the owners. Slavery continued to exist in Africa, where Arab slave traders raided black areas for new captives to be sold in the system. European colonial rule and diplomatic pressure slowly put an end to the trade, and eventually to the practice of slavery itself.
In 1772, the Somersett Case (R. v. Knowles, ex parte Somersett) of the English Court of King's Bench ruled that slavery was unlawful in England (although not elsewhere in the British Empire). A similar case, that of Joseph Knight, took place in Scotland five years later and ruled slavery to be contrary to the law of Scotland.
Following the work of campaigners in the United Kingdom such as William Wilberforce and Thomas Clarkson, the Act for the Abolition of the Slave Trade was passed by Parliament on 25 March 1807, coming into effect the following year. The act imposed a fine of £100 for every slave found aboard a British ship; the intention was to outlaw the Atlantic slave trade entirely throughout the British Empire.
The significance of the abolition of the British slave trade lay in the number of people hitherto sold and carried by British slave vessels. Britain had shipped 2,532,300 Africans across the Atlantic, equalling 41% of the total transport of 6,132,900 individuals; the magnitude of its empire had made Britain the biggest slave-trade contributor in the world, a fact that made the abolition act all the more damaging to the global trade in slaves.
The Slavery Abolition Act, passed on 23 August 1833, outlawed slavery itself in the British colonies. On 1 August 1834 all slaves in the British West Indies were emancipated, but they remained indentured to their former owners in an apprenticeship system. The system was intended to train former slaves in a trade, but in practice it allowed owners to maintain effective control over their former slaves; it was finally abolished in 1838.
Domestic slavery practised by the educated African coastal elites (as well as interior traditional rulers) in Sierra Leone was abolished in 1928. A study found practices of domestic slavery still widespread in rural areas in the 1970s.
There were slaves in mainland France (especially in trade ports such as Nantes or Bordeaux), but the institution was never officially authorized there. The legal case of Jean Boucaux in 1739 clarified the unclear legal position of possible slaves in France and was followed by laws that established registers for slaves in mainland France, who were limited to a three-year stay for visits or to learn a trade. Unregistered "slaves" in France were regarded as free. However, slavery was of vital importance in France's Caribbean possessions, especially Saint-Domingue.
In 1793, influenced by the French Declaration of the Rights of Man of August 1789 and alarmed that the massive slave revolt of August 1791 that had become the Haitian Revolution might ally itself with the British, the French Revolutionary commissioners Sonthonax and Polverel declared general emancipation to reconcile the rebels with France. In Paris, on 4 February 1794, Abbé Grégoire and the Convention ratified this action by officially abolishing slavery in all French territories outside mainland France, freeing all the slaves for both moral and security reasons.
Napoleon restores slavery
Napoleon came to power in 1799 and soon had grandiose plans for the French sugar colonies; to achieve them he had to reintroduce slavery. His major adventure in the Caribbean was the dispatch of 30,000 troops in 1802 to retake Saint-Domingue (Haiti) from the ex-slaves who had revolted under Toussaint L'Ouverture. Napoleon wanted to preserve France's financial benefits from the colony's sugar and coffee crops, and then planned to establish a major base at New Orleans. He therefore reestablished slavery in Haiti and Guadeloupe, where it had been abolished after rebellions. Slaves and black freedmen fought the French for their freedom and independence. Revolutionary ideals played a central role in the fighting, for it was the slaves and their comrades who were fighting for the revolutionary ideals of freedom and equality, while the French troops under General Charles Leclerc fought to restore the order of the ancien régime. The goal of reestablishing slavery, which explicitly contradicted the ideals of the French Revolution, demoralized the French troops, who were also unable to cope with the tropical diseases; most died of yellow fever. Slavery was reimposed in Guadeloupe but not in Haiti, which became an independent black republic. Napoleon's vast colonial dreams for Egypt, India, the Caribbean, Louisiana, and even Australia were all doomed for lack of a fleet capable of matching Britain's Royal Navy. Realizing the fiasco, Napoleon liquidated the Haiti project, brought home the survivors and sold Louisiana to the U.S. in 1803.
In 1688, four German Quakers in Germantown presented a protest against the institution of slavery to their local Quaker Meeting. It was ignored for over 150 years until, rediscovered in 1844, it was popularized by the abolitionist movement. The 1688 petition was the first American public document of its kind to protest slavery, and one of the first public documents to define universal human rights.
The American Colonization Society, the primary vehicle for returning black Americans to Africa, established the colony of Liberia in 1821–22 on the premise that former American slaves would have greater freedom and equality there. The ACS assisted in the movement of thousands of African Americans to Liberia, with its founder Henry Clay stating that, because of "unconquerable prejudice resulting from their color, they never could amalgamate with the free whites of this country. It was desirable, therefore, as it respected them, and the residue of the population of the country, to drain them off". Abraham Lincoln, an enthusiastic supporter of Clay, adopted his position on returning blacks to their own land.
Slaves in the United States who escaped ownership would often make their way to Canada via the "Underground Railroad". The most famous African American abolitionists include former slaves Harriet Tubman, Sojourner Truth and Frederick Douglass. Many more who opposed slavery and worked for abolition were northern whites, such as William Lloyd Garrison and John Brown. Slavery was legally abolished in 1865 by the Thirteenth Amendment to the United States Constitution.
While abolitionists agreed on the evils of slavery, there were differing opinions on what should happen after African Americans were freed. By the time of emancipation, African Americans were native to the United States and did not want to leave; most believed that their labor had made the land theirs as well as that of the whites.
Congress of Vienna
The Declaration of the Powers on the Abolition of the Slave Trade of 8 February 1815 (which also formed Act No. XV of the Final Act of the Congress of Vienna of the same year) included in its first sentence the concept of the "principles of humanity and universal morality" as justification for ending a trade that was "odious in its continuance".
The 1926 Slavery Convention, an initiative of the League of Nations, was a turning point in banning global slavery. Article 4 of the Universal Declaration of Human Rights, adopted in 1948 by the UN General Assembly, explicitly banned slavery. The United Nations' 1956 Supplementary Convention on the Abolition of Slavery was convened to outlaw and ban slavery worldwide, including child slavery. In December 1966, the UN General Assembly adopted the International Covenant on Civil and Political Rights, developed from the Universal Declaration of Human Rights; Article 8 of this international treaty bans slavery. The treaty came into force in March 1976 after it had been ratified by 35 nations; as of November 2003, 104 nations had ratified it. However, illegal forced labour still involves millions of people in the 21st century: an estimated 43% for sexual exploitation and 32% for economic exploitation.
- Davis, David Brion. Slavery and Human Progress (1984).
- Davis, David Brion. The Problem of Slavery in Western Culture (1966)
- Davis, David Brion. Inhuman Bondage: The Rise and Fall of Slavery in the New World (2006)
- Drescher, Seymour. Abolition: A History of Slavery and Antislavery (Cambridge University Press, 2009)
- Finkelman, Paul, and Joseph Miller, eds. Macmillan Encyclopedia of World Slavery (2 vol 1998)
- Hinks, Peter, and John McKivigan, eds. Encyclopedia of Antislavery and Abolition (2 vol. 2007) 795pp; ISBN 978-0-313-33142-8
- McGrath, Elizabeth, and Jean Michel Massing. The Slave in European Art: From Renaissance Trophy to Abolitionist Emblem (London: The Warburg Institute; Turin, 2012)
- Parish, Peter J. Slavery: History and Historians (1989)
- Phillips, William D. Slavery from Roman Times to the Early Atlantic Slave Trade (1984)
- Rodriguez, Junius P. ed. The Historical Encyclopedia of World Slavery (2 vol. 1997)
- Rodriguez, Junius P. ed. Encyclopedia of Slave Resistance and Rebellion (2 vol. 2007)
Greece and Rome
- Bradley, Keith. Slavery and Society at Rome (1994)
- Cuffel, Victoria. "The Classical Greek Concept of Slavery," Journal of the History of Ideas Vol. 27, No. 3 (Jul. – Sep. 1966), pp. 323–342 JSTOR 2708589
- Finley, Moses, ed. Slavery in Classical Antiquity (1960)
- Westermann, William L. The Slave Systems of Greek and Roman Antiquity (1955) 182pp
Africa and Middle East
- Campbell, Gwyn. The Structure of Slavery in Indian Ocean Africa and Asia (Frank Cass, 2004)
- Lovejoy, Paul. Transformations in Slavery: A History of Slavery in Africa (Cambridge UP, 1983)
- Toledano, Ehud R. As If Silent and Absent: Bonds of Enslavement in the Islamic Middle East (Yale University Press, 2007) ISBN 978-0-300-12618-1
Latin America and British Empire
- Blackburn, Robin. The American Crucible: Slavery, Emancipation, and Human Rights (Verso; 2011) 498 pages; on slavery and abolition in the Americas from the 16th to the late 19th centuries.
- Klein, Herbert S. African Slavery in Latin America and the Caribbean (Oxford University Press, 1988)
- Klein, Herbert. The Atlantic Slave Trade (1970)
- Klein, Herbert S. Slavery in Brazil (Cambridge University Press, 2009)
- Morgan, Kenneth. Slavery and the British Empire: From Africa to America (2008)
- Stinchcombe, Arthur L. Sugar Island Slavery in the Age of Enlightenment: The Political Economy of the Caribbean World (Princeton University Press, 1995)
- Walvin, James. Black Ivory: Slavery in the British Empire (2nd ed. 2001)
- Ward, J. R. British West Indian Slavery, 1750–1834 (Oxford U.P. 1988)
United States
- Fogel, Robert. Without Consent or Contract: The Rise and Fall of American Slavery (1989)
- Genovese, Eugene. Roll Jordan, Roll: The World the Slaves Made (1974)
- Miller, Randall M., and John David Smith, eds. Dictionary of Afro-American Slavery (1988)
- Phillips, Ulrich B. American Negro Slavery: A Survey of the Supply, Employment and Control of Negro Labor as Determined by the Plantation Regime (1918)
- Rodriguez, Junius P. ed. Slavery in the United States: A Social, Political, and Historical Encyclopedia (2 vol 2007)
- List of famous slaves
- Notable abolitionists
- William Wilberforce – UK
- Ideals and organisations
- Anti-Slavery Society
- Coalition to Abolish Slavery and Trafficking
- Religious Society of Friends
- Society for effecting the abolition of the slave trade
- United States National Slavery Museum
- Guarani people
- History of Liverpool
- History of slavery in the United States
- Influx of disease in the Caribbean
- Pedro Blanco
- Religion and slavery
- Sambo's Grave
- Slave narrative
- Slave rebellion
- Slave ship
- Slavery at common law
- William Lynch speech
- Seymour Drescher, Abolition: A History of Slavery and Antislavery (2009) pp 4–5
- Paul Finkelman, "Laws" in Finkelman and Miller, eds, Macmillan Encyclopedia of World Slavery (1998) 2:477-8
- "Mesopotamia: The Code of Hammurabi". "e.g. Prologue, "the shepherd of the oppressed and of the slaves" Code of Laws No. 7, "If any one buy from the son or the slave of another man"."[dead link]
- "Slavery". Britannica.
- David P. Forsythe (2009). "Encyclopedia of Human Rights, Volume 1". Oxford University Press. p. 399. ISBN 0195334027
- "Anti-Slavery Society". Anti-slaverysociety.addr.com. Retrieved 4 December 2011.
- "Mauritanian MPs pass slavery law". BBC News. 9 August 2007. Retrieved 8 January 2011.
- "Historical survey > Slave-owning societies". Encyclopædia Britannica.
- Demography, Geography and the Sources of Roman Slaves, by W. V. Harris: The Journal of Roman Studies, 1999
- Victoria Cuffel, "The Classical Greek Concept of Slavery," Journal of the History of Ideas Vol. 27, No. 3 (Jul. – Sep. 1966), pp. 323–342 JSTOR 2708589
- John Byron, Slavery Metaphors in Early Judaism and Pauline Christianity: A Traditio-historical and Exegetical Examination, Mohr Siebeck, 2003, ISBN 3-16-148079-1, p.40
- Roland De Vaux, John McHugh, Ancient Israel: Its Life and Institutions, Wm. B. Eerdmans Publishing, 1997, ISBN 0-8028-4278-X, p.80
- J.M.Roberts, The New Penguin History of the World, p.176–177, 223
- "Sparta – A Military City-State". Ancienthistory.about.com. 7 August 2010. Retrieved 4 December 2011.
- Thomas R. Martin, Ancient Greece: From Prehistoric to Hellenistic Times (Yale UP, 2000) pp. 66, 75–77
- "Ancient Greece". Archived from the original on 31 October 2009.
- "Slavery" The Encyclopedia Americana, 1981, page 19
- "Slavery". Retrieved 18 September 2010.
- Roman Slavery[dead link]
- "BBC – History – Resisting Slavery in Ancient Rome". BBC. 17 February 2011. Retrieved 4 December 2011.
- The Ancient Celts, Barry Cunliffe
- Junius P Rodriguez, Ph.D. (1997). The historical encyclopedia of world slavery. vol 1. A - K. ABC-CLIO. p. 674.
- See Iceland History
- Niels Skyum-Nielsen, "Nordic Slavery in an International Context," Medieval Scandinavia 11 (1978–79) 126-48
- Slave Trade. Jewish Encyclopedia
- Junius P Rodriguez, Ph.D. (1997). The historical encyclopedia of world slavery. vol 1. A - K. ABC-CLIO. p. 565.
- Olivia Remie Constable (1996). "Trade and Traders in Muslim Spain: The Commercial Realignment of the Iberian Peninsula, 900-1500". Cambridge University Press. pp. 203–204. ISBN 0521565030
- "slave", Online Etymology Dictionary, retrieved 26 March 2009
- Merriam-Webster's, retrieved 18 August 2009
- Richard W. Bulliet (2010). The Earth and Its Peoples: A Global History. Cengage Learning. p. 226.
- Clarence-Smith, Willian Gervase (2006). Islam and the Abolition of Slavery. Oxford University Press. pp. 2–5.
- "Christian Slaves, Muslim Masters: White Slavery in the Mediterranean, the Barbary Coast and Italy, 1500–1800". Robert Davis (2004). p.45. ISBN 1-4039-4551-9.
- "The Destruction of Kiev". Tspace.library.utoronto.ca. Retrieved 4 December 2011.
- "William of Rubruck's Account of the Mongols". Depts.washington.edu. Retrieved 4 December 2011.
- "Life in 13th Century Novgorod – Women and Class Structure". Web.archive.org. 26 October 2009. Retrieved 4 December 2011.
- Sras.Org (15 July 2003). "The Effects of the Mongol Empire on Russia". Sras.org. Retrieved 4 December 2011.
- How To Reboot Reality—Chapter 2, Labor[dead link]
- The Full Collection of the Russian Annals, vol.13, SPb, 1904
- "The Tatar Khanate of Crimea – All Empires". Allempires.com. Retrieved 4 December 2011.
- "Supply of Slaves". Coursesa.matrix.msu.edu. Retrieved 4 December 2011.
- Moscow – Historical background[dead link]
- "Historical survey > Slave societies". Britannica.com. Retrieved 4 December 2011.
- James William Brodman. "Ransoming Captives in Crusader Spain: The Order of Merced on the Christian-Islamic Frontier". Libro.uca.edu. Retrieved 4 December 2011.
- Phillips, Jr, William D. (1985). Slavery from Roman times to the Early Transatlantic Trade. Manchester: Manchester University Press. p. 37. ISBN 978-0-7190-1825-1.
- "Famous Battles in History The Turks and Christians at Lepanto". Trivia-library.com. Retrieved 4 December 2011.
- A medical service for slaves in Malta during the rule of the Order of St. John of Jerusalem[dead link]
- "Brief History of the Knights of St. John of Jerusalem". Hmml.org. 23 September 2010. Retrieved 4 December 2011.
- "Historical survey > Ways of ending slavery". Britannica.com. 31 January 1910. Retrieved 4 December 2011.
- Cossacks, Encyclopedia.com
- Allard, Paul (1912). "Slavery and Christianity". Catholic Encyclopedia XIV. New York: Robert Appleton Company. Retrieved 4 February 2006.
- Klein, Herbert. The Atlantic Slave Trade.
- Bales, Kevin. Understanding Global Slavery: A Reader
- Goodman, Joan E. (2001). A Long and Uncertain Journey: The 27,000 Mile Voyage of Vasco Da Gama. Mikaya Press, ISBN 978-0-9650493-7-5.
- de Oliveira Marques, António Henrique R. (1972). History of Portugal. Columbia University Press, ISBN 978-0-231-03159-2, p. 158-160, 362–370.
- Thomas Foster Earle, K. J. P. Lowe "Black Africans in Renaissance Europe" p.157 Google
- David Northrup, "Africa's Discovery of Europe" p.8 (Google)
- José Yamashiro (1989). Chòque luso no Japão dos séculos XVI e XVII. IBRASA. p. 103. ISBN 978-85-348-1068-5. Retrieved 14 July 2010.
- Maria do Rosário Pimente (1995). Viagem ao fundo das consciências: a escravatura na época moderna. Edições Colibri. p. 49. ISBN 978-972-8047-75-7. Retrieved 14 July 2010.
- Julita Scarano. "MIGRAÇÃO SOB CONTRATO: A OPINIÃO DE EÇA DE QUEIROZ". Unesp- Ceru. p. 4. Retrieved 14 July 2010.
- Paul Finkelman, Joseph Calder Miller (1998). Macmillan encyclopedia of world slavery, Volume 2. Macmillan Reference USA, Simon & Schuster Macmillan. p. 737. ISBN 978-0-02-864781-4. Retrieved 14 October 2010.
- David E. Mungello (2009). The great encounter of China and the West, 1500–1800. Rowman & Littlefield. p. 81. ISBN 978-0-7425-5798-7. Retrieved 14 October 2010.
- Alberto da Costa e Silva (2002). A manilha e o libambo: a África e a escravidâo, de 1500 a 1700. Editora Nova Fronteira. p. 849. ISBN 978-85-209-1262-1. Retrieved 14 October 2010.
- Hugh Thomas (1999). The slave trade: the story of the Atlantic slave trade, 1440–1870. Simon and Schuster. p. 119. ISBN 978-0-684-83565-5. Retrieved 14 October 2010.
- Jorge Fonseca (1997). Os escravos em Évora no século XVI. Câmara Municipal de Évora. p. 21. ISBN 978-972-96965-3-4. Retrieved 14 July 2010.
- Peter C. Mancall, Omohundro Institute of Early American History & Culture (2007). The Atlantic world and Virginia, 1550–1624. UNC Press Books. p. 228. ISBN 978-0-8078-5848-6. Retrieved 14 October 2010.
- José Roberto Teixeira Leite (1999). A China no Brasil: influências, marcas, ecos e sobrevivências chinesas na sociedade e na arte brasileiras. Editora da Unicamp. p. 20. ISBN 978-85-268-0436-4. Retrieved 14 July 2010.
- José Yamashiro (1989). Chòque luso no Japão dos séculos XVI e XVII. IBRASA. p. 101. ISBN 978-85-348-1068-5. Retrieved 14 July 2010.
- Maria Suzette Fernandes Dias (2007). Legacies of slavery: comparative perspectives. Cambridge Scholars Publishing. p. 71. ISBN 978-1-84718-111-4. Retrieved 14 July 2010.
- Gary João de Pina-Cabral (2002). Between China and Europe: person, culture and emotion in Macao. Berg Publishers. p. 114. ISBN 978-0-8264-5749-3. Retrieved 14 July 2010.
- Gary João de Pina-Cabral (2002). Between China and Europe: person, culture and emotion in Macao. Berg Publishers. p. 115. ISBN 978-0-8264-5749-3. Retrieved 14 July 2010.
- "U.S. Library of Congress". Countrystudies.us. Retrieved 4 December 2011.
- Health in slavery[dead link]
- "CIA Factbook: Haiti". Cia.gov. Retrieved 4 December 2011.
- Johannes Postma, The Dutch in the Atlantic Slave Trade, 1600–1815 (2008)
- P. C. Emmer, Chris Emery, "The Dutch Slave Trade, 1500–1850" (2006) p. 3
- Rik Van Welie, "Slave Trading and Slavery in the Dutch Colonial Empire: A Global Comparison," NWIG: New West Indian Guide / Nieuwe West-Indische Gids, 2008, Vol. 82 Issue 1/2, pp 47–96 tables 2 and 3
- Vink Markus, "'The World's Oldest Trade': Dutch Slavery and Slave Trade in the Indian Ocean in the Seventeenth Century," Journal of World History June 2003 24 Dec 2010.
- Allen J. Frantzen and Douglas Moffat, eds. The work of work: servitude, slavery, and labor in Medieval England (1994)
- Rees Davies, British Slaves on the Barbary Coast, BBC, 1 July 2003
- Konstam, Angus (2008). Piracy: the complete history. Osprey Publishing. p. 91. ISBN 978-1-84603-240-0. Retrieved 15 April 2011.
- de Bruxelles, Simon (28 February 2007). "Pirates who got away with it". Study of sails on pirate ships (London). Retrieved 25 November 2007.
- "Europe: a History". Norman Davis. Retrieved 25 November 2007.
- This article incorporates text from a publication now in the public domain: Chisholm, Hugh, ed. (1911). "Barbary Pirates". Encyclopædia Britannica (11th ed.). Cambridge University Press.
- Digital History, Steven Mintz. "Was slavery the engine of economic growth?". Digitalhistory.uh.edu. Retrieved 4 December 2011.
- Rhodes, Nick (2003). William Cowper: Selected Poems. p.84. Routledge, 2003
- Sailing against slavery. By Jo Loosemore BBC
- "The West African Squadron and slave trade". Pdavis.nl. Retrieved 4 December 2011.
- Anti-Slavery International UNESCO. Retrieved 15 October 2011
- John Andrew, The Hanging of Arthur HodgeThe Hanging of Arthur Hodge, Xlibris, 2000, ISBN 978-0-7388-1930-3. The assertion is probably correct; there appear to be no other records of any British slave owners being executed for holding slaves, and, given the excitement which the Hodge trial excited, it seems improbable that another execution could have occurred without attracting attention. Slavery itself as an institution in the British West Indies only continued for another 23 years after Hodge's death.
- Vernon Pickering, A Concise History of the British Virgin Islands, ISBN 978-0-934139-05-2, page 48
- Records indicate at least two earlier incidents. On 23 November 1739, in Williamsburg, Virginia, two white men, Charles Quin and David White, were hanged for the murder of another white man's black slave; and on 21 April 1775, the Fredericksburg newspaper, the Virginia Gazette reported that a white man William Pitman had been hanged for the murder of his own black slave. Blacks in Colonial America, p101, Oscar Reiss, McFarland & Company, 1997; Virginia Gazette, 21 April 1775, University of Mary Washington Department of Historic Preservation archives[dead link]
- "The Last Galleys". Uh.edu. 1 August 2004. Retrieved 4 December 2011.
- "Huguenots and the Galleys". Manakin.addr.com. 14 June 2011. Retrieved 4 December 2011.
- "French galley slaves of the ancien régime". Milism.net. Retrieved 4 December 2011.
- "The Great Siege of 1565". Sanandrea.edu.mt. Retrieved 4 December 2011.
- Junius A. Rodriguez, ed., The Historical Encyclopedia of World Slavery (1997) 2:659
- Paul E. Lovejoy, Slavery on the frontiers of Islam – (2004) p. 27
- "Roma Celebrate 150 years of Freedom 2005 Romania". Roconsulboston.com. 21 February 2006. Retrieved 4 December 2011.
- The Historical encyclopedia of world slavery, Volume 1 By Junius P. Rodriguez. Books.google.co.uk. Retrieved 4 December 2011.
- "A Brief History of Dessalines from 1825 Missionary Journal". Webster.edu. Retrieved 4 December 2011.
- Jeremy Popkin, You Are All Free: The Haitian Revolution and the Abolition of Slavery(Cambridge University Press; 2010)
- Yale Law School Avalon Project retrieved 8 January 2011
- "German Firms That Used Slave or Forced Labor During the Nazi Era". Jewish Virtual Library. 27 January 2000. Retrieved 19 September 2010.
- United States Holocaust Museum retrieved 8 January 2011
- Robert Conquest in "Victims of Stalinism: A Comment." Europe-Asia Studies, Vol. 49, No. 7 (Nov., 1997), pp. 1317-1319 states: "We are all inclined to accept the Zemskov totals (even if not as complete) with their 14 million intake to Gulag 'camps' alone, to which must be added 4-5 million going to Gulag 'colonies', to say nothing of the 3.5 million already in, or sent to, 'labor settlements'. However taken, these are surely 'high' figures."
- "Slavery In Arabia". "Owen 'Alik Shahadah".
- "Welcome to Encyclopædia Britannica's Guide to Black History". Britannica.com. Retrieved 4 December 2011.
- Slow Death for Slavery – Cambridge University Press[dead link]
- Digital History, Steven Mintz. "Digital History Slavery Fact Sheets". Digitalhistory.uh.edu. Retrieved 4 December 2011.
- Tanzania – Stone Town of Zanzibar[dead link]
- "18th and Early 19th centuries. The Encyclopedia of World History". Bartelby.com. Retrieved 4 December 2011.
- Fulani slave-raids[dead link]
- "Central African Republic: History". Infoplease.com. 13 August 1960. Retrieved 4 December 2011.
- "Twentieth Century Solutions of the Abolition of Slavery" (PDF). Retrieved 4 December 2011.
- "CJO – Abstract – Trading in slaves in Ethiopia, 1897–1938". Journals.cambridge.org. 8 September 2000. Retrieved 4 December 2011.
- "Ethiopia" (PDF). Retrieved 4 December 2011.
- Chronology of slavery
- Slow Death for Slavery: The Course of Abolition in Northern Nigeria, 1897–1936 (review), Project MUSE – Journal of World History
- The end of slavery, BBC World Service | The Story of Africa
- "The impact of the slave trade on Africa". Mondediplo.com. 22 March 1998. Retrieved 4 December 2011.
- David Livingstone; Christian History Institute[dead link]
- The blood of a nation of Slaves in Stone Town[dead link]
- Mwachiro, Kevin (30 March 2007). "BBC Remembering East African slave raids". BBC News. Retrieved 4 December 2011.
- "Swahili Coast". .nationalgeographic.com. 17 October 2002. Retrieved 4 December 2011.
- "Central African Republic: Early history". Britannica.com. Retrieved 4 December 2011.
- "Civil War in the Sudan: Resources or Religion?". American.edu. Retrieved 4 December 2011.
- Slave trade in the Sudan in the nineteenth century and its suppression in the years 1877–80.[dead link]
- The Great Slave Empires Of Africa[dead link]
- "The Transatlantic Slave Trade". Metmuseum.org. Retrieved 4 December 2011.
- Baepler, B. "White Slaves, African Masters 1st Edition." White Slaves, African Masters 1st Edition by Baepler. University of Chicago Press, n.d. Web. 07 Jan. 2013. Page 5
- Tunde Obadina. "Slave trade: a root of contemporary African Crisis". Africa Business Information Services. Retrieved 19 September 2010.
- "African Holocaust Special". African Holocaust Society. Retrieved 4 January 2007.
- Souljah (22 February 2007). "Myth Busting: "Africans Sold Their Own Into Slavery and Are Just As Guilty as Whites..."". Retrieved 18 September 2010.
- African Political Ethics and the Slave Trade[dead link]
- "Museum Theme: The Kingdom of Dahomey". Museeouidah.org. Retrieved 4 December 2011.
- "Dahomey (historical kingdom, Africa)". Britannica.com. Retrieved 4 December 2011.
- "Benin seeks forgiveness for role in slave trade". Finalcall.com. Retrieved 4 December 2011.
- "Le Mali précolonial". Histoire-afrique.org. Retrieved 4 December 2011.
- "The Story of Africa". BBC. Retrieved 4 December 2011.
- "West is master of slave trade guilt". Theaustralian.news.com.au. Retrieved 4 December 2011.
- "African Slave Owners". BBC. Retrieved 4 December 2011.
- Adam Hochschild, King Leopold's Ghost
- War and Genocide in Sudan[dead link]
- Coe, Erin. "The Lost Children of Sudan". Journalism.nyu.edu. Retrieved 4 December 2011.
- "The Abolition season on BBC World Service". BBC. Retrieved 4 December 2011.
- "Mauritanian MPs pass slavery law". BBC News. 9 August 2007. Retrieved 4 December 2011.
- "Maya Society". Library.umaine.edu. Retrieved 4 December 2011.
- "human sacrifice – Britannica Concise Encyclopedia". Britannica.com. Retrieved 4 December 2011.
- Evidence May Back Human Sacrifice Claims |LiveScience[dead link]
- "Bolivia – Ethnic Groups". Countrystudies.us. Retrieved 4 December 2011.
- "Slavery in the New World". Britannica.com. Retrieved 4 December 2011.
- Digital History African American Voices[dead link]
- Haida Warfare[dead link]
- Herbert S. Klein and Francisco Vidal Luna, Slavery in Brazil (Cambridge University Press, 2010)
- "Rebellions in Bahia, 1798-l838. Culture of slavery". Isc.temple.edu. Retrieved 4 December 2011.
- "Bandeira". Britannica.com. Retrieved 4 December 2011.
- "Bandeira – Encyclopædia Britannica". Concise.britannica.com. Retrieved 4 December 2011.
- "Bandeirantes". V-brazil.com. Retrieved 4 December 2011.
- (Mike Davis, Late Victorian Holocausts, 88–90)
- Michael Edward Stanfield, Red Rubber, Bleeding Trees: Violence, Slavery, and Empire in Northwest Amazonia, 1850–1933
- Mark Edelman, "A Central American Genocide: Rubber, Slavery, Nationalism, and the Destruction of the Guatusos-Malekus," Comparative Studies in Society and History (1998), 40: 356–390.
- "Involuntary Immigrants". New York Times. 27 August 1995. Retrieved 4 December 2011.
- "Slavery and the Haitian Revolution". Chnm.gmu.edu. Retrieved 4 December 2011.
- "Haiti, 1789 to 1806".
- Dryden, John. 1992 "Pas de Six Ans!" In: Seven Slaves & Slavery: Trinidad 1777–1838, by Anthony de Verteuil, Port of Spain, pp. 371–379.
- The Meeting Place. V. O'Malley. Auckland University Press.
- Moriori. Michael King. Penguin. 2003.
- Niruena (18 September 2005). "Lucas Vasquez de Ayllon". Everything2.
- Vaughn, Alden T. "Blacks in Virginia: A Note on the First Decade" in William and Mary Quarterly 29 (1972) no. 3, p. 474
- McElrath, Jessica, Timeline of Slavery in America-African American History, About.com. Retrieved 6 December 2006.
- "(National Archives Link)". Nationalarchives.gov.uk. Retrieved 4 December 2011.
- Dictionary of Afro-American slavery By Randall M. Miller, John David Smith. Greenwood Publishing Group, 1997. p.471.
- Foner, Eric. "Forgotten step towards freedom," New York Times. 30 December 2007.
- "Africans in America" – PBS Series – Part 4 (2007)
- Leonard L. Richards, The Slave Power: The Free North and Southern Domination, 1780–1860 (2000)
- Kathleen Collins, "The Scourged Back," History of Photography 9 (January 1985): 43-45.
- Gary A. Warner, Journey to freedom, Daily Press, 24 June 2005
- "Black Slaveowners". Americancivilwar.com. Retrieved 4 December 2011.
- Southern History[dead link]
- "Introduction – Social Aspects of the Civil War". Itd.nps.gov. Retrieved 4 December 2011.
- James McPherson, Drawn with the Sword, page 15
- mythichawaii.com (23 October 2006). "Kapu System and Caste System of Ancient Hawai'i". Mythichawaii.com. Retrieved 4 December 2011.
- Maori Prisoners and Slaves in the Nineteenth Century. JSTOR 480764.
- Clark, Ross (1994). Moriori and Maori: The Linguistic Evidence. In Sutton, Douglas G. (Ed.) (1994), The Origins of the First New Zealanders. Auckland: Auckland University Press. pp. 123–135.
- Solomon, Māui; Denise Davis (updated 9 June 2006). Moriori. Te Ara – the Encyclopedia of New Zealand.
- Howe, Kerry (updated 9-June-2006). "Ideas of Māori origins". Te Ara – the Encyclopedia of New Zealand.
- King, Michael (2000; original edition 1989). Moriori: A People Rediscovered. Viking. ISBN 978-0-14-010391-5.
- "Moriori – The impact of new arrivals – Te Ara Encyclopedia of New Zealand". Teara.govt.nz. 4 March 2009. Retrieved 4 December 2011.
- "Chatham Islands". New Zealand A to Z. Retrieved 4 December 2011.
- Seymour Drescher, Abolition: A History of Slavery and Antislavery (Cambridge University Press, 2009)
- (1772) 20 State Tr 1; (1772) Lofft 1
- Paul E. Lovejoy: 'The Volume of the Atlantic Slave Trade: A Synthesis.' The Journal of African History, Vol. 23, No. 4 (1982).
- "This Day at Law: Slavery abolished in the British Empire". Jurist.law.pitt.edu. 1 August 2009. Retrieved 4 December 2011.
- "Indian Legislation". Commonlii.org. Retrieved 4 December 2011.
- The Committee Office, House of Commons (6 March 2006). "House of Commons – International Development – Memoranda". Publications.parliament.uk. Retrieved 4 December 2011.
- "Response The 1833 Abolition of Slavery Act didn't end the vile trade". The Guardian. UK. 25 January 2007. Retrieved 4 December 2011.
- Bordeaux faces its slave history[dead link]
- Philippe R. Girard, "Liberte, Egalite, Esclavage: French Revolutionary Ideals and the Failure of the Leclerc Expedition to Saint-Domingue," French Colonial History (2005) 6#1 pp 55-77.
- Steven Englund, Napoleon: A Political Life (2004) p 259.
- Background on conflict in Liberia
- Maggie Montesinos Sale (1997). The slumbering volcano: American slave ship revolts and the production of rebellious masculinity. p.264. Duke University Press, 1997. ISBN 978-0-8223-1992-4
- Robin D. G. Kelley and Earl Lewis, To Make Our World Anew: Volume I (2005) p. 255
- The Parliamentary Debates from the Year 1803 to the Present Time, Published by s.n., 1816 Volume 32. p. 200
- David P. Forsythe, ed. (2009). Encyclopedia of human rights. Oxford University Press. pp. 494–502.
- Mémoire St Barth : Saint-Barthelemy's history (slave trade, slavery, abolitions)
- UN.GIFT – Global Initiative to Fight Human Trafficking
- Slave Trade Archives Project, UNESCO
- Parliament & The British Slave Trade 1600 – 1807
- Digital History – Slavery Facts & Myths
- Muslim Slave System in Medieval India
- Arab Slave Trade
- Scotland and the Abolition of the Slave Trade – schools resource
- African Holocaust Society – Anti-slavery and self-determination working to educate via media
- The Forgotten Holocaust: The Eastern Slave Trade
- Teaching resources about Slavery and Abolition on blackhistory4schools.com
- "What really ended slavery?" Robin Blackburn, author of a two-volume history of the slave trade, interviewed by International Socialism
- David Brion Davis, "American and British Slave Trade Abolition in Perspective", Southern Spaces, 4 February 2009.
- The Slave Next Door: Human Trafficking and Slavery in America Today – video report by Democracy Now!
- Archives on slavery at the University of London
- Slavery Museum. Great Britain. | http://en.mobile.wikipedia.org/wiki/History_of_slavery | 13 |
15 | THE BEGINNING OF THE COLONY OF BERBICE
In 1627, Abraham van Pere, a Dutch trader, received permission from the Dutch West India Company to start a colony on the Berbice River. Shortly after, he sent 40 men and 20 boys to settle at Nassau, about 50 miles upriver. Van Pere had a good knowledge of the territory since he had apparently been trading with the Amerindians of the area for a few years before 1627. He later applied his trading skills when he was contracted by the Zeeland Chamber to supply goods from Europe to the Dutch settlements in Essequibo.
At Nassau, where a fort was built, the settlers planted crops and traded with Amerindians. African slaves were introduced soon after the settlement was established to cultivate sugar and cotton. The situation was very peaceful until 1665 when the settlement was attacked by an English privateer. However, the colonists put up a strong defence and the English left after causing some damage to the settlement. This was a period of war between the English and the Dutch, and an English expedition led by Major John Scott attacked and seized Dutch settlements in Essequibo.
Meanwhile, the Berbice administration attempted to expand the size of the colony by establishing a trading post as far west as the Demerara River. At that time the Demerara area was unoccupied, but the West India Company objected to the presence of the trading post, claiming that the Demerara River fell under its jurisdiction. The trading post was therefore moved in 1671 to the Abary River, which eventually became the boundary between Berbice and Demerara.
During the late 1680s, when yet another European war was being waged, the French privateer Jean Baptiste du Casse attacked Suriname and Berbice in 1689. His attack on Suriname was a failure, but he did some damage to the Berbice settlements; Commander de Feer of Berbice had to pay a ransom of 6,000 guilders before du Casse would withdraw.
Berbice remained at peace until 1712 when the infamous French buccaneer, Jacques Cassard, with official French support, sent his men in warships to attack the colony. This was the period of the War of the Spanish Succession when the English and Dutch were allied against the French. The ships, commanded by Baron de Mouans, sailed up the Berbice River and attacked Fort Nassau. The Dutch commander, de Waterman, was forced to surrender the colony.
De Mouans demanded a ransom of 10,000 guilders for the private estates and 300,000 guilders for the fort and the estates of the van Pere family. While the private planters were able to raise the sum demanded of them, the Commander could only manage to gather 118,000 guilders on the van Peres' account. De Mouans grudgingly accepted this sum and a promissory note for the remainder. To ensure that this note was honoured, he left with two members of the Berbice Council as hostages. The buccaneers also took with them 259 of the best African slaves.
The van Pere family refused to pay the balance of the ransom to de Mouans, but after two years of negotiations between the French company that had sponsored the buccaneers and Van Hoorn and Company, the financial backers of the van Pere family and other Berbice planters, the Dutch firm settled the issue by paying 108,000 guilders for the colony. The van Pere family subscribed a quarter of this sum, thus maintaining a financial interest in the colony.
Following this raid, Berbice suffered an economic decline. While the payment of the ransom was being negotiated between 1712 and 1714, the French firm that had financed the buccaneers took away two shiploads of sugar. After the ransom was paid, there was great need to repair the damage done during the raid and to improve sugar production, but there was a severe shortage of slaves. The Dutch West India Company refused to permit Van Hoorn and Company, the financial backers of the Berbice colony, to transport slaves in their own ships and insisted that only the Company's ships could do so. Since an advance payment of two-fifths of the price of each slave had to be made to the Company, slaves remained hard to come by, as the Berbice planters could not raise the credit required.
While the lack of slaves slowed progress, the shortage of capital for investment posed an equally severe drawback. Since no profits were forthcoming, the Commander, de Waterman, was dismissed, but his successor Anthony Tierens could do no better. The directors of Van Hoorn and Company then decided to raise capital by forming a new company with the express purpose of raising 3,200,000 guilders. This new company, the Berbice Association, was launched in 1720, but it could only start with a working capital of one million guilders.
Anthony Tierens, the Commander, now came under the supervision of the Berbice Association. He was ordered to establish new plantations and to introduce coffee cultivation. By 1722, he was able to establish on the Berbice River the plantations of Johanna, Cornelia, Jacoba, Savonette, Hardenbroek, Dageraad, Hogelande, Elizabeth and Debora. | http://www.guyana.org/features/guyanastory/chapter12.html | 13 |
74 | Atlantic slave trade
The Atlantic slave trade or transatlantic slave trade took place across the Atlantic Ocean from the 16th through to the 19th centuries. The vast majority of slaves transported to the New World were Africans from the central and western parts of the continent, sold by Africans to European slave traders who then transported them to North and South America. The numbers were so great that Africans who came by way of the slave trade became the most numerous Old-World immigrants in both North and South America before the late eighteenth century. The South Atlantic economic system centered on making goods and clothing to sell in Europe and increasing the numbers of African slaves brought to the New World. This was crucial to those European countries which, in the late seventeenth and eighteenth centuries, were vying with each other to create overseas empires.
The first Africans imported to the English colonies were also called “indentured servants” or “apprentices for life”. By the middle of the seventeenth century, they and their offspring were legally the property of their owners. As property, they were merchandise or units of labor, and were sold at markets with other goods and services.
The Portuguese were the first to engage in the New World slave trade, and others soon followed. Slaves were considered cargo by the ship owners, to be transported to the Americas as quickly and cheaply as possible, there to be sold to labor in coffee, tobacco, cocoa, cotton and sugar plantations, gold and silver mines, rice fields, construction industry, cutting timber for ships, and as house servants.
The Atlantic slave traders, ordered by trade volume, were: the Portuguese, the British, the French, the Spanish, the Dutch, and the Americans. They had established outposts on the African coast where they purchased slaves from local African tribal leaders. Current estimates are that about 12 million were shipped across the Atlantic, although the actual number purchased by the traders is considerably higher.
The slave trade is sometimes called the Maafa by African and African-American scholars, meaning "great disaster" in Swahili. Some scholars, such as Marimba Ani and Maulana Karenga, use the terms "African Holocaust" or "Holocaust of Enslavement".
Atlantic travel
The Atlantic slave trade arose after trade contacts were first made between the continents of the "Old World" (Europe, Africa, and Asia) and those of the "New World" (North America and South America). For centuries, tidal currents had made ocean travel particularly difficult and risky for the boats then available, and as such there had been very little, if any, naval contact between the peoples living on these continents. In the 15th century, however, new European developments in seafaring technologies meant that ships were better equipped to deal with the problem of tidal currents and could begin traversing the Atlantic Ocean. Between 1600 and 1800, approximately 300,000 sailors engaged in the slave trade visited West Africa. In doing so, they came into contact with societies living along the west African coast and in the Americas which they had never previously encountered. Historian Pierre Chaunu termed the consequences of European navigation "disenclavement": it marked an end of isolation for some societies and an increase in inter-societal contact for most others.
Historian John Thornton noted, "A number of technical and geographical factors combined to make Europeans the most likely people to explore the Atlantic and develop its commerce." He identified these as being the drive to find new and profitable commercial opportunities outside Europe as well as the desire to create an alternative trade network to that controlled by the Muslim Empire of the Middle East, which was viewed as a commercial, political and religious threat to European Christendom. In particular, European traders wanted to trade for gold, which could be found in western Africa, and also to find a naval route to "the Indies" (India), where they could trade for luxury goods such as spices without having to obtain these items from Middle Eastern Islamic traders.
Although the initial Atlantic naval explorations were performed purely by Europeans, members of many European nationalities were involved, including sailors from the Iberian kingdoms, the Italian kingdoms, England, France and the Netherlands. This diversity led Thornton to describe the initial "exploration of the Atlantic" as "a truly international exercise, even if many of the dramatic discoveries [such as those of Christopher Columbus and Ferdinand Magellan] were made under the sponsorship of the Iberian monarchs", something that would give rise to the later myth that "the Iberians were the sole leaders of the exploration".
African slavery
Slavery was practiced in some parts of Africa, Europe, Asia and the Americas before the beginning of the Atlantic slave trade. There is evidence that enslaved people from some African states were exported to other states in Africa, Europe and Asia prior to the European colonization of the Americas. The African slave trade provided a large number of slaves to Europeans.
The Atlantic slave trade was not the only slave trade from Africa, although it was the largest in volume and intensity. As Elikia M’bokolo wrote in Le Monde diplomatique: "The African continent was bled of its human resources via all possible routes. Across the Sahara, through the Red Sea, from the Indian Ocean ports and across the Atlantic. At least ten centuries of slavery for the benefit of the Muslim countries (from the ninth to the nineteenth).... Four million enslaved people exported via the Red Sea, another four million through the Swahili ports of the Indian Ocean, perhaps as many as nine million along the trans-Saharan caravan route, and eleven to twenty million (depending on the author) across the Atlantic Ocean."
According to John K. Thornton, Europeans usually bought enslaved people who were captured in endemic warfare between African states. Some Africans had made a business of capturing members of neighboring ethnic groups or war captives and selling them. People from the region around the Niger River were transported from these markets to the coast and sold at European trading ports in exchange for muskets (matchlock between 1540 and 1606, flintlock thereafter) and manufactured goods such as cloth or alcohol. However, the European demand for slaves provided a large new market for the already existing trade. Further, while those held in slavery in their own region of Africa might hope to escape, those shipped away had little chance of returning to Africa.
European colonization and slavery in West Africa
Upon discovering new lands through their naval explorations, European colonisers soon began to migrate to and settle in lands outside their native continent. Off the coast of Africa, European migrants, under the direction of the Kingdom of Castile, invaded and colonised the Canary Islands during the 15th century, where they converted much of the land to the production of wine and sugar. Along with this, they also captured native Canary Islanders, the Guanches, to use as slaves both on the islands and across the Christian Mediterranean.
As historian John Thornton remarked, "the actual motivation for European expansion and for navigational breakthroughs was little more than to exploit the opportunity for immediate profits made by raiding and the seizure or purchase of trade commodities." Using the Canary Islands as a naval base, European, at the time primarily Portuguese, traders began to move their activities down the western coast of Africa, performing raids in which slaves would be captured to be later sold in the Mediterranean. Although initially successful in this venture, "it was not long before African naval forces were alerted to the new dangers, and the Portuguese [raiding] ships began to meet strong and effective resistance", with the crews of several of them being killed by African sailors, whose boats were better suited to traversing the west African coasts and river systems.
By 1494, the Portuguese king had entered agreements with the rulers of several West African states that would allow trade between their respective peoples, enabling the Portuguese to "tap into" the "well-developed commercial economy in Africa... without engaging in hostilities." "[P]eaceful trade became the rule all along the African coast", although there were rare exceptions when acts of aggression led to violence; for instance, Portuguese traders attempted to conquer the Bissagos Islands in 1535, and in 1571 Portugal, supported by the Kingdom of Kongo, captured the south-western region of Angola in order to secure its threatened economic interest in the area. Although Kongo later joined a coalition to force the Portuguese out in 1591, Portugal had secured a foothold on the continent that it would continue to occupy until the 20th century. Despite these occasional incidents of violence between African and European forces, many African states ensured that trade went on on their own terms, imposing customs duties on foreign ships; in one case, in 1525, the Kongolese king, Afonso I, seized a French vessel and its crew for illegally trading on his coast.
Historians have widely debated the nature of the relationship between these African kingdoms and the European traders. The Guyanese historian Walter Rodney (1972) has argued that it was an unequal relationship, with Africans being forced into a "colonial" trade with the more economically developed Europeans, exchanging raw materials and human resources (i.e. slaves) for manufactured goods. He argued that it was this economic trade agreement dating back to the 16th century that led to Africa being underdeveloped in his own time. These ideas were supported by other historians, including Ralph Austen (1987). This idea of an unequal relationship was however contested by John Thornton (1998), who argued that "the Atlantic slave trade was not nearly as critical to the African economy as these scholars believed" and that "African manufacturing [at this period] was more than capable of handling competition from preindustrial Europe." However, Anne Bailey directly contests Thornton and states:
To see Africans as partners implies equal terms and equal influence on the global and intercontinental processes of the trade. Africans had great influence on the continent itself, but they had no direct influence on the engines behind the trade in the capital firms, the shipping and insurance companies of Europe and America, or the plantation systems in Americas. They did not wield any influence on the building manufacturing centers of the West.
European colonization and slavery in the Americas
However, it was not just along the west African coast but also in the Americas that Europeans began searching for commercial viability. European Christendom first became aware of the existence of the Americas after they were discovered by an expedition led by Christopher Columbus in 1492. As in Africa, however, the indigenous peoples widely resisted European incursions into their territory during the first few centuries of contact, and they were somewhat effective in doing so. In the Caribbean, Spanish settlers were only able to secure control over the larger islands by allying themselves with certain Native American tribal groups in their conflicts with neighbouring societies. Groups such as the Kalinago of the Lesser Antilles and the Carib and Arawak people of (what is now) Venezuela launched effective counterattacks against Spanish bases in the Caribbean, with native-built boats, smaller and better suited to the seas around the islands, succeeding in a number of cases in defeating the Spanish ships.
In the 15th and 16th centuries, colonists from Europe also settled the otherwise uninhabited Atlantic islands such as Madeira and the Azores, where, with no slaves to sell, exporting products became the main industry.
16th, 17th and 18th centuries
The Atlantic slave trade is customarily divided into two eras, known as the First and Second Atlantic Systems.
The First Atlantic system was the trade of enslaved Africans to, primarily, the South American colonies of the Portuguese and Spanish empires; it accounted for only slightly more than 3% of the entire Atlantic slave trade. It started on a significant scale in about 1502 and lasted until 1580, when Portugal was temporarily united with Spain. While the Portuguese traded enslaved people themselves, the Spanish empire relied on the asiento system, awarding merchants (mostly from other countries) the licence to trade enslaved people to its colonies. During the first Atlantic system most of these traders were Portuguese, giving them a near-monopoly during the era, although some Dutch, English, and French traders also participated in the slave trade. After the union, Portugal came under Spanish legislation that prohibited it from directly engaging in the slave trade as a carrier, and it became a target for the traditional enemies of Spain, losing a large share of the trade to the Dutch, British and French.
The Second Atlantic system was the trade of enslaved Africans by mostly British, Portuguese, French and Dutch traders. The main destinations of this phase were the Caribbean colonies and Brazil, as European nations built up economically slave-dependent colonies in the New World. Only slightly more than 3% of the enslaved people exported were traded between 1450 and 1600; 16% were traded in the 17th century.
It is estimated that more than half of the slave trade took place during the 18th century, with the British, Portuguese and French being the main carriers of nine out of ten slaves abducted from Africa. The British were the biggest transporters of slaves across the Atlantic during the 18th century.
The 19th century saw a reduction in the slave trade, which nevertheless accounted for 28.5% of the total Atlantic slave trade.
Triangular trade
European colonists initially practiced systems of both bonded labour and "Indian" slavery, enslaving many of the natives of the New World. For a variety of reasons, Africans replaced Native Americans as the main population of enslaved people in the Americas. In some cases, such as on some of the Caribbean Islands, warfare and diseases such as smallpox eliminated the natives completely. In other cases, such as in South Carolina, Virginia, and New England, the need for alliances with native tribes coupled with the availability of enslaved Africans at affordable prices (beginning in the early 18th century for these colonies) resulted in a shift away from Native American slavery.
A burial ground in Campeche, Mexico, suggests slaves had been brought there not long after Hernán Cortés completed the subjugation of Aztec and Mayan Mexico. The graveyard had been in use from approximately 1550 to the late 17th century.
The first side of the triangle was the export of goods from Europe to Africa. A number of African kings and merchants took part in the trading of enslaved people from 1440 to about 1833. For each captive, the African rulers would receive a variety of goods from Europe, including guns, ammunition and other factory-made goods. The second leg of the triangle exported enslaved Africans across the Atlantic Ocean to the Americas and the Caribbean Islands. The third and final part of the triangle was the return of goods to Europe from the Americas. The goods were the products of slave-labour plantations and included cotton, sugar, tobacco, molasses and rum.
However, Brazil (the main importer of slaves) manufactured these goods in South America and directly traded with African ports, thus not taking part in a triangular trade.
Labor and slavery
The Atlantic slave trade was the result of, among other things, a labor shortage, created in turn by the desire of European colonists to exploit New World land and resources for capital profits. Native peoples were at first utilized as slave labor by Europeans, until large numbers died from overwork and Old World diseases. Alternative sources of labor, such as indentured servitude, failed to provide a sufficient workforce. Many crops could not be sold for profit, or even grown, in Europe. Exporting crops and goods from the New World to Europe often proved to be more profitable than producing them on the European mainland. A vast amount of labor was needed to create and sustain plantations that required intensive labor to grow, harvest, and process prized tropical crops. Western Africa (part of which became known as "the Slave Coast"), and later Central Africa, became the source for enslaved people to meet the demand for labor.
The basic reason for the constant shortage of labor was that, with large amounts of cheap land available and many landowners searching for workers, free European immigrants were able to become landowners themselves after a relatively short time, thus increasing the need for workers.
Thomas Jefferson attributed the use of slave labor in part to the climate, and the consequent idle leisure afforded by slave labor: "For in a warm climate, no man will labour for himself who can make another labour for him. This is so true, that of the proprietors of slaves a very small proportion indeed are ever seen to labour."
African participation in the slave trade
Africans themselves played a role in the slave trade, selling their captives or prisoners of war to European buyers. Selling captives or prisoners was common practice among Africans and Arabs during that era, just as it had been in ancient Europe. The prisoners and captives who were sold were usually from neighboring or enemy ethnic groups; because these captives were not considered part of the ethnic group or "tribe", African kings held no particular loyalty to them. Sometimes criminals would be sold so that they could no longer commit crimes in that area. Most other slaves were obtained through kidnappings, or through raids conducted at gunpoint in joint ventures with the Europeans. But some African kings refused to sell any of their captives or criminals; King Jaja of Opobo refused outright to do business with the slavers. However, Shahadah notes that with the rise of a large commercial slave trade, driven by European demand, enslaving enemies became less a consequence of war and more and more a reason to go to war.
European participation in the slave trade
Although Europeans were the market for slaves, they rarely entered the interior of Africa, due to fear of disease and fierce African resistance. The enslaved people would be brought to coastal outposts, where they would be traded for goods. Enslavement became a major by-product of internal war in Africa as nation states expanded through military conflicts, in many cases through the deliberate sponsorship of Western European nations that benefited from the trade. During such periods of rapid state formation or expansion (Asante and Dahomey being good examples), slavery formed an important element of political life which the Europeans exploited: as Queen Sara's plea to the Portuguese courts revealed, the system had become "sell to the Europeans or be sold to the Europeans". In Africa, convicted criminals could be punished by enslavement, a punishment which became more prevalent as slavery became more lucrative. Since most of these nations did not have a prison system, convicts were often sold or used in the scattered local domestic slave market.
The Atlantic slave trade peaked in the last two decades of the 18th century, during and following the Kongo Civil War. Wars amongst tiny states along the Niger River's Igbo-inhabited region and the accompanying banditry also spiked in this period. Another reason for surplus supply of enslaved people was major warfare conducted by expanding states such as the kingdom of Dahomey, the Oyo Empire and Asante Empire.
The majority of European conquests, raids and enslavements occurred toward the end of or after the transatlantic slave trade. One exception is the conquest of Ndongo in present-day Angola, where Ndongo's slaves, warriors, free citizens and even nobility were taken into slavery by the Portuguese conquerors after the fall of the state.
Slavery in Africa and the New World contrasted
Forms of slavery varied both in Africa and in the New World. In general, slavery in Africa was not heritable – that is, the children of slaves were free – while in the Americas slaves' children were legally enslaved at birth. This was connected to another distinction: slavery in West Africa was not reserved for racial or religious minorities, as it was in European colonies, although the case was otherwise in places such as Somalia, where Bantus were taken as slaves for the ethnic Somalis.
The treatment of slaves in Africa was more variable than in the Americas. At one extreme, the kings of Dahomey routinely slaughtered slaves by the hundreds or thousands in sacrificial rituals, and the use of slaves as human sacrifices was also known in Cameroon. At the other, slaves in many places were treated as part of the family, "adopted children," with significant rights including the right to marry without their masters' permission. Scottish explorer Mungo Park wrote: "The slaves in Africa, I suppose, are nearly in the proportion of three to one to the freemen. They claim no reward for their services except food and clothing, and are treated with kindness or severity, according to the good or bad disposition of their masters.... The slaves which are thus brought from the interior may be divided into two distinct classes - first, such as were slaves from their birth, having been born of enslaved mothers; secondly, such as were born free, but who afterwards, by whatever means, became slaves. Those of the first description are by far the most numerous ...." In the Americas, slaves were denied the right to marry freely, and even humane masters did not accept them as equal members of the family; however, while grisly executions of slaves convicted of revolt or other offenses were commonplace in the Americas, New World slaves were not subject to arbitrary ritual sacrifice. New World slaves were useful and costly enough that owners had an incentive to maintain and care for them, but they remained the property of their owners.
Slave market regions and participation
There were eight principal areas used by Europeans to buy and ship slaves to the Western Hemisphere. The number of enslaved people sold to the New World varied throughout the slave trade. As for the distribution of slaves from regions of activity, certain areas produced far more enslaved people than others. Between 1650 and 1900, 10.24 million enslaved Africans arrived in the Americas from the following regions in the following proportions:
- Senegambia (Senegal and the Gambia): 4.8%
- Upper Guinea (Guinea-Bissau, Guinea and Sierra Leone): 4.1%
- Windward Coast (Liberia and Côte d'Ivoire): 1.8%
- Gold Coast (Ghana and east of Côte d'Ivoire): 10.4%
- Bight of Benin (Togo, Benin and Nigeria west of the Niger Delta): 20.2%
- Bight of Biafra (Nigeria east of the Niger Delta, Cameroon, Equatorial Guinea and Gabon): 14.6%
- West Central Africa (Republic of Congo, Democratic Republic of Congo and Angola): 39.4%
- Southeastern Africa (Mozambique and Madagascar): 4.7%
African kingdoms of the era
There were over 173 city-states and kingdoms in the African regions affected by the slave trade between 1502 and 1853, when Brazil became the last Atlantic import nation to outlaw the slave trade. Of those 173, no fewer than 68 could be deemed nation states with political and military infrastructures that enabled them to dominate their neighbors. Nearly every present-day nation had a pre-colonial predecessor, sometimes an African Empire with which European traders had to barter.
Ethnic groups
The different ethnic groups brought to the Americas correspond closely to the regions of heaviest activity in the slave trade. Over 45 distinct ethnic groups were taken to the Americas during the trade. Of the 45, the ten most prominent, according to slave documentation of the era, are listed below.
- The BaKongo of the Democratic Republic of Congo and Angola
- The Mandé of Upper Guinea
- The Gbe speakers of Togo, Ghana and Benin (Adja, Mina, Ewe, Fon)
- The Akan of Ghana and Côte d'Ivoire
- The Wolof of Senegal and the Gambia
- The Igbo of southeastern Nigeria
- The Mbundu of Angola (includes both Ambundu and Ovimbundu)
- The Yoruba of southwestern Nigeria
- The Chamba of Cameroon
- The Makua of Mozambique
Human toll
The transatlantic slave trade resulted in a vast and still unknown loss of African life, both in and outside the Americas. Approximately 1.2 to 2.4 million Africans died during their transport to the New World, and more died soon after their arrival. The number of lives lost in the actual procurement of slaves remains a mystery, but may equal or exceed the number actually enslaved.
The savage nature of the trade led to the destruction of individuals and cultures. The following figures do not include deaths of enslaved Africans as a result of their actual labor, slave revolts or diseases they caught while living among New World populations.
A database compiled in the late 1990s put the figure for the transatlantic slave trade at more than 11 million people. For a long time an accepted figure was 15 million, although this has in recent years been revised down. Most historians now agree that at least 12 million slaves left the continent between the 15th and 19th centuries, but that 10 to 20% of them died on board ship. Thus a figure of 11 million enslaved people transported to the Americas is the nearest demonstrable figure historians can produce. Besides the slaves who died on the Middle Passage itself, even more Africans probably died in the slave raids in Africa. The death toll from four centuries of the Atlantic slave trade is estimated at 10 million. According to William Rubinstein, "... of these 10 million estimated dead blacks, possibly 6 million were killed by other blacks in African tribal wars and raiding parties aimed at securing slaves for transport to America."
African conflicts
According to Dr. Kimani Nehusi, the presence of European slavers affected the way in which the legal codes of African societies responded to offenders: crimes traditionally punishable by some other form of punishment became punishable by enslavement and sale to slave traders. According to David Stannard's American Holocaust, 50% of African deaths occurred in Africa as a result of wars between native kingdoms, which produced the majority of slaves. This includes not only those who died in battles, but also those who died as a result of forced marches from inland areas to slave ports on the various coasts. The practice of enslaving enemy combatants and their villages was widespread throughout Western and West Central Africa, although wars were rarely started to procure slaves. The slave trade was largely a by-product of tribal and state warfare, a way of removing potential dissidents after victory or of financing future wars. However, some African groups proved particularly adept at, and brutal in, the practice of enslavement, such as Oyo, Benin, Igala, Kaabu, Asanteman, Dahomey, the Aro Confederacy and the Imbangala war bands.
In letters written to King João III of Portugal, the Manikongo Nzinga Mbemba Afonso argued that it was the inflow of Portuguese merchandise that was fueling the trade in Africans, and he asked the king to stop sending merchandise and send only missionaries. In one of his letters he writes:
- "Each day the traders are kidnapping our people—children of this country, sons of our nobles and vassals, even people of our own family. This corruption and depravity are so widespread that our land is entirely depopulated. We need in this kingdom only priests and schoolteachers, and no merchandise, unless it is wine and flour for Mass. It is our wish that this Kingdom not be a place for the trade or transport of slaves."
- "Many of our subjects eagerly lust after Portuguese merchandise that your subjects have brought into our domains. To satisfy this inordinate appetite, they seize many of our black free subjects.... They sell them. After having taken these prisoners [to the coast] secretly or at night.... As soon as the captives are in the hands of white men they are branded with a red-hot iron."
Before the arrival of the Portuguese, slavery had already existed in Kongo. Despite its establishment within his kingdom, Afonso believed that the slave trade should be subject to Kongo law. When he suspected the Portuguese of receiving illegally enslaved persons to sell, he wrote to King João III in 1526 imploring him to put a stop to the practice.
The kings of Dahomey sold into transatlantic slavery their war captives, who would otherwise have been killed in a ceremony known as the Annual Customs. As one of West Africa's principal slave states, Dahomey became extremely unpopular with neighbouring peoples. Like the Bambara Empire to the east, the Khasso kingdoms depended heavily on the slave trade for their economy. A family's status was indicated by the number of slaves it owned, leading to wars fought for the sole purpose of taking more captives. This trade led the Khasso into increasing contact with the European settlements of Africa's west coast, particularly the French. Benin grew increasingly rich during the 16th and 17th centuries on the slave trade with Europe; slaves from enemy states of the interior were sold and carried to the Americas in Dutch and Portuguese ships. The Bight of Benin's shore soon came to be known as the "Slave Coast".
King Gezo of Dahomey said in the 1840s:
- The slave trade is the ruling principle of my people. It is the source and the glory of their wealth…the mother lulls the child to sleep with notes of triumph over an enemy reduced to slavery…
In 1807, the UK Parliament passed the bill that abolished the trading of slaves. The King of Bonny (now in Nigeria) was horrified at the ending of the practice:
- We think this trade must go on. That is the verdict of our oracle and the priests. They say that your country, however great, can never stop a trade ordained by God himself.
Port factories
After being marched to the coast for sale, enslaved people waited in large forts called factories. The amount of time spent in the factories varied, but Milton Meltzer's Slavery: A World History states that around 4.5% of all deaths during the transatlantic slave trade occurred at this stage. In other words, over 820,000 people would have died in African ports such as Benguela, Elmina and Bonny, reducing the number of those shipped to 17.5 million.
Atlantic shipment
After being captured and held in the factories, slaves entered the infamous Middle Passage. Meltzer's research puts this phase of the slave trade's overall mortality at 12.5%: around 2.2 million Africans died during these voyages, packed into tight, unsanitary spaces on ships for months at a time. Measures were taken to stem the onboard mortality rate, such as enforced "dancing" (as exercise) above deck and the practice of force-feeding enslaved people who tried to starve themselves. The conditions on board also resulted in the spread of fatal diseases; other fatalities were suicides by slaves who, no longer able to endure the conditions, jumped overboard. The slave traders would try to fit anywhere from 350 to 600 slaves on one ship. Before the shipping of enslaved people was completely outlawed in 1853, 15.3 million enslaved people had arrived in the Americas.
Raymond L. Cohn, an economics professor whose research has focused on economic history and international migration, has researched the mortality rates among Africans during the voyages of the Atlantic slave trade. He found that mortality rates decreased over the history of the slave trade, primarily because the length of time necessary for the voyage was declining. "In the eighteenth century many slave voyages took at least 2½ months. In the nineteenth century, 2 months appears to have been the maximum length of the voyage, and many voyages were far shorter. Fewer slaves died in the Middle Passage over time mainly because the passage was shorter."
Seasoning camps
Meltzer also states that 33% of Africans would have died in the first year at the seasoning camps found throughout the Caribbean. Many slaves shipped directly to North America bypassed this process, but most slaves (those destined for island or South American plantations) were likely to be put through this ordeal. The enslaved people were tortured for the purpose of "breaking" them (as in the practice of breaking horses) and conditioning them to their new lot in life. Jamaica held one of the most notorious of these camps. Dysentery was the leading cause of death. All in all, some 5 million Africans died in these camps, reducing the final number of Africans to about 10 million.
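Taken together, Meltzer's stage-by-stage mortality figures quoted in this and the two preceding sections imply a simple arithmetic chain (a sketch using only the numbers cited above; the starting total of roughly 18.3 million marched to the coast is inferred from the 4.5% port mortality and the 17.5 million shipped):
- Port factories: 18.3 million × (1 − 0.045) ≈ 17.5 million shipped
- Middle Passage: 17.5 million × (1 − 0.125) ≈ 15.3 million landed
- Seasoning camps: 15.3 million × (1 − 0.33) ≈ 10 million surviving, matching the final figure given above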
European competition
The trade of enslaved Africans in the Atlantic has its origins in the explorations of Portuguese mariners down the coast of West Africa in the 15th century. Before that, contact with African slave markets was made to ransom Portuguese who had been captured in the intense North African Barbary pirate attacks on Portuguese ships and coastal villages, attacks which frequently left those villages depopulated. The first Europeans to use enslaved Africans in the New World were the Spaniards, who sought auxiliaries for their conquest expeditions and laborers on islands such as Cuba and Hispaniola, where the alarming decline in the native population had spurred the first royal laws protecting the native population (Laws of Burgos, 1512–1513). The first enslaved Africans arrived in Hispaniola in 1501. After Portugal succeeded in establishing sugar plantations (engenhos) in northern Brazil around 1545, Portuguese merchants on the West African coast began to supply enslaved Africans to the sugar planters there. While at first these planters relied almost exclusively on the native Tupani for slave labor, a massive shift toward Africans took place after 1570, following a series of epidemics which decimated the already destabilized Tupani communities. By 1630, Africans had replaced the Tupani as the largest contingent of labor on Brazilian sugar plantations. This heralded the final collapse of the European medieval household tradition of slavery and the rise of Brazil as the largest single destination for enslaved Africans, with sugar the reason that roughly 84% of these Africans were shipped to the New World. It has also been alleged that Jews dominated or had a significant impact on the Atlantic slave trade, but this has been rejected by some scholars.
As Britain rose in naval power and settled continental North America and some islands of the West Indies, it became the leading slave trader. At one stage the trade was the monopoly of the Royal African Company, operating out of London, but following the loss of the company's monopoly in 1689, Bristol and Liverpool merchants became increasingly involved in the trade. By the late 17th century, one out of every four ships that left Liverpool harbour was a slave trading ship. Much of the wealth on which the city of Manchester and surrounding towns was built in the late eighteenth century, and for much of the nineteenth century, was based on the processing of slave-picked cotton. Other British cities also profited from the slave trade: Birmingham, the largest gun-producing town in Britain at the time, supplied guns to be traded for slaves, and 75% of all sugar produced in the plantations came to London to supply its highly lucrative coffee houses.
New World destinations
The first slaves to arrive as part of a labor force appeared in 1502 on the island of Hispaniola (now Haiti and the Dominican Republic). Cuba received its first four slaves in 1513. Jamaica received its first shipment of 4,000 slaves in 1518. Slave exports to Honduras and Guatemala started in 1526. The first enslaved Africans to reach what would become the US arrived in January 1526 as part of a Spanish attempt at colonizing what is now South Carolina. By November, the 300 Spanish colonists had been reduced to a mere 100, accompanied by 70 of their original 100 slaves. The enslaved people revolted and joined a nearby native population, while the Spanish abandoned the colony altogether. Colombia received its first enslaved people in 1533. El Salvador, Costa Rica and Florida entered the slave trade in 1541, 1563 and 1581 respectively.
The 17th century saw an increase in shipments, with enslaved people arriving in the English colony of Jamestown, Virginia in 1619, although these first kidnapped Africans were classed as indentured servants and freed after seven years; chattel slavery entered Virginia law in 1656. Irish immigrants brought slaves to Montserrat in 1651, and in 1655 slaves arrived in Belize.
The shares of enslaved Africans arriving at selected destinations were as follows:
- British America (minus North America): 18.4%
- British North America: 6.45%
- Dutch West Indies: 2.0%
- Danish West Indies: 0.3%
The number of Africans who arrived in each area can be estimated from these percentages, taking into consideration that the total number of slaves was close to 10,000,000.
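For example, applying the percentages above to that total is a simple multiplication (a worked reading of the figures given here, not an independent estimate): British North America would have received roughly 0.0645 × 10,000,000 ≈ 645,000 enslaved Africans, and the Danish West Indies roughly 0.003 × 10,000,000 ≈ 30,000.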
Economics of slavery
The plantation economies of the New World were built on slave labor. Seventy percent of the enslaved people brought to the New World were used to produce sugar, the most labor-intensive crop. The rest were employed harvesting coffee, cotton, and tobacco, and in some cases in mining. The West Indian colonies of the European powers were some of their most important possessions, so they went to extremes to protect and retain them. For example, at the end of the Seven Years' War in 1763, France agreed to cede the vast territory of New France (now Eastern Canada) to the victors in exchange for keeping the minute Antillean island of Guadeloupe.
In France in the 18th century, returns for investors in plantations averaged around 6%; as compared with 5% for most domestic alternatives, this represented a 20% profit advantage. Risks, both maritime and commercial, were significant for individual voyages. Investors mitigated them by buying small shares of many ships at the same time; in that way, they were able to diversify away a large part of the risk. Between voyages, ship shares could be freely sold and bought.
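The 20% figure is a relative comparison of rates of return, not an absolute difference: (6% − 5%) ÷ 5% = 20%, meaning plantation investments out-earned typical domestic alternatives by a fifth.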
By far the most financially profitable West Indian colonies in 1800 belonged to the United Kingdom. Despite entering the sugar colony business late, British naval supremacy and control over key islands such as Jamaica, Trinidad, the Leeward Islands and Barbados, and over the territory of British Guiana, gave it an important edge over all competitors; while many British did not make gains, a handful of individuals made small fortunes. This advantage was reinforced when France lost its most important colony, St. Domingue (western Hispaniola, now Haiti), to a slave revolt in 1791, and, after the 1793 French revolution, supported revolts against its rival Britain in the name of liberty. Before 1791, British sugar had to be protected to compete against cheaper French sugar.
After 1791, the British islands produced the most sugar, and the British people quickly became the largest consumers. West Indian sugar became ubiquitous as an additive to Indian tea. It has been estimated that the profits of the slave trade and of West Indian plantations amounted to as much as one in every twenty pounds circulating in the British economy at the time of the Industrial Revolution in the latter half of the 18th century.
Historian Walter Rodney has argued that at the start of the slave trade in the 16th century, even though there was a technological gap between Europe and Africa, it was not very substantial. Both continents were using Iron Age technology. The major advantage that Europe had was in ship building. During the period of slavery the populations of Europe and the Americas grew exponentially while the population of Africa remained stagnant. Rodney contended that the profits from slavery were used to fund economic growth and technological advancement in Europe and the Americas. Based on earlier theories by Eric Williams, he asserted that the industrial revolution was at least in part funded by agricultural profits from the Americas. He cited examples such as the invention of the steam engine by James Watt, which was funded by plantation owners from the Caribbean.
Other historians have attacked both Rodney's methodology and his factual accuracy. Joseph C. Miller has argued that the social change and demographic stagnation (which he researched using the example of West Central Africa) were caused primarily by domestic factors. Joseph Inikori provided a new line of argument, estimating counterfactual demographic developments in the case that the Atlantic slave trade had not existed. Patrick Manning has shown that the slave trade did indeed have a profound impact on African demographics and social institutions, but nevertheless criticized Inikori's approach for not taking other factors (such as famine and drought) into account and thus being highly speculative.
Effect on the economy of Africa
No scholars dispute the harm done to the enslaved people themselves, but the effect of the trade on African societies is much debated, owing to the apparent influx of goods to Africans. Proponents of the slave trade, such as Archibald Dalzel, argued that African societies were robust and not much affected by the trade. In the 19th century, European abolitionists, most prominently Dr. David Livingstone, took the opposite view, arguing that the fragile local economy and societies were being severely harmed by the trade. Historian Walter Rodney estimates that by c. 1770, the King of Dahomey was earning an estimated £250,000 per year by selling captive African soldiers and enslaved people to the European slave-traders.
Effects on the economy of Europe
Some have stressed the importance of natural or financial resources that Britain received from its many overseas colonies, or argued that profits from the British slave trade between Africa and the Caribbean helped fuel industrial investment. West Indian writer Eric Williams sought to show the contribution of Africans on the basis of the profits from the slave trade and slavery, and the employment of those profits to finance England's industrialization. He argued that the enslavement of Africans was an essential element of the Industrial Revolution, and that British wealth is, in part, a result of slavery; however, he also maintained that by the time of its abolition the trade had lost its profitability and it was in Britain's economic interest to ban it.
Other researchers and historians have strongly contested what has come to be referred to as the “Williams thesis” in academia: David Richardson has concluded that the profits from the slave trade amounted to less than 1% of domestic investment in Britain, and economic historian Stanley Engerman finds that even without subtracting the associated costs of the slave trade (e.g., shipping costs, slave mortality, mortality of whites in Africa, defense costs) or reinvestment of profits back into the slave trade, the total profits from the slave trade and of West Indian plantations amounted to less than 5% of the British economy during any year of the Industrial Revolution. Engerman’s 5% figure gives as much as possible in terms of benefit of the doubt to the Williams argument, not solely because it does not take into account the associated costs of the slave trade to Britain, but also because it carries the full-employment assumption from economics and holds the gross value of slave trade profits as a direct contribution to Britain’s national income. Historian Richard Pares, in an article written before Williams’ book, dismisses the influence of wealth generated from the West Indian plantations upon the financing of the Industrial Revolution, stating that whatever substantial flow of investment from West Indian profits into industry there was occurred after emancipation, not before.
Seymour Drescher and Robert Anstey argue the slave trade remained profitable until the end, and that moralistic reform, not economic incentive, was primarily responsible for abolition. They say slavery remained profitable in the 1830s because of innovations in agriculture.
Karl Marx in his influential economic history of capitalism Das Kapital wrote that '...the turning of Africa into a warren for the commercial hunting of black-skins, signaled the rosy dawn of the era of capitalist production.' He argued that the slave trade was part of what he termed the 'primitive accumulation' of European capital, the 'non-capitalist' accumulation of wealth that preceded and created the financial conditions for Britain's industrialisation.
The demographic effects of the slave trade are some of the most controversial and debated issues. More than 12 million people were removed from Africa via the slave trade, and what effect this had on Africa is an important question.
Walter Rodney argued that the export of so many people had been a demographic disaster and had left Africa permanently disadvantaged when compared to other parts of the world, and largely explains the continent's continued poverty. He presented numbers showing that Africa's population stagnated during this period, while that of Europe and Asia grew dramatically. According to Rodney, all other areas of the economy were disrupted by the slave trade as the top merchants abandoned traditional industries to pursue slaving, and the lower levels of the population were disrupted by the slaving itself.
Others have challenged this view. J. D. Fage compared the demographic effect of the slave trade on the continent as a whole. David Eltis has compared the numbers to the rate of emigration from Europe during this period: in the nineteenth century alone over 50 million people left Europe for the Americas, a far higher rate than were ever taken from Africa.
Other scholars have accused Rodney of mischaracterizing the trade between Africans and Europeans. They argue that Africans, or more accurately African elites, deliberately allowed European traders to join an already large trade in enslaved people, and were not patronized junior partners in it.
As Joseph E. Inikori argues, however, the history of the region shows that the effects were still quite deleterious. He argues that the African economic model of the period was very different from the European one, and could not sustain such population losses. Population reductions in certain areas also led to widespread problems. Inikori also notes that after the suppression of the slave trade, Africa's population almost immediately began to increase rapidly, even before the introduction of modern medicines. Owen Alik Shahadah likewise argues that the trade was of significance not only in aggregate population losses but also in the profound changes it brought to settlement patterns, exposure to epidemics, and reproductive and social development potential.
Legacy of racism
Professor Maulana Karenga states that the effects of slavery were that "the morally monstrous destruction of human possibility involved redefining African humanity to the world, poisoning past, present and future relations with others who only know us through this stereotyping and thus damaging the truly human relations among peoples." He states that it constituted the destruction of culture, language, religion and human possibility.
Walter Rodney states: "Above all, it was the institution of slavery in the Americas which ultimately conditioned racial attitudes, even when their more immediate derivation was the literature on Africa or contacts within Europe itself. It has been well attested that New World slave-plantation society was the laboratory of modern racism. The owners' contempt for and fear of the black slaves was expressed in religious, scientific and philosophical terms, which became the stock attitudes of Europeans and even Africans in subsequent generations. Although there have been contributions to racist philosophy both before and after the slave trade epoch, the historical experience of whites enslaving blacks for four centuries forged the tie between racism and colour prejudice, and produced not merely individual racists but a society where racism was so all-pervasive that it was not even perceived for what it was. The very concept of human racial variants was never satisfactorily established in biological terms, and the assumptions of scientists and laymen alike were rooted in the perception of a reality in which Europeans had succeeded in reducing Africans to the level of chattel."
Walter Rodney states, "The role of slavery in promoting racist prejudice and ideology has been carefully studied in certain situations, especially in the U.S.A. The simple fact is that no people can enslave another for four centuries without coming out with a notion of superiority, and when the colour and other physical traits of those peoples were quite different it was inevitable that the prejudice should take a racist form."
End of the Atlantic slave trade
In Britain, America, Portugal and in parts of Europe, opposition developed against the slave trade. Davis says that abolitionists assumed "that an end to slave imports would lead automatically to the amelioration and gradual abolition of slavery". Opposition to the trade was led by the Religious Society of Friends (Quakers) and establishment Evangelicals such as William Wilberforce. Many people joined the movement and began to protest against the trade, but they were opposed by the owners of the colonial holdings. Following Lord Mansfield's decision in 1772, slaves became free upon entering the British Isles. Under the leadership of Thomas Jefferson, the new state of Virginia in 1778 became the first state, and one of the first jurisdictions anywhere, to stop the importation of slaves for sale; it made it a crime for traders to bring in slaves from out of state or from overseas for sale, though migrants from other states were allowed to bring their own slaves. The new law freed all slaves brought in illegally after its passage and imposed heavy fines on violators. Denmark, which had been active in the slave trade, was the first country to ban the trade through legislation, passed in 1792 and taking effect in 1803. Britain banned the slave trade in 1807, imposing stiff fines for any slave found aboard a British ship (see Slave Trade Act 1807). The Royal Navy, which then controlled the world's seas, moved to stop other nations from continuing the slave trade and declared that slaving was equal to piracy and punishable by death. The United States Congress passed the Slave Trade Act of 1794, which prohibited the building or outfitting of ships in the U.S. for use in the slave trade. In 1807 Congress outlawed the importation of slaves beginning on January 1, 1808, the earliest date permitted by the United States Constitution for such a ban.
On Sunday 28 October 1787, William Wilberforce wrote in his diary: "God Almighty has set before me two great objects, the suppression of the slave trade and the Reformation of society." For the rest of his life, Wilberforce dedicated himself as a Member of the British Parliament to opposing the slave trade and working for the abolition of slavery throughout the British Empire. On 22 February 1807, twenty years after he first began his crusade, and in the middle of Britain's war with France, Wilberforce and his team's labors were rewarded with victory: by an overwhelming 283 votes to 16, the motion to abolish the Atlantic slave trade was carried in the House of Commons. The United States acted to abolish its Atlantic slave trade the same year, but not its internal slave trade, which became the dominant feature of American slavery until the 1860s. In 1805 a British Order-in-Council had restricted the importation of slaves into colonies that had been captured from France and the Netherlands. Britain continued to press other nations to end their trade: in an 1810 Anglo-Portuguese treaty, Portugal agreed to restrict its trade into its colonies; in an 1813 Anglo-Swedish treaty, Sweden outlawed its slave trade; in the Treaty of Paris of 1814, France agreed with Britain that the trade was "repugnant to the principles of natural justice" and agreed to abolish it within five years; and in an 1814 Anglo-Dutch treaty, the Netherlands outlawed its slave trade.
The Royal Navy had established the West Africa Squadron, known as the "preventative squadron", in 1808; with peace in Europe from 1815 and British supremacy at sea secured, the squadron operated against the slavers for the next 50 years. By the 1850s, around 25 vessels and 2,000 officers and men were on the station, supported by some ships from the small United States Navy and nearly 1,000 "Kroomen", experienced fishermen recruited as sailors from what is now the coast of Liberia. Service on the West Africa Squadron was a thankless and overwhelming task, full of risk and posing a constant threat to the health of the crews involved: contending with pestilential swamps and violent encounters, the squadron suffered a mortality rate of 55 per 1,000 men, compared with 10 for fleets in the Mediterranean or in home waters. Between 1807 and 1860, the Royal Navy's squadron seized approximately 1,600 ships involved in the slave trade and freed 150,000 Africans who were aboard these vessels. Several hundred slaves a year were transported by the navy to the British colony of Sierra Leone, where they were made to serve as "apprentices" in the colonial economy until the Slavery Abolition Act 1833. Action was taken against African leaders who refused to agree to British treaties to outlaw the trade, for example against "the usurping King of Lagos", deposed in 1851. Anti-slavery treaties were signed with over 50 African rulers.
The last recorded slave ship to land on American soil was the Clotilde, which in 1859 illegally smuggled a number of Africans into the town of Mobile, Alabama. The Africans on board were sold as slaves; however, slavery in the U.S. was abolished in 1865, following the end of the American Civil War. The last survivor of the voyage was Cudjoe Lewis, who died in 1935. The last country to ban the Atlantic slave trade was Brazil, which did so in 1831; however, a vibrant illegal trade continued to ship large numbers of enslaved people to Brazil and also to Cuba until the 1860s, when British enforcement and further diplomacy finally ended the Atlantic trade.
African diaspora
The African diaspora created by slavery has been a complex, interwoven part of American history and culture. In the United States, the success of Alex Haley's book Roots: The Saga of an American Family, published in 1976, and the subsequent television miniseries based upon it, Roots, broadcast on the ABC network in January 1977, led to increased interest in and appreciation of African heritage among the African-American community. Their influence led many African Americans to begin researching their family histories and making visits to West Africa; in turn, a tourist industry grew up to supply them. One notable example is the Roots Homecoming Festival held annually in the Gambia, in which rituals are held through which African Americans can symbolically "come home" to Africa. Disputes have developed, however, between African Americans and African authorities over how to display historic sites that were involved in the Atlantic slave trade, with prominent voices among the former criticising the latter for not displaying such sites sensitively but instead treating them as a commercial enterprise.
"Back to Africa"
In 1816, a group of wealthy European-Americans, some of whom were abolitionists and others racial segregationists, founded the American Colonization Society with the express desire of returning African Americans to West Africa. In 1820, the society sent its first ship to Liberia, and within a decade around two thousand African Americans had been settled in the west African country. Such re-settlement continued throughout the 19th century, increasing after race relations in the southern states of the US deteriorated following the end of Reconstruction in 1877.
Rastafari movement
The Rastafari movement, which originated in Jamaica, where 98% of the population are descended from victims of the Atlantic slave trade, has made great efforts to publicize the history of slavery and to ensure it is not forgotten, especially through reggae music.
In 1998, UNESCO designated August 23 as International Day for the Remembrance of the Slave Trade and its Abolition. Since then there have been a number of events recognizing the effects of slavery.
On 9 December 1999 Liverpool City Council passed a formal motion apologising for the City's part in the slave trade. It was unanimously agreed that Liverpool acknowledges its responsibility for its involvement in three centuries of the slave trade. The City Council has made an unreserved apology for Liverpool's involvement and the continual effect of slavery on Liverpool's Black communities.
At the 2001 World Conference Against Racism in Durban, South Africa, African nations demanded a clear apology for slavery from the former slave-trading countries. Some nations were ready to express an apology, but opposition, mainly from the United Kingdom, Portugal, Spain, the Netherlands, and the United States, blocked the attempts to do so. Fear of claims for monetary compensation may have been one reason for the opposition. As of 2009, efforts were underway to create a UN Slavery Memorial as a permanent remembrance of the victims of the Atlantic slave trade.
On January 30, 2006, Jacques Chirac (the then French President) said that 10 May would henceforth be a national day of remembrance for the victims of slavery in France, marking the day in 2001 when France passed a law recognising slavery as a crime against humanity.
On November 27, 2006, then British Prime Minister Tony Blair made a partial apology for Britain's role in the African slave trade. However, African rights activists denounced it as "empty rhetoric" that failed to address the issue properly, feeling that his apology deliberately stopped short of a full apology in order to forestall any legal claims. Blair apologized again on March 14, 2007.
On February 24, 2007, the Virginia General Assembly passed House Joint Resolution Number 728, acknowledging "with profound regret the involuntary servitude of Africans and the exploitation of Native Americans, and call for reconciliation among all Virginians." With the passing of that resolution, Virginia became the first of the 50 United States to acknowledge, through its governing body, the state's involvement in slavery. The passing of this resolution came on the heels of the 400th anniversary celebration of the city of Jamestown, Virginia, the first permanent English colony to survive in what would become the United States. Jamestown is also recognized as one of the first slave ports of the American colonies.
On May 31, 2007, the Governor of Alabama, Bob Riley, signed a resolution expressing "profound regret" for Alabama's role in slavery and apologizing for slavery's wrongs and lingering effects. Alabama is the fourth Southern state to pass a slavery apology, following votes by the legislatures in Maryland, Virginia, and North Carolina.
On August 24, 2007, Ken Livingstone (then Mayor of London) apologized publicly for London's role in the slave trade. "You can look across there to see the institutions that still have the benefit of the wealth they created from slavery", he said pointing towards the financial district, before breaking down in tears. He claimed that London was still tainted by the horrors of slavery. Jesse Jackson praised Mayor Livingstone, and added that reparations should be made.
On July 30, 2008, the United States House of Representatives passed a resolution apologizing for American slavery and subsequent discriminatory laws. The language included a reference to the "fundamental injustice, cruelty, brutality and inhumanity of slavery and Jim Crow" segregation.
On June 18, 2009, the United States Senate issued an apologetic statement decrying the "fundamental injustice, cruelty, brutality, and inhumanity of slavery". The news was welcomed by President Barack Obama.
In 2009, the Civil Rights Congress of Nigeria wrote an open letter to all African chieftains who had participated in the trade, calling for an apology for their role in the Atlantic slave trade: "We cannot continue to blame the white men, some Africans - albeit a very small minority, particularly the traditional rulers, are not blameless. In view of the fact that the Americans and Europe have accepted the cruelty of their roles and have forcefully apologised, it would be logical, reasonable and humbling if African traditional rulers ... [can] accept blame and formally apologise to the descendants of the victims of their collaborative and exploitative slave trade."
See also
- Curtin, Philip (1969). The Atlantic Slave Trade. The University Of Wisconsin Press. pp. 1–58.
- Mannix, Daniel (1962). Black Cargoes. The Viking Press. Introduction, pp. 1–5.
- Klein, Herbert S. and Jacob Klein. The Atlantic Slave Trade. Cambridge University Press, 1999. pp. 103–139.
- Ronald Segal, The Black Diaspora: Five Centuries of the Black Experience Outside Africa (New York: Farrar, Straus and Giroux, 1995), ISBN 0-374-11396-3, p. 4. "It is now estimated that 11,863,000 slaves were shipped across the Atlantic. [Note in original: Paul E. Lovejoy, "The Impact of the Atlantic Slave Trade on Africa: A Review of the Literature", in Journal of African History 30 (1989), p. 368.]"
- Eltis, David and Richardson, David. The Numbers Game. In: Northrup, David: The Atlantic Slave Trade, 2nd edition, Houghton Mifflin Co., 2002. p. 95.
- Basil Davidson. The African Slave Trade.
- "African Holocaust How Many". African Holocaust Society. Retrieved 2007-01-04. "While traditional studies often focus on official French and British records of how many Africans arrived in the New World, these studies neglect to include the death from raids, the fatalities on board the ships, deaths caused by European diseases, the victims from the consequences of enslavement, and trauma of refugees displaced by slaving activities. The number of arrivals also neglects the volume of Africans who arrived via pirates, who for obvious reasons, wouldn't have kept records."
- "African Holocaust Special". African Holocaust Society. Retrieved 2007-01-04.
- Thornton 1998. pp. 15–17.
- Christopher 2006, p. 127.
- Thornton 1998. p. 13.
- Chaunu 1969. pp. 54–58.
- Thornton 1998. p. 24.
- Thornton 1998. pp. 24–26.
- Thornton 1998. p. 27.
- Historical survey > Slave societies Britannica.
- Ferro, Mark (1997). Colonization: A Global History. Routledge. p. 221, ISBN 978-0-415-14007-2.
- Adu Boahen, Topics In West African History, p. 110.
- Kwaku Person-Lynn, African Involvement In Atlantic Slave Trade.
- Slave trade: a root of contemporary African Crisis Africa Economic Analysis 2000
- Elikia M’bokolo, April 2, 1998, The impact of the slave trade on Africa, Le Monde diplomatique
- Thornton, p. 112.
- Thornton, p. 310.
- Thornton, p. 45.
- Thornton, p. 94.
- Thornton 1998. pp. 28–29.
- Thornton 1998. p. 31.
- Thornton 1998. pp. 29–31.
- Thornton 1998. pp. 37.
- Thornton 1998. p. 38.
- Thornton 1998. p. 39.
- Thornton 1998. p. 40.
- Rodney 1972. pp. 95-113.
- Austen 1987. pp. 81–108.
- Thornton 1998. p. 44.
- Anne C. Bailey, African Voices of the Atlantic Slave Trade: Beyond the Silence and the Shame.
- Thornton 1998. p. 35.
- Thornton 1998. pp. 40–41.
- Thornton 1998. p. 33.
- Anstey, Roger: The Atlantic Slave Trade and British abolition, 1760–1810. London: Macmillan, 1975, p. 5.
- P.C. Emmer, The Dutch in the Atlantic Economy, 1580–1880. Trade, Slavery and Emancipation (1998), p. 17.
- Klein 2010.
- Keith Bradley, Paul Cartledge (2011). The Cambridge World History of Slavery. Cambridge University Press. p. 583. ISBN 0-521-84066-X.
- Christopher 2006, p. 6.
- Lovejoy, Paul E., "The Volume of the Atlantic Slave Trade. A Synthesis". In: Northrup, David (ed.): The Atlantic Slave Trade. D.C. Heath and Company 1994.
- Skeletons Discovered: First African Slaves in New World. January 31, 2006. LiveScience.com. Accessed September 27, 2006.
- "Smallpox Through History". Archived from the original on 2009-10-31.
- Solow, Barbara (ed.). Slavery and the Rise of the Atlantic System, Cambridge: Cambridge University Press, 1991.
- Notes on the State of Virginia Query 18
- Historical survey > The international slave trade
- "Transatlantic Slave Trade". "Hakim Adi".
- Thornton, p. 304.
- Thornton, p. 305.
- Thornton, p. 311.
- Thornton, p. 122.
- Howard Winant (2001), The World is a Ghetto: Race and Democracy Since World War II, Basic Books, p. 58.
- Catherine Lowe Besteman, Unraveling Somalia: Race, Class, and the Legacy of Slavery (University of Pennsylvania Press: 1999), pp. 83–84.
- Kevin Shillington, ed. (2005), Encyclopedia of African History, CRC Press, vol. 1, pp. 333–34; Nicolas Argenti (2007), The Intestines of the State: Youth, Violence and Belated Histories in the Cameroon Grassfields, University of Chicago Press, p. 42.
- Rights & Treatment of Slaves. Gambia Information Site.
- Mungo Park, Travels in the Interior of Africa v. II, Chapter XXII - War and Slavery.
- The Negro Plot Trials: A Chronology.
- Lovejoy, Paul E. Transformations in Slavery. Cambridge University Press, 2000.
- Midlo Hall, Gwendolyn (2007). Slavery and African Ethnicities in the Americas. University of North Carolina Press. p. [page needed]. ISBN 978-0-8078-5862-2. Retrieved 2011-01-24.
- Quick guide: The slave trade; Who were the slaves? BBC News
- Stannard, David. American Holocaust. Oxford University Press, 1993
- Rubinstein, W. D. (2004). Genocide: a history. Pearson Education. p. 78. ISBN 0-582-50601-8.
- "African Holocaust: Kimani Nehusi How Many". African Holocaust Society. Retrieved 2005-01-04.
- Gomez, Michael A. Exchanging Our Country Marks. Chapel Hill, 1998
- Thornton, John. Africa and Africans in the Making of the Atlantic World, 1400–1800 Cambridge University Press, 1998
- Stride, G.T. and C. Ifeka. Peoples and Empires of West Africa: West Africa in History 1000–1800. Nelson, 1986.
- King Leopold's Ghost: A Story of Greed, Terror, and Heroism in Colonial Africa. Houghton Mifflin Books. 1998. ISBN 0-618-00190-5.
- African Political Ethics and the Slave Trade
- Museum Theme: The Kingdom of Dahomey
- Dahomey (historical kingdom, Africa)
- Benin seeks forgiveness for role in slave trade
- Le Mali précolonial
- The Story of Africa
- West is master of slave trade guilt
- African Slave Owners
- Meltzer, Milton. Slavery: A World History. Da Capo Press, 1993
- Raymond L. Cohn
- Cohn, Raymond L. "Deaths of Slaves in the Middle Passage", Journal of Economic History, September 1985.
- Kiple, Kenneth F. (2002). The Caribbean Slave: A Biological History. Cambridge University Press. p. 65. ISBN 0-521-52470-9.
- BBC – History – British Slaves on the Barbary Coast
- Health in Slavery
- Refutations of charges of Jewish prominence in slave trade:
- "Nor were Jews prominent in the slave trade." - Marvin Perry, Frederick M. Schweitzer: Antisemitism: Myth and Hate from Antiquity to the Present. Palgrave Macmillan, 2002. ISBN 0-312-16561-7. p.245
- "In no period did Jews play a leading role as financiers, shipowners, or factors in the transatlantic or Caribbean slave trades. They possessed far fewer slaves than non-Jews in every British territory in North America and the Caribbean. Even when Jews in a handful of places owned slaves in proportions slightly above their representation among a town's families, such cases do not come close to corroborating the assertions of The Secret Relationship." - Wim Klooster (University of Southern Maine): Review of Jews, Slaves, and the Slave Trade: Setting the Record Straight. By Eli Faber. Reappraisals in Jewish Social and Intellectual History. William and Mary Quarterly Review of Books. Volume LVII, Number 1. by Omohundro Institute of Early American History and Culture. 2000
- "Medieval Christians greatly exaggerated the supposed Jewish control over trade and finance and also became obsessed with alleged Jewish plots to enslave, convert, or sell non-Jews... Most European Jews lived in poor communities on the margins of Christian society; they continued to suffer most of the legal disabilities associated with slavery. ... Whatever Jewish refugees from Brazil may have contributed to the northwestward expansion of sugar and slaves, it is clear that Jews had no major or continuing impact on the history of New World slavery." - Professor David Brion Davis of Yale University in Slavery and Human Progress (New York: Oxford Univ. Press, 1984), p.89 (cited in Shofar FTP Archive File: orgs/american/wiesenthal.center//web/historical-facts)
- "The Jews of Newport seem not to have pursued the [slave trading] business consistently ... [When] we compare the number of vessels employed in the traffic by all merchants with the number sent to the African coast by Jewish traders ... we can see that the Jewish participation was minimal. It may be safely assumed that over a period of years American Jewish businessmen were accountable for considerably less than two percent of the slave imports into the West Indies" - Professor Jacob R. Marcus of Hebrew Union College in The Colonial American Jew (Detroit: Wayne State Univ. Press, 1970), Vol. 2, pp. 702-703 (cited in Shofar FTP Archive File: orgs/american/wiesenthal.center//web/historical-facts)
- "None of the major slave-traders was Jewish, nor did Jews constitute a large proportion in any particular community. ... probably all of the Jewish slave-traders in all of the Southern cities and towns combined did not buy and sell as many slaves as did the firm of Franklin and Armfield, the largest Negro traders in the South." - Bertram W. Korn, Jews and Negro Slavery in the Old South, 1789-1865, in The Jewish Experience in America, ed. Abraham J. Karp (Waltham, Massachusetts: American Jewish Historical Society, 1969), Vol. 3, pp. 197-198 (cited in Shofar FTP Archive File: orgs/american/wiesenthal.center//web/historical-facts)
- "[There were] Jewish owners of plantations, but altogether they constituted only a tiny proportion of the Southerners whose habits, opinions, and status were to become decisive for the entire section, and eventually for the entire country. ... [Only one Jew] tried his hand as a plantation overseer even if only for a brief time." - Bertram W. Korn, Jews and Negro Slavery in the Old South, 1789-1865, in The Jewish Experience in America, ed. Abraham J. Karp (Waltham, Massachusetts: American Jewish Historical Society, 1969), Vol. 3, p. 180. (cited in Shofar FTP Archive File: orgs/american/wiesenthal.center//web/historical-facts)
- Elkins, Stanley: Slavery. New York: Universal Library, 1963. p.48
- Rawley, James: London, Metropolis of the Slave Trade 2003
- Anstey, Roger: The Atlantic Slave Trade and British abolition, 1760–1810. London: Macmillan, 1975.
- Wynter, Sylvia (1984a). "New Seville and the Conversion Experience of Bartolomé de Las Casas: Part One"". Jamaica Journal 17 (2): 25-32.
- Dauenhauer, Nora Marks; Richard Dauenhauer, Lydia T. Black (2008). Anóoshi Lingít Aaní Ká, Russians in Tlingit America: The Battles of Sitka, 1802 and 1804. Seattle: University of Washington Press. pp. XXVI. ISBN 978-0-295-98601-2.
- Stephen D. Behrendt, David Richardson, and David Eltis, W. E. B. Du Bois Institute for African and African-American Research, Harvard University. Based on "records for 27,233 voyages that set out to obtain slaves for the Americas". Stephen Behrendt (1999). "Transatlantic Slave Trade". Africana: The Encyclopedia of the African and African American Experience. New York: Basic Civitas Books. ISBN 0-465-00071-1.
- The Atlantic slave trade. By Philip D. Curtin, 1972. P.88
- Daudin 2004
- Slave Revolt in St. Domingue (Haiti)
- Digital History
- UN report
- How Europe Underdeveloped Africa Walter RodneyISBN 0950154644
- Manning, Patrick: Contours of Slavery and Social change in Africa. In: Northrup, David (ed.): The Atlantic Slave Trade. D.C. Heath & Company, 1994, pp. 148–160.
- Williams, Capitalism & Slavery (University of North Carolina Press, 1944), pp. 98–107, 169–177, et passim.
- David Richardson, "The British Empire and the Atlantic Slave Trade, 1660-1807," in P.J. Marshall, ed. The Oxford History of the British Empire: Volume II: The Eighteenth Century (1998) pp 440-64
- Stanley L. Engerman. The Slave Trade and British Capital Formation in the Eighteenth Century. JSTOR 3113341.
- Richard Pares. The Economic Factors in the History of the Empire. JSTOR 2590147.
- J.R. Ward, "The British West Indies in the Age of Abolition," in P.J. Marshall, ed. The Oxford History of the British Empire: Volume II: The Eighteenth Century (1998) pp 415-39.
- Marx, K. "Chapter Thirty-One: Genesis of the Industrial Capitalist" Das Kapital: Volume 1, 1867.,
- Rodney, Walter. How Europe underdeveloped Africa. London: Bogle-L'Ouverture Publications, 1972
- David Eltis Economic Growth and the Ending of the Transatlantic slave trade
- Thornton, John. Africa and Africans in the Making of the Atlantic World, 1400-1800. Cambridge University Press, 1992
- Ideology versus the Tyranny of Paradigm: Historians and the Impact of the Atlantic Slave Trade on African Societies, by Joseph E. Inikori African Economic History. 1994.
- "African Holocaust: Dark Voyage audio CD". "Owen 'Alik Shahadah".
- "Effects on Africa". "Ron Karenga".
- Williams, Eric (1994) . Capitalism and Slavery. p. 7.
- David Brion Davis, The Problem of Slavery in the Age of Revolution: 1770–1823 (1975) 129
- Library of Society of Friends Subject Guide: Abolition of the Slave Trade
- Paul E. Lovejoy (2000). Transformations in slavery: a history of slavery in Africa. p.290. Cambridge University Press, 2000
- John E. Selby and Don Higginbotham, The Revolution in Virginia, 1775–1783 (2007) p. 158
- Erik S. Root, All Honor to Jefferson?: The Virginia Slavery Debates and the Positive Good Thesis (2008), p. 19.
- William Wilberforce (1759–1833)
- Marcyliena H. Morgan (2002). Language, discourse and power in African American culture p.20. Cambridge University Press, 2002
- The Royal Navy and the Battle to End Slavery. By Huw Lewis-Jones
- Jo Loosemore, "Sailing against slavery". BBC.
- "Britain forces 'freed slaves' into colonial labour".
- The West African Squadron and slave trade
- "Navy News, June 2007". Retrieved 2008-02-09.
- Question of the Month – Jim Crow Museum at Ferris State University
- Diouf, Sylvianne (2007). Dreams of Africa in Alabama: The Slave Ship Clotilda and the Story of the Last Africans Brought to America. Oxford University Press. ISBN 0-19-531104-3.
- http://www.pbs.org/wgbh/aia/ Africans in America PBS Special
- Handley 2006. pp. 21–23.
- Handley 2006. pp. 23–25.
- Osei-Tutu 2006.
- Handley 2006. p. 21.
- Reggae and slavery
- . National Museums Liverpool, Accessed 31 August 2010.
- "Chirac names slavery memorial day". BBC News, 30 January 2006. Accessed 22 July 2009.
- "Blair 'sorrow' over slave trade". BBC News, 27 November 2006. Accessed March 15, 2007.
- "Blair 'sorry' for UK slavery role". BBC News, 14 March 2007. Accessed March 15, 2007.
- House Joint Resolution Number 728. Commonwealth of Virginia. Accessed 22 July 2009.
- Associated Press. "Alabama Governor Joins Other States in Apologizing For Role in Slavery". Fox News, May 31, 2007. Accessed 22 July 2009.
- "Livingstone breaks down in tears at slave trade memorial". Daily Mail, 24 August 2007. Accessed 22 July 2009.
- Fears, Darryl. "House Issues An Apology For Slavery". The Washington Post, July 30, 2008, p. A03. Accessed 22 July 2009.
- Agence France-Presse. "Obama praises 'historic' Senate slavery apology". Google News, June 18, 2009. Accessed 22 July 2009.
- "African chiefs urged to apologise for slave trade". The Guardian. November 18, 2009.
- Academic books
- Austen, Ralph (1987). African Economic History: Internal Development and External Dependency. London: James Currey. ISBN 978-0-85255-009-0.
- Chaunu, Pierre (1969). L'expansion européen du XIIIe à XVe siècles. Paris.
- Rodney, Walter (1972). How Europe Underdeveloped Africa. London: Bogle L'Ouverture. ISBN 978-0-9501546-4-0.
- Thornton, John (1998). Africa and Africans in the Making of the Atlantic World, 1400–1800 (Second edition). New York: Cambridge University Press. ISBN 978-0-521-62217-2.
- Academic articles
- Handley, Fiona J.L. (2006). "Back to Africa: Issues of hosting "Roots" tourism in West Africa". African Re-Genesis: Confronting Social Issues in the Diaspora (London: UCL Press): 20–31.
- Osei-Tutu, Brempong (2006). "Contested Monuments: African-Americans and the commoditization of Ghana's slave castles". African Re-Genesis: Confronting Social Issues in the Diaspora (London: UCL Press): 09–19.
- Non-academic sources
Further reading
- Anstey, Roger: The Atlantic Slave Trade and British Abolition, 1760–1810. London: Macmillan, 1975. ISBN 0-333-14846-0.
- Blackburn, Robin (2011). The American Crucible: Slavery, Emancipation and Human Rights. London & New York: Verso. ISBN 978-1-84467-569-2.
- Christopher, Emma (2006). Slave Ship Sailors and Their Captive Cargoes, 1730–1807. Cambridge: Cambridge University Press. ISBN 0-521-67966-4.
- Clarke, Dr. John Henrik: Christopher Columbus and the Afrikan Holocaust: Slavery and the Rise of European Capitalism. Brooklyn, N.Y.: A & B Books, 1992. ISBN 1-881316-14-9.
- Curtin, Philip D: Atlantic Slave Trade. University of Wisconsin Press, 1969.
- Daudin, Guillaume: "Profitability of slave and long distance trading in context: the case of eighteenth century France", Journal of Economic History, 2004.
- Diop, Er. Cheikh Anta: Precolonial Black Africa: A Comparative Study of the Political and Social Systems of Europe and Black Africa. Harold J. Salemson, trans. Westport, Conn.: L. Hill, 1987. ISBN 0-88208-187-X, ISBN 0-88208-188-8.
- Doortmont, Michel R.; Jinna Smit (2007). Sources for the mutual history of Ghana and the Netherlands. An annotated guide to the Dutch archives relating to Ghana and West Africa in the Nationaal Archief, 1593–1960s. Leiden: Brill. ISBN 978-90-04-15850-4.
- Drescher, Seymour: From Slavery to Freedom: Comparative Studies in the Rise and Fall of Atlantic Slavery. London: Macmillan Press, 1999. ISBN 0-333-73748-2.
- Emmer, Pieter C.: The Dutch in the Atlantic Economy, 1580–1880. Trade, Slavery and Emancipation. Variorum Collected Studies Series CS614. Aldershot [u.a.]: Variorum, 1998. ISBN 0-86078-697-8.
- Gleeson, David T. and Simon Lewis. eds. Ambiguous Anniversary: The Bicentennial of the International Slave Trade Bans (University of South Carolina Press; 2012) 207 pages
- Gomez, Michael Angelo: Exchanging Our Country Marks (The Transformation of African Identities in the Colonial and AnteBellum South). Chapel Hill, N.C.: The University of North Carolina Press, 1998. ISBN 0-8078-4694-5.
- Hall, Gwendolyn Midlo: Slavery and African Ethnicities in the Americas: Restoring the Links. Chapel Hill, N.C.: The University of North Carolina Press, 2006. ISBN 0-8078-2973-0.
- Horne, Gerald: The Deepest South: The United States, Brazil, and the African Slave Trade. New York, NY : New York Univ. Press, 2007. ISBN 978-0-8147-3688-3, ISBN 978-0-8147-3689-0.
- James, E. Wyn: "Welsh Ballads and American Slavery", Welsh Journal of Religious History, 2 (2007), pp. 59–86. ISSN 0967-3938.
- Klein, Herbert S.: The Atlantic Slave Trade (2nd ed. 2010)
- Lindsay, Lisa A. "Captives as Commodities: The Transatlantic Slave Trade". Prentice Hall, 2008. ISBN 978-0-13-194215-8
- McMillin, James A. The final victims: foreign slave trade to North America, 1783–1810. (Includes database on CD-ROM) ISBN 978-1-57003-546-3
- Meltzer, Milton: Slavery: A World History. New York: Da Capo Press, 1993. ISBN 0-306-80536-7.
- Northrup, David: The Atlantic Slave Trade (3rd ed. 2010)
- Rediker, Marcus (2007). The Slave Ship: A Human History. New York, NY: Viking Press. ISBN 978-0-670-01823-9.
- Rodney, Walter: How Europe Underdeveloped Africa. Washington, D.C.: Howard University Press; Revised edition, 1981. ISBN 0-88258-096-5.
- Rodriguez, Junius P., ed. Encyclopedia of Emancipation and Abolition in the Transatlantic World. Armonk, N.Y.: M.E. Sharpe, 2007. ISBN 978-0-7656-1257-1.
- Solow, Barbara (ed.). Slavery and the Rise of the Atlantic System. Cambridge: Cambridge University Press, 1991. ISBN 0-521-40090-2.
- Thomas, Hugh: The Slave Trade: The History of the Atlantic Slave Trade 1440–1870. London: Picador, 1997. ISBN 0-330-35437-X.; comprehensive history
- Thornton, John: Africa and Africans in the Making of the Atlantic World, 1400–1800, 2nd ed. Cambridge University Press, 1998. ISBN 0-521-62217-4, ISBN 0-521-62724-9, ISBN 0-521-59370-0, ISBN 0-521-59649-1.
- Williams, Eric (1994) . Capitalism & Slavery. Chapel Hill: University of North Carolina Press. ISBN 0-8078-2175-6.
|Wikimedia Commons has media related to: Slavery|
- Voyages: The Trans-Atlantic Slave Trade Database
- African Holocaust: The legacy of Slavery remembered
- BBC | Africa|Quick guide: The slave trade
- Teaching resources about Slavery and Abolition on blackhistory4schools.com
- International Slavery Museum
- Mémoire St Barth | History of St Barthélemy (archives & history of slavery, slave trade and their abolition), Comité de Liaison et d'Application des Sources Historiques.
- British documents on slave holding and the slave trade, 1788–1793 | http://en.wikipedia.org/wiki/Atlantic_slave_trade | 13 |
35 | Grade 9 through Grade 12 (High School)
Overview and Purpose:
Students will be able to apply percentages to real-life situations by figuring the amount of income tax that will be withheld from a salary. Students will also learn about credit and credit scores: what affects a score, what improves it, and how to make informed decisions about using credit.
The student will be able to:
*calculate how much they will pay in federal, state, and local income tax for a year when they are paid for doing their dream job.
*research salary information on the Internet.
Students will understand the concept of credit.
They will understand the concept of a credit score.
They will learn the importance of having good credit.
They will understand the concept of interest and learn how to analyze it.
Tax percentages for your area
1. Open by asking if any of the students have a credit card. If not, ask if their parents have credit cards and if they understand how they work.
2. Next, write these words on the board: credit, credit card, credit risk, interest, APR, and credit limit. Discuss each term with the class in detail.
3. Explain how credit cards work, including how interest accrues if they do not pay off the balance of the card when it is due (see the worked sketch below).
4. Discuss the differences between credit cards, debit cards, and cash. Explain how each of them works.
5. Ask how banks decide whether they want to offer someone credit.
Begin a discussion about credit scores, how lenders use them, and what makes up an individual's credit score. Explain that a credit score is a number calculated using a number of different variables. The resulting score helps lenders determine how likely a borrower is to pay back a loan or credit card on time. In other words, a score is a snapshot of "credit risk" at a given time.
Ask students if they know which organizations calculate credit scores: is it the banks, the government, or private organizations? (Answer: Private organizations calculate credit scores. One well-known organization is the Fair Isaac Corporation, which produces the "FICO score" -- the most widely used credit score. Other scores include NextGen, VantageScore, and the CE Score.)
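To make the interest discussion in step 3 concrete, here is a minimal sketch (not part of the original lesson) of how an unpaid balance grows under monthly compounding. The $1,000 balance and 18% APR are illustrative assumptions, not figures from the lesson.

```python
# Illustrative only: how an unpaid credit-card balance grows.
# Assumed values (not from the lesson): $1,000 balance, 18% APR.
APR = 0.18
MONTHLY_RATE = APR / 12  # credit-card interest typically compounds monthly

balance = 1000.00
for month in range(1, 13):
    balance *= 1 + MONTHLY_RATE  # interest is added, nothing is paid off
    print(f"Month {month:2d}: ${balance:,.2f}")

# After 12 months the balance is about $1,195.62 -- the cost of
# carrying the debt for a year without making payments.
```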
Have students look up the average salary for their dream job. Explain that they will have to pay taxes on that salary to the local, state, and federal governments. They can either research the tax percentages or you can provide them. Have them calculate how much they will pay in taxes each week, month, and year, and record the amounts in their journals (see the calculation sketch below).
Graph the different amounts and have them draw conclusions about the amount of taxes people pay.
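The core calculation students perform might look like the following sketch. The salary and the flat tax rates are placeholder assumptions, since the actual percentages come from the teacher's area.

```python
# Placeholder rates -- substitute the real percentages for your area.
FEDERAL, STATE, LOCAL = 0.15, 0.05, 0.01

salary = 60000.00  # example "dream job" salary found by a student
total_rate = FEDERAL + STATE + LOCAL

yearly_tax = salary * total_rate
monthly_tax = yearly_tax / 12
weekly_tax = yearly_tax / 52

print(f"Yearly:  ${yearly_tax:,.2f}")   # $12,600.00
print(f"Monthly: ${monthly_tax:,.2f}")  # $1,050.00
print(f"Weekly:  ${weekly_tax:,.2f}")   # $242.31
```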
Homework could include having students calculate and graph the amount paid in taxes for salaries at least $25,000 apart. More advanced students may also want to research the amount of other taxes withheld (Social Security, Workman's Compensation) and analyze how that affects salaries. | http://www.teach-nology.com/teachers/lesson_plans/math/912percentages.html | 13 |
16 | In the early 1970s, oil exploration turned up vast reservoirs of oil and gas under the Bahía de Campeche, located along the southern margin of the Gulf of Mexico, just west of Mexico’s Yucatán Peninsula. Offshore oil drilling continues in that region today, and signs of the activity are visible from space.
Crude oil often contains natural gas. When buried deep underground, the natural gas stays dissolved in the oil due to high pressure. But as the oil nears the surface and pressure decreases, flammable gas (mostly methane) bubbles out. Many oilrig operators try to preserve the gas for use by customers, but depending on the situation, some operators may instead choose to burn it. Sometimes the gas is burned because it is contaminated with mud or other substances. In other cases, there may be no other way to quickly and safely dispose of it.
On September 13, 2009, the Advanced Land Imager (ALI) on NASA’s Earth Observing-1 (EO-1) satellite captured a natural-color image of gas flares and an oil slick in the Bahía de Campeche (top). Sunglint—sunlight reflecting off the ocean surface and back to the satellite—gives the ocean a silver-gray appearance and also illuminates the oil slick, which smoothes the ocean surface. (For more information on oil slicks and sunglint, see Gulf of Mexico Oil Slick Images: Frequently Asked Questions.)
Nearly three years later (July 26, 2012), the Visible Infrared Imaging Radiometer Suite (VIIRS) on the Suomi-NPP satellite captured a nighttime image of city lights and of oil production (bottom). Gas flares appear as extremely bright central spots, surrounded by a circular halo. Electric lights in cities and oil production sites vary in brightness, and do not have a halo.
Methane is a potent greenhouse gas—roughly 23 times as powerful as carbon dioxide. When pure methane is burned completely, the combustion process creates carbon dioxide and water. Unfortunately, gas flares in oil drilling operations rarely burn all of the methane, so some is released into the atmosphere. Because unprocessed natural gas often contains other substances besides methane, flares at oilrigs can produce other compounds such as carbon monoxide and nitrous oxide.
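The paragraph above gives enough to quantify the tradeoff between flaring and venting. Here is a minimal sketch assuming complete combustion and using the article's greenhouse-warming factor of 23 (real flares, as noted, burn incompletely, so the true picture sits between these two cases).

```python
GWP_CH4 = 23                  # article's figure: methane ~23x CO2
CO2_PER_CH4_BURNED = 44 / 16  # CH4 + 2 O2 -> CO2 + 2 H2O; molar masses 16 -> 44

def co2_equivalent_tonnes(tonnes_ch4, burned):
    """CO2-equivalent warming impact of a given mass of methane."""
    factor = CO2_PER_CH4_BURNED if burned else GWP_CH4
    return tonnes_ch4 * factor

print(co2_equivalent_tonnes(1.0, burned=True))   # ~2.75 t CO2e if flared
print(co2_equivalent_tonnes(1.0, burned=False))  # 23 t CO2e if vented
```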
On June 16, 2012, the Global Gas Flaring Reduction (GGFR) public-private partnership released estimates of flared volumes of natural gas in oil operations for 2007 through 2011, based on National Oceanic and Atmospheric Administration satellite data. In 2011, Mexico flared an estimated 2.1 billion cubic meters (bcm), down from the previous year’s estimate of 2.8 bcm. Mexico ranked 15th on the list of the top-20 flaring countries, behind Russia, Nigeria, Iran, Iraq, the United States, Algeria, Kazakhstan, Angola, Saudi Arabia, Venezuela, China, Canada, Libya, and Indonesia.
- Canadian Centre for Energy Information. (2007) Flaring: Questions + Answers. Accessed September 5, 2012.
- GGFR. (2012, June 14) Estimated Flared Volumes from Satellite Data, 2007–2011. Accessed September 5, 2012.
- NOAA National Geophysical Data Center. Global Gas Flaring Estimates. Accessed September 5, 2012.
- U.S. Library of Congress Country Studies. Mexico: Oil. Accessed September 5, 2012.
NASA Earth Observatory images created by Jesse Allen and Robert Simmon, using Advanced Land Imager data from the NASA EO-1 team and VIIRS Day-Night Band data from the Suomi National Polar-orbiting Partnership (Suomi NPP). Suomi NPP is the result of a partnership between NASA, the National Oceanic and Atmospheric Administration, and the Department of Defense. Caption by Michon Scott. | http://www.visibleearth.nasa.gov/IOTD/view.php?id=79153 | 13
30 | Though the U.S. economy had gone into depression six months earlier, the Great Depression may be said to have begun with a catastrophic collapse of stock-market prices on the New York Stock Exchange in October 1929. During the next three years stock prices in the United States continued to fall, until by late 1932 they had dropped to only about 20 percent of their value in 1929. Besides ruining many thousands of individual investors, this precipitous decline in the value of assets greatly strained banks and other financial institutions, particularly those holding stocks in their portfolios. Many banks were consequently forced into insolvency; by 1933, 11,000 of the United States’ 25,000 banks had failed. The failure of so many banks, combined with a general and nationwide loss of confidence in the economy, led to much-reduced levels of spending and demand and hence of production, thus aggravating the downward spiral. The result was drastically falling output and drastically rising unemployment; by 1932, U.S. manufacturing output had fallen to 54 percent of its 1929 level, and unemployment had risen to between 12 and 15 million workers, or 25-30 percent of the work force.
The Great Depression began in the United States but quickly turned into a worldwide economic slump owing to the special and intimate relationships that had been forged between the United States and European economies after World War I. The United States had emerged from the war as the major creditor and financier of postwar Europe, whose national economies had been greatly weakened by the war itself, by war debts, and, in the case of Germany and other defeated nations, by the need to pay war reparations. So once the American economy slumped and the flow of American investment credits to Europe dried up, prosperity tended to collapse there as well. The Depression hit hardest those nations that were most deeply indebted to the United States, i.e., Germany and Great Britain. In Germany, unemployment rose sharply beginning in late 1929, and by early 1932 it had reached 6 million workers, or 25 percent of the work force. Britain was less severely affected, but its industrial and export sectors remained seriously depressed until World War II. Many other countries had been affected by the slump by 1931.
Almost all nations sought to protect their domestic production by imposing tariffs, raising existing ones, and setting quotas on foreign imports. The effect of these restrictive measures was to greatly reduce the volume of international trade: by 1932 the total value of world trade had fallen by more than half as country after country took measures against the importation of foreign goods.
The Great Depression had important consequences in the political sphere. In the United States, economic distress led to the election of the Democrat Franklin D. Roosevelt to the presidency in late 1932. Roosevelt introduced a number of major changes in the structure of the American economy, using increased government regulation and massive public-works projects to promote a recovery. But despite this active intervention, mass unemployment and economic stagnation continued, though on a somewhat reduced scale, with about 15 percent of the work force still unemployed in 1939 at the outbreak of World War II. After that, unemployment dropped rapidly as American factories were flooded with orders from overseas for armaments and munitions. The depression ended completely soon after the United States’ entry into World War II in 1941. In Europe, the Great Depression strengthened extremist forces and lowered the prestige of liberal democracy. In Germany, economic distress directly contributed to Adolf Hitler’s rise to power in 1933. The Nazis’ public-works projects and their rapid expansion of munitions production ended the Depression there by 1936.
At least in part, the Great Depression was caused by underlying weaknesses and imbalances within the U.S. economy that had been obscured by the boom psychology and speculative euphoria of the 1920s. The Depression exposed those weaknesses, as it did the inability of the nation’s political and financial institutions to cope with the vicious downward economic cycle that had set in by 1930. Prior to the Great Depression, governments traditionally took little or no action in times of business downturn, relying instead on impersonal market forces to achieve the necessary economic correction. But market forces alone proved unable to achieve the desired recovery in the early years of the Great Depression, and this painful discovery eventually inspired some fundamental changes in the United States’ economic structure. After the Great Depression, government action, whether in the form of taxation, industrial regulation, public works, social insurance, social-welfare services, or deficit spending, came to assume a principal role in ensuring economic stability in most industrial nations with market economies.
Big Brother had come to stay. | http://kenbaker.wordpress.com/tag/a-level-history/ | 13 |
24 | This page explains what electronegativity is, and how and why it varies around the Periodic Table. It looks at the way that electronegativity differences affect bond type and explains what is meant by polar bonds and polar molecules.
What is electronegativity?
Electronegativity is a measure of the tendency of an atom to attract a bonding pair of electrons.
The Pauling scale is the most commonly used. Fluorine (the most electronegative element) is assigned a value of 4.0, and values range down to caesium and francium which are the least electronegative at 0.7.
What happens if two atoms of equal electronegativity bond together?
Consider a bond between two atoms, A and B. Each atom may be forming other bonds as well as the one shown - but these are irrelevant to the argument.
If the atoms are equally electronegative, both have the same tendency to attract the bonding pair of electrons, and so it will be found on average half way between the two atoms. To get a bond like this, A and B would usually have to be the same atom. You will find this sort of bond in, for example, H2 or Cl2 molecules.
Note: It's important to realise that this is an average picture. The electrons are actually in a molecular orbital, and are moving around all the time within that orbital.
This sort of bond could be thought of as being a "pure" covalent bond - where the electrons are shared evenly between the two atoms.
What happens if B is slightly more electronegative than A?
B will attract the electron pair rather more than A does.
That means that the B end of the bond has more than its fair share of electron density and so becomes slightly negative. At the same time, the A end (rather short of electrons) becomes slightly positive. In the diagram, "δ" (read as "delta") means "slightly" - so δ+ means "slightly positive".
Defining polar bonds
This is described as a polar bond. A polar bond is a covalent bond in which there is a separation of charge between one end and the other - in other words in which one end is slightly positive and the other slightly negative. Examples include most covalent bonds. The hydrogen-chlorine bond in HCl or the hydrogen-oxygen bonds in water are typical.
What happens if B is a lot more electronegative than A?
In this case, the electron pair is dragged right over to B's end of the bond. To all intents and purposes, A has lost control of its electron, and B has complete control over both electrons. Ions have been formed.
A "spectrum" of bonds
The implication of all this is that there is no clear-cut division between covalent and ionic bonds. In a pure covalent bond, the electrons are held on average exactly half way between the atoms. In a polar bond, the electrons have been dragged slightly towards one end.
How far does this dragging have to go before the bond counts as ionic? There is no real answer to that. You normally think of sodium chloride as being a typically ionic solid, but even here the sodium hasn't completely lost control of its electron. Because of the properties of sodium chloride, however, we tend to count it as if it were purely ionic.
Note: Don't worry too much about the exact cut-off point between polar covalent bonds and ionic bonds. At A'level, examples will tend to avoid the grey areas - they will be obviously covalent or obviously ionic. You will, however, be expected to realise that those grey areas exist.
Lithium iodide, on the other hand, would be described as being "ionic with some covalent character". In this case, the pair of electrons hasn't moved entirely over to the iodine end of the bond. Lithium iodide, for example, dissolves in organic solvents like ethanol - not something which ionic substances normally do.
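One way to make the "spectrum" idea concrete is to classify bonds by the difference in Pauling electronegativity. The thresholds below (0.4 and 1.7) are common textbook rules of thumb, not values from this page - which, as noted above, stresses that no sharp cutoff exists.

```python
# Pauling electronegativities (rounded values as commonly tabulated).
EN = {"H": 2.1, "C": 2.5, "N": 3.0, "O": 3.5, "F": 4.0,
      "Na": 0.9, "Cl": 3.0, "Li": 1.0, "I": 2.5}

def bond_type(a, b):
    """Rule-of-thumb classification by electronegativity difference."""
    diff = abs(EN[a] - EN[b])
    if diff < 0.4:
        return "essentially non-polar covalent"
    if diff < 1.7:
        return "polar covalent"
    return "ionic"

print(bond_type("H", "H"))    # non-polar: equal electronegativities
print(bond_type("H", "Cl"))   # polar covalent, as in HCl
print(bond_type("Na", "Cl"))  # ionic: difference = 2.1
print(bond_type("Li", "I"))   # difference = 1.5: near the boundary -
                              # the page's "ionic with some covalent character"
```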
Polar bonds and polar molecules
In a simple molecule like HCl, if the bond is polar, so also is the whole molecule. What about more complicated molecules?
In CCl4, each bond is polar.
Note: Ordinary lines represent bonds in the plane of the screen or paper. Dotted lines represent bonds going away from you into the screen or paper. Wedged lines represent bonds coming out of the screen or paper towards you.
The molecule as a whole, however, isn't polar - in the sense that it doesn't have an end (or a side) which is slightly negative and one which is slightly positive. The whole of the outside of the molecule is somewhat negative, but there is no overall separation of charge from top to bottom, or from left to right.
By contrast, CHCl3 is polar.
The hydrogen at the top of the molecule is less electronegative than carbon and so is slightly positive. This means that the molecule now has a slightly positive "top" and a slightly negative "bottom", and so is overall a polar molecule.
A polar molecule will need to be "lop-sided" in some way.
Patterns of electronegativity in the Periodic Table
The most electronegative element is fluorine. If you remember that fact, everything becomes easy, because electronegativity must always increase towards fluorine in the Periodic Table.
Note: This simplification ignores the noble gases. Historically this is because they were believed not to form bonds - and if they don't form bonds, they can't have an electronegativity value. Even now that we know that some of them do form bonds, data sources still don't quote electronegativity values for them.
Trends in electronegativity across a period
As you go across a period the electronegativity increases. The chart shows electronegativities from sodium to chlorine - you have to ignore argon. It doesn't have an electronegativity, because it doesn't form bonds.
Trends in electronegativity down a group
As you go down a group, electronegativity decreases. (If it increases up to fluorine, it must decrease as you go down.) The chart shows the patterns of electronegativity in Groups 1 and 7.
Explaining the patterns in electronegativity
The attraction that a bonding pair of electrons feels for a particular nucleus depends on:
- the number of protons in the nucleus;
- the distance of the bonding pair from the nucleus;
- the amount of screening by inner electrons.
Note: If you aren't happy about the concept of screening or shielding, it would pay you to read the page on ionisation energies before you go on. The factors influencing ionisation energies are just the same as those influencing electronegativities.
Why does electronegativity increase across a period?
Consider sodium at the beginning of period 3 and chlorine at the end (ignoring the noble gas, argon). Think of sodium chloride as if it were covalently bonded.
Both sodium and chlorine have their bonding electrons in the 3-level. The electron pair is screened from both nuclei by the 1s, 2s and 2p electrons, but the chlorine nucleus has 6 more protons in it. It is no wonder the electron pair gets dragged so far towards the chlorine that ions are formed.
Electronegativity increases across a period because the number of charges on the nucleus increases. That attracts the bonding pair of electrons more strongly.
Why does electronegativity fall as you go down a group?
Think of hydrogen fluoride and hydrogen chloride.
The bonding pair is shielded from the fluorine's nucleus only by the 1s² electrons. In the chlorine case it is shielded by all the 1s²2s²2p⁶ electrons.
In each case there is a net pull from the centre of the fluorine or chlorine of +7. But fluorine has the bonding pair in the 2-level rather than the 3-level as it is in chlorine. If it is closer to the nucleus, the attraction is greater.
As you go down a group, electronegativity decreases because the bonding pair of electrons is increasingly distant from the attraction of the nucleus.
Diagonal relationships in the Periodic Table
What is a diagonal relationship?
At the beginning of periods 2 and 3 of the Periodic Table, there are several cases where an element at the top of one group has some similarities with an element in the next group.
Three examples are shown in the diagram below. Notice that the similarities occur in elements which are diagonal to each other - not side-by-side.
For example, boron is a non-metal with some properties rather like silicon. Unlike the rest of Group 2, beryllium has some properties resembling aluminium. And lithium has some properties which differ from the other elements in Group 1, and in some ways resembles magnesium.
There is said to be a diagonal relationship between these elements.
There are several reasons for this, but each depends on the way atomic properties like electronegativity vary around the Periodic Table.
So we will have a quick look at this with regard to electronegativity - which is probably the simplest to explain.
Explaining the diagonal relationship with regard to electronegativity
Electronegativity increases across the Periodic Table. So, for example, the electronegativities of beryllium and boron are: Be 1.5, B 2.0.
Electronegativity falls as you go down the Periodic Table. So, for example, the electronegativities of boron and aluminium are: B 2.0, Al 1.5.
So, comparing Be and Al, you find the values are (by chance) exactly the same.
The increase from Group 2 to Group 3 is offset by the fall as you go down Group 3 from boron to aluminium.
Something similar happens from lithium (1.0) to magnesium (1.2), and from boron (2.0) to silicon (1.8).
In these cases, the electronegativities aren't exactly the same, but are very close.
Similar electronegativities between the members of these diagonal pairs means that they are likely to form similar types of bonds, and that will affect their chemistry. You may well come across examples of this later on in your course.
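The arithmetic behind these diagonal pairs is easy to check. A small sketch using the rounded values quoted in the text above:

```python
# Rounded Pauling values quoted on this page.
EN = {"Li": 1.0, "Be": 1.5, "B": 2.0,
      "Mg": 1.2, "Al": 1.5, "Si": 1.8}

for top, diagonal in [("Li", "Mg"), ("Be", "Al"), ("B", "Si")]:
    diff = abs(EN[top] - EN[diagonal])
    print(f"{top}-{diagonal}: difference = {diff:.1f}")

# Li-Mg: 0.2, Be-Al: 0.0, B-Si: 0.2 -- close or identical values,
# which is why these diagonal pairs form similar types of bonds.
```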
Warning! As far as I am aware, none of the UK-based A level (or equivalent) syllabuses any longer want the next bit. It used to be on the AQA syllabus, but has been removed from their new syllabus. At the time of writing, it does, however, still appear on at least one overseas A level syllabus (Malta, but there may be others that I'm not aware of). If in doubt, check your syllabus.
Otherwise, ignore the rest of this page. It is an alternative (and, to my mind, more awkward) way of looking at the formation of a polar bond. Reading it unnecessarily just risks confusing you.
The polarising ability of positive ions
What do we mean by "polarising ability"?
In the discussion so far, we've looked at the formation of polar bonds from the point of view of the distortions which occur in a covalent bond if one atom is more electronegative than the other. But you can also look at the formation of polar covalent bonds by imagining that you start from ions.
Solid aluminium chloride is covalent. Imagine instead that it was ionic. It would contain Al³⁺ and Cl⁻ ions.
The aluminium ion is very small and is packed with three positive charges - the "charge density" is therefore very high. That will have a considerable effect on any nearby electrons.
We say that the aluminium ions polarise the chloride ions.
In the case of aluminium chloride, the electron pairs are dragged back towards the aluminium to such an extent that the bonds become covalent. But because the chlorine is more electronegative than aluminium, the electron pairs won't be pulled half way between the two atoms, and so the bond formed will be polar.
Factors affecting polarising ability
Positive ions can have the effect of polarising (electrically distorting) nearby negative ions. The polarising ability depends on the charge density in the positive ion.
Polarising ability increases as the positive ion gets smaller and the number of charges gets larger.
As a negative ion gets bigger, it becomes easier to polarise. For example, in an iodide ion, I⁻, the outer electrons are in the 5-level - relatively distant from the nucleus.
A positive ion would be more effective in attracting a pair of electrons from an iodide ion than the corresponding electrons in, say, a fluoride ion where they are much closer to the nucleus.
Aluminium iodide is covalent because the electron pair is easily dragged away from the iodide ion. On the other hand, aluminium fluoride is ionic because the aluminium ion can't polarise the small fluoride ion sufficiently to form a covalent bond.
© Jim Clark 2000 (last modified March 2013) | http://www.chemguide.co.uk/atoms/bonding/electroneg.html | 13 |
72 | Until the Spanish established Asunción in 1537, economic activity in Paraguay was limited to the subsistence agriculture of the Guaraní Indians. The Spanish, however, found little of economic interest in their colony, which had no precious metals and no sea coasts. The typical feudal Spanish economic system did not dominate colonial Paraguay, although the encomienda system was established. Economic relations were distinguished by the reducciones (reductions or townships) that were established by Jesuit missionaries from the early seventeenth century until the 1760s. The incorporation of Indians into these Jesuit agricultural communes laid the foundation for an agriculture-based economy that survived into the late twentieth century.
Three years after Paraguay overthrew Spanish authority and gained its independence, the country's economy was controlled by the autarchic policies of José Gaspar Rodríguez de Francia (1814- 40), who closed the young nation's borders to virtually all international trade. Landlocked, isolated, and underpopulated, Paraguay structured its economy around a centrally administered agricultural sector, extensive cattle grazing, and inefficient shipbuilding and textile industries. After the demise of Francia, government policies focused on expanding international trade and stimulating economic development. The government built several roads and authorized British construction of a railroad.
The War of the Triple Alliance (1865-70) fundamentally changed the Paraguayan economy. Economic resources were employed in and destroyed by the war effort. Paraguay was occupied by its enemies in 1870; the countryside was in virtual ruin, the labor force was decimated, peasants were pushed into the environs of Asunción from the east and south, and the modernization of the preceding three decades was undone. Sleepy, self-sufficient Paraguay, whose advances in agriculture and quality of life had been the envy of many in the Southern Cone, became the most backward nation in that subregion.
To pay its substantial war debt, Paraguay sold large tracts of land to foreigners, mostly Argentines. These large land sales established the base of the present-day land tenure system, which is characterized by a skewed distribution of land. Unlike most of its neighbors, however, Paraguay's economy was controlled not by a traditional, landed elite, but by foreign companies. Many Paraguayans grew crops and worked as wage laborers on latifundios (large landholdings) typically owned by foreigners.
The late 1800s and the early 1900s saw a slow rebuilding of ports, roads, the railroad, farms, cattle stock, and the labor force. The country was slowly being repopulated by former Brazilian soldiers who had fought in the War of the Triple Alliance, and Paraguay's government encouraged European immigration. Although few in number, British, German, Italian, and Spanish investors and farmers helped modernize the country. Argentine, Brazilian, and British companies in the late 1800s purchased some of Paraguay's best land and started the first large-scale production of agricultural goods for export. One Argentine company, whose owner had purchased 15 percent of the immense Chaco region, processed massive quantities of tannin, which were extracted from the bark of the Chaco's ubiquitous quebracho (break-axe) hardwood. Large quantities of the extract were used by the region's thriving hide industry. Another focus of large-scale agro- processing was the yerba maté bush, whose leaves produced the potent tea that is the national beverage. Tobacco farming also flourished. Beginning in 1904, foreign investment increased as a succession of Liberal Party (Partido Liberal) administrations in Paraguay maintained a staunch laissez-faire policy.
The period of steady economic recovery came to an abrupt halt in 1932 as the country entered another devastating war. This time Paraguay fought Bolivia over possession of the Chaco and rumors of oil deposits. The war ended in 1935 after extensive human losses on both sides, and war veterans led the push for general social reform. During the 1930s and 1940s, the state passed labor laws, implemented agrarian reform, and assumed a role in modernization, influenced in part by the leadership of Juan Domingo Perón in Argentina and Getúlio Dornelles Vargas in Brazil. The 1940 constitution, for example, rejected the laissez-faire approach of previous Liberal governments. Reformist policies, however, did not enjoy a consensus, and by 1947 the country had entered into a civil war, which in turn initiated a period of economic chaos that lasted until the mid-1950s. During this period, Paraguay experienced the worst inflation in all of Latin America, averaging over 100 percent annually in the 1950s.
After centuries of isolation, two devastating regional wars, and a civil war, in 1954 Paraguay entered a period of prolonged political and economic stability under the authoritarian rule of Alfredo Stroessner Mattiauda. Stroessner's economic policies took a middle course between social reform, desarrollismo (developmentalism), and laissez-faire, all in the context of patronage politics. Relative to previous governments, Stroessner took a fairly active role in the economy but reserved productive activities for the local and foreign private sectors. The new government's primary economic task was to arrest the country's rampant and spiraling price instability. In 1955 Stroessner fired the country's finance minister, who was unwilling to implement reforms, and in 1956 accepted an International Monetary Fund (IMF) stabilization plan that abolished export duties, lowered import tariffs, restricted credit, devalued the currency, and implemented strict austerity measures. Although the sacrifice was high, the plan helped bring economic stability to Paraguay. Labor unions retaliated with a major strike in 1958, but the new government, now firmly established, quelled the uprising and forced many labor leaders into exile; most of them remained there in the late 1980s.
By the 1960s, the economy was on a path of modest but steady economic growth. Real GDP growth during the 1960s averaged 4.2 percent a year, under the Latin American average of 5.7 percent but well ahead of the chaotic economy of the two previous decades. As part of the United States-sponsored Alliance for Progress, the government was encouraged to expand its planning apparatus for economic development. With assistance from the Organization of American States (OAS), the Inter-American Development Bank (IDB), and the United Nations Economic Commission for Latin America (ECLA), in 1962 Paraguay established the Technical Planning Secretariat (Secretaría Técnica de Planificación--STP), the major economic planning arm of the government. By 1965 the country had its first National Economic Plan, a two-year plan for 1965-66. This was followed by another two-year plan (1967-68) and then a series of five-year plans. Five-year plans--only general policy statements--were not typically adhered to or achieved and played a minimal role in Paraguay's economic growth and development. Compared with most Latin American countries, Paraguay had a small public sector. Free enterprise dominated the economy, export promotion was favored over import substitution, agriculture continued to dominate industry, and the economy remained generally open to international trade and market mechanisms.
In an economic sense, the 1970s constituted Paraguay's miracle decade. Real GDP grew at over 8 percent a year and exceeded 10 percent from 1976 to 1981--a faster growth rate than in any other economy in Latin America. Four coinciding developments accounted for Paraguay's rapid growth in the 1970s. The first was the completion of the road from Asunción to Puerto Presidente Stroessner and to Brazilian seaports on the Atlantic, ending traditional dependence on access through Argentina and opening the east to many for the first time. The second was the signing of the Treaty of Itaipú with Brazil in 1973. Beyond the obvious economic benefits of such a massive project, Itaipú helped to create a new mood of optimism in Paraguay about what a small, isolated country could attain. The third event was land colonization, which resulted from the availability of land, the existence of economic opportunity, the increased price of crops, and the newly gained accessibility of the eastern border region. Finally, the skyrocketing price of soybeans and cotton led farmers to quadruple the number of hectares planted with these two crops. As the 1970s progressed, soybeans and cotton came to dominate the country's employment, production, and exports.
These developments shared responsibility for establishing thriving economic relations between Paraguay and the world's sixth largest economy, Brazil. Contraband trade became the dominant economic force on the border between the two countries, with Puerto Presidente Stroessner serving as the hub of such smuggling activities. Observers contended that contraband was accepted by many Paraguayan government officials, some of whom were reputed to have benefited handsomely. Many urban dwellers' shelves were stocked with contraband luxury items.
The Paraguayan government's emphasis on industrial activity increased noticeably in the 1970s. One of the most important components of the new industrial push was Law 550, also referred to as Law 550/75 or the Investment Promotion Law for Social and Economic Development. Law 550 opened Paraguay's doors even further to foreign investors by providing income-tax breaks, duty-free capital imports, and additional incentives for companies that invested in priority areas, especially the Chaco. Law 550 was successful. Investments by companies in the United States, Europe, and Japan comprised, according to some estimates, roughly a quarter of new investment. Industrial policies also encouraged the planning of more state-owned enterprises, including ones involved in producing ethanol, cement, and steel.
Much of Paraguay's rural population, however, missed out on the economic development. Back roads remained inadequate, preventing peasants from bringing produce to markets. Social services, such as schools and clinics, were severely lacking. Few people in the countryside had access to potable water, electricity, bank credit, or public transportation. As in other economies that underwent rapid growth, income distribution was believed to have worsened in Paraguay during the 1970s in both relative and absolute terms. By far the greatest problem that the rural population faced, however, was competition for land. Multinational agribusinesses, Brazilian settlers, and waves of Paraguayan colonists rapidly increased the competition for land in the eastern border region. Those peasants who lacked proper titles to the lands they occupied were pushed to more marginal areas; as a result, an increasing number of rural clashes occurred, including some with the government.
In the beginning of the 1980s, the completion of the most important parts of the Itaipú project and the drop in commodity prices ended Paraguay's rapid economic growth. Real GDP declined by 2 percent in 1982 and by 3 percent in 1983. Paraguay's economic performance was also set back by world recession, poor weather conditions, and growing political and economic instability in Brazil and Argentina. Inflation and unemployment increased. Weather conditions improved in 1984, and the economy enjoyed a modest recovery, growing by 3 percent in 1984 and by 4 percent in 1985. But in 1986 one of the century's worst droughts stagnated the economy, permitting no real growth. The economy recovered once again in 1987 and 1988, growing between 3 and 4 percent annually. Despite the economy's general expansion after 1983, however, inflation threatened its modest gains, as did serious fiscal and balance-of-payments deficits and the growing debt.
Source: U.S. Library of Congress | http://countrystudies.us/paraguay/38.htm | 13 |
15 | Ramsar Sites (Wetlands of International Importance)
Global list of internationally recognised wetlands
Ramsar sites are wetlands of international importance, recognised globally due to the Ramsar Convention, which is an international treaty for the conservation and wise use of wetlands. Upon joining, each Contracting Party is obliged to designate at least one wetland site for inclusion in the List of Wetlands of International Importance (often called “Ramsar Sites”). The main objective of this key obligation is “to develop and maintain an international network of wetlands which are important for the conservation of global biological diversity and for sustaining human life through the maintenance of their ecosystem components, processes and benefits/services”.1 There are currently 160 Contracting Parties to the Ramsar Convention. The Convention uses a broad definition of wetlands that includes lakes and rivers, swamps and marshes, wet grasslands and peatlands, oases, estuaries, deltas and tidal flats, near-shore marine areas, mangroves and coral reefs, and human-made sites such as fish ponds, rice paddies, reservoirs, and salt pans.
The Convention on Wetlands of International Importance (Ramsar Convention)
Year of creation: 1971
Global network of over 1800 wetland sites within 160 countries.2
Sites are selected by the Contracting Parties for designation under the Convention by reference to the Criteria for the Identification of Wetlands of International Importance. Sites must meet one or more of the following nine criteria:3
- Contains a representative, rare or unique example of a natural or near-natural wetland type found within the appropriate biogeographic region.
- Supports vulnerable, endangered, or critically endangered species or threatened ecological communities.
- Supports populations of plant and/or animal species important for maintaining the biological diversity of a particular biogeographic region.
- Supports plant and/or animal species at a critical stage in their life cycles, or provides refuge during adverse conditions.
- Regularly supports 20,000 or more waterbirds.
- Regularly supports 1% of the individuals in a population of one species or subspecies of water birds.
- Supports a significant proportion of indigenous fish subspecies, species or families, life-history stages, species interactions and/or populations that are representative of wetland benefits and/or values and thereby contributes to global biological diversity.
- Is an important source of food for fishes, spawning ground, nursery and/or migration path on which fish stocks, either within the wetland or elsewhere, depend.
- Regularly supports 1% of the individuals in a population of one species or subspecies of wetland-dependent non-avian animal species.
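Criteria 5, 6 and 9 are numeric thresholds, so a first-pass eligibility screen against them can be mechanised. The sketch below is an illustration only - the field names and data are hypothetical, and actual designation involves expert judgement across all nine criteria.

```python
# Hypothetical site record; only the quantitative bird criteria are checked.
site = {
    "waterbirds_supported": 24000,        # regular count across all species
    "waterbird_population_share": 0.015,  # fraction of one species' population
}

met = []
if site["waterbirds_supported"] >= 20000:
    met.append(5)  # Criterion 5: regularly supports 20,000 or more waterbirds
if site["waterbird_population_share"] >= 0.01:
    met.append(6)  # Criterion 6: >=1% of a waterbird population

print(f"Quantitative criteria met: {met}")  # [5, 6] -> candidate for listing
```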
Responsibility for the management of Ramsar Sites is at the national level, by the officially appointed Administrative Authority of the Contracting Party. In some cases, Ramsar sites are transboundary in which case more than one Contracting Party is responsible for their conservation and management. Management focuses on the ‘wise use’4 concept adopted by the Convention, and a range of non-binding implementation guidelines have been adopted by Conferences of the Contracting Parties (COPs) to support delivering the wise use of Ramsar Sites and other wetlands.
The Convention outlines several compliance requirements for the Contracting Parties.4 In cases where changes in ecological character of a Ramsar Site have occurred, are occurring, or are likely to occur as a result of technological developments, pollution or other human interference, the Contracting Party is required to report this, without delay, to the Ramsar Secretariat (article 3.2), which is required to report all such notifications to the next Conference of Contracting Parties, which may make recommendations to each Contracting Party concerned (article 8). The Contracting Party can choose to place such Wetlands of International Importance on the so-called “Montreux Record”. Sites on the Montreux Record face concerted action by the Ramsar Secretariat, the Scientific and Technical Review Panel and the Contracting Party concerned to resolve the ecological character changes or likely changes, including through international expert Ramsar Advisory Missions. The boundaries of a designated Ramsar Site may only be restricted or deleted if the Contracting Party determines that it is in its “urgent national interest” (article 2.5), in which case the Party must make adequate compensatory provisions (article 4.2).
A number of guidelines have been prepared to support management of Ramsar sites.5 They include:
- A Conceptual Framework for the wise use of wetlands and the maintenance of their ecological character;
- Strategic Framework and guidelines for the future development of the List of Wetlands of International Importance;
- New Guidelines for management planning for Ramsar sites and other wetlands;
- Guidelines for establishing and strengthening local communities’ and indigenous people’s participation in the management of wetlands;
- Principles and guidelines for wetland restoration;
- Wetland Risk Assessment Framework;
- Guidelines for the management of groundwater to maintain wetland ecological character;
- Principles and guidelines for incorporating wetland issues into Integrated Coastal Zone Management;
- Guidelines for reviewing laws and institutions to promote the conservation and wise use of wetlands;
- An Integrated Framework for the Ramsar Convention’s water-related guidance;
- Guidelines for integrating wetland conservation and wise use into river basin management;
- River basin management: additional guidance and a framework for the analysis of case studies;
- Guidelines for the allocation and management of water for maintaining the ecological functions of wetlands.
Legal and compliance: Protection under national law is not a precondition for designating a site as a Wetland of International Importance, but legal recognition and protection is likely to be present for many Ramsar sites. There are several guidelines, which necessitate ensuring protection of Ramsar sites by the Contracting Parties. The Convention requires Contracting Parties to ‘formulate and implement their planning so as to promote the conservation of the wetlands included in the List, and as far as possible the wise use of wetlands in their territory’ (article 3.1).
As internationally recognised sites, Ramsar sites enjoy a high degree of local, national and international attention. Based on their level of visibility and importance for conservation, they are referred to in a number of safeguard standards of financial institutions, including the International Finance Corporation6, the European Investment Bank7, the Asian Development Bank8, the European Bank for Reconstruction and Development9, and the Inter-American Development Bank10, whereby operations within Ramsar sites are unlikely to be funded. These standards refer to those that have been designated as well as areas officially proposed for protection. Ramsar sites are also referred to in the standards of certification schemes in a range of business sectors such as the Global Tourism Sustainability Criteria (GSTC)11 and the Roundtable on Sustainable Biofuels (RSB)12, as well as standards such as the Climate and Community and Biodiversity Alliance standards13. For example, under RSB standards Ramsar sites are considered no-go areas, and under the GSTC these are areas that the business is required to contribute support towards.
Biodiversity: Wetlands are recognised as internationally important sites for biodiversity conservation. The criteria used for their inclusion on the List include aspects of high irreplaceability and vulnerability of species and habitats, and therefore many of these areas are likely to be of high global biodiversity value. As site-scale areas, they are highly relevant to business for avoiding and mitigating the risks of biodiversity loss and for identifying opportunities associated with biodiversity conservation.
Socio-cultural: Although these sites are identified on the basis of ecological criteria, they are managed under the concept of ‘wise use’, and therefore sustainable human activities and the involvement of local communities in management are to be expected in these areas. The Ramsar Convention has adopted ‘Guidelines for establishing and strengthening local communities’ and indigenous people’s participation in the management of wetlands’ as an annex to Resolution VII.8. The Guidelines address the need for the involvement of local communities and indigenous peoples in a management partnership.
- The Ramsar Sites Information Service delivered by Wetlands International provides access to information on Ramsar sites, including downloadable GIS data of the sites.
- Protected Planet is a tool for visualising, mapping and contributing to information on protected areas, including Ramsar sites. Protected Planet brings together spatial data, descriptive information and images from the World Database on Protected Areas, the Global Biodiversity Information Facility (GBIF), Wikipedia, Panoramio, Flickr, and Google Maps.
- The Integrated Biodiversity Assessment Tool (IBAT) for business provides a visualisation and GIS download tool for protected areas, including Ramsar sites.
- Resolution IX.1 Annex B (2005) Revised Strategic Framework and guidelines for the future development of the List of Wetlands of International Importance.
- An annotated list of Ramsar sites and up-to-date figures can be accessed on the official site of the Ramsar Convention.
- The Criteria for Identifying Wetlands of International Importance. Adopted by the 7th (1999) and 9th (2005) Meetings of the Conference of the Contracting Parties.
- Convention on Wetlands of International Importance especially as Waterfowl Habitat. Ramsar (Iran), 2 February 1971. UN Treaty Series No. 14583. As amended by the Paris Protocol, 3 December 1982, and Regina Amendments, 28 May 1987.
- Guidance documents officially adopted by the meetings of the Conference of the Contracting Parties.
- IFC (2006) Performance Standard 6: Biodiversity Conservation and Sustainable Natural Resource Management. International Finance Corporation, Washington, DC.
- EIB (2009) Statement of Environmental and Social Principles and Standards. European Investment Bank, Luxembourg.
- ADB (2009) Safeguard Policy Statement. Asian Development Bank, Manila.
- EBRD (2008) Environmental and Social Policy. European Bank for Reconstruction and Development, London.
- IDB (2006) Environment and Safeguards Compliance Policy. Inter-American Development Bank, Washington, DC.
- GSTC (2008) The Partnership for Global Sustainable Tourism Criteria. Global Sustainable Tourism Criteria.
- RSB (2009) Annex to the Guidelines for Environmental and Social Impact Assessment, Stakeholder Mapping and Community Consultation Specific to the Biofuels Sector - Ecosystem and Conservation Specialist. Version 1.0. Roundtable on Sustainable Biofuels, Lausanne.
- CCBA (2008) Climate, Community & Biodiversity Project Design Standards. Second Edition. The Climate, Community and Biodiversity Alliance, Arlington, VA.
- The Ramsar ‘Toolkit’ of Wise Use Handbooks.
- The role of the private sector in achieving the goals of the Convention is recognised in Resolution X.12, which provides principles for partnerships between the Ramsar Convention and the business sector
- Resolution X.26 (2008) refers specifically to wetlands and extractive industries due to the particular vulnerability of wetlands to the impacts of extractive industries
- Strategy 1.10 of the Ramsar Strategic Plan 2009-2015 promotes the involvement of the private sector in the conservation and wise use of wetlands
| http://www.biodiversitya-z.org/areas/30 | 13
163 | MONEY AND ITS PURCHASING POWER
Money has entered into almost all our discussion so far. In chapter 3 we saw how the economy evolved from barter to indirect exchange. We saw the patterns of indirect exchange and the types of allocations of income and expenditure that are made in a monetary economy. In chapter 4 we discussed money prices and their formation, analyzed the marginal utility of money, and demonstrated how monetary theory can be subsumed under utility theory by means of the money regression theorem. In chapter 6 we saw how monetary calculation in markets is essential to a complex, developed economy, and we analyzed the structure of post-income and pre-income demands for and supplies of money on the time market. And from chapter 2 on, all our discussion has dealt with a monetary-exchange economy.
The time has come to draw the threads of our analysis of the market together by completing our study of money and of the effects of changes in monetary relations on the economic system. In this chapter we shall continue to conduct the analysis within the framework of the free-market economy.
Money is a commodity that serves as a general medium of exchange; its exchanges therefore permeate the economic system. Like all commodities, it has a market demand and a market supply, although its special situation lends it many unique features. We saw in chapter 4 that its “price” has no unique expression on the market. Other commodities are all expressible in terms of units of money and therefore have uniquely identifiable prices. The money commodity, however, can be expressed only by an array of all the other commodities, i.e., all the goods and services that money can buy on the market. This array has no uniquely expressible unit, and, as we shall see, changes in the array cannot be measured. Yet the concept of the “price” or the “value” of money, or the “purchasing power of the monetary unit,” is no less real and important for all that. It simply must be borne in mind that, as we saw in chapter 4, there is no single “price level” or measurable unit by which the value-array of money can be expressed. This exchange-value of money also takes on peculiar importance because, unlike other commodities, the prime purpose of the money commodity is to be exchanged, now or in the future, for directly consumable or productive commodities.
The total demand for money on the market consists of two parts: the exchange demand for money (by sellers of all other goods that wish to purchase money) and the reservation demand for money (the demand for money to hold by those who already hold it). Because money is a commodity that permeates the market and is continually being supplied and demanded by everyone, and because the proportion which the existing stock of money bears to new production is high, it will be convenient to analyze the supply of and the demand for money in terms of the total demand-stock analysis set forth in chapter 2.
In contrast to other commodities, everyone on the market has both an exchange demand and a reservation demand for money. The exchange demand is his pre-income demand (see chapter 6, above). As a seller of labor, land, capital goods, or consumers’ goods, he must supply these goods and demand money in exchange to obtain a money income. Aside from speculative considerations, the seller of ready-made goods will tend, as we have seen, to have a perfectly inelastic (vertical) supply curve, since he has no reservation uses for the good. But the supply curve of a good for money is equivalent to a (partial) demand curve for money in terms of the good to be supplied. Therefore, the (exchange) demand curves for money in terms of land, capital goods, and consumers’ goods will tend to be perfectly inelastic.
For labor services, the situation is more complicated. Labor, as we have seen, does have a reserved use—satisfying leisure. We have seen that the general supply curve of a labor factor can be either “forward-sloping” or “backward-sloping,” depending upon the individuals’ marginal utility of money and marginal disutility of leisure forgone. In determining labor’s demand curve for money, however, we can be far more certain. To understand why, let us take a hypothetical example of a supply curve of a labor factor (in general use). At a wage rate of five gold grains an hour, 40 hours per week of labor service will be sold. Now suppose that the wage rate is raised to eight gold grains an hour. Some people might work a greater number of hours because they have a greater monetary inducement to sacrifice leisure for labor. They might work 50 hours per week. Others may decide that the increased income permits them to sacrifice some money and take some of the increased earnings in greater leisure. They might work 30 hours. The first would represent a “forward-sloping,” the latter a “backward-sloping,” supply curve of labor in this price range. But both would have one thing in common. Let us multiply hours by wage rate in each case, to arrive at the total money income of the laborers in the various situations. In the original case, a laborer earned 40 times 5 or 200 gold grains per week. The man with a backward-sloping supply curve will earn 30 times 8 or 240 gold grains a week. The one with a forward-sloping supply curve will earn 50 times 8 or 400 gold grains per week. In both cases, the man earns more money at the higher wage rate.
This will always be true. In the first case, it is obvious, for the higher wage rate induces the man to sell more labor. But it is true in the latter case as well. For the higher money income permits a man to gratify his desires for more leisure as well, precisely because he is getting an increased money income. Therefore, a man’s backward-sloping supply curve will never be “backward” enough to make him earn less money at higher wage rates.
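The arithmetic of this example can be checked directly. The following is a minimal Python sketch, using only the hypothetical wage rates and hours from the example above; the labels and print format are illustrative:

```python
# Weekly money income under the hypothetical wage change in the text,
# for both a forward-sloping and a backward-sloping labor supply response.
cases = {
    "original, 5 grains/hr":         (5, 40),  # (wage rate, hours worked)
    "forward-sloping, 8 grains/hr":  (8, 50),  # more hours at the higher wage
    "backward-sloping, 8 grains/hr": (8, 30),  # fewer hours at the higher wage
}

for label, (wage, hours) in cases.items():
    print(f"{label}: {wage * hours} gold grains per week")

# original, 5 grains/hr: 200 gold grains per week
# forward-sloping, 8 grains/hr: 400 gold grains per week
# backward-sloping, 8 grains/hr: 240 gold grains per week
```

In both responses, weekly income at the higher wage (400 or 240 grains) exceeds the original 200 grains.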
Thus, a man will always earn more money at a higher wage rate, less money at a lower. But what is earning money but another name for buying money? And that is precisely what is done. People buy money by selling goods and services that they possess or can create. We are now attempting to arrive at the demand schedule for money in relation to various alternative purchasing powers or “exchange-values” of money. A lower exchange-value of money is equivalent to higher goods-prices in terms of money. Conversely, a higher exchange-value of money is equivalent to lower prices of goods. In the labor market, a higher exchange-value of money is translated into lower wage rates, and a lower exchange-value of money into higher wage rates.
Hence, on the labor market, our law may be translated into the following terms: The higher the exchange-value of money, the lower the quantity of money demanded; the lower the exchange-value of money, the higher the quantity of money demanded (i.e., the lower the wage rate, the less money earned; the higher the wage rate, the more money earned). Therefore, on the labor market, the demand-for-money schedule is not vertical, but falling, when the exchange-value of money increases, as in the case of any demand curve.
Adding the vertical demand curves for money in the other exchange markets to the falling demand curve in the labor market, we arrive at a falling exchange-demand curve for money.
More important, because more volatile, in the total demand for money on the market is the reservation demand to hold money. This is everyone’s post-income demand. After everyone has acquired his income, he must decide, as we have seen, between the allocation of his money assets in three directions: consumption spending, investment spending, and addition to his cash balance (“net hoarding”). Furthermore, he has the additional choice of subtraction from his cash balance (“net dishoarding”). How much he decides to retain in his cash balance is uniquely determined by the marginal utility of money in his cash balance on his value scale. Until now we have discussed at length the sources of the utilities and demands for consumers’ goods and for producers’ goods. We have now to look at the remaining good: money in the cash balance, its utility and demand.
Before discussing the sources of the demand for a cash balance, however, we may determine the shape of the reservation (or “cash balance”) demand curve for money. Let us suppose that a man’s marginal utilities are such that he wishes to have 10 ounces of money held in his cash balance over a certain period. Suppose now that the exchange-value of money, i.e., the purchasing power of a monetary unit, increases, other things being equal. This means that his 10 gold ounces accomplish more work than they did before the change in the PPM (purchasing power of the monetary unit). As a consequence, he will tend to remove part of the 10 ounces from his cash balance and spend it on goods, the prices of which have now fallen. Therefore, the higher the PPM (the exchange-value of money), the lower the quantity of money demanded in the cash balance. Conversely, a lower PPM will mean that the previous cash balance is worth less in real terms than it was before, while the higher prices of goods discourage their purchase. As a result, the lower the PPM, the higher the quantity of money demanded in the cash balance.
As a result, the reservation demand curve for money in the cash balance falls as the exchange-value of money increases. This falling demand curve, added to the falling exchange-demand curve for money, yields the market’s total demand curve for money—also falling in the familiar fashion for every commodity.
There is a third demand curve for the money commodity that deserves mention. This is the demand for nonmonetary uses of the monetary metal. This will be relatively unimportant in the advanced monetary economy, but it will exist nevertheless. In the case of gold, this will mean either uses in consumption, as for ornaments, or productive uses, as for industrial purposes. At any rate, this demand curve also falls as the PPM increases. As the “price” of money (PPM) increases, more goods can be obtained through expenditure of a unit of money; as a result, the opportunity-cost in using gold for nonmonetary purposes increases, and less is demanded for that purpose. Conversely, as the PPM falls, there is more incentive to use gold for its direct use. This demand curve is added to the total demand curve for money, to obtain the total demand curve for the money commodity.
At any one time there is a given total stock of the money commodity. This stock will, at any time, be owned by someone. It is therefore dangerously misleading to adopt the custom of American economists since Irving Fisher’s day of treating money as somehow “circulating,” or worse still, as divided into “circulating money” and “idle money.” This concept conjures up the image of the former as moving somewhere at all times, while the latter sits idly in “hoards.” This is a grave error. There is, actually, no such thing as “circulation,” and there is no mysterious arena where money “moves.” At any one time all the money is owned by someone, i.e., rests in someone’s cash balance. Whatever the stock of money, therefore, people’s actions must bring it into accord with the total demand for money to hold, i.e., the total demand for money that we have just discussed. For even pre-income money acquired in exchange must be held at least momentarily in one’s cash balance before being transferred to someone else’s balance. All total demand is therefore to hold, and this is in accord with our analysis of total demand in chapter 2.
Total stock must therefore be brought into agreement, on the market, with the total quantity of money demanded. The diagram of this situation is shown in Figure 74.
On the vertical axis is the PPM, increasing upward. On the horizontal axis is the quantity of money, increasing rightwards. De is the aggregate exchange-demand curve for money, falling and inelastic. Dr is the reservation or cash-balance demand for money. Dt is the total demand for money to hold (the demand for nonmonetary gold being omitted for purposes of convenience). Somewhere intersecting the Dt curve is the SS vertical line—the total stock of money in the community—given at quantity 0S.
The intersection of the latter two curves determines the equilibrium point, A, for the exchange-value of money in the community. The exchange-value, or PPM, will be set at 0B.
Suppose now that the PPM is slightly higher than 0B. The demand for money at that point will be less than the stock. People will become unwilling to hold money at that exchange-value and will be anxious to sell it for other goods. These sales will raise the prices of goods and lower the PPM, until the equilibrium point is reached. On the other hand, suppose that the PPM is lower than 0B. In that case, more people will demand money, in exchange or in reservation, than there is money stock available. The consequent excess of demand over supply will raise the PPM again to 0B.
The purchasing power of money is therefore determined by two factors: the total demand schedule for money to hold and the stock of money in existence. It is easy to see on a diagram what happens when either of these determining elements changes. Thus, suppose that the schedule of total demand increases (shifts to the right). Then (see Figure 75) the total-demand-for-money curve has shifted from DtDt to Dt'Dt'. At the previous equilibrium PPM point, A, the demand for money now exceeds the stock available by AE. The bids push the PPM upwards until it reaches the equilibrium point C. The converse will be true for a shift of the total demand curve leftward—a decline in the total demand schedule. Then, the PPM will fall accordingly.
The effect of a change in the total stock, the demand curve remaining constant, is shown in Figure 76. Total quantity of stock increases from 0S to 0S'. At the new stock level there is an excess of stock, AF, over the total demand for money. Money will be sold at a lower PPM to induce people to hold it, and the PPM will fall until it reaches a new equilibrium point G. Conversely, if the stock of money is decreased, there will be an excess of demand for money at the existing PPM, and the PPM will rise until the new equilibrium point is reached.
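The adjustment process in these diagrams can also be illustrated numerically. The sketch below is not from the text: the demand function, the stock, and the adjustment step are assumed purely for illustration, with total demand falling in the PPM as the analysis requires:

```python
# Illustrative adjustment of the PPM toward equilibrium.
# demand(ppm) is an assumed total-demand-for-money schedule, falling in the PPM.
def demand(ppm):
    return 1000.0 / ppm

stock = 100.0   # the vertical stock line SS
ppm = 5.0       # an arbitrary starting purchasing power of money

for _ in range(200):
    excess_demand = demand(ppm) - stock
    # Excess demand bids the PPM up; excess stock pushes it down.
    ppm += 0.01 * excess_demand

print(round(ppm, 3))  # 10.0, the PPM at which quantity demanded equals the stock
```

Whatever the starting point, the mechanism described above drives the PPM to the level at which the quantity of money demanded equals the stock.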
The effect of the quantity of money on its exchange-value is thus simply set forth in our analysis and diagrams.
The absurdity of classifying monetary theories into mutually exclusive divisions (such as “supply and demand theory,” “quantity theory,” “cash balance theory,” “commodity theory,” “income and expenditure theory”) should now be evident. For all these elements are found in this analysis. Money is a commodity; its supply or quantity is important in determining its exchange-value; demand for money for the cash balance is also important for this purpose; and the analysis can be applied to income and expenditure situations.
In the case of consumers’ goods, we do not go behind their subjective utilities on people’s value scales to investigate why they were preferred; economics must stop once the ranking has been made. In the case of money, however, we are confronted with a different problem. For the utility of money (setting aside the nonmonetary use of the money commodity) depends solely on its prospective use as the general medium of exchange. Hence the subjective utility of money is dependent on the objective exchange-value of money, and we must pursue our analysis of the demand for money further than would otherwise be required. The diagrams above in which we connected the demand for money and its PPM are therefore particularly appropriate. For other goods, demand in the market is a means of routing commodities into the hands of their consumers. For money, on the other hand, the “price” of money is precisely the variable on which the demand schedule depends and to which almost the whole of the demand for money is keyed. To put it in another way: without a price, or an objective exchange-value, any other good would be snapped up as a welcome free gift; but money, without a price, would not be used at all, since its entire use consists in its command of other goods on the market. The sole use of money is to be exchanged for goods, and if it had no price and therefore no exchange-value, it could not be exchanged and would no longer be used.
We are now on the threshold of a great economic law, a truth that can hardly be overemphasized, considering the harm its neglect has caused throughout history. An increase in the supply of a producers’ good increases, ceteris paribus, the supply of a consumers’ good. An increase in the supply of a consumers’ good (when there has been no decrease in the supply of another good) is demonstrably a clear social benefit; for someone’s “real income” has increased and no one’s has decreased.
Money, on the contrary, is solely useful for exchange purposes. Money, per se, cannot be consumed and cannot be used directly as a producers’ good in the productive process. Money per se is therefore unproductive; it is dead stock and produces nothing. Land or capital is always in the form of some specific good, some specific productive instrument. Money always remains in someone’s cash balance.
Goods are useful and scarce, and any increment in goods is a social benefit. But money is useful not directly, but only in exchanges. And we have just seen that as the stock of money in society changes, the objective exchange-value of money changes inversely (though not necessarily proportionally) until the money relation is again in equilibrium. When there is less money, the exchange-value of the monetary unit rises; when there is more money, the exchange-value of the monetary unit falls. We conclude that there is no such thing as “too little” or “too much” money, that, whatever the social money stock, the benefits of money are always utilized to the maximum extent. An increase in the supply of money confers no social benefit whatever; it simply benefits some at the expense of others, as will be detailed further below. Similarly, a decrease in the money stock involves no social loss. For money is used only for its purchasing power in exchange, and an increase in the money stock simply dilutes the purchasing power of each monetary unit. Conversely, a fall in the money stock increases the purchasing power of each unit.
David Hume’s famous example provides a highly oversimplified view of the effect of changes in the stock of money, but in the present context it is a valid illustration of the absurdity of the belief that an increased money supply can confer a social benefit or relieve any economic scarcity. Consider the magical situation where every man awakens one morning to find that his monetary assets have doubled. Has the wealth, or the real income, of society doubled? Certainly not. In fact, the real income—the actual goods and services supplied—remains unchanged. What has changed is simply the monetary unit, which has been diluted, and the purchasing power of the monetary unit will fall enough (i.e., prices of goods will rise) to bring the new money relation into equilibrium.
One of the most important economic laws, therefore, is: Every supply of money is always utilized to its maximum extent, and hence no social utility can be conferred by increasing the supply of money.
Some writers have inferred from this law that any factors devoted to gold mining are being used unproductively, because an increased supply of money does not confer a social benefit. They deduce from this that the government should restrict the amount of gold mining. These critics fail to realize, however, that gold, the money-commodity, is used not only as money but also for nonmonetary purposes, either in consumption or in production. Hence, an increase in the supply of gold, although conferring no monetary benefit, does confer a social benefit by increasing the supply of gold for direct use.
A. Money in the ERE and in the Market
It is true, as we have said, that the only use for money is in exchange. From this, however, it must not be inferred, as some writers have done, that this exchange must be immediate. Indeed, the reason that a reservation demand for money exists and cash balances are kept is that the individual is keeping his money in reserve for future exchanges. That is the function of a cash balance—to wait for a propitious time to make an exchange.
Suppose the ERE has been established. In such a world of certainty, there would be no risk of loss in investment and no need to keep cash balances on hand in case an emergency for consumer spending should arise. Everyone would therefore allocate his money stock fully, to the purchase of either present goods or future goods, in accordance with his time preferences. No one would keep his money idle in a cash balance. Knowing that he will want to spend a certain amount of money on consumption in six months’ time, a man will lend his money out for that period to be returned at precisely the time it is to be spent. But if no one is willing to keep a cash balance longer than instantaneously, there will be no money held and no use for a money stock. Money, in short, would either be useless or very nearly so in the world of certainty.
In the real world of uncertainty, as contrasted to the ERE, even “idle” money kept in a cash balance performs a use for its owner. Indeed, if it did not perform such a use, it would not be kept in his cash balance. Its uses are based precisely on the fact that the individual is not certain on what he will spend his money or of the precise time that he will spend it in the future.
Economists have attempted mechanically to reduce the demand for money to various sources. There is no such mechanical determination, however. Each individual decides for himself by his own standards his whole demand for cash balances, and we can only trace various influences which different catallactic events may have had on demand.
One of the most obvious influences on the demand for money is expectation of future changes in the exchange-value of money. Thus, suppose that, at a certain point in the future, the PPM of money is expected to drop rapidly. How the demand-for-money schedule now reacts depends on the number of people who hold this expectation and the strength with which they hold it. It also depends on the distance in the future at which the change is expected to take place. The further away in time any economic event, the more its impact will be discounted in the present by the interest rate. Whatever the degree of impact, however, an expected future fall in the PPM will tend to lower the PPM now. For an expected fall in the PPM means that present units of money are worth more than they will be in the future, in which case there will be a fall in the demand-for-money schedule as people tend to spend more money now than at the future date. A general expectation of an imminent fall in the PPM will lower the demand schedule for money now and thus tend to bring about the fall at the present moment.
Conversely, an expectation of a rise in the PPM in the near future will tend to raise the demand-for-money schedule as people decide to “hoard” (add money to their cash balance) in expectation of a future rise in the exchange-value of a unit of their money. The result will be a present rise in the PPM.
An expected fall in the PPM in the future will therefore lower the PPM now, and an expected rise will lead to a rise now. The speculative demand for money functions in the same manner as the speculative demand for any good. An anticipation of a future point speeds the adjustment of the economy toward that future point. Just as the speculative demand for a good speeded adjustment to an equilibrium position, so the anticipation of a change in the PPM speeds the market adjustment toward that position. Just as in the case of any good, furthermore, errors in this speculative anticipation are “self-correcting.” Many writers believe that in the case of money there is no such self-correction. They assert that while there may be a “real” or underlying demand for goods, money is not consumed and therefore has no such underlying demand. The PPM and the demand for money, they declare, can be explained only as a perpetual and rather meaningless cat-and-mouse race in which everyone is simply trying to anticipate everyone else’s anticipations.
There is, however, a “real” or underlying demand for money. Money may not be physically consumed, but it is used, and therefore it has utility in a cash balance. Such utility amounts to more than speculation on a rise in the PPM. This is demonstrated by the fact that people do hold cash even when they anticipate a fall in the PPM. Such holdings may be reduced, but they still exist, and as we have seen, this must be so in an uncertain world. In fact, without willingness to hold cash, there could be no monetary-exchange economy whatever.
The speculative demand therefore anticipates the underlying nonspeculative demands, whatever their source or inspiration. Suppose, then, that there is a general anticipation of a rise in the PPM (a fall in prices) not reflected in underlying supply and demand. It is true that, at first, this general anticipation raises, ceteris paribus, the demand for money and the PPM. But this situation does not last. For now that a pseudo “equilibrium” has been reached, the speculative anticipators, who did not “really” have an increased demand for money, sell their money (buy goods) to reap their gains. But this means that the underlying demand comes to the fore, and this is less than the money stock at that PPM. The pressure of spending then lowers the PPM again to the true equilibrium point. This may be diagramed as in Figure 77.
Money stock is 0S; the true or underlying money demand is DD, with true equilibrium point at A. Now suppose that the people on the market erroneously anticipate that true demand will be such in the near future that the PPM will be raised to 0E. The total demand curve for money then shifts to DsDs, the new total demand curve including the speculative demand. The PPM does shift to 0E as predicted. But now the speculators move to cash in their gain, since their true demand for money really reflects DD rather than DsDs. At the new price 0E, there is in fact an excess of money stock over quantity demanded, amounting to CF. Sellers rush to sell their stock of money and buy goods, and the PPM falls again to equilibrium. Hence, in the field of money as well as in that of specific goods, speculative anticipations are self-correcting, not “self-fulfilling.” They speed the market process of adjustment.
Long-run influences on the demand for money in a progressing economy will tend to be manifold, and in both directions. On the one hand, an advancing economy provides ever more occasions for new exchanges as more and more commodities are offered on the market and as the number of stages of production increases. These greater opportunities tend greatly to increase the demand-for-money schedule. If an economy deteriorates, fewer opportunities for exchange exist, and the demand for money from this source will fall.
The major long-run factor counteracting this tendency and tending toward a fall in the demand for money is the growth of the clearing system. Clearing is a device by which money is economized and performs the function of a medium of exchange without being physically present in the exchange.
A simplified form of clearing may occur between two people. For example, A may buy a watch from B for three gold ounces; at the same time, B buys a pair of shoes from A for one gold ounce. Instead of two transfers of money being made, and a total of four gold ounces changing hands, they decide to perform a clearing operation. A pays B two ounces of money, and they exchange the watch and the shoes. Thus, when a clearing is made, and only the net amount of money is actually transferred, all parties can engage in the same transactions at the same prices, but using far less cash. Their demand for cash tends to fall.
There is obviously little scope for clearing, however, as long as all transactions are cash transactions. For then people have to exchange one another’s goods at the same time. But the scope for clearing is vastly increased when credit transactions come into play. These credits may be quite short-term. Thus, suppose that A and B deal with each other quite frequently during a year or a month. Suppose they agree not to pay each other immediately in cash, but to give each other credit until the end of each month. Then B may buy shoes from A on one day, and A may buy a watch from B on another. At the end of the period, the debts are canceled and cleared, and the net debtor pays one lump sum to the net creditor.
Once credit enters the picture, the clearing system can be extended to as many individuals as find it convenient. The more people engage in clearing operations (often in places called “clearinghouses”) the more cancellations there will be, and the more money will be economized. At the end of the week, for example, there may be five people engaged in clearing, and A may owe B ten ounces, B owe C ten ounces, C owe D, etc., and finally E may owe A ten ounces. In such a case, 50 ounces’ worth of debt transactions and potential cash transactions are settled without a single ounce of cash being used.
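The economizing effect can be made concrete with a small netting computation. This Python sketch uses the debts from the five-person example above; the netting logic is the standard clearinghouse cancellation the text describes:

```python
from collections import defaultdict

# Debts from the five-person example: (debtor, creditor, gold ounces).
debts = [("A", "B", 10), ("B", "C", 10), ("C", "D", 10),
         ("D", "E", 10), ("E", "A", 10)]

net = defaultdict(int)  # positive = net creditor, negative = net debtor
for debtor, creditor, amount in debts:
    net[debtor] -= amount
    net[creditor] += amount

gross = sum(amount for _, _, amount in debts)
cash_needed = sum(balance for balance in net.values() if balance > 0)

print(gross)        # 50 ounces of obligations
print(cash_needed)  # 0 ounces of cash required to settle
```

Because the debts form a closed circle, every participant's net balance is zero and the fifty ounces of obligations settle without any cash changing hands; with non-circular debts, only the net balances would have to be paid.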
Clearing, then, is a process of reciprocal cancellations of money debts. It permits a huge quantity of monetary exchanges without actual possession and transfer of money, thereby greatly reducing the demand for money. Clearing, however, cannot be all-encompassing, for there must be some physical money which could be used to settle the transaction, and there must be physical money to settle when there is no 100-percent cancellation (which rarely occurs).
A popular fallacy rejects the concept of “demand for money” because it is allegedly always unlimited. This idea misconceives the very nature of demand and confuses money with wealth or income. It is based on the notion that “people want as much money as they can get.” In the first place, this is true for all goods. People would like to have far more goods than they can procure now. But demand on the market does not refer to all possible entries on people’s value scales; it refers to effective demand, to desires made effective by being “demanded,” i.e., by the fact that something else is “supplied” for it. Or else it is reservation demand, which takes the form of holding back the good from being sold. Clearly, effective demand for money is not and cannot be unlimited; it is limited by the appraised value of the goods a person can sell in exchange and by the amount of that money which the individual wants to spend on goods rather than keep in his cash balance.
Furthermore, it is, of course, not “money” per se that he wants and demands, but money for its purchasing power, or “real” money, money in some way expressed in terms of what it will purchase. (This purchasing power of money, as we shall see below, cannot be measured.) More money does him no good if its purchasing power for goods is correspondingly diluted.
We have been discussing money, and shall continue to do so in the current section, by comparing equilibrium positions, and not yet by tracing step by step how the change from one position to another comes about. We shall soon see that in the case of the price of money, as contrasted with all other prices, the very path toward equilibrium necessarily introduces changes that will change the equilibrium point. This will have important theoretical consequences. We may still talk, however, as if money is “neutral,” i.e., does not lead to such changes, because this assumption is perfectly competent to deal with the problems analyzed so far. This is true, in essence, because we are able to use a general concept of the “purchasing power of money” without trying to define it concretely in terms of specific arrays of goods. Since the concept of the PPM is relevant and important even though its specific content changes and cannot be measured, we are justified in assuming that money is neutral as long as we do not need a more precise concept of the PPM.
We have seen how changes in the money relation change the PPM. In the determination of the interest rate, we must now modify our earlier discussion in chapter 6 to take account of allocating one’s money stock by adding to or subtracting from one’s cash balance. A man may allocate his money to consumption, investment, or addition to his cash balance. His time preferences govern the proportion which an individual devotes to present and to future goods, i.e., to consumption and to investment. Now suppose a man’s demand-for-money schedule increases, and he therefore decides to allocate a proportion of his money income to increasing his cash balance. There is no reason to suppose that this increase affects the consumption/investment proportion at all. It could, but if so, it would mean a change in his time preference schedule as well as in his demand for money.
There is thus no reason why a change in the demand for money should affect the interest rate one iota. An increase in the demand for money need not raise the interest rate, nor need a decline lower it, any more than the opposite. In fact, there is no causal connection between the two; one is determined by the valuations for money, and the other by valuations for time preference.
Let us return to the section in chapter 6 on Time Preference and the Individual’s Money Stock. Did we not see there that an increase in an individual’s money stock lowers the effective time-preference rate along the time-preference schedule, and conversely that a decrease raises the time-preference rate? Why does this not apply here? Simply because we were dealing with each individual’s money stock and assuming that the “real” exchange-value of each unit of money remained the same. His time-preference schedule relates to “real” monetary units, not simply to money itself. If the social stock of money changes or if the demand for money changes, the objective exchange-value of a monetary unit (the PPM) will change also. If the PPM falls, then more money in the hands of an individual may not necessarily lower the time-preference rate on his schedule, for the more money may only just compensate him for the fall in the PPM, and his “real money stock” may therefore be the same as before. This again demonstrates that the money relation is neutral to time preference and the pure rate of interest.
An increased demand for money, then, tends to lower prices all around without changing time preference or the pure rate of interest. Thus, suppose total social income is 100, with 70 allocated to investment and 30 to consumption. The demand for money increases, so that people decide to hoard a total of 20. Expenditure will now be 80 instead of 100, 20 being added to cash balances. Income in the next period will be only 80, since expenditures in one period result in the identical income to be allocated to the next period. If time preferences remain the same, then the proportion of investment to consumption in the society will remain roughly the same, i.e., 56 invested and 24 consumed. Prices and nominal money values and incomes fall all along the line, and we are left with the same capital structure, the same real income, the same interest rate, etc. The only things that have changed are nominal prices, which have fallen, and the proportion of total cash balances to money income, which has increased.
A decreased demand for money will have the reverse effect. Dishoarding will raise expenditure, raise prices, and, ceteris paribus, maintain the real income and capital structure intact. The only other change is a lower proportion of cash balances to money income.
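The arithmetic of the two preceding paragraphs can be set out explicitly. The sketch below uses only the figures given above (income 100, 70 invested, 30 consumed, 20 hoarded) and the simplification already stated, that expenditures in one period become the income of the next:

```python
# Effect of increased hoarding on nominal income, time preferences unchanged.
income = 100           # total social money income
hoarded = 20           # net addition to cash balances
expenditure = income - hoarded         # 80, spent on investment and consumption

# The unchanged time-preference proportions: 70% invested, 30% consumed.
investment = expenditure * 70 // 100   # 56
consumption = expenditure * 30 // 100  # 24

# Expenditures in one period become the identical income of the next period.
next_income = expenditure              # 80

print(investment, consumption, next_income)  # 56 24 80
```

Nominal income falls from 100 to 80 while the investment/consumption proportion is unchanged; only prices and the ratio of cash balances to money income have moved.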
The only necessary result, then, of a change in the demand-for-money schedule is precisely a change in the same direction of the proportion of total cash balances to total money income and in the real value of cash balances. Given the stock of money, an increased scramble for cash will simply lower money incomes until the desired increase in real cash balances has been attained.
If the demand for money falls, the reverse movement occurs. The desire to reduce cash balances causes an increase in money income. Total cash remains the same, but its proportion to incomes, as well as its real value, declines.
Cf. Edwin Cannan, “The Application of the Theoretical Analysis of Supply and Demand to Units of Currency” in F.A. Lutz and L.W. Mints, eds., Readings in Monetary Theory (Philadelphia: Blakiston, 1951), pp. 3–12, and Cannan, Money (6th ed.; London: Staples Press, 1929), pp. 10–19, 65–78.
From this point on, this nonmonetary demand is included, for convenience, in the “total demand for money.”
Cf. Irving Fisher, The Purchasing Power of Money (2nd ed.; New York: Macmillan & Co., 1913).
A typical such classification can be found in Lester V. Chandler, An Introduction to Monetary Theory (New York: Harper & Bros., 1940).
See Mises, Theory of Money and Credit, p. 98. The entire volume is indispensable for the analysis of money. Also see Mises, Human Action, chap. xvii and chap. xx.
See chapter 12 below for a discussion of the concept of social benefit or social utility.
J.M. Keynes’ Treatise on Money (New York: Harcourt, Brace, 1930) is a classic example of this type of analysis.
On the clearing system, see Mises, Theory of Money and Credit, pp. 281–86.
Since no one can receive a money income unless someone else makes a money expenditure on his services. (See chapter 3 above.)
Strictly, the ceteris paribus condition will tend to be violated. An increased demand for money tends to lower money prices and will therefore lower money costs of gold mining. This will stimulate gold mining production until the interest return on mining is again the same as in other industries. Thus, the increased demand for money will also call forth new money to meet the demand. A decreased demand for money will raise money costs of gold mining and at least lower the rate of new production. It will not actually decrease the total money stock unless the new production rate falls below the wear-and-tear rate. Cf. Jacques Rueff, “The Fallacies of Lord Keynes’ General Theory” in Henry Hazlitt, ed., The Critics of Keynesian Economics (Princeton, N.J.: D. Van Nostrand, 1960), pp. 238–63. | http://mises.org/rothbard/mes/chap11a.asp | 13 |
15 | The colonies suffered a constant shortage of currency with which to conduct trade. There were no gold or silver mines, and currency could only be obtained through trade as regulated by Great Britain. Many of the colonies felt they had no alternative to printing their own paper money in the form of Bills of Credit. But because there were no common regulations and in fact no standard value on which to base the notes, confusion ensued. The notes were issued by land banks, or loan offices, which based their value on mortgaged land. Some notes paid interest, others did not; some could be used only for purchases and not to repay debt; some were issued only for public debts and could not be used in private transactions. There was no standard value common to all of the colonies. British merchant-creditors were very uncomfortable with this system, not only because of its obvious complexity, but because of the rapid depreciation of the notes due to regular fluctuations in the colonial economy. On September 1, 1764, Parliament passed the Currency Act, effectively assuming control of the colonial currency system. The act prohibited the issue of any new bills and the reissue of existing currency. Parliament favored a "hard currency" system based on the pound sterling, but was not inclined to regulate the colonial bills; rather, it simply abolished them. The colonies protested vehemently. They already suffered a trade deficit with Great Britain and argued that the shortage of hard capital would further exacerbate the situation. Another provision of the Currency Act established what amounted to a "superior" Vice-admiralty court, at the call of naval commanders who wished to assure that persons suspected of smuggling or other violations of the customs laws would receive a hearing favorable to the British, and not the colonial, interests. | http://www.ushistory.org/Declaration/related/currencyact.htm | 13
25 | Vaccines produce their protective effect by inducing cell-mediated immunity and serum antibodies, which can be demonstrated by their detection in the serum. Immunity can be either passive or active. Active immunity can be induced by receiving a vaccine, while passive immunity can be acquired through receipt of immunoglobulin. Passive immunity is short-lived.
Generally speaking, the priority groups for vaccines are the most vulnerable populations; vaccines are recommended for the youngest age group at risk of developing the disease, hence the need to protect infants before they are exposed to the disease. Infants born prematurely, regardless of birth weight, should be vaccinated at the same chronological age and according to the same schedule and precautions as full-term infants and children.
courtesy (EPI), Ministry of Health; Immunization Manual for Health Professionals, 3rd edition
Vaccination (Principles of)
CHRONOLOGICAL HISTORY OF VACCINATION
1100s: Variolation for smallpox first reported in China
1721: Variolation introduced into Great Britain
1796: Edward Jenner inoculates James Phipps with cowpox, and calls the procedure vaccination ("vacca" is Latin for cow)
1870: Louis Pasteur creates the first live attenuated bacterial vaccine (chicken cholera)
1884: Pasteur creates the first live attenuated viral vaccine (rabies)
1885: Pasteur first uses rabies vaccine in a human
1901: First Nobel Prize in Medicine to von Behring for diphtheria antitoxin
1909: Smith discovers a method for inactivating diphtheria toxin
1909: Calmette and Guérin create BCG, the first live attenuated bacterial vaccine for humans
1933: Goodpasture describes a technique for viral culture in hen's eggs
1949: Enders and colleagues isolate Lansing Type II poliovirus in human cell line
Note: It is important to keep in mind that since 2000, many countries, islands and territories have upgraded their immunization schedules to include more vaccine-preventable illnesses, adding for example HPV (Human Papilloma Virus), Rotavirus, Y/F (Yellow Fever) and HepB (Hepatitis B) vaccines in accordance with the most recent EPI (Expanded Programme on Immunization) guidelines. For more detailed and up-to-date information, it is advisable to contact the Ministry of Health/Public Health Department of the country, island or territory of interest for its current guidelines on immunization. Thank you.
Most international travelers need a combination of routine immunizations and specially recommended vaccinations for the country of travel; immunization is extremely important to prevent the importation of vaccine-preventable diseases from one country to another.
20. Cholera vaccine
21. Typhoid vaccine
courtesy (EPI), Ministry of Health; Immunization Manual for Health Professionals, 3rd edition
Tuberculosis (TB) is caused by a bacterium, Mycobacterium tuberculosis, that is carried by more than two billion persons worldwide. The disease usually attacks the lungs, but other parts of the body, for example the bones, joints, and brain, may also become infected. There is a difference between tuberculosis infection and disease. Individuals with the infection usually do not feel sick and often have no symptoms. The infection may last for years, decades, or a lifetime, and the infected person may never develop the disease. Persons with the infection but not the disease cannot spread the infection to others. TB does not discriminate with regard to age; all age groups can contract tuberculosis. It spreads rapidly, especially in crowded living conditions and in places where access to health care is poor and individuals are malnourished.
TB is spread through the air. When a diseased individual coughs or sneezes, the germs enter the air. A person who inhales contaminated air that contains TB germs may become infected. It is also possible to become infected from cattle with the disease by drinking unpasteurized milk. The incubation period for TB is 1-12 weeks; however, the infection may persist for months or years before the disease develops. A diseased individual can infect others for several weeks after he/she starts treatment. The risk of developing TB is highest in children aged under three (3) years and in the very old. Persons with a weakened immune system, for example persons with HIV/AIDS, are more likely to develop the disease than those with normal immune systems. Concerns about TB have been heightened recently because some strains of the causative organism have developed resistance to a number of drugs.
SIGNS & SYMPTOMS:
Symptoms of TB may include general weakness, weight loss, fever and night sweats. In pulmonary TB (TB of the lungs), the symptoms include persistent cough, hemoptysis (coughing up blood) and pleuritic pain (chest pain). In young children, however, the only sign of lung TB may be stunted growth or failure to thrive (FTT).
TB generally weakens the body, resulting in an increased risk that the affected individual will contract other diseases or that existing diseases will become more severe.
Proper and correct treatment for TB is a complete course of chemotherapy, which normally involves taking two or more anti-tuberculosis drugs for at least six months.
The best protection for children against TB infection at present is immunization with BCG vaccine. In persons who have been vaccinated, it is impossible to determine whether a positive tuberculin skin test reaction is due to immunization or to infection with the TB bacterium. Because of the strict control and prevention measures taken, mortality and morbidity from TB have declined to very low rates.
Poliovirus (an enterovirus) of types 1, 2 and 3 can cause paralytic poliomyelitis; type 1 is the most common cause and type 2 the least common. Most vaccine-associated cases are due to types 2 and 3. Humans are the reservoir. Long-term carriers have not been found.
The disease is primarily spread through person-to-person contact, principally via the fecal-oral route. The virus is more easily detected in the feces than in throat secretions. Water and sewage are rarely implicated in the transmission of the virus. The virus enters the body through the mouth when people eat or drink food or water that has been contaminated by feces containing the virus; hence the virus tends to spread under poor sanitation conditions. Spread can also occur via the airborne route: sneezing and coughing. The virus spreads through the body via the bloodstream and can invade certain types of nerve cells (motor neurons), resulting in loss of the myelin sheath, reduced conduction rates and paralysis.
SIGNS & SYMPTOMS:
The incubation period is commonly 7-14 days for paralytic cases, but may range from 3-35 days. Some people with the virus may not feel ill. Others may complain of influenza-like symptoms such as fever, loose stools, sore throat, stomach upset and headache; occasionally there may be pain and/or stiffness in the neck, back and legs. The most serious form of the disease is paralytic polio. Paralysis usually develops during the first week of illness. The use of one or both legs or arms may be lost, and breathing may be impossible without the help of a respirator. The level of recovery varies from person to person.
SUSCEPTIBILITY AND RESISTANCE:
Susceptibility to infection is common; paralysis rarely occurs.
A small percentage of infected children may progress to paralysis; death may occur if the respiratory muscles are paralysed and no respirator is available.
No treatment is known; symptoms can be reduced.
PREVENTION: Polio prevention involves vaccination with oral polio vaccine (OPV) or inactivated polio vaccine (IPV). Antibodies from the mother provide protection to the infant for two to three months after birth. Infected people who recover can develop natural immunity that protects them against future infection.
Diphtheria is caused by the bacterium Corynebacterium diphtheriae of the gravis, mitis or intermedius biotype. Toxin synthesis occurs when the bacteria are infected by a corynebacteriophage containing the diphtheria toxin gene tox. The toxin produced can harm or destroy human body tissue and organs. One type of the disease affects the pharynx and other parts of the throat. It tends to be a disease of the colder months and of temperate climate zones. Diphtheria affects people of all ages, but mostly non-immunized children less than 15 years of age.
Man is the known reservoir. The type of diphtheria that affects the pharynx and other parts of the throat is spread in droplets and secretions from the nose, throat and eyes when there is close contact between infected and uninfected persons. The other type is spread through contact with skin ulcers; the disease is often spread where clothing and other garments have been contaminated with fluids from the skin ulcers. The disease spreads more easily where there is overcrowding and poor living conditions.
INCUBATION & COMMUNICABILITY:
Incubation period is usually 2-5 days. Infected individuals usually become ill within 2-4 days and symptoms may appear after six days. Persons who are infected can usually spread the disease to others for up to 4 weeks. Effective antibiotic therapy quickly terminates the shedding.
SUSCEPTIBILITY & RESISTANCE:
Infants born to immune mothers are relatively immune; protection is passive and usually lost before the sixth month. Recovery from a clinical attack is not always followed by lasting immunity.
SIGNS & SYMPTOMS:
When diphtheria affects the throat and tonsils, the early symptoms are sore throat, loss of appetite and slight fever. Within two to three days a bluish-white or grey membrane forms on the throat and tonsils. Bleeding may occur. The membrane sticks to the soft palate. Patients with severe disease may develop swelling in the neck and obstruction of the airway.
Abnormal heart beats may occur during the early phase of the illness or weeks later and heart failure may occur. Death may result in 5-10% of cases.
Diphtheria antitoxin and antibiotics (e.g., erythromycin or penicillin) should be administered. Throat cultures should be obtained to ensure correct diagnosis. Patients become non-infectious about two days after the commencement of antibiotic treatment.
The most effective manner of preventing diphtheria is to maintain a high level of immunization in the community; a mother can pass protective antibodies to her baby but this protection lasts about six months. A combination of tetanus and diphtheria vaccine may be recommended as a booster to maintain protection every ten years.
PERTUSSIS (Whooping Cough)
Pertussis is an ailment of the pulmonary tract (lungs) caused by the bacterium Bordetella pertussis, which lives in the mouth, nose and throat. Children with the disease have coughing bouts/spells that may last for many weeks, typically four to eight. The condition is common in non-immunized children all over the world. The disease is most dangerous in children less than one (1) year old. Whooping cough is a serious communicable disease; the greatest incidence of complications and the highest mortality occur in infancy. Complications can include pneumonia, encephalitis, severe nutritional disturbances and death. Pertussis or pertussis-like illnesses are still being found in the West Indies.
Humans are considered to be the only host. Pertussis spreads quite easily from person to person in droplets produced by coughing or sneezing. The majority of individuals exposed to the germ become infected. The disease is most readily transmitted from seven days after a person has been exposed to the germ until three weeks after the start of coughing. The incubation period varies between six and 21 days.
SIGNS & SYMPTOMS
Normally there are three stages of the illness. At the start, the child appears to have a common cold: runny nose (rhinorrhea), watery eyes, sneezing, fever and a slight cough. The cough progressively worsens, and the second stage involves numerous bursts of rapid coughing. At the end of these bursts, the child takes in air with a high-pitched whoop. The child may turn blue from lack of oxygen during a long burst of coughing. Vomiting and exhaustion often follow the coughing spells, which are more common at night. The attacks become milder with time. The third stage is marked by recovery: the coughing gradually becomes less intense and stops over several weeks.
Complications are more common in (young) infants. The most common cause of death is bacterial pneumonia. Convulsions and seizures may occur; these complications may arise from the reduced oxygen supply to the brain during the coughing bouts. Less common complications include loss of appetite, otitis media and dehydration.
Erythromycin may make the illness less severe. The use of antibiotics also reduces the patient's ability to infect others, because the treatment kills the germ in the nose and throat.
Prevention includes immunization with pertussis vaccine, usually given in combination with diphtheria and tetanus vaccines. Newborns and infants are not protected against pertussis by maternal antibodies. A person infected with pertussis usually acquires lifelong immunity.
TETANUS
Tetanus is an acute and frequently deadly disease caused by Clostridium tetani, an organism which produces a very potent neurotoxin. The toxin poisons the neurons (nerves) that control the muscles, resulting in stiffness. The disease is quite common and severe in newborns, in whom it is referred to as neonatal tetanus. The tetanus germ is found throughout the environment. The bacteria form spores that can survive in the environment for many years. Wounds with devitalized tissue or deep puncture wounds are at greatest risk of becoming infected with the germ.
Tetanus is not transmitted from person to person. An individual may become infected if soil or manure contaminates a wound/cut.
SIGNS & SYMPTOMS:
The incubation period is usually between three and ten days, but may be as long as three weeks. The shorter the incubation period, the higher the risk of death. Lock-jaw (muscular stiffness of the jaw) is an early sign. This is followed by neck stiffness, difficulty swallowing, muscle spasm, sweating and fever.
Fractures of the spine and other bones may occur as a result of muscle spasms and seizures. Mortality is especially high in the very young and the very old.
Wounds/cuts must be thoroughly washed and cleaned, and dead/devitalized tissue excised (removed). Tetanus toxoid and antibiotics may also be used. Persons who recover from tetanus DO NOT have natural immunity.
The prevention of neonatal tetanus requires women of childbearing age to receive a vaccine containing tetanus toxoid. This protects the mother and allows tetanus antibodies to be transferred from mother to fetus. Vaccination during the toddler and kindergarten years ensures a further degree of protection/immunity.
MEASLES
Measles kills more children than any of the other EPI diseases. The virus that causes measles belongs to the genus Morbillivirus of the family Paramyxoviridae. Humans are the reservoir. It is very infectious, spreads quickly and persists in some populations, where it is often the cause of epidemics. Epidemics tend to occur in conditions of crowding and poverty where large numbers of non-immunized people are in close contact. The disease is more severe in infants and adults than in children. It is a highly communicable illness and a leading cause of death in children in developing countries. The measles virus is present throughout the world. It is important to ensure high immunization coverage as the Americas focus on elimination and eradication of the disease.
Measles is spread by contact with nasal and pharyngeal secretions of infected people and in droplets released when an infected person sneezes or coughs. An infected person can infect others from several days before to several days after developing symptoms. Because the disease is so highly infectious, herd immunity of 95% or greater may be needed to disrupt community transmission. The period of communicability extends from slightly before the beginning of the prodromal period to four (4) days after the appearance of the rash, and is minimal after the second day of the rash.
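The 95 per cent figure quoted above is broadly consistent with the standard herd immunity threshold approximation. The following worked equation is added for illustration only; the formula assumes homogeneous mixing, and the basic reproduction number range of 12-18 commonly cited for measles is not a figure from this manual.

\[ \mathrm{HIT} = 1 - \frac{1}{R_0}, \qquad 1 - \frac{1}{12} \approx 0.92, \qquad 1 - \frac{1}{18} \approx 0.94 \]

That is, roughly 92-94 per cent of the population must be immune; allowing for imperfect vaccine efficacy, this is in line with the 95 per cent figure quoted above.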
SYMPTOMS & SIGNS:
The incubation period ranges from 7 to 18 days. The first sign of infection is a high fever lasting one to seven days. This period may be characterized by rhinorrhea (runny nose), watery eyes, cough and Koplik's spots (small white spots inside the cheeks). After several days, a slightly raised rash develops, spreading from the face and upper neck to the body and then to the hands and feet over about three days. The rash fades over a one-week period.
Complications may occur, particularly in children less than five years old and in adults over twenty years old. In infants, severe diarrhea resulting in dehydration may occur; children may develop inflammation of the middle ear (otitis media), respiratory tract infection and croup. Pneumonia is the commonest cause of death associated with measles, most likely because the measles virus weakens the immune system. Encephalitis is another serious potential complication of measles.
Vitamin A administration can help avoid the complications of eye damage and blindness. Maintenance of proper nutritional support and proper management of dehydration with oral rehydration solution are key to the correct treatment of measles.
The prevention of measles involves vaccination; children should receive their first dose of MMR at 12 months of age. Children admitted to hospital with measles should be isolated for at least four days after the skin rash appears. Malnourished children with measles should be isolated for the duration of the sickness. ALL individuals who have not had the disease or who have not been successfully immunized are susceptible. Measles immunity acquired after illness is permanent. Infants born to mothers who have had the disease are immune for the first 6-9 months or so, depending on the amount of residual maternal antibody at the time of pregnancy and the rate of antibody degradation. Immunization at 12-15 months produces immunity in the 95-98% range; re-immunization may increase immunity levels to as high as 99%. Children born to mothers with vaccine-induced immunity receive less passive antibody; they may thus become susceptible to measles, and require measles vaccination, at an earlier age.
MUMPS
Mumps is an acute viral disease caused by a paramyxovirus. It affects the salivary glands, is characterized by parotitis (inflamed parotid glands) and may cause testicular inflammation. It is highly contagious and is usually spread by droplet infection from person to person. The virus enters the body via the mouth.
Mumps is spread from individual to individual via respiratory droplets; the virus replicates in the nasopharynx and the regional lymph nodes. Viremia occurs in 12 to 25 days and lasts 3-5 days. During the viremic stage, the virus spreads to multiple tissues: the meninges and glands such as the salivary glands, pancreas, testes and ovaries.
SYMPTOMS & SIGNS:
The incubation period is usually 14-18 days, but may range from 12-24 days. Prodromal symptoms are non-specific and may include myalgia (muscle pain), anorexia, malaise, headache and low-grade fever. Parotid pain becomes worse on taking acidic liquids such as vinegar or fruit juice. Parotitis lasts from one to ten days; it occurs in 30-40% of infected individuals, may be unilateral (one side) or bilateral (both sides), and any combination of single or multiple salivary glands may be affected.
The central nervous system (CNS) may become involved in the form of aseptic meningitis; meningitis or encephalitis may be associated with headache, vomiting, stiff neck, backache, lethargy and a high fever lasting approximately five days. Adults are at higher risk for complications than children; males are usually more affected than females.
No specific treatment is known; treatment is symptomatic, given as needed (PRN).
The risk of serious infection from mumps is small in infants, which is why mumps vaccine is given to children 12 months and older.
RUBELLA (German Measles)
Rubella is a mild viral infection with a transient rash which mimics measles. It is a communicable illness of limited duration. The ailment lasts several days, with malaise, low-grade fever, headache, anorexia, conjunctivitis and palpable occipital and post-auricular lymph nodes.
The virus is transmitted predominantly via nasopharyngeal secretions of infected persons. Infection is by droplet spread, by direct contact with patients, or by indirect contact with articles freshly soiled with discharges from the nose and throat, blood, urine or feces. Viremia occurs 5-7 days after exposure, with spread of the virus throughout the body. In Congenital Rubella Syndrome (CRS), transplacental infection of the fetus occurs during viremia.
SYMPTOMS & SIGNS:
The incubation period varies from twelve (12) to 23 days; symptoms are often mild, and up to half of cases may be subclinical or inapparent. In children, a rash is usually the first manifestation; it may be faint, discrete or maculopapular, and may first appear on the face and then spread quickly to the neck, trunk and extremities. The rash disappears by the third day and is seldom pruritic. In older children and adults, a 1-5 day prodrome of low-grade fever, malaise, swollen glands and upper respiratory infection precedes the rash. Lymphadenopathy is characteristic. The disease is so mild that affected persons hardly ever seek medical attention. Arthralgia and arthritis occur frequently in adults and may be considered part of the illness rather than a complication.
Complications are rare but tend to occur more often in adults than in children. The main complications are arthralgia/arthritis, encephalitis and hemorrhage.
No treatment for the acute condition is known; supportive therapy is geared towards relief of symptoms.
Prevention of rubella encompasses vaccination with rubella vaccine. Emphasis should be placed on women of child-bearing age who are NOT pregnant.
CONGENITAL RUBELLA SYNDROME (CRS):
Rubella immunization policy is focused on the prevention of CRS. Rubella can have serious consequences in early gestation, leading to fetal demise, premature delivery and many congenital defects. Spontaneous abortions and stillbirths are common. The CLINICAL manifestations of CRS are: CNS - microcephaly, mental retardation, meningo-encephalitis; EYES - cataracts, glaucoma or retinitis; EAR defects - deafness that may be bilateral or unilateral; HEART conditions - pulmonary artery stenosis, patent ductus arteriosus; HEPATIC and SPLENIC problems - hepatosplenomegaly, hepatitis; GESTATION - intra-uterine growth restriction and, post-natally, failure to thrive (FTT); BLOOD conditions - purpura and thrombocytopenia; PANCREAS - diabetes mellitus (DM). CRS is diagnosed by laboratory testing.
VIRAL HEPATITIS B
The disease is caused by the hepatitis B virus (HBV) and affects the liver. Individuals usually recover, but some continue to carry the virus for many years and can spread the infection to others for as long as they remain chronic carriers. HBV is one of several viruses that cause hepatitis. HBsAg has been found in virtually all body secretions and excretions; however, only blood and serum-derived fluids, saliva, semen and vaginal fluids have been shown to be infectious. The presence of e antigen or viral DNA indicates a high virus titer and greater infectivity of these fluids. Infection is commonly associated with exposure to body fluids, blood and blood products. The virus is found the world over and affects all age groups. Most chronic carriers are found in China, South-East Asia and Africa.
The incubation period averages six weeks but may be as long as six months; the variation is related in part to the amount of virus in the inoculum, the mode of transmission and host factors. Transmission occurs by percutaneous (IV, IM, SC or intradermal) and permucosal exposure to infective body fluids. Because HBV is stable on environmental surfaces for more than 7 days, indirect inoculation can also occur via inanimate objects. Fecal-oral or vector-borne transmission has not been demonstrated. ALL persons who are HBsAg positive are potentially infectious.
SIGNS & SYMPTOMS:
There is general susceptibility. The illness is often milder and anicteric in children, and in infants it is usually asymptomatic. Protective immunity follows infection if antibody to HBsAg (anti-HBs) develops and HBsAg is negative. Persons with Down Syndrome or HIV infection, and those on hemodialysis, appear to be more likely to develop chronic infection. The younger a person is when infected, the more likely it is that he/she will show no signs or symptoms. A person with no symptoms may remain infected for many years and can spread the infection to others. Infected persons may feel weak and experience flu-like symptoms. They may also have very dark urine and/or very pale stools. Jaundice may appear (the icteric phase). Symptoms may last several weeks; general weakness and fatigue may last for months. A laboratory blood test is required to determine with certainty whether a person has hepatitis B infection or disease. Most acute infections in adults are followed by complete recovery, and affected individuals rarely become chronic carriers. Children, on the other hand, though not acutely sick, as a rule do become chronic carriers and often develop severe complications.
The result of acute infection can be quite serious. Death occurs in a small percentage of adults. The most serious complications are chronic hepatitis, cirrhosis, liver failure and, in long-standing liver disease, liver cancer.
There is no treatment for the acute condition. In chronic infection, the disease process can occasionally be halted with certain medications.
It is advocated that children receive three doses of HepB vaccine during the first year of life. Persons with hepatitis B virus should not donate blood and should not allow others to come into contact with their blood or other body fluids. They should use barrier methods when engaging in sexual intercourse and should not share eating utensils, toothbrushes, needles or razors with other individuals. Health care professionals should be vaccinated against the illness and use ALL necessary precautions with all patients, because patients who are carriers of the virus can spread the infection to them quite easily through blood contact.
VIRAL HEPATITIS A
HAV, a positive-stranded RNA virus, is classified in the genus Hepatovirus of the family Picornaviridae. It occurs globally, both sporadically and in epidemics. In developing countries, adults are usually immune and epidemics of hepatitis A are uncommon. Improved sanitation in many countries around the world has left many adults susceptible, and outbreaks are increasing. Where environmental sanitation is poor, infection is common and occurs at an early age.
The reservoir of hepatitis A is mainly humans and, rarely, captive chimpanzees. An enzootic focus has been noted in Malaysia, but there is no suggestion of transmission to humans. Disease is most common among school-aged children and young adults. In approx. 25% of outbreaks, the source of infection is unknown. Person-to-person transmission is by the fecal-oral route. The infectious agent is found in feces and reaches peak levels in the week or two before onset of symptoms. Common-source outbreaks have been related to contaminated water, to foods such as sandwiches and salads that are not cooked, and to infected food handlers. The incubation period is fifteen to 50 days, depending on dose.
SYMPTOMS & SIGNS:
Onset is normally sudden, with fever, malaise, anorexia, nausea and abdominal discomfort, followed by jaundice a few days later. The disease varies in clinical severity. Homologous immunity after infection probably lasts for life.
Normally, severity increases with age, but complete recovery without sequelae or recurrences is the rule. Many infections are asymptomatic; many are mild and without jaundice, especially in children.
There is no specific treatment for the acute illness. Supportive therapy is geared towards alleviating signs and symptoms. Diagnosis is established by detection of IgM antibodies against the hepatitis A virus in the serum of acutely or recently ill patients.
Educate the public about good sanitation and personal hygiene, with special emphasis on hand-washing and the proper sanitary disposal of feces.
Provide proper water treatment and distribution systems.
Management of daycare centres should stress measures to minimize the possibility of fecal-oral transmission.
ALL travellers to intermediate or highly endemic areas (Africa, the Middle East, Asia, and Central and South America) may be given prophylactic doses of IG or HepA vaccine.
Oysters, clams and other shellfish from contaminated regions should be heated to a temperature of 85-90°C (185-194°F) for four minutes or steamed for 90 seconds before eating (a conversion check follows this list).
An inactivated HepA vaccine is available; the vaccine has been shown to be safe, immunogenic and efficacious.
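As a quick check of the paired units quoted in the shellfish recommendation above (added for convenience; the check itself is not part of the source manual), the standard Celsius-to-Fahrenheit conversion gives:

\[ F = \tfrac{9}{5}C + 32, \qquad \tfrac{9}{5}(85) + 32 = 185^{\circ}\mathrm{F}, \qquad \tfrac{9}{5}(90) + 32 = 194^{\circ}\mathrm{F} \]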
HAEMOPHILUS INFLUENZAE TYPE B
Haemophilus influenzae is a gram-negative coccobacillus, generally aerobic, but it can grow as a facultative anaerobe. Six capsular types of the microbe are known; however, type b organisms account for virtually all strains that cause invasive disease. Risk factors for Hib illness include host factors and factors that increase the chance of exposure to Hib. Exposure factors include household crowding, large household size, daycare attendance, low socio-economic status, low parental education levels and school-aged siblings. Protective factors (with an effect limited to those under 6 months of age) may include breast-feeding and passively acquired maternal antibodies.
The germ (microbe) enters the body via the nasopharynx, colonizes it, and may remain there transiently or for several months in the absence of symptoms. The manner in which the organism invades the bloodstream is not fully known. Primary spread is presumably by respiratory droplets. Humans are the only reservoir. Contact with discharges from the conjunctivae or upper respiratory tracts of infected individuals, via contaminated fingers, clothing and other articles, may spread the germ.
SIGNS & SYMPTOMS:
The incubation period may be as short as 25-75 hours. Children under the age of 5 years are most often affected, and the incidence decreases with age. Clinical conditions caused by Hib include meningitis, epiglottitis, septic arthritis, cellulitis and pneumonia. The classic signs of neck stiffness, poor feeding and fever may be present; others may include respiratory obstruction, stridor and drooling.
Complications depend on the specific invasive disease caused. Hib was the leading cause of bacterial meningitis in children under 5 years of age before the development and use of Hib vaccine.
Treatment is with antimicrobial therapy: chloramphenicol or an effective third-generation cephalosporin.
The mode of prevention is vaccination/immunization of children. This also reduces the risk that unvaccinated children will be exposed.
YELLOW FEVER
Yellow fever is an acute illness of short duration, caused by a flavivirus, and is endemic in tropical regions of Central and South America and Africa. It affects persons of all age groups. Immunity after immunization lasts for many years (at least ten). Revaccination every ten years is required for international travel and is strongly recommended for those persons at risk, such as laboratory workers, hunters and forest personnel. YELLOW FEVER IS ONE OF THE INTERNATIONALLY NOTIFIABLE DISEASES.
The reservoirs for yellow fever in urban areas are humans and the Aedes aegypti mosquito. In forested areas, vertebrates other than humans, mainly monkeys and marsupials, together with mosquitoes, are the reservoirs. The disease is highly communicable where many susceptible individuals and abundant vector mosquitoes coexist.
SIGNS & SYMPTOMS:
The incubation period is three to six days. The illness may be so mild that it is not noticed or diagnosed. It can be confused with malaria and hepatitis. Signs and symptoms may include fever, chills, headache, backache, general muscle pain and vomiting. With disease progression, individuals become slow and weak, and may experience bleeding gums, hematuria (blood in the urine), jaundice and hematemesis (vomiting of blood).
The illness usually lasts two weeks, after which the person recovers or dies. Death may follow seizures and coma. The disease is diagnosed by a laboratory blood test. Individuals who recover from the illness have life-long immunity.
There is no known specific treatment. Patients may need fluids for rehydration.
The disease is prevented by vaccination with YF vaccine. The vaccine is very safe and effective, producing antibodies against the YF virus which may last for 30 years or more. Prevention should also involve eliminating the accumulation of stagnant water in which vector mosquitoes breed. Second attacks are unknown; immunity is life-long.
PNEUMOCOCCAL DISEASE
This disease is caused by the germ Streptococcus pneumoniae, a gram-positive diplococcus, and is more prevalent in the winter and early spring. Predisposing factors such as season, crowding and pulmonary infection have a significant impact on disease occurrence.
Transmission of Streptococcus pneumoniae occurs as a consequence of direct individual-to-individual contact via droplets and by auto-inoculation. S. pneumoniae is the most common cause of pneumonia acquired in nursing homes. Penicillin-resistant S. pneumoniae is an increasing concern in the US and around the globe.
SIGNS & SYMPTOMS:
The important clinical syndromes of invasive pneumococcal illness are meningitis, pneumonia and bacteremia. The disease most often occurs when a predisposing condition exists. The incubation period of pneumococcal pneumonia is short, approx. 1 to 3 days; symptoms tend to include abrupt onset of fever and shaking chills or rigors. Other symptoms include pleuritic chest pain, productive cough of mucopurulent, rusty sputum, dyspnea (shortness of breath), tachypnea (rapid breathing), hypoxia (poor oxygenation), tachycardia (rapid heart rate), malaise and weakness. Pneumococci are responsible for up to 36% of adult community-acquired pneumonia and 50% of hospital-acquired pneumonia. The fatality rate is 5-7% and may be higher in the elderly.
Complications include empyema (infection of the pleural space), pericarditis, endobronchial obstruction with atelectasis, and lung abscess. Bacteremia is present in approx. 25-39% of patients.
TREATMENT: Penicillin is the medication of choice; patients who are allergic to penicillin can be given cephalosporins or erythromycin for pneumonia, and chloramphenicol for meningitis. IM or IV immunoglobulin administration may be useful for preventing pneumococcal infection in children with congenital or acquired immunodeficiency diseases. There are no specific recommendations concerning isolation of patients with pneumococcal disease.
Prophylactic penicillin is one mode of preventing pneumococcal infection.
Oral penicillin G or V is recommended for prevention of pneumococcal illness in children with functional or anatomic asplenia, regardless of whether or not they have been immunized.
Oral penicillin V administered to infants and young children with sickle cell disease has reduced the incidence of severe bacterial infections.
VARICELLA (Chickenpox)
Varicella is an acute, contagious disease caused by the varicella-zoster (VZ) virus, a member of the herpesvirus family. The primary varicella infection is known as chickenpox; the recurrent infection is known as shingles (herpes zoster). The VZ virus persists in the sensory nerve ganglia, and herpes zoster is the result of recurrent infection.
The VZ virus enters the body via the respiratory system and conjunctiva and is believed to replicate at the site of entry in the nasopharynx and in the regional lymph nodes. The infection is spread by vesicle fluid or secretions of the respiratory tract. Spread can also occur indirectly through articles freshly soiled by discharges from vesicles and mucous membranes of infected individuals. IN CONTRAST TO VACCINIA AND VARIOLA, SCABS FROM VARICELLA LESIONS ARE NOT INFECTIVE. Chickenpox is one of the most readily communicable diseases, especially in the early stages of the eruption. Herpes zoster has a much lower rate of transmission. The period of communicability may begin as long as 5 days, but usually 1-2 days, before onset of the rash. Contagiousness may be prolonged in patients with altered immunity. Patients with zoster infection may be a source of infection for a week after the appearance of their vesico-pustular lesions. Susceptible persons should be considered potentially infectious 10-20 days following exposure.
SIGNS & SYMPTOMS:
Susceptibility to chickenpox is universal among those not previously infected, and the disease is ordinarily more severe in adults than in children. The incubation period is from 2-3 weeks; it may be prolonged after passive immunization against varicella and in immunocompromised persons. A mild prodrome may precede the onset of the rash. THE RASH IS GENERALIZED, PRURITIC AND RAPIDLY PROGRESSES FROM MACULES TO PAPULES TO VESICLES BEFORE CRUSTING. The rash usually appears first on the scalp and trunk and then the extremities. The clinical course in normal children is generally mild, with malaise, pruritus and fever for 2-3 days. Pulmonary and gastrointestinal symptoms are usually absent. Infection confers long-lasting immunity; second attacks are rare.
The commonest complications of the infection that require hospitalization are bacterial infections of skin lesions, pneumonia, dehydration, encephalitis and hepatitis.
TREATMENT: There is no specific treatment, but antiviral agents may alter or diminish the course or severity of the illness. Other supportive therapy may be needed.
The illness is prevented by immunization with varicella vaccine. The vaccine is very safe and effective, producing antibodies that appear to be long-lasting.
ROTAVIRUS
Rotavirus is a double-stranded RNA virus. There are at least 10 serotypes. The microbes are very stable and, if surfaces are not disinfected, may remain viable in the environment for weeks or months. The reservoir is most likely humans. The animal rotaviruses do not produce illness in humans. Rotavirus infection is nearly universal, and the incidence is similar in developed and developing countries.
The virus enters the body through the mouth, and virus replication occurs in the small intestine. Infection may result in decreased intestinal absorption of sodium, glucose and water, and decreased levels of intestinal lactase, alkaline phosphatase and sucrase activity, leading to isotonic diarrhea. Spread is probably by the fecal-oral route. There is some evidence that rotavirus may be present in contaminated water. The period of communicability is mainly during the acute stage of the disease; rotavirus is not usually detectable after about the eighth day of infection.
SIGNS & SYMPTOMS:
The incubation period for rotavirus diarrhea is approx. 24-72 hours, and the clinical features may vary depending on whether it is a first infection or a re-infection. Susceptibility is greatest between 6 and 24 months of age; by age 3 years, most persons will have acquired rotavirus antibodies. Illness may be asymptomatic, may cause self-limited watery diarrhea, or may result in severe dehydrating diarrhea with fever and vomiting. The clinical features are non-specific, and confirmation that a diarrheal illness is due to rotavirus requires laboratory testing.
Infection in infants and young children can result in severe diarrhea with dehydration, electrolyte imbalance and metabolic acidosis.
TREATMENT: No specific treatment is available other than supportive therapy. Parental education on the prevention and management of diarrhea and dehydration is critical.
Prevention can be accomplished by immunization with rotavirus vaccine. Recovery from rotavirus infection does not confer permanent immunity.
RABIES
Rabies is an almost invariably fatal disease. The virus is an RNA rhabdovirus contained in the saliva and certain body materials, such as brain tissue and cerebrospinal fluid (CSF), of rabid animals and human beings. It is mainly a disease of animals. Urban (canine) rabies is transmitted by dogs, whereas sylvatic rabies is a disease of wild carnivores and bats, with sporadic spillover to dogs, cats and livestock.
Many wild and domestic animals are reservoirs: dogs, foxes, coyotes, wolves, jackals, skunks, raccoons, mongooses and other biting mammals. Virus-containing saliva of a rabid animal is introduced into humans by a bite or scratch. Transmission from individual to individual is theoretically possible, since the saliva of an infected person may contain virus; this, however, has never been documented.
SIGNS & SYMPTOMS:
The incubation period is usually 3-8 weeks. It depends, however, on the severity of the wound, the site of the wound in relation to the quantity of nerve fibres present and its distance from the brain, the amount and strain of virus inoculated, the victim's defense system, and the protection provided by clothing. Onset of illness is heralded by a feeling of apprehension, headache, fever, malaise and sensory changes, especially at the wound site. The disease progresses to paresis and paralysis; spasm of the esophageal musculature causes difficulty in swallowing and hydrophobia (fear of water). Delirium and convulsions usually follow. Without medical intervention, death is rapid (2-6 days) and usually results from respiratory paralysis. Diagnosis is confirmed by specific FA staining of brain tissue or by virus isolation.
About 20% of patients develop an ascending symmetric paralysis, with flaccidity and decreased tendon reflexes dominating the acute phase. If the person does not die of cardio-pulmonary failure, he/she goes into an irreversible coma. THE MOST SIGNIFICANT COMPLICATIONS ARE MYOCARDITIS AND PITUITARY DYSFUNCTION, EXPRESSED AS EITHER DIABETES INSIPIDUS OR INAPPROPRIATE SECRETION (RELEASE) OF ANTIDIURETIC HORMONE.
The chief requirement for local treatment is that it be prompt and thorough. Prevention of rabies after animal bites should consist of the following:
Treatment of bite wounds: immediate and thorough cleaning with soap and/or detergent and flushing with water. Wounds should not be sutured; large wounds should have loose sutures inserted that do not interfere with free bleeding and drainage.
Specific immunologic protection: immunologic prevention of rabies in humans is provided by administering human rabies immune globulin (HRIG) as soon as possible after exposure, to neutralize the virus at the bite wound site, and then by giving vaccine at a different site to elicit active immunity.
Preventive measures: a) Register, license and immunize all dogs in enzootic countries; immunize all cats. b) Maintain active surveillance for rabies in animals. Laboratory capacity should be developed to perform FA testing on all wild animals involved in human or domestic animal exposures. Educate physicians, veterinarians and animal-control officials to obtain, euthanize and test animals involved in human and domestic animal exposures. c) Detain and clinically observe for approx. 10 days any healthy-appearing dog or cat that has bitten a person; dogs and cats showing suspicious signs of rabies should be sacrificed and tested for rabies. d) Individuals at high risk (veterinarians, wildlife conservationists, etc.) should receive pre-exposure immunization.
MENINGOCOCCAL DISEASE
Meningococcal meningitis and septicemia are systemic infections caused by Neisseria meningitidis, a gram-negative diplococcus with multiple serogroups: A, B, C, X, Y, Z, W-135 and L; serogroup D is rare. There is an association between the onset of seasonal influenza activity and meningococcal disease. The incidence of illness is highest in infants.
The disease occurs worldwide but is commonest in poor, overcrowded areas. Meningococci are transmitted by droplet spread or direct contact from carriers or from persons in the early stage of illness. The most likely route of invasion is via the nasopharynx. The reservoir is humans. An individual is infectious as long as meningococci are present in secretions/discharges from the nostrils and mouth. Most carriers do not develop the illness, but they may transmit the disease for approx. six months.
SIGNS & SYMPTOMS:
The incubation period varies from 2-10 days, but is normally 2-3 days. The onset of illness varies from fulminant to insidious with mild prodromal symptoms. Early signs and symptoms may include malaise, pyrexia and vomiting; headache, photophobia, drowsiness or confusion, joint pains and the typical hemorrhagic rash of meningococcal septicemia may develop. The rash may be purpuric and non-blanching. Patients may also present in a coma. The diagnosis should be suspected in the presence of vomiting, pyrexia, irritability and, if still patent, a raised (bulging) anterior fontanelle.
Susceptibility to the clinical disease is low and decreases with age. Individuals who are deficient in certain complement components are especially prone to recurrent disease. Splenectomized patients are susceptible to bacteremic illness.
Antibiotics of choice for immediate therapy include benzylpenicillin. Although penicillin will temporarily suppress the organisms, it does not normally eradicate them from the oro-nasopharynx.
Vaccines against serogroups A, C, Y and W-135 are available; there is no vaccine against serogroup B. Close contacts of persons with meningococcal meningitis have an increased risk of becoming ill, even with the use of appropriate chemoprophylaxis; the recommended prophylactic agent is rifampicin, with ciprofloxacin and ceftriaxone (for pregnant contacts) as alternatives. Vaccination appears to be effective in controlling epidemics and reducing infection rates, but not carriage rates.
INFLUENZA
Influenza is an acute viral illness of the respiratory tract affecting all ages. Three types of influenza virus are recognized, A, B and C, determined by the antigenic properties of the two relatively stable structural proteins: the nucleoprotein and the matrix protein. Emergence of completely new subtypes (antigenic shift) occurs at irregular intervals and only with type A viruses; these are responsible for pandemics and result from the unpredictable recombination of human and swine or avian antigens. Because of minor antigenic changes (antigenic drift), there are frequent epidemics and outbreaks, making it necessary to reformulate influenza vaccines almost yearly. In temperate zones, epidemics tend to occur in winter; in the tropics, they tend to occur in the rainy season, but outbreaks or sporadic cases may occur in any month.
Humans are the primary reservoir for human infections; however, mammalian reservoirs such as swine, and avian reservoirs such as ducks, are the likely sources of new human subtypes, thought to emerge via genetic reassortment. Airborne spread predominates among crowded populations in enclosed spaces. Transmission may also occur by direct contact. The period of communicability is approx. 3-5 days from clinical onset in adults and up to 7 days in young children.
SIGNS & SYMPTOMS:
The incubation period is short, 1-3 days. Fever, headache, myalgia, coryza, sore throat and cough characterize influenza. The cough is often severe and protracted, while the other manifestations are usually self-limited, with recovery in 2-7 days. Recognition is usually by epidemiological characteristics, with sporadic cases identified by laboratory procedures. Influenza virus may cause the clinical picture of the common cold, croup, bronchiolitis, viral pneumonia or undifferentiated acute respiratory illness.
The importance of influenza derives from the rapidity with which epidemics evolve, the widespread morbidity, and the seriousness of complications, for example viral and bacterial pneumonia. During major epidemics, severe illness and death occur, principally among the elderly and those debilitated by chronic cardiac, pulmonary, renal or metabolic disease or immunosuppression. During the febrile stage of illness, laboratory confirmation is made by isolation of influenza viruses from pharyngeal or nasal secretions.
Specific treatment: amantadine or rimantadine, started within 48 hours of onset of influenza A illness and given for approx. 3-5 days, reduces symptoms and virus titres in respiratory secretions. Doses should be reduced for those over 65 years of age and those with decreased hepatic or renal function. Both pharmaceutical agents are associated with CNS side effects. The use of these drugs should be considered for non-immunized persons or groups at high risk of complications, such as residents of institutions or nursing homes for the elderly. Medication should be continued throughout the epidemic; it will not interfere with the response to influenza vaccine.
When a new subtype appears, all children and adults are equally at risk, except those who have lived through earlier epidemics caused by the same subtype. Infection produces immunity to the specific infecting virus. Vaccines produce serologic responses specific to the included viruses and elicit booster responses to related strains with which the individual has had prior experience. Preventive measures: a) Educate the public and healthcare staff in basic personal hygiene, especially the danger of unprotected coughs and sneezes and of hand-to-mucous-membrane transmission; b) Immunization with available killed-virus vaccines may provide 70-80% protection against infection in healthy young adults when the vaccine antigen closely matches the circulating strains of virus. In the elderly, immunization may be less effective in preventing illness, but it may reduce the severity of the disease, cut the incidence of complications by 50-60%, and cut deaths by approx. 80%; c) A single dose suffices for those with prior exposure to influenza A and B viruses. Routine vaccination programmes should be directed primarily at those at greatest risk of serious complications or death and at those who might spread infection to them (healthcare workers, household contacts of high-risk persons); d) Immunization should also be considered for those engaged in essential community services and is recommended for military staff. Yearly recommendations for vaccine components are based on the viral strains currently circulating, as determined by international surveillance.
There are few contraindications to immunization, yet some health care providers have misconceptions about specific contraindications to vaccination. ALL vaccines should be given on schedule, even when a child has a low-grade fever, a mild cold, diarrhea or another mild illness.
There are few ABSOLUTE contraindications to the Expanded Programme on Immunization (EPI) vaccines. The risk of delaying a vaccination because of a mild illness is that the child may not return and the opportunity for immunization is wasted. Infants born prematurely, regardless of birth weight, should be vaccinated at the same chronological age and according to the same schedule and precautions as full-term infants and children. Birth weight and size are generally NOT factors in deciding whether to postpone routine vaccination of a clinically stable premature infant. The full recommended dose of each vaccine should be used; divided or reduced doses are not recommended!
Neither killed nor live vaccines affect the safety of breast-feeding for mothers or infants. Breast-feeding does not adversely affect immunization and is NOT a contraindication for any vaccine. Breast-fed infants should be vaccinated according to routine recommended schedules.
Health Care personnel should use every opportunity to vaccinate eligible children at ALL times.
Generally, live vaccines should NOT be given to individuals with immune deficiency diseases or to persons who are immunosuppressed due to malignant disease, treatment with immunosuppressive agents, or irradiation. Both measles and oral poliomyelitis vaccines, however, can be given to individuals with HIV/AIDS.
Children with symptomatic HIV infection should NOT be vaccinated with BCG nor Yellow Fever vaccines.
A severe adverse event following a dose of vaccine (anaphylaxis, encephalitis/encephalopathy or non-febrile convulsions) is a TRUE contraindication to immunization.
Vaccines containing the whole-cell pertussis component should NOT be given to children with an evolving neurological disease, such as uncontrolled epilepsy or progressive encephalopathy.
Persons with a history of anaphylactic reaction(s) following egg ingestion, such as generalized urticaria, dyspnea (shortness of breath), angioedema (swelling of the mouth/lips and throat), hypotension or shock, should NOT receive vaccines prepared on (hen's) egg tissue. Vaccines propagated in chicken fibroblast cells can usually be given safely to those persons.
Conditions which are NOT contraindications to immunization:
Minor ailments such as upper respiratory tract infections or diarrhea, with fever less than 38.5°C
Allergy, asthma, or other atopic manifestations, hay fever or snuffles
Prematurity, small-for-date infants
Child being breast-fed
Family history of convulsions
Treatment with antibiotics, low-dose corticosteroids, or locally acting (e.g., topical or inhaled) steroids
Dermatoses, eczema, localized skin infection
Chronic diseases of the heart, lung, kidney or liver
Stable neurological conditions such as cerebral palsy and Down Syndrome
History of jaundice after birth
Conditions that are (ABSOLUTE) Contraindications to immunization:
Immunization should be postponed in persons suffering from severe infections with or without fever
Anaphylactic reaction to a previous dose contraindicates further immunization with that vaccine
Anaphylactic reaction to a vaccine constituent contraindicates the use of vaccines containing that substance
Meningococcal vaccine should not be used in pregnancy, unless there is a substantial risk of meningococcal infection; safety of the vaccine in pregnancy has not yet been established
Serious reactions should be reported promptly to your local Health Authorities via the Officer in charge of the EPI programme!
courtesy (EPI) Ministry of Health; Immunization Manual for Health Professionals, 3rd edition | http://www.salutogena.com/vaccination.htm | 13
19 | Before the white settlers arrived, two groups of Indian tribes lived in the region that is now Montana. The Arapaho, Assiniboine, Atsina, Blackfeet, Cheyenne, and Crow tribes lived on the plains. The mountains in the west were the home of the Bannack, Flathead, Kalispell, Kootenai, and Shoshone tribes. Other nearby tribes (such as the Sioux, Mandan, and Nez Perce) hunted in the Montana region.
Much of the region was acquired by the U.S. from France as part of the Louisiana Purchase in 1803. The northwestern part was gained by treaty with Great Britain in 1846. At various times, parts of Montana were in territories of Louisiana, Missouri, Nebraska, Dakota, Oregon, Washington and Idaho.
The region was first explored for France by François and Louis-Joseph Verendrye in the early 1740s. The American explorers Meriwether Lewis and William Clark led their expedition across Montana to the Pacific Coast in 1805. They returned in 1806, exploring parts of Montana both going and coming. By 1807, Manuel Lisa had set up Montana's first fur-trading post.
In 1841 missionaries built St. Mary's Mission, the first attempt at a permanent settlement. In 1847, the American Fur Company built Fort Benton on the Missouri River. It is now Montana's oldest continuously populated town.
The U.S. claim to NW Montana, the area between the Rockies and the N Idaho border, was validated in the Oregon Treaty of 1846 with the British. Montana was then still a wilderness of forest and grass, with a few trading posts and some missions.
Cattle raising began in Montana in the mid-1850s, when Richard Grant, a trader, brought the first herd to the area from Oregon. Gold was discovered in Grasshopper Creek in 1862. Thousands of prospectors built mining camps throughout Montana as gold strikes were made. Some of these camps included Bannack, Diamond City, and Virginia City.
The mining camps had almost no effective law enforcement. Finally, the citizens took the law into their own hands. One famous incident involved the two biggest gold camps--Bannack and Virginia City. The settlers learned that their sheriff, Henry Plummer, was actually an outlaw leader. The men of Bannack and Virginia City formed a vigilance committee to rid themselves of the outlaws. These vigilantes hanged Plummer in January 1864. They adopted as their symbol the numbers "3-7-77." These numbers may have represented the dimensions of a grave: 3 feet wide, by 7 feet long, by 77 inches deep. Many outlaws were hanged or driven from Montana by the vigilantes.
A large number of early prospectors came from the South, particularly from Confederate Army units that broke up in the Civil War (1861-1865). One of the major gold fields was called Confederate Gulch, because three Southerners found the first gold there.
During the boom years, gold dust was the principal money. For example, missionaries did not pass collection plates at church services. They passed a tin cup for gold dust. Chinese laundrymen even found gold in their wash water after they washed the miners' clothing.
Sidney Edgerton, an Idaho official, saw the need for better government of the wild mining camps. At the time, Montana was part of Idaho Territory. Edgerton wrote to Washington, D.C., urging the creation of a new territory. Montana became a territory on May 26, 1864, and Edgerton served as its first governor.
In 1866, Nelson Story, a cattleman, drove a thousand longhorn cattle from Texas to Montana. Story's herd started the Montana cattle industry in earnest.
The coming of the Northern Pacific Railroad in 1883 opened the way to the eastern markets and caused even more growth. But disaster struck the cattle industry in the bitterly cold winter of 1886-1887. Cattle died by the thousands in the howling blizzards and frigid temperatures. Ranching continued after this, but on a smaller, more careful, basis.
In 1876, the U.S. Army arrived at the Little Bighorn River to place all Native Americans on reservations. In the famous battle known as "Custer's Last Stand," Sioux and Cheyenne Indians killed Lieutenant Colonel George A. Custer and a large part of his men. The last serious Indian fighting in Montana started when the U.S. government tried to move the Nez Perce Indians from their lands in Oregon. Chief Joseph of the Nez Perce led his tribe toward Canada through Montana. The Indians and U.S. troops fought several battles in Idaho, and then a two-day battle at the Big Hole in southwestern Montana. Troops under Colonel Nelson A. Miles captured Chief Joseph's Indians about 40 miles (64 km) from the Canadian boundary in October 1877.
Between 1880 and 1890, the population of Montana grew from about 39,000 to nearly 143,000. The people of Montana first asked for statehood in 1884, but they had to wait five years. Finally, Montana was admitted as the 41st state on November 8, 1889. Joseph K. Toole of Helena became the first state governor.
Much of Montana's growth during the 1880s and 1890s came because of the mines at Butte. The earliest mines produced gold. Then silver was discovered in the rock ledges of the Butte Hill. Later, miners found rich veins of copper. Miners came to Butte from Ireland, England and other areas of Europe. Smelters were built, and more men were hired to operate them. The Butte Hill became known as "the Richest Hill on Earth."
Marcus Daly and William Clark controlled the largest of the Butte mines and competed both in business and politics.
Clark wanted to be a U.S. Senator, but Daly opposed him. In the campaign of 1899, Clark was accused of bribery. He won, but resigned rather than face an investigation by a Senate committee. Two years later, Clark won his Senate seat in a second election. He was helped by F. Augustus Heinze, another mine owner. Heinze had arrived in Butte long after Daly and Clark became millionaires, but he became wealthy through clever use of mining law and court suits.
First Daly, then the others sold their properties to a single corporation, which became the Anaconda Company. The Company organized an electric power company, built a railroad, and constructed dams. It also controlled forests, banks, and newspapers. Anaconda became so important in the life of the state that Montanans referred to it simply as "The Company."
During the early 1900s, Montana made increasing use of its natural resources. New dams harnessed the state's rivers, providing water for irrigation and electric power for industry. The extension of the railroads assisted the processing industries. New plants refined sugar, milled flour, and processed meat. In 1910, Congress created Glacier National Park, which became an attraction for tourists.
During the Great Depression, many Montanans lost their farms and their jobs. The U.S. government continued to develop natural resources in Montana. More than 10,000 workers were paid to build the Fort Peck Dam. Others helped with irrigation, soil conservation, and the construction of parks and public roads. This program was part of the New Deal.
Jeannette Rankin of Missoula was elected to the U.S. House of Representatives in 1916. She was the first woman to serve in Congress. She won fame in 1941 as the only member of Congress to vote against U.S. entry into World War II. Rankin said she did not believe in war and would not vote for it.
The Great Depression (1929-1939) hit the nation hard. Demand for the state's metals dropped because of the nationwide lag in production. Drought contributed to the drop in farm income brought on by the depression.
However, state and federal programs continued to develop Montana's resources during the 1930s. The building of the giant Fort Peck Dam helped provide jobs. Completion of the dam in 1940 provided badly needed water for irrigation. Other projects included insect control, irrigation, rural electrification, and soil conservation. The construction of parks, recreation areas, and roads also continued under government direction. In 1940, Montana voters elected Republican Sam C. Ford of Helena as governor. He was only the third Republican state governor in Montana history.
Montana's economy flourished during World War II (1941-1945). Flour, meat, and metals were all in demand. After the war, grain prices dropped, and many people abandoned their farms to search for work in the cities. Oil was discovered in the Williston Basin, and the Anaconda Aluminum Company opened a large plant in northwestern Montana.
In 1972, Montana voters narrowly approved a new state Constitution. The Constitution went into effect in 1973.
Montana's gas, oil, and coal industries expanded rapidly during the 1970s, when an energy shortage developed in the United States. Coal production increased sharply, from less than 3 million to more than 30 million tons per year. Huge open-pit strip mines operated at Colstrip and other southeastern Montana sites. The Montana Power Company built four coal-burning electric power plants at Colstrip. A 30 percent coal severance tax contributed needed funds to the state. But in the early 1980s, fuel prices fell, and Montana's production leveled off.
Montana's traditionally important industries experienced major difficulties during the mid-1980s. Farmers suffered hardships brought on by drought, low farm product prices, and reduced sales to foreign markets. The lumber industry cut fewer logs than in the past. In addition, the mining industry lost thousands of jobs. The Anaconda Company, once the leading mining company in the state, gave up copper mining altogether.
Today, Montana remains a state rich in natural resources. But state leaders seek to broaden Montana's economy by attracting small business and by promoting electronics and other advanced-technology ventures. The Science and Technology Alliance, created in 1985, looks for new uses for raw materials. The state is also working to expand its tourist industry. | http://www.shgresources.com/mt/history/ | 13
14 | A visible impact of land degradation is soil loss. Information regarding rates of soil loss in Africa is fragmented and country-specific, with estimates ranging from 900 t/km2/yr to 7 000 t/km2/yr (Rattan 1988). Likewise, studies of the economic impacts of soil loss are localized and varied, but losses are estimated to reach up to 9 per cent of GDP (UNU 1998). Loss of soil not only impairs productivity for future cultivation, but also causes sedimentation in dams and rivers, smothering of aquatic and coastal habitats, and eutrophication. This, in turn, leads to reduced biodiversity in, and productivity of, these systems. Ultimately, these effects are felt in the lowered economic and nutritional status of African people.
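To give these rates a physical interpretation, they can be converted to an equivalent depth of topsoil lost per year. The conversion below is illustrative only: the bulk density of roughly 1 300 kg/m3 is a typical topsoil value assumed here, not a figure from the source. Noting that 900 t/km2/yr equals 0.9 kg/m2/yr and 7 000 t/km2/yr equals 7 kg/m2/yr:

\[ d = \frac{m}{\rho}, \qquad \frac{0.9\ \mathrm{kg\,m^{-2}\,yr^{-1}}}{1300\ \mathrm{kg\,m^{-3}}} \approx 0.7\ \mathrm{mm/yr}, \qquad \frac{7\ \mathrm{kg\,m^{-2}\,yr^{-1}}}{1300\ \mathrm{kg\,m^{-3}}} \approx 5.4\ \mathrm{mm/yr} \]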
Over the past 30 years, soil structure has been damaged, nutrients have been depleted and susceptibility to erosion has increased, as a result of the increasing application of chemicals, the use of inappropriate equipment and technologies, and commercial monospecific plantations.
With appropriate agricultural practices, rates of soil loss can be reduced, and soil fertility and productivity can be restored, as recently shown in Ethiopia, Kenya, Malawi, Senegal, Somalia and South Africa (Nana-Sinkam 1995, Hoffman and Todd 2000). A Soil Fertility Initiative for sub-Saharan Africa was established in the 1990s, and launched at the 1996 World Food Summit, in response to growing concerns over soil degradation and loss. This is a participatory initiative, with technical partners including: the International Fertilizer Industry Association; the International Food Policy Research Institute (IFPRI); the International Centre for Research in Agroforestry (ICRAF); the International Fertilizer Development Centre; the Food and Agriculture Organization (FAO); and the World Bank (WB). The approach combines policy reform and technology adaptation, aimed at conserving natural resources and improving farmers' livelihoods through the design and implementation of integrated plant nutrient management programmes, which use a combination of available organic sources of nutrients, supplemented by mineral fertilizers. Many countries are currently preparing National Soil Fertility Action Plans as part of this programme (Maene 2001).
Desertification describes an extreme form of degradation in dryland areas, caused by climatic and management factors, where the land is no longer productive. Some 66 per cent of Africa is classified as desert or drylands and, currently, 46 per cent of Africa's land area is vulnerable to desertification, with more than 50 per cent of that under high or very high risk (Reich and others 2001). The most vulnerable areas are along desert margins, as shown in Figure 2f.5. These areas account for 5 per cent of the land area, and are home to an estimated 22 million people (Reich and others 2001). Climate change is predicted to reduce rainfall, to increase evaporation, and to increase the variability and unpredictability in rainfall for many areas of Africa (IPCC 1998, IPCC 2001). This, in turn, will lead to greater vulnerability to drought and desertification. In combination with continuing pressure for economic growth, and the rapid population growth rates, across the region, this will further threaten food security, unless coherent land tenure and management policies are established and enforced.
In recognition of their vulnerability to declining land quality and desertification, African countries were largely instrumental in establishing the United Nations Convention to Combat Desertification in Those Countries Experiencing Serious Drought and/or Desertification, Particularly in Africa (UNCCD) in 1992 (UNCCD 2000). Since then, most African countries have embarked on National Action Plans, together with awareness-raising campaigns and, by 2001, 17 countries had completed and formally adopted their programmes (UNCCD 2001). Action plans have also been developed at the sub-regional level: in northern Africa by the Arab Maghreb Union (AMU); in western Africa by the Permanent Inter-State Committee for Drought Control in the Sahel (CILSS); in eastern Africa by the Intergovernmental Authority on Development (IGAD); and for southern Africa by the Southern African Development Community (SADC) (UNCCD 2001). A Regional Action Programme is also being developed, and will be coordinated by the African Development Bank (ADB) in Abidjan. Desertification, poverty, development pressures and climatic factors interact in a complex manner to influence food security. It is, therefore, essential that desertification be tackled within a development framework, and in a participatory manner. The approach must combine: political and legal reform; economic and social development strategies; land tenure reform; international partnerships; capacity building; and financial sustainability.
Figure 2f.5: Vulnerability to desertification in Africa
Source: Reich and others 2001 | http://www.grida.no/aeo/175.htm | 13 |
27 | In the first two parts, we explained how to represent, add, and subtract numbers using base ten blocks. The use of base ten blocks gives students an effective tool that they can touch and manipulate to solve math questions. Not only are base ten blocks effective at solving math questions, they teach students important steps and skills that translate directly into paper-and-pencil methods of solving math questions. Students who first use base ten blocks develop a stronger conceptual understanding of place value, addition, subtraction, and other math skills. Because of their benefit to the math development of young people, educators have looked for other applications involving base ten blocks. In this article, a variety of other applications will be explained.
<b>Multiplying One- and Two-Digit Numbers</b>
In part one of this article, you read about representing and adding numbers using base ten blocks. Once these two skills are mastered, it is time to move on to many a child's nightmare: subtraction. Subtraction, as you may have heard, is essentially addition in reverse. It can be an arduous task on paper, but it can be quite easy with base ten blocks.
Recall that there are four different base ten blocks: cubes (ones), rods (tens), flats (hundreds), and blocks (thousands). Groups of ten base ten blocks can be regrouped or traded for equivalent amounts of other base ten blocks; for instance, ten cubes can be traded for one rod because both are worth ten. For subtraction, it is useful to know how to trade down rods, flats, and blocks. Trading down means converting larger place value blocks into smaller place value blocks. For instance, one flat can be traded for ten rods since they are both worth 100.
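If you like to see these ideas in symbols, here is a minimal Python sketch of representing a number with base ten blocks and trading one piece down; the function names are our own invention for illustration, not part of any teaching software.

```python
# A small sketch of base ten blocks: decompose a number into
# blocks (1000s), flats (100s), rods (10s), and cubes (1s),
# then trade one larger piece for ten of the next smaller piece.

def to_blocks(n):
    """Decompose n into counts of blocks, flats, rods, and cubes."""
    blocks, rest = divmod(n, 1000)
    flats, rest = divmod(rest, 100)
    rods, cubes = divmod(rest, 10)
    return {"blocks": blocks, "flats": flats, "rods": rods, "cubes": cubes}

def trade_down(counts, piece):
    """Trade one piece for ten of the next smaller piece (same value)."""
    smaller = {"blocks": "flats", "flats": "rods", "rods": "cubes"}[piece]
    counts[piece] -= 1        # give up one larger piece...
    counts[smaller] += 10     # ...and receive ten smaller ones
    return counts

print(to_blocks(345))                        # 3 flats, 4 rods, 5 cubes
print(trade_down(to_blocks(345), "flats"))   # 2 flats, 14 rods, 5 cubes
```

Both lines of output represent the same number, 345, which is exactly the point of trading: the value never changes, only the pieces do.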
Before describing the subtraction procedure, let's go over some vocabulary . . .
Base ten blocks are an excellent tool for teaching children the concept of addition because they allow children to touch and manipulate something real while learning important skills that translate well into paper and pencil addition. In this article, I will describe base ten blocks and how to use them to represent and add numbers.
The numbering system that children learn and the one most of us are familiar with is the base ten system. This essentially means that you can only use ten unique digits (0 to 9) in each place of a base ten number. For instance, in the number 345, there is a hundreds place, a tens place and a ones place. The only possible digits that could go in each place are 0, 1, 2, 3, 4, 5, 6, 7, 8, or 9. In this example, the digit in the ones place is 5.
Base ten blocks turn the base ten concept into something children can see and touch.
Base ten blocks consist of cubes, rods, flats, and blocks. Cubes represent the ones place and look exactly like their name suggests – a small cube usually one centimeter by one centimeter by one centimeter. Rods represent the tens place and look like ten cubes placed in a row and fused together. Flats, as you might have guessed, represent hundreds, and blocks represent thousands. A flat looks like one hundred cubes placed in a 10 x 10 square and attached together. A block looks like ten flats piled one on top of the other and bonded together.
Have you ever wondered how some people seem to get to grips with a new language very quickly? When learning a new language there is often a key that opens the door more quickly. This article will be that key!
The biggest difference between the English and Spanish languages is that Spanish gives lots of its words a gender. What that means is that the spelling of a word will be affected by what or who that word is referring to. If that sounds a little bit odd and not an easy Spanish lesson at all, let us look at a few examples.
A Spanish word for doctor is médico, which signifies a male doctor. If you wanted to write about a female doctor you could use the word médica.
For most law students, networking with law firms is the best way to find a great internship. In large cities and small towns, the legal community is close knit and many times, it is who you know, not what you know. The more people you meet with, the better your chances of building your professional network, and finding a great intern position. Networking is best started with one’s own friends and acquaintances. You can gradually branch out to network with your friends’ friends, colleagues, and members of the legal profession, as well as others in the business community that can further your efforts.
Do not be shy about contacting people in the legal profession who are not known to you. Concentrate on lawyers who are active in your field of interest. Make a list of potential law firms and seek appointments to set up interviews. You can make it clear that you are not looking for a job or internship, but seeking their advice and suggestions on your common field of interest. In the process of meeting them, if they do have an opening for an intern, they may consider you. However, your primary concern at this time is to increase your networking. Think of it as personal marketing that will serve you well your entire career.
Paying for a college education can seem overwhelming. However, it doesn't have to be, says Lorna Hill, regional marketing manager of American Education Services.
Hill advises parents and students to do their homework and meet with a financial aid professional to map out a plan.
“Oftentimes when families are worried they haven’t saved enough, they are embarrassed about speaking to someone about it,” she said. “It should never be this way. Most families are simply not aware of the options that are out there, which is why it is so important to meet with a financial professional to talk about their situation, whatever it may be.”
One of the first steps for any student looking for financial aid through a federal program is to complete the Free Application for Federal Student Aid (FAFSA), available online at www.fafsa.ed.gov and in high school guidance offices. Using the information provided in the FAFSA, the Department of Education determines a student's eligibility for federal funds, then notifies the student of his or her status with the Student Aid Report.
Federal aid can come in many forms, including the Pell Grant, which is the largest federal grant program. Almost 4 million students received Pell Grants in 2003. Loans, scholarships and work-study programs are some other available aid resources.
“The breadth of aid options available can be a bit overwhelming, which is why I encourage families to do some research, figure out exactly how much money will be needed over the course of the student’s education and then start exploring the options,” Hill said. “EducationPlanner.org is an excellent online resource for students and parents. The Web site walks students through all of the steps to prepare for, apply, select, decide and pay for higher education.”
It’s never too early or too late to start planning for higher education. Do your homework, speak with a financial aid professional and plan on investing some time into the process.
“It doesn’t matter how far along your child is in the application process,” Hill said. “There’s no time like now to start planning for his or her collegiate future and your financial peace of mind.”
College freshmen face a long list of hassles when they leave home for the first time and move into a dorm or their own apartment. Today, thanks to the growth of Internet-based banks, setting up a checking account for college students is no longer on that list of dreaded tasks.
Internet-based banks, such as NetBank, allow students to set up a checking account from their parents’ home with just an e-mail account and a driver’s license. Students arrive at college with pre-printed checks, as well as an ATM, debit or credit card.
According to a survey by NetBank, nearly 92 percent of students said they arrive at college with their own computer and 87 percent are able to connect to the Internet from their dorm or apartment without any problems. Of those respondents, 50 percent use online bill-paying services, compared to 28 percent of the general banking population.
“Today’s college students represent a new wave of bank customers who will never set foot in a bank branch,” said Jerry McCoy, chief marketing executive for NetBank. “They recognize that banking online is more convenient to their lifestyle and allows them to focus on more important tasks such as schoolwork and extracurricular activities.”
Internet bank accounts provide additional convenience to students, such as NetBank’s free, unlimited access to online bill pay, which eliminates the monthly exercise of writing and mailing checks. It’s an easy way to ensure students don’t forget to make a payment or don’t get hit with a late fee during busy times like mid-term exams and homecoming week celebrations.
Students can also earn more interest and pay fewer fees, because an Internet bank doesn’t need to cover the costs associated with running a local branch.
A NetBank account is as mobile as today’s students. No matter where life takes them – a semester abroad, summer vacation, first job – there’s no need to switch banks.
If you want to understand basic Spanish you need to know that the main difference between English and Spanish is in the way that sentences are constructed. Firstly let us look at a typical Spanish sentence.
“Me gusta el vino español”.
This sentence means;
“I like Spanish wine”.
Did you notice that in the English version “wine” comes after “Spanish”, but in the Spanish sentence “vino” comes before “español”? This is because in the Spanish language the adjective (an adjective is a word that is used to describe something; in this case we have used “español”, which means Spanish) always comes after the noun (a noun is basically another name for a thing, in this case “vino”, meaning wine).
So if I wanted to say “I like white wine” in Spanish, I would say “Me gusta el vino blanco”. Blanco means white in Spanish.
The rule applies whether we are referring to a drink or a person.
The English sentence “A Spanish man” would translate in Spanish to “Un señor español”.
Have you noticed another difference between the English and Spanish sentences? In the example we have used, we can see that “español” starts with a lower case, or small, “e”, but in English when saying “Spanish” we use a capital “S”. This is because any reference to a country in English should have a capital letter at the start of the word, but in Spanish you would only use a capital letter when using the country's name directly.
If we say “Soy de España”, this translates as “I am of Spain”. Because we used “España”, which is the name of the country, it gets a capital letter. Therefore if I say:

“Soy americano” (I am an American man), in Spanish we have a small “a”, as opposed to:

“Soy de América” (I am of America). Because we use the word for America (which is called a proper noun) we use a capital “A”.
How To Recognize Questions
In English we can change a statement to a question by adding the word DO and a question mark (?). As an example the statement “you have a pencil” could be something I say as I hand over a pencil or merely a statement of fact. But if I say “do you have a pencil?”, then there is no doubt that I am asking a question.
There is no word for DO in Spanish, so we have to have another way of knowing that the sentence we have just started is a question. To do this the Spanish language uses two question marks “¿?”, the inverted one at the start of the sentence and the standard one at the end. Therefore:
“Tiene un lápiz”, (“tiene” can mean “you have” and “lápiz” is “pencil”)
This statement becomes a question when we add ¿ and ?.
“¿Tiene un lápiz?” So if you see the inverted question mark at the start of a sentence, you know that you have to alter the tone of your voice to make it questioning.
Still struggling to pay off the college loan? Have you disconnected your cell phone just to keep off those darn creditors? Need a quick get-rich scheme? Well, maybe not that, but in this article you will discover some practical ways of saving money while attending your post-secondary institution.

College book prices have been rising along with the cost of tuition, and it is no wonder that so many students have resorted to photocopying their texts. Instead of photocopying, why not buy an older edition of the book? All the information that you will need to know is in there, and you can keep that book as a reference. It's not like the gravitational acceleration of Earth is going to change with the new edition, right? Too broke to buy books? Then try to find your books at the local library, not the school's library. You will be astonished at how many books you will find at the local library; not only is it handy, but it is also cheap.

True, if you borrow a book for a whole semester it will cost you a late fee, but who cares, right? A $10 late fee is nothing compared to the $200 price tag of a brand new book. Another upcoming trend in colleges these days is for students to rent out their books. I guess these students probably wanted to keep their books as a reference, or may still need them for the following year, and thought they could make some extra cash by renting them out. If you are thinking about this, then it is best to rent to people that you know, and always have a signed legal document. Always try to purchase secondhand.

Many colleges now have a book buy-back program, and these books can be found at the college used book room. Also, there are a ton of online book exchange sites on the net which are both free and handy. When to buy used and when to buy new? My rule is this: if a book is, say, four years old, then it is best to buy used. The only time I recommend buying new is if the book is less than four years old, since it is more likely to be the latest edition, and you know for sure that the same one is going to be used next term. This way you can sell your books back for the maximum profit.

Instead of buying a pop from the local vending machine, why not buy a case of 24 to store in your locker or dorm room? You will be astonished at how much you save buying in bulk. Another way of saving on food is to visit your friend's fridge on a daily basis, and whenever possible try to attend any free food locations (parties, club meetings, etc.) on campus. Instead of those brand new "checkout marked" shoes, why not save your money for something that you really want in college (laptop, books, contraceptives)?

Try to visit the local thrift store, and check out eBay on a daily basis; you will be astounded at how many $1 t-shirts are out there. Many students save money on printing and photocopying by using a scanner. The student simply scans what he or she needs and emails the scanned file to himself or herself. Not only do you save money, but you also have an electronic copy of the file, which you could later download to your laptop or PDA.

If you really must print, try not to use the school printing services. Most of the time these services cost you an extra 5-10¢ per page, cents which could go towards the purchase of a new ink cartridge. Instead of using a credit card, why not just keep hard cash in your wallet/purse?
Earning a degree is one of the biggest and most important investments many people make, and choosing the right degree helps secure a brighter future. But it can sometimes be difficult to choose a degree, as there are so many options and so much information to consider. Here are some tips to help you decide on a degree that matches your interests and is in line with your future career path.
<b>Consider your interests.</b>
You may be interested in pursuing an arts degree, but you believe the myth that a Bachelor of Arts degree is not enough to find a well-paying job, whereas a science- or technology-related degree has a better career path. So you put aside the degree you are interested in and force yourself to take a science or technology related degree. If you do this, you will find it harder to complete your degree and may give up halfway; even if you do successfully earn the degree, you may find it harder to succeed in a career in a field you dislike.
Hence, in the process of choosing what degree to pursue, you need to take your interests into consideration. Sit down and carefully think about the career you would like to build after graduation, and from there, gather information on the related degrees.
Microsoft Excel 2007 to 2013
The financial function we're going to explore is called PMT( ). You use this function when you want to calculate things like the monthly payment amounts on a loan, or how much per month a mortgage will cost you. We'll use it to work out how much per month a loan will cost us. Here's what we'll do.
We've decided to take out a loan of ten thousand pounds from our friendly banker. We're going to be paying it back over 5 years. The question is, how much per month is this going to cost us?
The PMT( ) Function in Excel
The PMT( ) function expects certain values in between its two round brackets. The values that go in round brackets are known as arguments. The arguments for the PMT( ) function are these:
PMT(rate, nper, pv, fv, type)
Only the first three are needed, and you can miss the final two out, if you like.
We'll work out our monthly loan costs with the help of the PMT( ) function.
First, create a new spreadsheet like the one below:
If you look at cell B1 on the spreadsheet, you'll see a figure of £10,000. This is the amount we want to borrow. The labels on Row 3 show what else we need: an interest rate, the number of payments we'll make over the 5 years, the present value of the loan, the amount we'll have to pay back each month, and the total amount paid back after 5 years. But we only need the first three for our PMT() function.
In cell A4, we'll need an interest rate. In cell B4 we'll need the number of payments, and in cell C4 we'll need the Present Value of the loan. First is the interest rate.
Imagine that the interest rate given to us by the bank is 24 percent per year. For the PMT( ) function, we need to divide this figure by 12 (the number of months in a year). So try this in cell A4:
= 24% / 12
Now that we have an interest rate, the next thing we need for the PMT( ) function is how many payments there are in total. We have to pay something back every month for 5 years, which gives a simple formula. So, in cell B4:
= 12 * 5
This figure of 60 is for the second argument of the PMT( ) function - the nper. This is just the number of payments.
Now that you have a figure in cell A4 (rate), and a figure in cell B4 (nper), there's only one more to go - the Present Value (pv).
The Present Value of a loan, also known as the Principal, is what the loan is worth at the present time. Since we haven't made any payments yet, this is just 10,000 for us: enter this figure in cell C4.
OK, we now have all the parts for our PMT() function: a rate (A4), an nper (B4), and a pv (C4). Try this:
=PMT(A4, B4, C4)
Hit the enter key on your keyboard, and you'll see the monthly amount appear. The figure you should have is -£287.68. The reason there is a minus sign before the total is because it's a debt: what you owe to the bank.
This is what your spreadsheet should look like:
The only thing left to do is see how much this loan will cost us at the end of 5 years. All you need to do here is multiply the monthly amount in cell D4 by the number of payments in cell B4. Enter your formula for this in cell E4, and your spreadsheet will look like ours below:
So a ten thousand pounds loan, at the interest rate the bank is offering, means we'll have to pay back just over 17 thousand pounds over 5 years.
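If you would like to check Excel's arithmetic for yourself, the figure PMT( ) returns is the standard annuity payment formula, which is easy to reproduce. Here is a minimal Python sketch (the pmt function below is our own, not a library call), using the same loan as above:

```python
# Reproduce Excel's PMT() result with the standard annuity formula.
# rate = interest rate per period, nper = number of payments,
# pv = present value of the loan.

def pmt(rate, nper, pv):
    """Periodic payment on a loan; negative because it is money paid out."""
    return -pv * rate / (1 - (1 + rate) ** -nper)

monthly = pmt(0.24 / 12, 12 * 5, 10000)
print(round(monthly, 2))        # -287.68, matching the spreadsheet
print(round(monthly * 60, 2))   # -17260.78, just over 17 thousand in total
```

Changing the first argument, for example pmt(0.23 / 12, 60, 10000), reproduces the £281.90 monthly payment you will arrive at below when the interest rate drops to 23%.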
Tweaking the Values
We can change the spreadsheet slightly to give us more control. For your figure in cell B4, the number of payments, you entered 12 * 5. This is 12 months multiplied by 5 years. But what if we wanted to pay the loan back over 10 years, or 15? How much will our monthly payments be then? And what will the final cost of the loan be?
Also, the interest rate seems a bit high. What if we can get a better rate elsewhere?
By making a few changes to our spreadsheet, we can amend these values more easily. First we'll need two new rows.
Inserting New Rows in Excel
We need to insert new rows in our spreadsheet. To insert a new row, click into cell A2. Then click on the Home tab at the top of Excel. Locate the Cells panel, and click the Insert item:
From the Insert menu, click on Insert Sheet Rows:
Excel will insert a new row for you. Do this again to get two blank rows. Add two new labels, Num of Years and Interest. Your spreadsheet will then look like this:
Adapting the PMT Formula
We can adapt the formulas we've entered so far, in order to make them more usable. As an example, we'll adapt the interest rate.
To get the interest rate for cell A4, we entered a formula:
= 24% / 12
Instead of having the interest rate inside the formula in cell A4, however, we can place it at the top, in cell B3 on our new row. We can then alter the interest rate by simply typing a new one in cell B3. To clear all that up, type 24% into cell B3, then change the formula in cell A4 from:

= 24% / 12

to this:

= B3 / 12
Hit the enter key on your keyboard and nothing should change on your spreadsheet. But the difference is that you can now enter a new interest rate in cell B3, and see how this affects the loan amounts. Try it out by typing 23% in cell B3:
As you can see, the interest rate has changed to a rather long figure. But notice the Monthly Amount - it has gone down to £281.90. The total amount we have to pay back has changed, too. Play around with the interest rate in cell B3, just to get a feel for how it works.
In cell B6 of your spreadsheet, you have the following formula:
= 12 * 5
This calculates the number of months for the loan. Change this formula so that the number of years is coming from B2. Your finished spreadsheet should look like ours below:
If you play around with the values in cells B1, B2 and B3 you should be able to quickly see the new loan repayments.
In the next part, you'll see what Conditional Logic is, and how to use it in Excel. First, try this project. It's all to do with Averages, so it shouldn't cause you too many problems.
The gross domestic product (GDP) or gross domestic income (GDI) is a basic measure of a country's overall economic output. It is the market value of all final goods and services made within the borders of a country in a year. It is often positively correlated with the standard of living, though its use as a stand-in for measuring the standard of living has come under increasing criticism, and many countries are actively exploring alternative measures to GDP for that purpose.

GDP can be determined in three ways, all of which should in principle give the same result: the product (or output) approach, the income approach, and the expenditure approach. The most direct of the three is the product approach, which sums the outputs of every class of enterprise to arrive at the total. The expenditure approach works on the principle that all of the product must be bought by somebody, therefore the value of the total product must be equal to people's total expenditures in buying things. The income approach works on the principle that the incomes of the productive factors ("producers," colloquially) must be equal to the value of their product, and determines GDP by finding the sum of all producers' incomes.
Example: the expenditure method:

- GDP = private consumption + gross investment + government spending + (exports − imports)
In the name "Gross Domestic Product,"
"Gross" means that GDP measures production regardless of the various uses to which that production can be put. Production can be used for immediate consumption, for investment in new fixed assets or inventories, or for replacing depreciated fixed assets. If depreciation of fixed assets is subtracted from GDP, the result is called the Net domestic product; it is a measure of how much product is available for consumption or adding to the nation's wealth. In the above formula for GDP by the expenditure method, if net investment (which is gross investment minus depreciation) is substituted for gross investment, then net domestic product is obtained.
"Domestic" means that GDP measures production that takes place within the country's borders. In the expenditure-method equation given above, the exports-minus-imports term is necessary in order to null out expenditures on things not produced in the country (imports) and add in things produced but not sold in the country (exports).
Economists (since Keynes) have preferred to split the general consumption term into two parts; private consumption, and public sector (or government) spending. Two advantages of dividing total consumption this way in theoretical macroeconomics are:
- Private consumption is a central concern of welfare economics. The private investment and trade portions of the economy are ultimately directed (in mainstream economic models) to increases in long-term private consumption.
- If separated from endogenous private consumption, government consumption can be treated as exogenous, so that different government spending levels can be considered within a meaningful macroeconomic framework.
In the product (or output) approach, the economy is usually broken down into classes of enterprise: agriculture, construction, manufacturing, etc. Their outputs are estimated largely on the basis of surveys which businesses fill out. To avoid "double-counting" in cases where the output of one enterprise is not a final good, but serves as input into another enterprise, either only final-goods outputs must be counted, or a "value-added" approach must be taken, where what is counted is not the total value output by an enterprise, but its value added: the difference between the value of its output and the value of its input.
- Gross Value Added = Sum of values added by all enterprises = Sales of goods - purchase of intermediate goods to produce the goods sold
Depending on how gross value added has been calculated, it may be necessary to make an adjustment to it before it can be considered equal to GDP. This is because GDP is the market value of goods and services – the price paid by the customer – but the price received by the producer may be different than this if the government taxes or subsidises the product. For example, if there is a sales tax:
- Producer's price + sales tax = market price
If taxes and subsidies have not already been computed as part of GVA, we must compute GDP as:
- GDP = GVA + Taxes on products - Subsidies on products
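As a toy illustration of that bookkeeping (all figures below are invented), here is a minimal Python sketch of the product approach: sum each enterprise's value added, then adjust for taxes and subsidies on products.

```python
# Toy product-approach calculation. All figures are invented.
# Value added = sales of goods - purchases of intermediate goods.

enterprises = [
    {"name": "farm",   "sales": 100, "intermediate_purchases": 20},
    {"name": "mill",   "sales": 250, "intermediate_purchases": 100},
    {"name": "bakery", "sales": 600, "intermediate_purchases": 250},
]

gva = sum(e["sales"] - e["intermediate_purchases"] for e in enterprises)

taxes_on_products = 40      # e.g. sales taxes, which raise market prices
subsidies_on_products = 10  # subsidies, which lower market prices

gdp = gva + taxes_on_products - subsidies_on_products
print(gva, gdp)   # 580 610
```

Note how the mill's purchase of the farm's grain, and the bakery's purchase of the mill's flour, are netted out, so nothing is double-counted.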
In contemporary economies, most things produced are produced for sale, and sold. Therefore, measuring the total expenditure of money used to buy things is a way of measuring production. This is known as the expenditure method of calculating GDP. Note that if you knit yourself a sweater, it is production but does not get counted as GDP because it is never sold. Sweater-knitting is a small part of the economy, but if one counts some major activities such as child-rearing (generally unpaid) as production, GDP ceases to be an accurate indicator of production.
Components of GDP by expenditure
GDP (Y) is a sum of Consumption (C), Investment (I), Government Spending (G) and Net Exports (X - M).
- Y = C + I + G + (X − M)
Here is a description of each GDP component:
- C (consumption) is normally the largest GDP component, consisting of private household expenditures in the economy. These personal expenditures fall under one of the following categories: durable goods, non-durable goods, and services. Examples include food, rent, jewelry, gasoline, and medical expenses but does not include the purchase of new housing.
- I (investment) includes business investment in plant, equipment, inventory, and structures, and does not include exchanges of existing assets. Examples include construction of a new mine, purchase of software, or purchase of machinery and equipment for a factory. Spending by households (not government) on new houses is also included in Investment. In contrast to its colloquial meaning, 'Investment' in GDP does not mean purchases of financial products. Buying financial products is classed as 'saving', as opposed to investment. This avoids double-counting: if one buys shares in a company, and the company uses the money received to buy plant, equipment, etc., the amount will be counted toward GDP when the company spends the money on those things; to also count it when one gives it to the company would be to count two times an amount that only corresponds to one group of products. Buying bonds or stocks is a swapping of deeds, a transfer of claims on future production, not directly an expenditure on products.
- G (government spending) is the sum of government expenditures on final goods and services. It includes salaries of public servants, purchase of weapons for the military, and any investment expenditure by a government. It does not include any transfer payments, such as social security or unemployment benefits.
- X (exports) represents gross exports. GDP captures the amount a country produces, including goods and services produced for other nations' consumption, therefore exports are added.
- M (imports) represents gross imports. Imports are subtracted since imported goods will be included in the terms G, I, or C, and must be deducted to avoid counting foreign supply as domestic.
Note that C, G, and I are expenditures on final goods and services; expenditures on intermediate goods and services do not count. (Intermediate goods and services are those used by businesses to produce other goods and services within the accounting year.)
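For concreteness, here is the expenditure equation applied to invented numbers (a sketch only, not real data for any country):

```python
# Toy expenditure-approach calculation: Y = C + I + G + (X - M).
# All figures are invented, in billions of currency units.

C = 1200   # private consumption
I = 300    # gross investment
G = 400    # government spending on final goods and services
X = 250    # gross exports
M = 280    # gross imports

Y = C + I + G + (X - M)
print(Y)   # 1870: imports are subtracted so foreign supply is not counted
```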
According to the U.S. Bureau of Economic Analysis, which is responsible for calculating the national accounts in the United States: "In general, the source data for the expenditures components are considered more reliable than those for the income components [see income method, below]."
Examples of GDP component variables
C, I, G, and NX (net exports): If a person spends money to renovate a hotel to increase occupancy rates, the spending represents private investment, but if he buys shares in a consortium to execute the renovation, it is saving. The former is included when measuring GDP (in I), the latter is not. However, when the consortium conducts its own expenditure on renovation, that expenditure is included in GDP.
If a hotel is a private home, spending for renovation would be measured as consumption, but if a government agency converts the hotel into an office for civil servants, the spending would be included in the public sector spending, or G.
If the renovation involves the purchase of a chandelier from abroad, that spending would be counted as C, G, or I (depending on whether a private individual, the government, or a business is doing the renovation), but then counted again as an import and subtracted from the GDP so that GDP counts only goods produced within the country.
If a domestic producer is paid to make the chandelier for a foreign hotel, the payment would not be counted as C, G, or I, but would be counted as an export.
Another way of measuring GDP is to measure total income. If GDP is calculated this way it is sometimes called Gross Domestic Income (GDI), or GDP(I). GDI should provide the same amount as the expenditure method described above. (By definition, GDI = GDP. In practice, however, measurement errors will make the two figures slightly off when reported by national statistical agencies.)
Total income can be subdivided according to various schemes, leading to various formulae for GDP measured by the income approach. A common one is:
- GDP = compensation of employees + gross operating surplus + gross mixed income + taxes less subsidies on production and imports
- GDP = COE + GOS + GMI + TP & M - SP & M
- Compensation of employees (COE) measures the total remuneration to employees for work done. It includes wages and salaries, as well as employer contributions to social security and other such programs.
- Gross operating surplus (GOS) is the surplus due to owners of incorporated businesses. Often called profits, although only a subset of total costs are subtracted from gross output to calculate GOS.
- Gross mixed income (GMI) is the same measure as GOS, but for unincorporated businesses. This often includes most small businesses.
The sum of COE, GOS and GMI is called total factor income; it is the income of all of the factors of production in society. It measures the value of GDP at factor (basic) prices. The difference between basic prices and final prices (those used in the expenditure calculation) is the total taxes and subsidies that the government has levied or paid on that production. So adding taxes less subsidies on production and imports converts GDP at factor cost to GDP(I).
Total factor income is also sometimes expressed as:
- Total factor income = Employee compensation + Corporate profits + Proprietor's income + Rental income + Net interest
Yet another formula for GDP by the income method is:

- GDP = R + I + P + SA + W

where R : rents
I : interest
P : profits
SA : statistical adjustments (corporate income taxes, dividends, undistributed corporate profits)
W : wages

Note the mnemonic, "ripsaw".
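Using invented figures again, here is a minimal sketch of the first income-approach formula above; the numbers are deliberately chosen so the total matches the expenditure sketch earlier, since by definition GDP(I) should equal GDP by expenditure:

```python
# Toy income-approach calculation:
# GDP = COE + GOS + GMI + taxes less subsidies on production and imports.
# All figures are invented.

COE = 900   # compensation of employees
GOS = 450   # gross operating surplus
GMI = 300   # gross mixed income

taxes_on_production_and_imports = 250
subsidies_on_production_and_imports = 30

total_factor_income = COE + GOS + GMI   # GDP at factor (basic) prices
gdp = (total_factor_income
       + taxes_on_production_and_imports
       - subsidies_on_production_and_imports)
print(total_factor_income, gdp)   # 1650 1870
```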
The production boundary
Not all useful human activity is counted in GDP. Indeed, not everything that economists recognise as "production" is counted in GDP. The economists who compile GDP readily admit even the latter point. However, it raises several questions: What does GDP actually measure? Is it a useful figure? Does it mean what most people think it means?
The economists who compile national accounts speak of a "production boundary" that delimits what will be counted as GDP.
"One of the fundamental questions that must be addressed in preparing the national economic accounts is how to define the production boundary – that is, what parts of the myriad human activities are to be included in or excluded from the measure of the economic production."All output for market is at least in theory included within the boundary. Market output is defined as that which is sold for "economically significant" prices; economically significant prices are "prices which have a significant influence on the amounts producers are willing to supply and purchasers wish to buy." An exception is that illegal goods and services are often excluded even if they are sold at economically significant prices (Australia and the United States exclude them).
This leaves non-market output. It is partly excluded and partly included. First, "natural processes without human involvement or direction" are excluded. Also, there must be a person or institution that owns or is entitled to compensation for the product. An example of what is included and excluded by these criteria is given by the United States' national accounts agency: "the growth of trees in an uncultivated forest is not included in production, but the harvesting of the trees from that forest is included."
Within the limits so far described, the boundary is further constricted by "functional considerations." The Australian Bureau of Statistics explains this: "The national accounts are primarily constructed to assist governments and others to make market-based macroeconomic policy decisions, including analysis of markets and factors affecting market performance, such as inflation and unemployment." Consequently, production that is, according to them, "relatively independent and isolated from markets," or "difficult to value in an economically meaningful way" [i.e., difficult to put a price on] is excluded. Thus excluded are services provided by people to members of their own families free of charge, such as child rearing, meal preparation, cleaning, transportation, entertainment of family members, emotional support, and care of the elderly. Most other production for own (or one's family's) use is also excluded, with two notable exceptions which are given in the list later in this section.
Nonmarket outputs that are included within the boundary are listed below. Since, by definition, they do not have a market price, the compilers of GDP must impute a value to them, usually either the cost of the goods and services used to produce them, or the value of a similar item that is sold on the market.
- Goods and services provided by governments and non-profit organisations free of charge or for economically insignificant prices are included. The value of these goods and services is estimated as equal to their cost of production.
- Goods and services produced by businesses for their own use are included insofar as the compilers can estimate them. An example of this kind of production would be a machine constructed by an engineering firm for use in its own plant.
- Renovations and upkeep by an individual to a home that she owns and occupies are included. The value of the upkeep is estimated as the rent that she could charge for the home if she did not occupy it herself. This is the largest item of production for own use by an individual (as opposed to a business) that the compilers include in GDP.
- Agricultural production for consumption by oneself or one's household is included.
- Services (such as chequing-account maintenance and services to borrowers) provided by banks and other financial institutions without charge or for a fee that does not reflect their full value have a value imputed to them by the compilers and are included. The financial institutions provide these services by giving the customer a less advantageous interest rate than they would if the services were absent; the value imputed to these services by the compilers is the difference between the interest rate of the account with the services and the interest rate of a similar account that does not have the services. According to the United States Bureau of Economic Analysis, this is one of the largest imputed items in the GDP.
GDP vs GNP
GDP can be contrasted with gross national product (GNP) or gross national income (GNI). The difference is that GDP defines its scope according to location, while GNP defines its scope according to ownership. GDP is product produced within a country's borders; GNP is product produced by enterprises owned by a country's citizens. The two would be the same if all of the productive enterprises in a country were owned by its own citizens, but foreign ownership makes GDP and GNP non-identical. Production within a country's borders, but by an enterprise owned by somebody outside the country, counts as part of its GDP but not its GNP; on the other hand, production by an enterprise located outside the country, but owned by one of its citizens, counts as part of its GNP but not its GDP.
To take the United States as an example, the U.S.'s GNP is the value of output produced by American-owned firms, regardless of where the firms are located.
Gross national income (GNI) equals GDI plus income receipts from the rest of the world minus income payments to the rest of the world.
In 1991, the United States switched from using GNP to using GDP as its primary measure of production. The relationship between United States GDP and GNP is shown in table 1.7.5 of the National Income and Product Accounts.
Year-over-year real GNP growth in the United States in 2007 was 3.2%.
The international standard for measuring GDP is contained in the book System of National Accounts (1993), which was prepared by representatives of the International Monetary Fund, European Union, Organization for Economic Co-operation and Development, United Nations and World Bank. The publication is normally referred to as SNA93 to distinguish it from the previous edition published in 1968 (called SNA68).
SNA93 provides a set of rules and procedures for the measurement of national accounts. The standards are designed to be flexible, to allow for differences in local statistical needs and conditions.
Within each country GDP is normally measured by a national government statistical agency, as private sector organizations normally do not have access to the information required (especially information on expenditure and production by governments).
Adjustments to GDP
When comparing GDP figures from one year to another, it is desirable to compensate for changes in the value of money – inflation or deflation. The raw GDP figure as given by the equations above is called the nominal, or historical, or current, GDP. To make it more meaningful for year-to-year comparisons, it may be multiplied by the ratio between the value of money in the year the GDP was measured and the value of money in some base year. For example, suppose a country's GDP in 1990 was $100 million and its GDP in 2000 was $300 million; but suppose that inflation had halved the value of its currency over that period. To meaningfully compare its 2000 GDP to its 1990 GDP we could multiply the 2000 GDP by one-half, to make it relative to 1990 as a base year. The result would be that the 2000 GDP equals $300 million x one-half = $150 million, in 1990 monetary terms. We would see that the country's GDP had, realistically, increased by 1.5 times over that period, not 3 times, as it might appear from the raw GDP data. The GDP adjusted for changes in money-value in this way is called the real, or constant, GDP.
The factor used to convert GDP from current to constant values in this way is called the GDP deflator. Unlike the Consumer Price Index, which measures inflation (or deflation – rarely!) in the price of household consumer goods, the GDP deflator measures changes in the prices of all domestically produced goods and services in an economy – including investment goods and government services, as well as household consumption goods.
Constant-GDP figures allow us to calculate a GDP growth rate, which tells us how much a country's production has increased (or decreased, if the growth rate is negative) compared to the previous year.
- Real GDP growth rate for year n = [(Real GDP in year n) - (Real GDP in year n - 1)]/ (Real GDP in year n - 1)
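The worked example above (the country whose nominal GDP tripled while its currency lost half its value) looks like this in a minimal Python sketch:

```python
# Toy real-GDP calculation, following the example in the text:
# nominal GDP of $100m in 1990 and $300m in 2000, with money in 2000
# worth half what it was in 1990 (a deflator of 2.0 against 1990).

nominal_1990 = 100_000_000
nominal_2000 = 300_000_000
deflator_2000 = 2.0

real_2000 = nominal_2000 / deflator_2000          # $150m in 1990 money
growth = (real_2000 - nominal_1990) / nominal_1990
print(real_2000, growth)   # 150000000.0 0.5, i.e. 1.5 times the 1990 GDP
```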
Another thing that it may be desirable to compensate for is population growth. If a country's GDP doubled over some period but its population tripled, the increase in GDP may not be deemed such a great accomplishment: the average person in the country is producing less than they were before. Per-capita GDP is the measure compensated for population growth.
The level of GDP in different countries may be compared by converting their value in national currency according to either the current currency exchange rate, or the purchase power parity exchange rate.
- Current currency exchange rate is the exchange rate in the international currency market.
- Purchasing power parity exchange rate is the exchange rate based on the purchasing power parity (PPP) of a currency relative to a selected standard (usually the United States dollar).
The ranking of countries may differ significantly based on which method is used.
- The current exchange rate method converts the value of goods and services using global currency exchange rates. The method can offer better indications of a country's international purchasing power and relative economic strength. For instance, if 10% of GDP is being spent on buying hi-tech foreign arms, the number of weapons purchased is entirely governed by current exchange rates, since arms are a traded product bought on the international market. There is no meaningful 'local' price distinct from the international price for high technology goods.
- The purchasing power parity method accounts for the relative effective domestic purchasing power of the average producer or consumer within an economy. The method can provide a better indicator of the living standards of less developed countries, because it compensates for the weakness of local currencies in the international markets. For example, India ranks 12th by nominal GDP, but fourth by PPP. The PPP method of GDP conversion is more relevant to non-traded goods and services.
There is a clear pattern of the purchasing power parity method decreasing the disparity in GDP between high and low income (GDP) countries, as compared to the current exchange rate method. This finding is called the Penn effect.
For more information, see Measures of national income and output.
Standard of living and GDP
GDP per capita is not a measurement of the standard of living in an economy. However, it is often used as such an indicator, on the rationale that all citizens would benefit from their country's increased economic production. Similarly, GDP per capita is not a measure of personal income. GDP may increase while incomes for the majority of a country's citizens may even decrease or change disproportionally. For example, in the US from 1990 to 2006 the earnings (adjusted for inflation) of individual workers, in private industry and services, increased by less than 0.5% per year while GDP (adjusted for inflation) increased about 3.6% per year over the same period.
The major advantage of GDP per capita as an indicator of standard of living is that it is measured frequently, widely and consistently. It is measured frequently in that most countries provide information on GDP on a quarterly basis, which allows a user to spot trends regularly. It is measured widely in that some measure of GDP is available for almost every country in the world, allowing comparisons to be made between countries. It is measured consistently in that the technical definition of GDP is relatively consistent among countries.
The major disadvantage is that it is not, strictly speaking, a measure of standard of living. GDP is intended to be a measure of particular types of economic activity within a particular country. Nothing about the definition of GDP suggests it is necessarily a measure of standard of living. For instance, in an extreme example, a country which exported 100 per cent of its production and imported nothing would still have a high GDP, but a very poor standard of living.
The argument in favor of using GDP is not that it is a good indicator of the standard of living, but that, all other things being equal, the standard of living tends to increase when GDP per capita increases. As such, GDP can be a proxy for the standard of living, rather than a direct measure. The occasional use of GDP per capita as a proxy for labor productivity is also problematic.
Limitations of GDP to judge the health of an economy
GDP is widely used by economists to gauge the health of an economy, as its variations are relatively quickly identified. However, its value as an indicator for the standard of living is considered to be limited. Not only that, but if the aim of economic activity is to produce ecologically sustainable increases in the overall human standard of living, GDP is a perverse measurement; it treats loss of ecosystem services as a benefit instead of a cost. Other criticisms of how the GDP is used include:
- Wealth distribution – GDP does not take disparity in incomes between the rich and poor into account. However, numerous Nobel-prize winning economists have disputed the importance of income inequality as a factor in improving long-term economic growth. In fact, short term increases in income inequality may even lead to long term decreases in income inequality. See income inequality metrics for discussion of a variety of inequality-based economic measures.
- Non-market transactions – GDP excludes activities that are not provided through the market, such as household production and volunteer or unpaid services. As a result, GDP is understated. Unpaid work conducted on Free and Open Source Software (such as Linux) contributes nothing to GDP, but it was estimated that it would have cost more than a billion US dollars for a commercial company to develop. Also, if Free and Open Source Software became identical to its proprietary counterparts, and the nation producing the proprietary software stopped buying it and switched to the free versions, then the GDP of this nation would fall, even though there would be no reduction in economic production or standard of living. The work of New Zealand economist Marilyn Waring has highlighted that if a concerted attempt to factor in unpaid work were made, then it would in part undo the injustices of unpaid (and in some cases, slave) labour, and also provide the political transparency and accountability necessary for democracy. Shedding some doubt on this claim, however, is the theory that won economist Douglass North the Nobel Prize in 1993. North argued that the creation and strengthening of the patent system, by encouraging private invention and enterprise, became the fundamental catalyst behind the Industrial Revolution in England.
- Underground economy – Official GDP estimates may not take into account the underground economy, in which transactions contributing to production, such as illegal trade and tax-avoiding activities, are unreported, causing GDP to be underestimated.
- Non-monetary economy – GDP omits economies where no money comes into play at all, resulting in inaccurate or abnormally low GDP figures. For example, in countries with major business transactions occurring informally, portions of local economy are not easily registered. Bartering may be more prominent than the use of money, even extending to services (I helped you build your house ten years ago, so now you help me).
- GDP also ignores subsistence production.
- Quality of goods – People may buy cheap, low-durability goods over and over again, or they may buy high-durability goods less often. It is possible that the monetary value of the items sold in the first case is higher than that in the second case, in which case a higher GDP is simply the result of greater inefficiency and waste.
- Quality improvements and inclusion of new products – By not adjusting for quality improvements and new products, GDP understates true economic growth. For instance, although computers today are less expensive and more powerful than computers from the past, GDP treats them as the same products by only accounting for the monetary value. The introduction of new products is also difficult to measure accurately and is not reflected in GDP despite the fact that it may increase the standard of living. For example, even the richest person from 1900 could not purchase standard products, such as antibiotics and cell phones, that an average consumer can buy today, since such modern conveniences did not exist back then.
- What is being produced – GDP counts work that produces no net change or that results from repairing harm. For example, rebuilding after a natural disaster or war may produce a considerable amount of economic activity and thus boost GDP. The economic value of health care is another classic example—it may raise GDP if many people are sick and they are receiving expensive treatment, but it is not a desirable situation. Alternative economic measures, such as the standard of living or discretionary income per capita better measure the human utility of economic activity. See uneconomic growth.
- Externalities – GDP ignores externalities or economic bads such as damage to the environment. By counting goods which increase utility but not deducting bads or accounting for the negative effects of higher production, such as more pollution, GDP is overstating economic welfare. The Genuine Progress Indicator is thus proposed by ecological economists and green economists as a substitute for GDP. In countries highly dependent on resource extraction or with high ecological footprints the disparities between GDP and GPI can be very large, indicating ecological overshoot. Some environmental costs, such as cleaning up oil spills are included in GDP.
- Sustainability of growth – GDP does not measure the sustainability of growth. A country may achieve a temporarily high GDP by over-exploiting natural resources or by misallocating investment. For example, the large deposits of phosphates gave the people of Nauru one of the highest per capita incomes on earth, but since 1989 their standard of living has declined sharply as the supply has run out. Oil-rich states can sustain high GDPs without industrializing, but this high level would no longer be sustainable if the oil runs out. Economies experiencing an economic bubble, such as a housing bubble or stock bubble, or a low private-saving rate tend to appear to grow faster owing to higher consumption, mortgaging their futures for present growth. Economic growth at the expense of environmental degradation can end up costing dearly to clean up; GDP does not account for this.
- One main problem in estimating GDP growth over time is that the purchasing power of money varies in different proportions for different goods, so when the GDP figure is deflated over time, GDP growth can vary greatly depending on the basket of goods used and the relative proportions used to deflate the GDP figure. For example, in the past 80 years the GDP per capita of the United States, if measured by the purchasing power of potatoes, did not grow significantly. But if it is measured by the purchasing power of eggs, it grew several times. For this reason, economists comparing multiple countries usually use a varied basket of goods.
- Cross-border comparisons of GDP can be inaccurate as they do not take into account local differences in the quality of goods, even when adjusted for purchasing power parity. This type of adjustment to an exchange rate is controversial because of the difficulties of finding comparable baskets of goods to compare purchasing power across countries. For instance, people in country A may consume the same number of locally produced apples as in country B, but apples in country A are of a more tasty variety. This difference in material well being will not show up in GDP statistics. This is especially true for goods that are not traded globally, such as housing.
- Transfer pricing on cross-border trades between associated companies may distort import and export measures.
- As a measure of actual sale prices, GDP does not capture the economic surplus between the price paid and subjective value received, and can therefore underestimate aggregate utility.
- Austrian economist critique – Criticisms of GDP figures were expressed by Austrian economist Frank Shostak. Among other criticisms, he stated the following:
The GDP framework cannot tell us whether final goods and services that were produced during a particular period of time are a reflection of real wealth expansion, or a reflection of capital consumption.

He goes on:
For instance, if a government embarks on the building of a pyramid, which adds absolutely nothing to the well-being of individuals, the GDP framework will regard this as economic growth. In reality, however, the building of the pyramid will divert real funding from wealth-generating activities, thereby stifling the production of wealth.

Austrian economists are critical of the basic idea of attempting to quantify national output. Shostak quotes Austrian economist Ludwig von Mises:
The attempt to determine in money the wealth of a nation or the whole mankind are as childish as the mystic efforts to solve the riddles of the universe by worrying about the dimension of the pyramid of Cheops.
Simon Kuznets, the economist who developed the first comprehensive set of national income accounts, warned in his first report to the US Congress in 1934:

...the welfare of a nation [can] scarcely be inferred from a measure of national income...

In 1962, Kuznets stated:
Distinctions must be kept in mind between quantity and quality of growth, between costs and returns, and between the short and long run. Goals for more growth should specify more growth of what and for what.
Alternatives to GDP
- Human development index (HDI) - HDI uses GDP as a part of its calculation and then factors in indicators of life expectancy and education levels.
- Genuine progress indicator (GPI) or Index of Sustainable Economic Welfare (ISEW) - The GPI and the ISEW attempt to address many of the above criticisms by taking the same raw information supplied for GDP, then adjusting for income distribution, adding the value of household and volunteer work, and subtracting the costs of crime and pollution.
- Gini coefficient - The Gini coefficient measures the disparity of income within a nation.
- Wealth estimates - The World Bank has developed a system for combining monetary wealth with intangible wealth (institutions and human capital) and environmental capital.
- Private Product Remaining - Murray Newton Rothbard and other Austrian economists argue that because government spending is taken from productive sectors and produces goods that consumers do not want, it is a burden on the economy and thus should be deducted. In his book, America's Great Depression, Rothbard argues that even government surpluses from taxation should be deducted to create an estimate of PPR.
Some people have looked beyond standard of living at a broader sense of quality of life or well-being:
- European Quality of Life Survey - The survey, first published in 2005, assessed quality of life across European countries through a series of questions on overall subjective life satisfaction, satisfaction with different aspects of life, and sets of questions used to calculate deficits of time, loving, being and having.
- Gross national happiness - The Centre for Bhutan Studies in Bhutan is working on a complex set of subjective and objective indicators to measure 'national happiness' in various domains (living standards, health, education, eco-system diversity and resilience, cultural vitality and diversity, time use and balance, good governance, community vitality and psychological well-being). This set of indicators would be used to assess progress toward gross national happiness, which Bhutan has already identified as the nation's priority, above GDP.
- Happy Planet Index - The happy planet index (HPI) is an index of human well-being and environmental impact, introduced by the New Economics Foundation (NEF) in 2006. It measures the environmental efficiency with which human well-being is achieved within a given country or group. Human well-being is defined in terms of subjective life satisfaction and life expectancy while environmental impact is defined by the Ecological Footprint.
Lists of countries by their GDP
- Lists of countries by GDP
- List of countries by GDP (nominal), (per capita)
- List of countries by GDP (PPP), (per capita), (per hour)
- List of countries by GDP growth
- List of countries by GDP (real) growth rate, (per capita)
- List of countries by GDP sector composition
- List of countries by future GDP estimates (PPP), (per capita), (nominal)
- List of countries by past GDP (PPP), (nominal)
Australian Bureau of Statistics, Australian National Accounts: Concepts, Sources and Methods, 2000. Retrieved November 2009. In-depth explanations of how GDP and other national accounts items are determined.
United States Department of Commerce, Bureau of Economic Analysis, Concepts and Methods of the United States National Income and Product Accounts (PDF). Retrieved November 2009. In-depth explanations of how GDP and other national accounts items are determined.
- ↑ Sullivan, Arthur; Steven M. Sheffrin (1996). Economics: Principles in Action, pp. 57, 305. Upper Saddle River, New Jersey 07458: Pearson Prentice Hall.
- ↑ French President seeks alternatives to GDP, The Guardian, 14 September 2009.
European Parliament, Policy Department Economic and Scientific Policy: Beyond GDP Study (PDF)
- ↑ World Bank, Statistical Manual >> National Accounts >> GDP – final output, retrieved October 2009.
User's guide: Background information on GDP and GDP deflator. HM Treasury.
Measuring the Economy: A Primer on GDP and the National Income and Product Accounts. (PDF) Bureau of Economic Analysis.
- ↑ This calculation can be seen in United Kingdom, Annual Abstract of Statistics (PDF), 2008, p 254, Table 16.2 "gross domestic product and national income, current prices," near the top of the table. The United States appears to already include taxes minus subsidies in GVA, and thus equates it directly to GDP (BEA, Concepts and Methods of the National Income and Product Accounts of the United States, section 2-9).
- ↑ Thayer Watkins, San José State University Department of Economics, "Gross Domestic Product from the Transactions Table for an Economy", commentary to first table, "Transactions Table for an Economy". (Page retrieved November 2009.)
- ↑ Concepts and Methods of the United States National Income and Product Accounts, chap. 2.
- ↑ United States Bureau of Economic Analysis, A guide to the National Income and Product Accounts of the United States (PDF), page 5; retrieved November 2009. Another term, "business current transfer payments," may be added. Also, the document indicates that the Capital Consumption Adjustment (CCAdj) and Inventory Valuation Adjustment (IVA) are applied to the proprietor's income and corporate profits terms; and CCAdj is applied to rental income.
- ↑ BEA, Concepts and Methods of the United States National Income and Product Accounts, p 12.
- ↑ Australian National Accounts: Concepts, Sources and Methods, 2000, sections 3.5 and 4.15.
- ↑ This and the following statement on entitlement to compensation are from Australian National Accounts: Concepts, Sources and Methods, 2000, section 4.6.
- ↑ Concepts and Methods of the United States National Income and Product Accounts, page 2-2.
- ↑ Concepts and Methods of the United States National Income and Product Accounts, page 2-2.
- ↑ Australian National Accounts: Concepts, Sources and Methods, 2000, section 4.4.
- ↑ Concepts and Methods of the United States National Income and Product Accounts, page 2-2; and Australian National Accounts: Concepts, Sources and Methods, 2000, section 4.4.
- ↑ Concepts and Methods of the United States National Income and Product Accounts, page 2-4.
- ↑ Concepts and Methods of the United States National Income and Product Accounts, page 2-4.
- ↑ Concepts and Methods of the United States National Income and Product Accounts, page 2-5.
- ↑ United States, Bureau of Economic Analysis, Glossary, "GDP", retrieved November 2009.
- ↑ HM Treasury, Background information on GDP and GDP deflator
Some of the complications involved in comparing national accounts from different years are suggested in this World Bank document.
- ↑ Statistical Abstract of the United States 2008. Tables 623 and 647
- ↑ http://mises.org/story/770
- ↑ Simon Kuznets, 1934. "National Income, 1929-1932". 73rd US Congress, 2d session, Senate document no. 124, page 7. http://library.bea.gov/u?/SOD,888
- ↑ Simon Kuznets. "How To Judge Quality". The New Republic, October 20, 1962
- ↑ World Bank wealth estimates.
- ↑ First European Quality of Life Survey.
- World GDP Chart (since 1960)
- Australian Bureau of Statistics Manual on GDP measurement
- GDP-indexed bonds
- GDP scaled maps
- Euro area GDP growth rate (since 1996) as compared to the Bank Rate (since 2000)
- World Development Indicators (WDI)
- Economist Country Briefings
- UN Statistical Databases
- Bureau of Economic Analysis: Official United States GDP data
- Graphs of Historical Real U.S. GDP
- Historicalstatistics.org: Links to historical statistics on GDP for different countries and regions
- Complete listing of countries by GDP: Current Exchange Rate Method Purchasing Power Parity Method
- Historical US GDP (yearly data), 1790 - present
- Historical US GDP (quarterly data), 1947 - present
- OECD Statistics
Articles and books
- What's wrong with the GDP?
- Limitations of GDP Statistics by Schenk, Robert.
- Whether output and CPI inflation are mismeasured, by Nouriel Roubini and David Backus, in Lectures in Macroeconomics
- "Measurement of the Aggregate Economy", chapter 22 of Dr. Roger A. McCain's Essential Principles of Economics: A Hypermedia Text
- Rodney Edvinsson, Growth, Accumulation, Crisis: With New Macroeconomic Data for Sweden 1800-2000 (PDF)
- Clifford Cobb, Ted Halstead and Jonathan Rowe. "If the GDP is up, why is America down?" The Atlantic Monthly, vol. 276, no. 4, October 1995, pages 59–78.
The oldest known evidence of human presence in present-day Honduras consists of stone knives, scrapers and other tools thought to be 6000 to 8000 years old, uncovered by archaeologists in 1962 near La Esperanza, Intibucá. Central America's earliest occupants almost certainly were Paleo-Indians from the north, but linguistic and other evidence suggests that many indigenous people present in Honduras today (Pech, Tawahka and probably Lenca) are descended from later migrations of people from rainforest regions of South America, especially present-day Colombia.
The Maya arrived in Honduras by way of Guatemala and Mexico, and settled in the fertile Sula, Copán and Comayagua valleys. Over centuries, they came to dominate the area, as they did much of Mesoamerica. Copán was a heavily settled, agriculturally rich trading zone and eventually became one of the great Maya city-states of the Classic Period (AD 300–900). The Classic Period ends with the rapid and mysterious collapse of most Maya centers, including Copán, where the last dated hieroglyph is from AD 800.
The Maya population declined precipitously, but did not disappear, of course. They were just one of many indigenous groups that made up Honduras' native population when European explorers began their conquest of the American mainland. Copán has since returned to prominence as an archaeological mother lode, with more hieroglyphic inscriptions and stone monuments than any other Maya ruin. Copán was the first site visited by John Lloyd Stephens and Frederick Catherwood on their groundbreaking exploration of Mesoamerica in 1839. It was also the first site to be studied by Alfred Maudslay (in 1885), whose compendium of Maya stone monuments remains a classic in the field, and whose work prompted the preeminent Harvard Peabody Museum to enter into Maya investigation (the museum in turn selected Copán as its inaugural excavation). And it was the first stop for Sylvanus Morley and the Carnegie Institution in the 1920s. More recently, research has focused on Copán's outlying areas; the site has provided important insight into the lives of ordinary Classic-era Mayas.
On his fourth and final voyage, Admiral Christopher Columbus made landfall near present-day Trujillo. The date was August 14, 1502, and it was the first time European explorers set foot on the American mainland. Columbus named the area Honduras, or 'depths,' for the deep waters there. Before the historic landing, Columbus had also had his (and Europe's) first encounter with mainland indigenous people: the crew of a large canoe he spotted near the Bay Islands. Columbus commandeered the canoe, which was laden with trade goods, and forced its captain (probably a Maya merchant) to serve as his guide. The expedition continued east around Cabo Gracias a Dios (another of Columbus' place names) all the way to present-day Panama, where the admiral dropped his unlucky captive before returning to Spain.
Despite being the site of such a historic landing, the Honduran Caribbean coast was all but ignored by explorers for the next twenty years; they focused instead on Mexico, Panama and the Caribbean islands. Hernán Cortés' expedition into the Aztec heartland, however, revived interest in Central America. Exploration of the region was marked by feuding among would-be conquistadores: Gil González Dávila 'discovered' the Golfo de Fonseca and tried claiming it as his own, only to be captured by rival Spaniard Cristóbal de Olid, who had similar designs. González Dávila turned the tables, however, by luring Olid's men to his side, then capturing and beheading Olid. Cortés and others tried to quell the feuding, but to no avail.
The discovery of gold and silver in the 1530s drew even more Spanish settlers and, more importantly, increased the demand for indigenous slave labor. Native Hondurans had long resisted Spanish invasion and enslavement, and in 1537, a young Lenca chief named Lempira led an indigenous uprising against the Spanish. Inspired by Lempira’s example, revolt swept the western region, and the Spanish were very nearly expelled. But Lempira was assassinated at peace talks arranged with the Spanish in 1538, and the native resistance was soon quelled. A cycle of smaller revolts and brutal repression followed, decimating the native population. African slaves were introduced in the 1540s to fill the growing labor shortage.
Mining sustained the colony for the remainder of the century, but a collapse of silver prices (and the constant challenges of excavating such rugged terrain) devastated the Honduran economy. Cattle and tobacco enterprises gained some traction, and a change in the Spanish throne in the early 1700s reduced corruption and helped revive the mining industry. However, another upheaval in Spanish rule in 1808 – when Napoleon installed one of his own on the Spanish throne – sparked revolts on both sides of the Atlantic, which irreparably damaged Spanish colonial rule.
On September 15, 1821, Honduras, Guatemala, El Salvador, Costa Rica and Nicaragua declared independence from Spain, and shortly thereafter joined the newly formed Mexican Empire. The relationship didn’t last long, and in 1823, the same countries declared independence from Mexico and formed the Federal Republic of Central America. Though Honduras was the poorest and least-populated of the countries, it produced some of the federation’s most important leaders. Chief among them was the liberal hero General Francisco Morazán, commonly dubbed the ‘George Washington of Central America’, who led the federation from 1830 to 1838. But bitter conflicts between liberals and conservatives proved too divisive for the nascent union, and in May 1838 the Central American Congress freed its members to form independent states – Honduras did so on November 15th of that year.
The liberal and conservative factions continued to wrestle for power in Honduras after independence. Conservatives favored a pro-church, aristocratic-style government, while liberals supported free market development of the kind taking place in the US and parts of Western Europe. Power alternated between the two factions, and Honduras was ruled by a succession of civilian governments and military regimes. (The country's constitution would be rewritten 17 times between 1821 and 1982.) Government has officially been by popular election, but Honduras has experienced hundreds of coups, rebellions, power seizures, electoral 'irregularities,' foreign invasion and meddling since achieving independence from Spain.
Fighting between liberals and conservatives was briefly suspended in the 1850s when an American adventurer named William Walker launched a bizarre and ill-fated attempt to conquer Central America. He succeeded in gaining control of Nicaragua in 1856, but a joint Central American military effort forced Walker back to the US within a year. He returned in 1860, landing near Trujillo. He was captured by British agents and turned over to Honduran authorities, who promptly executed him. He is buried in Trujillo.
Where William Walker failed, US free enterprise succeeded. In the 1880s the New York and Honduras Rosario Mining Company (NYHRMC) revived Honduras’ promising but underdeveloped mining industry. The company enjoyed almost unfettered (and untaxed) access to the ore-rich mountains near the town of El Rosario, east of Tegucigalpa. In 74 years of operation – the area was turned into a national park in 1954 – the NYHRMC extracted an estimated US$100 million of gold, silver, copper and zinc; little of that money or product remained in Honduras, however.
But it was the banana that would most entangle Honduras with foreign interests and governments. In 1899 the Boston Fruit Company merged with the Snyder Fruit Company to form United Fruit Company. The new company imported most of its fruit from Panama and Costa Rica, but soon acquired seven small banana operations in Honduras. That same year, three Italian brothers named Luca, Felix, and Joseph Vaccaro founded Vaccaro Brothers & Co – the predecessor of Standard Fruit Company – and began exporting bananas from the La Ceiba region to their base in New Orleans. In 1902 Russian émigré Samuel Zemurray established the Hubbard-Zemurray company, which would eventually become the Cuyamel Fruit Company. United purchased Cuyamel in 1929 and made Zemurray company president in 1933. United and Standard –which are today known as Chiquita and Dole fruit companies – have been battling for control of the Honduran (and world) banana market ever since.
Bananas accounted for 11% of Honduras' exports in 1892, 42% in 1903, 66% in 1913, and 80% in 1929. The spectacular economic success of the banana industry made the banana companies extremely powerful within Honduras, with the rival companies allying themselves with competing political parties. Political, environmental, labor and bribery scandals have marred the industry throughout its existence, including Zemurray's support of a 1908 coup attempt against a Vaccaro-friendly president, Chiquita's 1975 and 1976 bribery of the Honduran minister of economy, and in 1998, allegations of repressive labor practices and use of toxic pesticides in Honduras and Colombia. A two-month strike in 1954 – in which as many as 25,000 banana workers and thousands of sympathizers in textile, mining and other trades participated – remains a seminal moment in Honduran labor history.
The Spanish-American War in 1898 laid the groundwork for increased US involvement in the region. The US averted and mediated a number of conflicts in Central America, including Nicaragua's 1907 invasion of Honduras and a border dispute between Guatemala and Honduras in 1917. Of course, American involvement in those and other disputes had everything to do with protecting American business interests, especially its banana companies, by force if necessary. When workers struck against Standard Fruit Company in 1920, the US sent advisors – and a warship.
In 1932 General Tiburcio Carías Andino was elected president amid a deep worldwide depression. Carías strengthened the armed forces, thus gaining favor with banana companies by opposing strikes, and with foreign governments by strictly adhering to debt payments. He also consolidated his own power, outlawing the Honduran communist party and restricting the press. The Honduran constitution did not allow reelection so Carías had it amended, extending the presidential term from four to six years. He served as a virtual dictator, and did not step down until 1949, and only under pressure from the US.
In 1956 a power grab by the country's vice president prompted a military coup, the first (but not the last) in Honduran history. The military soon stepped aside for civilian elections, but a new constitution ratified in 1957 made the head of the armed forces – not the president – the country's top military authority. In 1963, ten days before the next presidential election, the military again seized power. Colonel López Arellano suspended elections for two years, then ran himself (and won). He served the full six-year term, notable for his authoritarian excess and disregard for bureaucratic process. He stepped aside for civilian elections in 1971, only to be reinstalled a year later following another military coup.
A succession of military leaders, each as corrupt and ineffective as the last, ruled the country from 1972 to 1981. Arellano was removed following allegations he had accepted a US$1.25 million bribe from United Brands Company (formerly United Fruit Company); for his part, United Brands chief Eli Black committed suicide by jumping from his New York City office window when the accusations surfaced. Arellano was succeeded by General Juan Alberto Melgar Castro, who succumbed to a scandal implicating members of the military in murder and drug trafficking and was replaced by General Policarpo Paz García. Paz García was the only one to follow through with a long-standing promise to return Honduras to civilian rule. In 1980, voters elected a congress, and in 1981, a president. Honduras' era of military rule was over.
During the 1980s, Honduras found itself surrounded on all sides by political upheaval and popular uprisings. In Nicaragua, the Somoza dictatorship was overthrown by Sandinista rebels in 1979, its guardsmen fleeing across the border into Honduras. The following year, full-scale war broke out in El Salvador as the government cranked up its repression of opposition leaders (Archbishop Oscar Romero was assassinated in March 1980) and the new Nicaraguan government provided insurgents with a fresh supply of weapons. Meanwhile, the civil war in Guatemala continued unabated.
Although Honduras experienced some unrest, the country never broke into out-and-out civil war, a fact that is puzzling to many observers. Certainly the conditions for civil unrest were there: military rule, a repressed (but organized) working class, a history of foreign meddling and exploitation, especially by the US, not to mention the example set by its neighbors.
Historians and political scientists point to a variety of factors to explain Honduras’ emergence from the 1980s revolution-free. The long-standing domination of the banana companies seems to have prevented the development of a native-born economic and political elite. Honduras did not have the Somozas, whose excesses of wealth and power in Nicaragua were legendary, or the ‘fourteen families’ of El Salvador whose control of the coffee industry and connections with the military turned the country into an agricultural oligarchy.
This in turn opened political space for genuine agrarian reform, the lack of which had heightened working-class frustration and militancy in other countries. Honduras has long had one of Central America's most effective and organized labor movements. Despite the overwhelming power of banana interests, Honduran campesinos and other workers have consistently managed to wrest concessions (and accept compromises) without resorting to violence. Notably, labor disputes in Honduras have rarely included a call for upending the government, but rather for the enforcement of existing laws. The Honduran military, more democratic and less beholden to the nation's elite than in other countries, played a stabilizing rather than a repressive role.
Of course, the US had a powerful interest in keeping Honduras stable. With Marxist revolutions erupting on all sides (and Cuban and Soviet influence plain to see) the US viewed Honduras as a crucial battleground in its effort to halt the so-called ‘domino effect’ and the spread of communism in the Americas. Economic aid poured into Honduras, quickly making it one of the top-ten recipients of US military and economic aid. In return, the US used Honduras as a staging ground for counterinsurgency efforts throughout the region. Nicaraguan refugee camps in Honduras were used as bases for a US-sponsored undeclared covert war against the Sandinista government, which became known as the Contra war. At the same time the US was training the Salvadoran military at Salvadoran refugee camps inside Honduras.
Economic aid slowed local opposition, but it wasn't long before Hondurans began agitating against US militarization in their country. Demonstrations drew 60,000 participants in Tegucigalpa and 40,000 in San Pedro Sula, and a few nascent revolutionary groups appeared. In reply, military commanders ordered the kidnapping and killing of hundreds of opposition and student leaders – a first for Honduras. The tactic backfired, swelling the ranks of demonstrators and alienating many in the military establishment, who were themselves growing uneasy about the army's complicity with increasingly brutal US-sponsored conflicts in the region. In March 1984 the military's pro-American commander was toppled in a bloodless coup by his fellow officers. General Walter López Reyes was appointed the successor, and the Honduran government promptly announced it would reexamine US military presence in the country. In August 1984 it suspended US training of Salvadoran military within its borders.
In 1986 Washington was rocked with revelations that the Reagan Administration had secretly and illegally used money from the sale of arms to Iran to support anti-Sandinista Contras operating out of Honduras. The scandal rekindled demonstrations in Honduras; in November 1988, the Honduran government refused to sign a new military agreement with the US, and then-president José Azcona Hoyo said the Contras would have to leave Honduras. With the election of Violeta Chamorro as president of Nicaragua in 1990, the Contra war ended and the Contras were finally out of Honduras.
Elections in 1989 brought Rafael Leonardo Callejas Romero of the National Party, who had lost in 1985, to the presidency; he won 51% of the vote and assumed office in January 1990. Early that year, the new administration instituted a severe economic-austerity program, which provoked widespread alarm, unrest and protest. Callejas had promised to keep the lempira stable; instead, during his tenure the exchange rate slid from around two lempiras to eight per US dollar. Prices rose dramatically to keep pace with the US dollar, but salaries lagged behind. Hondurans grew poorer and poorer, a trend that continues today.
In the elections of November 1993, Callejas was convincingly beaten by Carlos Roberto Reina Idiaquez of the center-left Liberal Party, who campaigned on a platform of moral reform, promising to attack government corruption and reform state institutions, including the judicial system and the military. Reina had inherited an economically depressed country and a currency that seemed to be in an unstoppable slide. By 1996 it had fallen past 12 lempiras to the US dollar and was heading for 13; today it is at nearly 20.
On January 27, 1998, Carlos Roberto Flores Facusse took office as Honduras' fifth democratically elected president. A member of the Liberal Party, like his predecessors, he was elected with a 10% margin over his nearest rival – National Party nominee Nora de Melgar – in elections that were considered fair and clean. He instigated a program of reform and modernization of the economy. The arrival of Hurricane Mitch in October 1998, at that time the strongest Atlantic hurricane on record, dashed those plans. In fact, President Flores would later say the storm had erased 50 years of progress in Honduras.
Honduras' tourist industry was just recovering from Hurricane Mitch when the September 11, 2001 terror attacks slashed the number of travelers once more, especially in the all-important American diver market. Later that year Hondurans elected Ricardo Maduro as their president, on promises to promote tourism and, more importantly, to reduce crime.
Gang violence was then – and still is today – the prevailing preoccupation of average Hondurans. Rival gangs (maras) have spread to Honduras from El Salvador, where gang members deported from the US, especially from Los Angeles, had taken root. (Central American countries have long called on the US to stop deporting known gang members, but to no avail – some 20,000 felons were sent to Central America between 2000 and 2004.) Maduro's own son was kidnapped and murdered in 1997, and he promised a get-tough approach to gangs. Maduro proposed legislation called 'Mano Dura' (Hard Hand), which dramatically increased penalties for gang-related crimes and broadened the definition of 'illicit association'. Gang violence has been curbed and convictions have risen, but with these changes have come disturbing allegations of government-sponsored 'death squads' and prisoner abuse.
Maduro was succeeded in the November 2005 elections by Manuel Zelaya, a cowboy hat-wearing rancher from Olancho. On April 1, 2006, Honduras became the second Central American country, after El Salvador, to implement the Central America and Dominican Republic Free Trade Agreement, or CAFTA-DR. The trade deal, signed into law in 2005 by US President George W Bush after a bitter congressional fight, will end tariffs on as much as US$33 billion in goods and services when in full effect. The pact covers the US, El Salvador, Honduras, Guatemala, Nicaragua, Costa Rica and the Dominican Republic. All but Costa Rica have ratified the pact, but signatories were required to make legal and regulatory changes before implementing it. Advocates say it will open markets to US businesses, especially farmers and ranchers, while providing manufacturing jobs to Central Americans that would otherwise go to Asia. American labor unions fought the plan, saying it would take jobs from Americans and did not provide enough protections for Central American workers. In Central America, opposition came from the left, which predicted the plan, like NAFTA before it, would lead to increased disenfranchisement of small farmers and business owners.
Now, let's look at some specific examples. One type of atom that does not normally react is Neon. (See the picture to the left.) It already has the correct number of electrons in its outside electron layer, so Neon does not react. Neon, along with Helium and Argon, is known as a non-reacting gas because these atoms do not need to react to be stable.
Other types of atoms, such as Hydrogen, Carbon, and Oxygen, do not have the correct number of electrons to be stable by themselves. Instead they have to share electrons in molecules to get the correct number of electrons in their outside electron layer.
Since we only have to look at the atom in the center of a molecule to find out its shape, we will concentrate only on Carbon and Oxygen. All the molecules illustrated on this page have either a Carbon or an Oxygen as the center atom. Carbon will be of special interest, since Carbon is the center atom for all the different Amino Acids.
Both Carbon and Oxygen have a deficiency. Neither C nor O has the proper number of electrons in its outside electron layer. Because of that, they are not stable by themselves. They must react with other atoms to get the proper number of electrons in the outside layer.
Oxygen is short 2 electrons. So it must form two covalent bonds to obtain 2 more electrons than it normally has by itself. The picture to the left will help you see visually how covalent bonds can increase the number of electrons that an atom can have.
Oxygen can either form two single bonds or one double bond. Water is a good example where Oxygen attaches to 2 different atoms, each by a single bond. Carbon dioxide is a good example where each Oxygen attaches to just one atom (the central Carbon) through a double bond.
Either way, the Octet Rule is satisfied and the molecule is stable.
Carbon is short 4 electrons. It must form four covalent bonds in any combination of single and double bonds so that it ends up with 4 extra electrons.
Looking at the picture to the left (or above) we see that Carbon can be satisfied with either 4 single bonds or 2 double bonds. (A third alternative is that 1 double bond and 2 single bonds will also work.)
A double bond allows 4 electrons to be shared: 2 electrons from one atom and 2 electrons from the other. A double bond therefore allows an atom to gain 2 more electrons through sharing.
Looking at the picture to the left (or above), we can see that Carbon usually shares all its electrons with other atoms. It does this because it has to double the number of electrons to get an octet. Oxygen, on the other hand, shares only two electrons with other atoms; the other 4 electrons it keeps for itself.
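To make this electron bookkeeping concrete, here is a toy sketch that checks the Octet Rule as described above: each covalent bond gives an atom access to one extra electron (its partner's contribution), so outer electrons plus shared pairs should total 8 (or 2 for Hydrogen). The table and helper function are illustrative inventions, not standard chemistry software.

```python
# Toy check of the Octet Rule: outer-shell electrons + shared pairs == 8.
# A double bond counts as 2 shared pairs. Hydrogen aims for 2, not 8.

OUTER_ELECTRONS = {"H": 1, "C": 4, "N": 5, "O": 6, "Ne": 8}

def octet_satisfied(atom, bonds):
    target = 2 if atom == "H" else 8
    return OUTER_ELECTRONS[atom] + bonds == target

print(octet_satisfied("O", 2))   # True: water (two singles) or CO2 (one double)
print(octet_satisfied("C", 4))   # True: methane, or CO2's two double bonds
print(octet_satisfied("Ne", 0))  # True: Neon needs no bonds at all
```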
What Determines the Shape of a Molecule?
Now that we know about covalent bonds and how an atom achieves an octet, we only need one more fact to understand why molecules have specific shapes.
Here it is. All electrons are negatively charged. What do we know about like charges? They repel each other.
We can see the same exact thing happen with magnets. If we have two magnets and we try to push two like poles together (Either North with North or South with South), we see that they push each other away.
That is what the electrons do to each other. They try to get as far away from each other as possible.
Now remember, covalent bonds contain two electrons. Because these two electrons are part of the same bond, they are forced to stay in the same area; they act as a single unit, a covalent bond.
So what happens is that each bond tries to get as far away as possible from all the other bonds. The bonds spread apart since they repel each other.
In the Water molecule pictured to the left (or above) we see that it has two pairs of unshared electrons. These behave very much like the electrons in covalent bonds. They stick together in pairs.
So whether electrons are shared or not they behave the same. They repel each other.
In the Carbon dioxide molecule, 4 electrons in each double bond are held together. Since Carbon dioxide has two double bonds, and since a double bond acts as a unit, the two double bonds try to get as far away from each other as possible. What they do is get on the opposite side of the central Carbon from each other. This molecule is straight!
Both Methane and Water have a similar shape. In both structures, we have 4 pairs of electrons trying to get as far away as possible from each other. So they go in all different directions. Water is a bent molecule because the unshared electrons force the two Hydrogens to come toward each other a little bit. This allows all the electrons to be more or less equally spaced apart.
Methane should be very interesting to us because its structure is just like the Amino Acids that we are going to be looking at. All four Hydrogens are spread apart as far as they can be from each other.
The Structure of Amino Acids
Let's look at the central carbon of an Amino Acid. It is called the α (alpha) Carbon. The α Carbon has the same distribution of electrons as we saw in Methane. The four bonds are spread apart as far as they can be from each other.
Often when we draw molecules on paper, we tend to think that the farthest the bonds can get is up, down, right, and left. However, we must remember that molecules are not limited to 2 dimensions (like what we see on paper). Instead, the bonds spread out in all 3 dimensions of space. The angle from one covalent bond to another is 109.5°.
The shape that the α Carbon's bonds take is called a tetrahedral shape. If we were to look at a 3-sided pyramid (4 sides if you count the bottom), the α Carbon would be in the center, and the bonds (one pointing straight up, and the three others pointing toward the ground) would stick into the points of the pyramid.
In this structure, every covalent bond is angled 109.5° from all the other covalent bonds. So, between every two bonds in this structure is an angle of 109.5°. All the angles equal each other.
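The 109.5° figure is not arbitrary; it follows from geometry. If the four bonds point at alternating corners of a cube centered on the atom, the angle between any two bond vectors has a cosine of -1/3. A short sketch of that computation:

```python
# Angle between two tetrahedral bond directions, taken as vectors from
# the center of a cube to two alternating corners. cos(theta) = -1/3.

import math

a = (1, 1, 1)
b = (1, -1, -1)

dot = sum(x * y for x, y in zip(a, b))  # -1
norm_sq = sum(x * x for x in a)         # 3 (same for both vectors)
theta = math.degrees(math.acos(dot / norm_sq))
print(round(theta, 1))  # 109.5
```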
Now we are ready to start looking at the structure of the Amino Acids.
An Amino Acid has a central α Carbon with four groups attached to it. As you can see in the picture to the left (or above), the groups are: an Amino group, a Carboxyl group, a Hydrogen, and a side chain.
There are 20 different standard Amino Acids; in addition, several non-standard Amino Acids are found in various peptides, polypeptides, and proteins. Each of these Amino Acids has a different side chain. So each Amino Acid has its own specific structure, and the place where they differ is the side chain. The side chain is what gives each Amino Acid its own specific characteristics.
The Amino and Carboxyl groups are also important because they are what allow Amino Acids to link together into long chains, forming peptides, polypeptides, and proteins. What happens is that the Amino group of one Amino Acid reacts with the Carboxyl group of another Amino Acid. This produces a peptide bond, which attaches the two Amino Acids to each other. The process continues until long chains of Amino Acids are produced. So the Amino and Carboxyl groups make up the backbone of protein chains.
Under physiological conditions (meaning the conditions inside the body), α Amino Acids form what is called a "Zwitterion". This means that the structure of the Amino Acid carries both a positive (+) charge and a negative (-) charge. We can see that the Amino group is positively charged and the Carboxyl group is negatively charged.
The side chains hang free, and they give proteins the characteristics that they have.
Most of the Amino Acids have a characteristic of shape that we need to understand. They are Chiral, meaning that they have a structure that cannot be superimposed on its mirror image.
We can look at our own body parts to know what this means. If we look at our hands and feet, we can see that they look somewhat identical except that they are backwards from each other. On our right foot, the big toe is on the left side, and on our left foot, the big toe is on the right. They are backwards from each other!
They are actually mirror images of each other which do not superimpose. But rather, they look different from each other. They are Nonsuperimposable mirror images.
It is easier to look at your hands. There is no way you can make your one hand look like your other hand. You either have your thumbs pointing in opposite directions or you are looking at opposite sides of the hand.
So both hands and feet are Chiral objects.
With other objects, such as balls, glasses, and baseball bats (ignoring abnormalities such as the grain and name plate on the bat, etc.), we can always make the mirror images look like each other. The mirror images will superimpose. There is no such thing as a left-handed bat or a right-handed bat. They are all the same! So balls, glasses, and baseball bats do not have Chirality.
The α Carbon in most Amino Acids is also Chiral. A Chiral Carbon is a carbon atom that is bonded to four different groups.
The two Amino Acids on the left (or above) are mirror images of each other (just like our feet and hands). You cannot make these two molecules look like each other. You can turn the molecule on the right around so that the H is on the right side and the NH3+ group is on the left, but they still will not look like each other: the H and NH3+ groups will be going away from you instead of coming toward you the way they are pictured in the left molecule.
The right and left forms of amino acids are Isomers, meaning that the two molecules have the same molecular formulas but different structures. In other words, the two molecules have the same atoms; they only have them arranged differently. Any two molecules that have the same atoms are isomers. They do not even have to look like each other; they only have to have the same number of all the same atoms.
However, Amino Acids do not only form isomers; the right and left forms of Amino Acids are actually mirror images of each other. This fact makes them Enantiomers: two molecules that are nonsuperimposable mirror images of each other.
These same amino acids are also Stereoisomers, which means that the two molecules differ only in their three-dimensional shapes but have the same structural formulas. This means they have the same exact groups attached in the same way. Only the three-dimensional orientation of these groups is different.
So, 19 of the 20 Amino Acids form isomers that are both Enantiomers and Stereoisomers, because their functional groups differ only in their three-dimensional orientation, in such a way that they form nonsuperimposable mirror images of each other.
The α Carbon in 19 out of the 20 amino acids is a Chiral Carbon. Hence, 19 out of the 20 amino acids are Enantiomers (mirror images of each other). Partly because of this Stereochemistry, these molecules have become important to the Amino Acid dating process.
For the moment, let's look at the one Amino Acid that does not have a Chiral carbon in it. It is Glycine. The reason why Glycine does not have a chiral center is that it has two Hydrogens attached to its central Carbon. (The side chain is simply a Hydrogen.)
Remember, the definition of a Chiral Carbon was that four different groups had to be attached to it. Every one of the four groups has to be different in order for it to be chiral. In Glycine, only three types of groups are attached to the central α Carbon. Just like the balls, bats and glasses, we can always make one molecule look like the other one. So Glycine does not form Stereoisomers.
In all of the other 19 amino acids, bonds must actually be broken and the molecules be put back together before the two molecules can look like each other.
This breaking apart of the molecule and putting it back together is exactly what has happened to the amino acids in the fossils. This, of course, is the basis on which some scientists use amino acids to estimate how long fossils have been in the ground. It is assumed that the rate at which the amino acids have changed has been constant enough for it to be used as a dating process.
How does a Scientist tell the difference between different Stereoisomers?
This is an interesting problem. A scientist has to be able to distinguish the different Stereoisomers from each other. How does he do it?
We can easily tell the difference between left and right hands and feet just by looking at them. Hands and feet are chiral just like the amino acids we want to look at. However left (L) and right (D) forms of amino acids can be extremely hard to distinguish if we look at the wrong feature.
Stereoisomers, the left (L) and right (D) handed forms of amino acids, have essentially the same structures. They have exactly the same chemical structure except that they are mirror images of each other. So they also have the same physical and chemical characteristics!
They will boil and freeze at the very same temperatures, and they will react in the same way with other molecules. They do everything the same, except for one thing.
Actually, there are at least two ways that the left (L) and right (D) handed forms of amino acids can be distinguished. One is by a reaction where an enzyme controls the reaction. Enzymes use the shapes of molecules to speed up their reactions. This, by the way, is why virtually all the amino acids in animals are in the left (L) handed form: enzymes only incorporate and produce the (L) form.
Also, enzymes will only react with amino acids that are left (L) handed. We can see how this works with a simple handshake. When we shake hands, each of us holds out our right hand, and the two hands fit into each other. They fit perfectly, and we shake hands. If I were to take my left hand and try to shake his right hand, my fingers would be going in the wrong direction. The two hands would clash and not fit into each other at all. It just doesn't work. Even if I were to turn my left hand around so that it is upside down, I would find that now my fingers go in the right direction but my thumb would still be in the wrong place to match the other hand. The two hands would not fit properly. A handshake only works when both individuals use their right hands, or when both use their left hands. In biological systems the same is true: only when all the amino acids are left (L) handed will the different enzymes and amino acids fit into each other.
Now, the other way that we can distinguish between left (L) and right (D) handed amino acids is that they rotate light in opposite directions. A way to measure the rotation of light is to use polarized light.
To understand what polarized light is, why don't you try an experiment? The next time you are in a department store, or some other store that has glasses, go to where they are and find the polaroid glasses. Pick up two pairs and, putting one pair of glasses in front of the other, look through both lenses at once, just like the picture to the left (or above).
You will find that when you rotate one of the lenses, the view through the glasses goes dark. Rotate the lens back, and you can see through the lenses again.
Actually, to make this experiment easier, you can put one pair on your head, then view through the other pair. Rotate it and see what happens.
This is really neat, but how does it work? Well, we need to look at the nature of light to understand polarized light.
Light is a form of electromagnetic radiation, just like radio waves, television waves, radar, microwaves, infrared waves, X-rays, and gamma rays. A distinctive feature of electromagnetic radiation is that its velocity is always the same. Light travels at the speed of light, which in a vacuum is around 186,000 miles per second.
Another characteristic of light is that it is broken up into discrete units. They are actually bundles of energy which we call photons. Just as in a stream of water it is actually water molecules (H2O) which are moving down the river, in a beam of light it is actually photons which are moving along at the speed of light.
Light, like all electromagnetic radiation, exhibits the properties of wavelength and frequency. So we know that light acts like a wave.
For simplicity's sake, let's describe a wave as a force that makes photons vibrate sideways. Looking at the picture to the left (or above), we see what looks like a wave. A photon of light is traveling from left to right. (We can see the arrow on the right, so we know that the photon is going right.)
Now remember, we are keeping things simple. As the light goes from left to right, it follows the wavy line, moving up and down along the way. So we can see that the photon of light vibrates up and down as it travels toward the right.
Now, each photon is independent of the other photons. So we could have some photons vibrate up and down, and others vibrate in other directions. That is exactly what happens. Each photon vibrates in its own plane, or its own direction.
What a polaroid lens does is let through only the light that vibrates in the proper direction. The picture to the left (or above) shows the first lens (in both Experiments 1 & 2) as letting through only the light that vibrates up and down. All the other light is stopped.
Now, it's what we do with the second lens that determines the outcome of the experiment. If we have the second lens oriented in the same direction as the first lens (as in Experiment 1), then only the light that vibrates up and down will pass through both lenses.
If on the other hand, the second lens is oriented as in Experiment 2, (letting only the light that vibrates right and left) then no light will reach your eyes.
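The darkening in Experiment 2 follows a standard optics result known as Malus's law: the transmitted intensity equals the incoming intensity times the squared cosine of the angle between the two lens orientations. A minimal sketch (the intensity value is arbitrary):

```python
# Malus's law: I = I0 * cos^2(angle between the two polarizer axes).

import math

def transmitted(i0, angle_deg):
    return i0 * math.cos(math.radians(angle_deg)) ** 2

for angle in (0, 45, 90):
    print(angle, round(transmitted(1.0, angle), 3))
# 0 1.0   -> aligned lenses (Experiment 1): the light passes
# 45 0.5  -> half the intensity gets through
# 90 0.0  -> crossed lenses (Experiment 2): the view goes dark
```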
Scientists use a Polarimeter to detect stereoisomers. If you look at the picture to the left (or above), you can see that a Polarimeter is very similar to our department store experiment, except that an additional tube (a Polarimeter Tube) is added. The polarimeter tube contains a solution of a stereoisomeric substance such as an amino acid.
Once the light goes through the first polarizer lens (just like a polaroid lens), only the light that vibrates up and down gets through. Now the light enters the tube that is filled with the amino acid. As it goes through the solution, the light begins to twist. The plane of the light changes, so that after the light comes out of the tube, it is vibrating in another direction: not up and down, but a different direction.
It is the job of the second polarizer lens to determine how much the light has twisted or rotated. This second polarizer is rotated by the scientist until the light disappears. Then the angle is noted and recorded. So a Polarimeter actually measures how much the light has been rotated by a specific substance.
To test another substance, the scientist can replace one tube with another tube that contains a different substance.
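The polarimeter reading is commonly normalized into a "specific rotation" by dividing the observed angle by the tube's path length (in decimeters) and the solution's concentration (in grams per milliliter). A minimal sketch; the readings below are invented for illustration, and the point is that the two mirror-image forms give equal but opposite signs:

```python
# Specific rotation: observed rotation / (path length in dm * conc in g/mL).
# Enantiomers of the same substance rotate the plane by equal, opposite amounts.

def specific_rotation(observed_deg, path_dm, conc_g_per_ml):
    return observed_deg / (path_dm * conc_g_per_ml)

print(specific_rotation(+1.2, 1.0, 0.1))  # +12.0  one enantiomer
print(specific_rotation(-1.2, 1.0, 0.1))  # -12.0  its mirror image
```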
Amino Acid Dating
Now, let's use this knowledge of chirality, stereoisomers, and left (L) and right (D) forms to help us understand how amino acids can be used as a dating mechanism. When a fossil is first buried in the ground, two different things start happening to the amino acids in its protein.
- Amino acids are unstable and they start decomposing with time.
- Most amino acids have at least one chiral carbon; hence, they have a left-handed (L) form and a right-handed (D) form. With time, the amino acids undergo a process called racemization, in which the initially all left-handed amino acids found in proteins change toward a 50:50 mixture of (D) and (L) forms.
Both of these processes can potentially be used as a dating tool. Let's look at both of them.
Using the Stability of Amino Acids as a Dating Tool.
Some amino acids are more stable than others, so as a fossil gets older, only the more stable amino acids remain in it. Determining which amino acids are still present in a fossil can therefore be used as a dating tool.
The breakdown of amino acids occurs at predictable rates, so the rate of decomposition of amino acids can be used as a dating tool. This was recognized as far back as 1955 (Abelson 1955).
However, there are two problems with using this in dating fossils:
- There is a lot of variation in the amounts of the various amino acids found in living organisms. It might be that some of the fossils, when they were alive, had ratios already skewed toward the amino acids known to be more stable. Because we are unable to know what the original ratios of amino acids were when the fossils were alive, it would be extremely hard to use the degradation process as a dating tool.
- Amino acids are expected to survive only a few million years at best. So detectable levels of most of the amino acids we see in fossils should not be present, if the long ages of Evolution are assumed.
This is the enigma I spoke of earlier concerning the surviving amino acids in fossils. For a more complete discussion, see: the presence of amino acids in fossils. Concerning the survivability of biological macromolecules and even spores, see: the presence of DNA and bacterial spores in fossils.
Using the Racemization Rate of Amino Acids as a Dating Tool.
All of the 20 amino acids except glycine have a chiral carbon. As was mentioned before, these molecules can be found in either a (L) left handed form or a (D) right handed form.
Unlike with the above degradation process, we know what the forms of all the amino acids were when the organism was alive, and we know what change will occur as the fossil ages.
In living organisms, all amino acids are found in the (L) left handed form only. (There are some rare exceptions; for example, in the cell walls of bacteria, D-alanine is used so that the normal enzymes of most attacking predators cannot break down the bacterial wall.)
After the death of an organism, as when a fossil is buried, the population of (L) left handed amino acids starts changing, so that eventually a 50:50 mix of both (L) left handed and (D) right handed amino acids will exist in the fossil.
The random process that produces this change is called Racemization. Without describing the intramolecular shift of hydrogen atoms that allow it to happen, we can simply say that the amino acid molecules slowly shift from (L) to (D) forms and then back to (L) forms in a random process.
The Racemization process is very much a random process and is somewhat like the random processes that involve radioactive isotopes for dating.
Initially, at the time of death, the reaction strongly goes to the right producing the (D) form quite rapidly, as is indicated in the first reaction in the graphic above. The graph to the left also shows that the loss of (L) form and the formation of (D) form is the most rapid in the initial stage at the time of burial.
As the concentration of the (L) form decreases and the concentration of the (D) form increases, equilibrium is approached. As equilibrium approaches, the net increase in the concentration of the (D) form from the (L) form decreases.
When equilibrium is actually reached, there is no further net change: the forward racemization reaction runs at the same rate as the reverse racemization reaction.
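These kinetics can be sketched with a simple reversible first-order model: (L) converts to (D) and back with the same rate constant k, so the mixture decays exponentially toward 50:50. Under those simplifying assumptions (one chiral center, equal forward and reverse rates, no other reactions), the standard transform ln[(1 + D/L) / (1 - D/L)] = 2kt recovers an elapsed time from a measured D/L ratio. The rate constant below is invented for illustration:

```python
# Reversible first-order racemization sketch: L <-> D, same k both ways.
# D fraction: d(t) = (1 - exp(-2kt)) / 2, which approaches 0.5 (a 50:50 mix).

import math

def d_over_l(k, t):
    d = (1 - math.exp(-2 * k * t)) / 2
    return d / (1 - d)

def age_from_ratio(k, ratio):
    # Inverts the model: ln[(1 + D/L) / (1 - D/L)] = 2kt
    return math.log((1 + ratio) / (1 - ratio)) / (2 * k)

k = 1e-5                    # per year, hypothetical
ratio = d_over_l(k, 20000)  # simulate a 20,000-"year" burial
print(round(ratio, 4))      # ~0.1974, still far from the 1.0 of full racemization
print(round(age_from_ratio(k, ratio)))  # 20000, recovered exactly under the model
```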
So it seems, at least on the surface, that the racemization of amino acids can be a very predictable process, and that it can be used as a dating tool because it is so predictable.
Unfortunately, this is not the case. Unlike radioactivity, which is extremely predictable, there are a lot of factors that can affect the rate that amino acids transform from one form to another.
Factors that affect the Racemization Rate
The following factors have been found to affect the speed of the reactions that causes amino acids to undergo racemization.
- Temperature of the environment
- Water concentration in the environment
- pH (acid/base measurement) in the environment
- Bound state versus free state
- Size of the macromolecule, if in a bound state
- Specific location in the macromolecule, if in a bound state
- Contact with clay surfaces (Catalytic effect)
- Presence of aldehydes, particularly when associated with metal ions
- Concentration of buffer compounds
- Ionic strength of the environment
The first three factors (temperature, water concentration, and pH) especially affect racemization, but temperature is the factor which most dramatically affects this process. If the temperature goes up 1°, the rate of racemization increases by 25%!
So it is acknowledged that if this process is to be used as a dating tool at all, it must be calibrated so that its answers agree with those of other dating tools, such as Carbon 14.
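The quoted 25%-per-degree sensitivity also shows why calibration is unavoidable: a computed age scales inversely with the rate constant, so an error in the assumed burial temperature multiplies the age by a factor of 1.25 per degree Celsius, under that stated sensitivity. A sketch:

```python
# How a temperature error skews a racemization age, assuming the rate
# grows by 25% per degree C (the figure quoted above).

def age_error_factor(temp_error_c):
    return 1.25 ** temp_error_c

for err in (1, 2, 5):
    print(err, round(age_error_factor(err), 2))
# 1 1.25  -> a 1 C error in assumed temperature skews the age by 25%
# 2 1.56
# 5 3.05  -> a 5 C error means a roughly three-fold error in the age
```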
What scientists try to do is save money by using amino acid data on various fossils that they think have had similar characteristics and have experienced similar conditions, such as temperature. They think that relative ages can be obtained through this method. This might be possible if the assumptions they have made are correct. However, let's look at the data a little more closely. It may be that the data would fit just as well, or even better, under a short-age scenario. Let's see how probable it is that they are getting the answers they think they are getting with amino acid dating.
Racemization Rate vs. Assumed Age
Let's look at the graph below. If Amino Acid dating were a predictable process, like other dating techniques with a predictable rate, the points on the chart would align themselves in a horizontal line. That would indicate that the Racemization constant really is a constant. It would mean that this method would be able to predict an age by itself. It would indicate that the rate was the same for all the samples collected.
This is definitely not the case. Looking at the graph, we can see that the Racemization constant changes almost as much as the predicted date!
What is really amazing to see is that the rate of racemization changes almost as much as the age, yet it is used as a dating method! It almost looks like the racemization rate could be independent of the assumed ages of the fossils. Maybe what is being measured is a difference in some of the factors which especially affect racemization, namely temperature, water concentration, and pH.
Because the racemization constant changes almost to the same degree as the assumed ages of the specimens, the possibility must exist that the assumed ages are totally fictitious. In fact, if the assumed ages are tossed out and an approximate date for the occurrence of Noah's global flood is inserted, the graph would indeed approach a horizontal line: the very same result that is shared by other predictable dating techniques.
Another interesting issue concerning the racemization rate of amino acids is the assumption that amino acids within the matrix of a protein would tend to have a slower rate of change versus other amino acids which are in a free condition, away from the interior of a protein. In the chart, some difference is detectable; however, it is an insignificant difference when comparing all the other specimens. The chart shows that, whether free or incorporated within a protein, the racemization rate of amino acids is more or less the same.
I would be dubious of any kind of amino acid racemization data, because the racemization constant must be adjusted to give the answer that the researchers are looking for. Since amino acid dates are usually adjusted to match the dates of, say, Carbon 14, the results are those of Carbon 14 dating and not of amino acid dating. It should be clear that amino acid dating poses absolutely no threat to the Creation paradigm.
The insignificant differences found between free and non-free amino acids help bring into question the explanations used to explain why amino acids have not broken down. Have a look at my two pages on these issues: one on the presence of amino acids in fossils, the other on the presence of DNA and bacterial spores in fossils.
Limitations of the Historical Sciences
In any kind of historical science, assumptions have to be made in the assessing of historical dates. Because it is assumed that man, for example, has ascended over a long period of time, researchers would automatically want to lengthen the amount of time indicated by the artifacts uncovered in archaeological digs. They are looking for answers that would fit their present model. I am not trying to say that they are falsifying their data. On the contrary, they wouldn't need to falsify anything. Historical data can be so inconclusive that a host of positions is possible from almost any set of data that is collected.
Man is thought to have progressed through a long period of prehistory (the caveman's experience) before any sort of civilization was started. Only after civilization begins can we gather some sort of data from the artifacts that are found (pieces of pottery, etc.). The artifacts, according to today's traditional thinking, should slowly progress in complexity, as it is thought that man was progressing in the abilities and ideas that he used.
If man is thought to have progressed over long periods of time, even within the later civilization phase of his existence, then surely as artifacts are recovered from archaeological sites, the theories and ideas developed will reflect the scientist's own original thinking. This is how science normally works. Scientists normally work within a fairly well-defined set of theories that has become a paradigm. A paradigm is a theory so well accepted that no one seriously questions it. This way of doing science is most prominent when the evidence is fragmentary at best.
Assumptions throughout the scientific process are extremely important because they must hold the facts together. Only when specific data arrives that either substantiates or falsifies the previously held assumption can it be known whether the original thinking was correct. Unfortunately, with fragmentary data, the artifact that might falsify a theory is extremely slow in coming, or it could easily be overlooked. So the problem must be solved by a host of assumptions that will probably never be tested.
There is also the danger that good data could be thrown out because it doesn't fit with established thinking. For instance, I am told that both "early" and "modern" forms of man are sometimes found in the same level. Because this is considered an impossibility, the modern forms are assumed to be intrusions: the modern form is considered to have been buried much later, in spite of the fact that the specimens are found in the same level.
The areas of science that are most successful, and that the public notices, are the amazing discoveries in medicine, biology, space exploration, and the like. These are the areas that deal with the here and now. If an experiment is conducted and the information needed to answer the problem is not forthcoming, then another experiment can be designed to answer the problem. The process can continue until some answer to the problem is understood. The problem is only limited by money, ingenuity, and the technical difficulties that have to be surmounted.
In addition to the above limitations of science, historical science is limited by the fragmentary nature of the artifacts it is able to find. In effect, the accuracy of its ideas is limited by the assumptions chosen by the researchers.
Hopefully you will see this page start to grow. Sorry for the delay.
Some Interesting Papers
Amino acid racemization and the preservation of ancient DNA. Poinar HN, Hoss M, Bada JL, Paabo S
Science 1996 May 10;272(5263):864-6
University of Munich, Germany.
The extent of racemization of aspartic acid, alanine, and leucine provides criteria for assessing whether ancient tissue samples contain endogenous DNA. In samples in which the D/L ratio of aspartic acid exceeds 0.08, ancient DNA sequences could not be retrieved. Paleontological finds from which DNA sequences purportedly millions of years old have been reported show extensive racemization, and the amino acids present are mainly contaminants. An exception is the amino acids in some insects preserved in amber.
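As a minimal sketch of how the screening criterion described in this abstract could be applied in practice (the sample names and D/L values below are hypothetical, not taken from the paper):

```python
# Poinar et al. (1996): samples whose aspartic acid D/L ratio exceeds
# ~0.08 did not yield retrievable endogenous ancient DNA.
THRESHOLD = 0.08

samples = {                    # hypothetical D/L aspartic acid values
    "amber_insect": 0.03,
    "cave_bear_bone": 0.07,
    "dinosaur_bone_claim": 0.41,
}

for name, dl_asp in samples.items():
    verdict = "DNA retrieval plausible" if dl_asp <= THRESHOLD else "too racemized"
    print(f"{name}: D/L Asp = {dl_asp:.2f} -> {verdict}")
```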
Predicting protein decomposition: the case of aspartic-acid racemization kinetics. Review
Collins MJ, Waite ER, van Duin AC
Philos Trans R Soc Lond B Biol Sci 1999 Jan 29;354(1379):51-64
Fossil Fuels and Environmental Geochemistry (Postgraduate Institute), NRG, University of Newcastle-upon-Tyne, UK. [email protected]
The increase in proportion of the non-biological (D-) isomer of aspartic acid (Asp) relative to the L-isomer has been widely used in archaeology and geochemistry as a tool for dating. The method has proved controversial, particularly when used for bones. The non-linear kinetics of Asp racemization have prompted a number of suggestions as to the underlying mechanism(s) and have led to the use of mathematical transformations which linearize the increase in D-Asp
with respect to time. Using one example, a suggestion that the initial rapid phase of Asp racemization is due to a contribution from asparagine (Asn), we demonstrate how a simple model of the degradation and racemization of Asn can be used to predict the observed kinetics. A more complex model of peptide-bound Asx (Asn + Asp) racemization, which occurs via the formation of a cyclic succinimide (Asu), can be used to correctly predict Asx racemization kinetics in proteins at high temperatures (95-140 degrees C). The model fails to predict racemization kinetics in dentine collagen at 37 degrees C. The reason for this is that Asu formation is highly conformation dependent and is predicted to occur extremely slowly in triple helical collagen. As conformation strongly influences the rate of Asu formation and hence Asx
racemization, the use of extrapolation from high temperatures to estimate racemization kinetics of Asx in proteins below their denaturation temperature is called into question. In the case of archaeological bone, we argue that the D:L ratio of Asx reflects the proportion of non-helical to helical collagen, overlain by the effects of leaching of more soluble (and conformationally unconstrained) peptides. Thus, racemization kinetics in bone are potentially unpredictable, and the proposed use of Asx racemization to estimate the extent of DNA depurination in archaeological bones is challenged.
The kinetics of diastereomeric amino acids with o-phthaldialdehyde.
Meyer MW, Meyer VR, Ramseyer S
Chirality 1991;3(6):471-5 PMID: 1812958, UI: 92256092
Institute of Organic Chemistry, University of Bern, Switzerland.
The kinetics of the reaction of the amino acid epimers L-isoleucine, D-allo-isoleucine, L-threonine, and D-allo-threonine with o-phthaldialdehyde and mercaptoethanol were determined at 25 degrees C. L-Isoleucine reacts faster than its D-epimer whereas L-threonine reacts slightly slower than its D-epimer. In the case of isoleucine, the consequence can be an allo/iso ratio which in the worst case is 25% too low if these amino acids are quantified by liquid chromatography and o-phthaldialdehyde fluorescence detection. The effect on dating of fossils by amino acid racemization is discussed. | http://www.creation-science-prophecy.com/amino/ | 13
15 | Critical Perspectives: Reading and Writing About Slavery
Grades: 3 – 5
Lesson Plan Type: Standard Lesson
Estimated Time: Five 60-minute sessions
Students will:
- Practice effective reading strategies by making predictions and activating prior knowledge before reading, and making connections during and after reading
- Develop a deeper comprehension of slavery and the Underground Railroad by pairing the fictional story Sweet Clara and the Freedom Quilt with the nonfiction text The Underground Railroad
- Examine the moral issues of slavery, considering the perspectives of both slaves and slave owners
- Synthesize what they have learned by participating in creative writing projects, in which they write from the perspective of a slave or slave owner
Session 1
1. Gather students in a comfortable area with their journals and pencils. Enlarge the K-W-L chart (or re-create one) and post it where students can see it.
2. Begin the lesson by doing a "picture walk" through the fictional book Sweet Clara and the Freedom Quilt by Deborah Hopkinson. Explain to students that this book is an example of historical fiction. Historical fiction includes real events amidst a made-up storyline. Tell students that while some parts of the story are made up, others, like the time period and some of the happenings, are based on true events. Let students enjoy the illustrations in the book, and then ask them to write their predictions about the story in their journals.
3. After providing time for students to share their predictions, activate their prior knowledge about slavery and the Underground Railroad. It is important to allow everyone time to share their thoughts and ideas; one student's prior knowledge often serves as a springboard for others. While students are sharing, take notes on sticky notes and post them under the K column on the K-W-L chart.
4. Begin reading the story, pausing at the end of each page for students to check their predictions and make connections (i.e., text-to-self, text-to-text, and text-to-world). Note: If students are unfamiliar with making connections, you may want to conduct a minilesson to teach them this strategy. The lesson "Guided Comprehension: Making Connections Using a Double-Entry Journal" can be modified to meet your students' needs.
5. At certain points during the story, stop reading and have students talk to one another about the predictions, connections, and questions they have been writing in their journals. Prompt students to predict the significance of the quilt to see if they are following the storyline.
6. At the end of the story, conduct a class discussion by posing the following questions:
Session 2
1. Gather students in a comfortable area and have sticky notes and pencils available.
2. Post the K-W-L chart and review with students what they learned about slavery and the Underground Railroad from Session 1.
3. Introduce the nonfiction book The Underground Railroad by Raymond Bial. Tell students that you will be reading this book aloud so that they can add information to the K-W-L chart. Help students make the distinction between fiction and nonfiction. Remind them that the book in Session 1 was an example of historical fiction (i.e., the story and characters were made up, but the setting and some of the events were based on fact). Explain that in this session they will be listening to a nonfiction text. The author's purpose is to describe and explain the Underground Railroad by providing true information.
4. While reading, stop periodically and prompt students to make text-to-text connections to the book Sweet Clara and the Freedom Quilt or any other books they have read. Ask students to also use the sticky notes to write down new things they learn about the Underground Railroad.
5. Before sharing the new information learned in this session, read through the sticky notes in the L column that were posted in Session 1. See if any of those are confirmed through the nonfiction reading. This is a way to show that historical fiction often includes facts. On the other hand, this may also be a good opportunity to identify parts students thought were true in Sweet Clara that were actually fictional.
6. Next, have students share their sticky notes from this session and attach them to the K-W-L chart in the L column. Review the W column from Session 1 to see if any of their questions were answered. Place a star or some distinguishing mark on the questions that were answered. After looking at the questions, ask students if there is any more information that they would like to learn about the Underground Railroad. Add those questions to the W column. Display this chart in the classroom for students to refer and add to as they progress through the lesson.
Session 3
1. Revisit the K-W-L chart, focusing on the L column. Begin by sorting the sticky notes in the L column into two groups: information learned about slaves and information learned about slave owners. Ask students to think about the two books they read, Sweet Clara and the Freedom Quilt and The Underground Railroad. What did they learn about slave owners? What conclusions can they draw about slave owners based on the text and illustrations? For example, from Sweet Clara, students can draw the conclusion that slave owners had land that needed to be worked on, did not like when slaves ran away, would sometimes hurt slaves if they tried to escape, and had more money than slaves. Write the conclusions that students come up with on sticky notes and add them to the L column. Make sure that their conclusions can be supported by words or text from one of the books they read.
2. Next, ask students whether slaves were right or wrong to run away from their slave owners. After a few responses, ask them to consider this same question from the perspective of the slave owners. Why do they think that each group would have a different perspective, and is it justified to say that one perspective was right or wrong considering the historical context?
3. Gather students into small groups. Have each group make a two-column chart (i.e., T-chart) on a sheet of notebook paper, labeling the left side "Slave Owners" and the right side "Slaves." Working together with their group, students should list all the reasons why slave owners kept slaves on the left side, and all the reasons why slaves thought this was unfair on the right side. Allow students to refer to the K-W-L chart as they are writing their lists.
4. Once groups have had time to write their lists, gather students together and ask each group to share the reasons they listed. Have groups actively listen to other groups by adding new ideas to their lists or checking ideas off that are similar to or the same as other groups.
5. To end the session, have students respond to one of the following prompts in their journals:
Session 4
In this session, give students the option to choose one of three creative writing projects to synthesize what they have learned and demonstrate comprehension of the critical perspectives surrounding slavery. Students who choose the same option can be paired up to work on their project together. Each option includes the use of technology to create the project. Before students begin, share the creative writing rubrics (Coded Message Rubric, Letter Writing Rubric, and Newspaper Rubric) to give students a set of goals and expectations for their projects.
- Option 1: Coded Message
The quilt in the story Sweet Clara and the Freedom Quilt contained a coded message that only other runaway slaves would recognize to help them find the Underground Railroad. Discuss why runaway slaves needed coded messages.
The task for this pair of students is to create a secret message that runaway slaves would be able to use to find the Underground Railroad. Allow students to use their creativity in constructing their secret message (e.g., through a quilt design or picture, an encoded booklet or map, a story with a secret message). Students may choose to use the online Flip Book to create their coded message as appropriate.
- Option 2: Newspaper
The task for this pair of students is to publish a one-page newspaper using the ReadWriteThink Printing Press. Students have the option to create a newspaper aimed at slave owners or publish an underground newspaper for slaves. The articles for the newspaper should be written following the writing process. Articles should also express the perspective of the selected audience and an understanding of the social context and time period.
- Option 3: Letter Writing
For this option, pairs of students can take the perspective of either Clara and write a letter home to her aunt about her experiences traveling the Underground Railroad, or a slave owner and write a letter to another slave owner about one of his slaves who escaped. Students can use the online Letter Generator to type and print their letters.
Remind students to refer to the L column of the K-W-L chart to incorporate things they have learned about the Underground Railroad in their projects. They can also refer back to the books they read during Sessions 1 and 2, or do further research to be able to incorporate additional details as needed.
Circulate while students are working on their projects to answer questions or help them with the writing process. Also, remind students to check their work for spelling and grammatical errors before printing a final copy.
Session 5
1. Allow each pair of students to share their creative writing project with the class. To keep students engaged, you may want to schedule a few presentations to be made at the beginning of several sessions. After each pair shares, ask the class to comment on the things they liked about the project and one thing that could be improved.
2. After all of the projects have been presented, take time to meet with the class for a closing discussion. This is a good time for the class to think about the implications of considering more than one perspective. Start by discussing what they learned about slavery and how looking at both sides helped to better understand the critical issues surrounding slavery.
3. Ask them to share a time in their lives when there was conflict (e.g., fighting with a friend or sibling, not getting permission to do something, getting in trouble at school). After they have discussed their side of the story, challenge them to consider the perspective of the other person. For example, why wouldn't their parents give them permission? Did they have a good reason? Were they just being mean? Were they trying to help or protect them? Discuss how considering other perspectives might help them to think in different ways and give them ideas for compromise.
4. End the discussion by creating a class chart that includes important ideas to remember about critical perspectives. The chart can include the definition of critical perspectives and ways it can help them in their reading and daily life. Post the chart in the classroom and use it as a reference when reading about other topics throughout the year.
- Consider the following ReadWriteThink.org lessons as extensions to this lesson:
- Escaping Slavery: "Sweet Clara and the Freedom Quilt"
- "Strategic Reading and Writing: Summarizing Antislavery Biographies"
- "Traveling the Road to Freedom Through Research and Historical Fiction"
- "Fighting Injustice by Studying Lessons of the Past"
- "Literature as a Catalyst for Social Action: Breaking Barriers, Building Bridges"
- "Blending Fiction and Nonfiction to Improve Comprehension and Writing Skills"
- Ask students to consider an injustice that exists today, perhaps one that affects their own lives or their local community. Have them discuss the perspectives of the opposing groups involved in addition to their own personal perspectives on the issue. Can they propose a solution that respects the different perspectives but achieves a more equitable situation? If so, students may feel motivated to take steps to promote and implement their solution through a service project.
- Ask students to choose a question from the W column of the K-W-L chart that did not get answered and try to research the answer using online resources or nonfiction texts.
- Use the Observation Rubric during Sessions 1 through 3 to assess students’ participation and understanding. Use the information gained from the assessment to adjust the pace of each session or to support a student who seems to be struggling with the content. For example, if there is a student who does not offer input during class discussions, find an opportunity during the discussion to prompt him or her with guided questions.
- Assess each pair’s creative writing project with the Coded Message Rubric, Newspaper Rubric, and Letter Writing Rubric. | http://www.readwritethink.org/classroom-resources/lesson-plans/critical-perspectives-reading-writing-1060.html?tab=4 | 13 |
16 | This lesson defines and compares the National Debt and the National Deficit. Students will discover the differences between the two and look at current trends. Students will examine the amount of per-capita debt and be exposed to the reality of how much the national debt increases every day or two, despite recent budget surpluses.
Students will visit “A Citizen’s Guide to the Federal Budget,” and use the federal government web site to obtain information which will help them understand basic information about the budget of the United States Government for the current fiscal year.
The seasonally adjusted rate of change in the consumer price index during the month of September 2002 was 0.2 percent (an increase of two-tenths of one percent). The rate of increase in the consumer price index over the past twelve months was 1.5 percent. In September, the core consumer price index, which excludes energy and food prices, increased by 0.1 percent.
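As a rough cross-check on these figures (a sketch of the arithmetic only, not the Bureau of Labor Statistics methodology):

```python
# Compounding September's 0.2% seasonally adjusted monthly change for
# twelve months gives an annualized pace, which can be compared with
# the reported 1.5% change over the past twelve months.
monthly_change = 0.002
annualized = (1 + monthly_change) ** 12 - 1
print(f"Annualized from September's pace: {annualized:.2%}")  # about 2.43%
# The trailing twelve-month figure (1.5%) is lower because the actual
# monthly changes varied over the year.
```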
The following lessons come from the Council for Economic Education's library of publications. Clicking the publication title or image will take you to the Council for Economic Education Store for more detailed information.
Teaching Financial Crises is an eight-lesson resource that provides an organizing framework for all of the media attention that has been paid to the recent financial crisis, and places the crisis in historical context. The current events stories, opinion pieces, and other popular media pieces that are today in great supply have generally not been connected to educational objectives, historical analysis, or the economic processes and concepts used in the high school classroom. In Teaching Financial Crises, teachers will find a non-partisan and non-ideological resource to help them simplify and offer balanced perspectives on this challenging subject matter.
6 out of 9 lessons from this publication relate to this EconEdLink lesson.
This publication contains complete instructions for teaching the lessons in Capstone. When combined with a textbook, Capstone provides activities for a complete high school economics course. 45 exemplary lessons help students learn to apply economic reasoning to a wide range of real-world subjects.
3 out of 45 lessons from this publication relate to this EconEdLink lesson.
This revised edition features simulations, role plays, small-group discussions and other active-learning instructional activities to help students explore economic concepts through real-life applications.
3 out of 21 lessons from this publication relate to this EconEdLink lesson. | http://www.econedlink.org/economic-standards/EconEdLink-related-publications.php?lid=31 | 13 |
56 | The First Fleet
The settlement of Australia by Europeans began with the departure of the First Fleet from England in 1787. The fleet was bound for Botany Bay, far off in the Southern Hemisphere, with a pathetic cargo of convicts. The east coast of New South Wales was regarded as a convenient dumping ground for convicts, now that the independent Americans had successfully rid themselves of English dominance.
New South Wales, as the eastern shore of the island continent was named by Captain Cook, would become a prison colony to which the unwanted criminal elements of Great Britain could be transported with no hope of escape.
The English Government provided none of the normal implements of commerce and trade, such as coins and banknotes. This was a time of severe currency shortage in the United Kingdom, with tradesmen's tokens, countermarked coins, bank tokens and other money of necessity circulating freely. No coinage was available to England's colonies. Officials of the new colony would be paid in produce from the company store and even free citizens would have little need of money.
When Captain Arthur Phillip, Governor of the new settlement, arrived in Sydney Cove in 1788 with eleven ships and 1,487 people, including 759 convicts, the small colony was virtually penniless. Apart from 300 pounds in currency held by the Governor, the only coins available were those carried in the pockets and purses of the passengers. It was not foreseen that the small colony would almost starve to death because of drought, poor soil and various other problems which made it impossible for the settlement to become self-sufficient.
The first fleet brought sufficient provisions, with careful rationing, to keep its passengers alive for only two years. Their survival relied on the ability of the convicts to work the land and harvest crops. However, no-one had made allowances for the lack of farming experience, the unfamiliar soils or the harsh climate they found.
The first year's crop failed and most of the next year's harvest was required to replenish seed stocks. Draft animals and proper farming equipment were not available - it was more than 15 years before a plough was used in the colony. For the first five years the colony lived on the edge of starvation. The prolonged drought forced Phillip to spend some of his hoard of pounds to obtain supplies from Cape Town.
In 1792 the rain finally came and a successful harvest enabled the colony to become self-sufficient in food production. In an ironic twist, it was only after this that trading ships from distant lands began to arrive, bringing with them the colony's first 'consumer goods' - clothing, boots, butter, tea and rum. The colonists now required some form of currency to pay the traders for the goods delivered. Attempts to induce the ships' captains to accept promissory notes (which could only be redeemed on the next visit to England) failed, as they wanted cash to buy fresh cargoes on the way.
The colonial authorities back in England had no inclination to send coinage to the settlement, knowing full well that it would be immediately spent on trade goods and disappear out of the colony within weeks of its arrival. Australia had a balance of payments problem from the very start.
Cut Dollars
The practice of cutting Spanish dollars into smaller segments to provide a fractional currency was a well accepted practice in many countries by the time the colony was established at Sydney Cove. The practice led to the famous term 'pieces of eight', where the original coin was quartered, and then each quarter segment was halved, leaving eight, approximately equal, pieces. The American term 'two bits' is derived from two of these pieces being equal to a quarter of a dollar, or 25 cents.
An interesting variation on this process occurred in the early NSW colony. The original Spanish dollar was quartered, often with a vertical and horizontal centre strip being removed and placed aside as illegally obtained profit for the fabricator. Each quarter segment was then cut into two pieces - a 2/3 segment and a 1/3 segment - producing eight pieces.
With little circulating money, other means of trade sprang up. People bartered for goods and services with anything that came to hand. Quite naturally, food stuffs - particularly rum, flour, pork, tobacco and tea - became the circulating medium. In 1796, a seat in Sydney's first theatre could be bought for a shillings worth of meat, spirits or flour. With necessity came invention and the next few years provide an extremely interesting period in Australia's numismatic history.
The three most prominent mediums of exchange in this crucial 25-year period, from the early 1790s until the arrival of Governor Macquarie in the second decade of the new century, were promissory notes and IOUs; Governor King's proclamation coins, including Boulton's cartwheel coinage; and goods, most particularly rum.
Promissory Notes and IOUs
Officially, prices in the colony were in Pounds, Shillings and Pence (Sterling) but the only notes with Sterling value were Government Bills of Exchange on the British Treasury, Commissariat Store Receipts and Paymaster's Notes issued to the New South Wales Corps - the regiment of soldiers which arrived with the second fleet to maintain law and order and to run the general administration. The paymaster's notes were preferred and carried premiums as high as 25 percent over copper cash, and more over personal promissory notes or IOUs.
A number of stores and merchants issued private promissory notes, which took on the role of a circulating currency, however government regulation was clearly needed to stop unscrupulous traders from issuing worthless notes with no monetary backing. The story goes that a prominent publican was known to bake his notes in an oven before their issue to render them brittle.
Macarthur sued, claiming the exact number of bushels expressed on the face of the note, irrespective of value. The court decided that the note was an expression of value, not quantity. Macarthur took the decision as a personal affront, setting in motion a chain of events which would eventually undermine Bligh's standing as Governor and see him ousted in the Rum Rebellion.
Even more remote than the isolated colony at Sydney Cove, the traders in Tasmania - Van Diemen's Land as it was then known - faced a chronic shortage of coins and banknotes. This led to a profusion of private promissory notes. See the separate article on Tasmanian Promissory Notes.
Private and Government notes were used as the exchange medium for the majority of transactions in the first decades of the colony. They were, in reality, the equivalent of our present day banknotes. See the separate article for a more detailed account of the issue and use of Paper Money in Early New South Wales. The use of paper for smaller denomination transactions was curtailed with the introduction of holey dollars, dumps and an increasing supply of copper coinage after 1814.
Proclamation Coins
Within 3 years of the establishment of the colony, disputes began to emerge over the value of the coins circulating, particularly the Spanish dollar. To clarify the situation, Governor Phillip issued the first proclamation concerning mediums of exchange in 1791, when he fixed the value of the Spanish Dollar at five shillings.
By 1800, the population of the colony had passed 5,000. A crazy collection of currencies including guineas, guilders, johannas, mohurs, rupees, Spanish dollars and ducats and Matthew Boulton's copper 'cartwheel' twopences and pennies had been left behind by visiting ships and were circulating in the settlement.
There is some dispute in modern numismatic circles over both the importance placed on the 1800 proclamation in resolving the circulating coinage problems, and on the coins that were actually meant to be covered. Purists believe that only those coins specifically mentioned in the proclamation deserve the label 'Proclamation Coin', while others, particularly dealers looking to mark up the value of their offering, believe that by default, the multiples and fractions of coins specifically mentioned gained a valuation and should also be included in the definition. See the separate article for more information.
The 'Cartwheel' Pennies
Big, ugly and heavy, Australia's first official coins - the 'cartwheel' pennies - were not received enthusiastically. Historically, however, they are magnificent. They were the first coins officially exported to the colonies and included the first copper pennies ever struck in England. They were also the first English coins to be struck using steam power.
The somewhat unimpressive designs were a collaboration between Boulton himself and Heinrich Kuchler. The significance of the coins lies in the skills used in developing the steam coin press, not in the coin designs.
Boulton was obviously very proud of his product. He wanted his coins to double as weights and measures and for this reason his penny piece contained one full ounce of copper from the Welsh mines and the twopenny piece - two ounces of copper. No wonder they were nicknamed 'cartwheels'. Grocers thought it a good idea though, and used the coins when weighing out flour, and so forth. The twopenny piece could also be used to measure cloth: eight lined up end-to-end equalled one foot (30 centimetres). More of Boulton's pennies appeared in 1806 and 1807 and these coins were the staple diet for small transactions for many years.
The Rum Trade
From the departure of Captain Phillip at the end of his tenure as governor in 1792, until his official replacement by Captain Hunter in 1795, the infant colony was left in the care of lieutenant-governors - first Francis Grose, then William Paterson. Effectively, these two officers governed on behalf of, and in the personal interests of, the New South Wales Corps. They did not hesitate to take advantage of the situation presented to them.
Members of the New South Wales Corps had the ability to raise capital by borrowing against their regimental pay, which was accumulating back home in England. It was this facility which enabled the elite of the Corps to snatch control of trade in the colony and establish rum as the most common currency.
This happened in 1793 after the arrival of the American trading ship, the 'Hope', with 7,500 gallons of rum in her cargo. The other goods she carried were desperately needed but the Hope's incorrigible captain insisted that he would sell nothing to the colonists unless they first bought all of his rum. The New South Wales Corps officers saw this as an opportunity rather than a rort and formed a syndicate, with regimental paymaster John Macarthur at its head, pulling the necessary financial strings.
They bought the cargo and distributed it at a huge profit. The vast pool of rum flooded into the market place at grossly inflated prices and at once became a means of exchange. For their efforts, the New South Wales Corps were immediately dubbed the 'Rum Corps', a name which stuck until their recall to England in 1810. The rich pickings they made from that first deal gave them the power to monopolise almost all trade, particularly that in rum (the name given to all spirits), for the next 17 years.
As rum grew to be the king of currency, the population of New South Wales soon became divided into two classes - those who dealt in rum and those who were paid with, and drank it. Rum, a liquid gold as the medium of exchange, was used and abused in more ways than one (hic..). Rum could buy anything. The wages for the construction of some of our most famous landmarks were paid out in rum. Rum was offered as a reward for the capture of bushrangers. The story goes that a man even sold his wife for four gallons of the stuff.
Many of the officers became publicans, with a very effective system set up to control their monopoly over rum. So openly brazen was the rum traffic that it was rumoured that even the chief constable held a liquor license and sold rum right opposite the gaol door. By 1806, rum had become far and away the major currency, and for rum the settlers mortgaged or sold their stock and their farms.
In another famous case, Macquarie paid for the building of a road between Sydney and Liverpool with 400 gallons of rum.
Manipulation of the value of Rum was not restricted to the cut and thrust of the business world. Inflating its value was considered to be fair play by those involved in undertaking 'God's work' when Australia's first church was built in 1793. The Reverend Richard Johnson paid out part of the workers' wages in rum he valued at 10 shillings per gallon but for which he had paid only 4s 6d a gallon.
The Rum Rebellion
William Bligh arrived at the colony in 1806 to take up the post of Governor. He was already well known to the colonists for his remarkable navigational feat in guiding a cutter full of castaways to Java after the infamous mutiny on the Bounty. Bligh knew that the power of the New South Wales Corps had broken his predecessors, Hunter and King, and he fully intended to do the breaking this time. King had sent Macarthur to Britain to stand trial after he had nearly killed a colonel in a duel. Instead of any penalty, Macarthur returned deemed innocent and with a land grant of 5,000 acres.
Before Bligh committed himself to any dramatic reforms, he took time to understand the complex problems he had inherited. He quickly saw the worst effects of the rum currency. Agriculture, the life-blood that kept the colony from starvation, was severely depressed by the trade, with farmers forced to accept 'payment in property', as rum was euphemistically known, for their wheat or other produce at enormously inflated value. In turn, they used the rum to pay their workers. The rum traffic helped the elite New South Wales Corps maintain its power, while at the same time it debauched the quality of labour.
To overcome such anomalies, Bligh set about closing off the officers' sources of supply. He issued two general orders, one suppressing the rum trade by fixing the price, and the other making all promissory notes payable in cash, thus attempting to break the hold that the Rum Corps had over commerce. Macarthur openly defied the orders, leaving Bligh with no choice but to respond by arresting Macarthur and sending him to trial on a number of charges ranging from illegal possession of a still to sedition and resisting arrest.
The Corps responded by staging what amounted to a coup d'état on January 26, 1808, the 20th anniversary of the foundation of the colony. On that day a free settler, George Suttor, recorded what he saw:
... the greater part of the New South Wales Corps under Arms with fixed bayonets, marching down from the Barrack. (I) hastened among others to know the cause; and was informed that they were going to arrest the Governor; and on proceeding a short way with them, distinctly heard Serjeant Major Whittle make use of these expressions - 'Men, I hope you will do your duty and don't spare them.' The men replied, 'Never fear us.'
Later, Major Whittle was heard to say, 'Children, go out of the way, for some of you I expect will be killed.' As events transpired, no one was actually killed. The Corps marched up Bridge Street to Government House, drums and all, and Governor Bligh was arrested, with some accounts claiming that he had to be ignominiously dragged out from under a bed. What followed the next day was reported later in the Sydney Morning Herald:
The day following was one of immense business. The Government was formally deposed, bonfires were lighted up at corners of almost every street, magistrates were dismissed, the Provost Marshal escorted to prison, the then sitting Criminal Court dissolved, troops were harangued ... the old Judge Advocate relieved of his offices ...
The commandant of the New South Wales Corps, Major George Johnston, released Macarthur from jail on a warrant which he signed illegally as 'lieutenant-governor' and with that, the Rum Corps took over the running of the colony for the next two years. And with the rule of the Rum Corps, so continued the rule of rum as money.
After a period of detention, Bligh was allowed to board his old ship, the Porpoise, upon which he commanded the captain to bombard the town. The captain refused, and Bligh sailed out of Sydney Harbour to exile in Van Diemen's Land (later Tasmania), to return only after the arrival of his official replacement, Governor Lachlan Macquarie, on New Years Day, 1810.
Lachlan Macquarie arrived in Sydney with instructions to end the rum trade, to send the New South Wales Corps home and to arrest the leaders of the rebellion - Macarthur and Johnston. Both, however, had sailed for England to put their own cases before the Government. Johnston was cashiered but Macarthur returned to New South Wales in 1814 with orders that he stay out of public affairs for the time being.
In the meantime, Macquarie began to tackle the major problems that he had inherited. The colony was nearly broke; what coins there were continued to be shipped away by visiting traders and the illicit rum trade ruled commerce. Unlike his predecessor King, Macquarie's tactics proved to be more successful. He recognised that barter in spirits was only part of a larger problem. An acceptable currency had to be established, for as long as there was a barter economy and private promissory notes passed at great discounts, wealth and power would inevitably remain in the hands of a few.
A stable currency would strengthen the economy and underpin its fragile political and social structure. Macquarie also realised that the dictatorial methods employed by Bligh, Hunter and King were useless against the well entrenched and now enormously wealthy proponents of the rum trade - some of whom were former Rum Corps members who had elected to stay in New South Wales as free settlers. Macquarie fixed on a plan that was both economically sound and a brilliant strategic move.
First, he reduced the number of licensed houses in Sydney from 75 to 20 and enforced their closure on Sundays. While he believed it essential to prevent the destructive use of liquor, he also realised it was impossible simply to ban it. He then moved to increase the supply of rum, making it less of a luxury and therefore devaluing it as a currency and discouraging monopolistic trade by a few. He did this by allowing the free importation of spirits, but with a relatively high duty of four shillings per gallon.
The Holey Dollar & Dump
Having dealt with one problem, Macquarie turned his attention to the task of finding a replacement for rum as a currency. In October of his first year in the colony, he suggested the founding of a bank. The idea was subsequently rejected. In the meantime, he asked for copper coin to stop the 'shameful' traffic in valueless notes. This too was rejected. Silver coins were not available from England because supplies had been depleted in repaying British debts from the Napoleonic wars.
Part of his salvation came in the guise of the sloop of war, the Samarang. Macquarie came up with a plan to purchase the most common silver coins in the trading world at that time - Spanish eight reale coins ('pieces of eight'). The Samarang arrived at Port Jackson in 1812 with 40,000 of these 'Spanish dollars' in her hold. Macquarie knew that to release the dollars as they were would only be a temporary solution, as these too would be traded out of the colony. He wrote:
Having decided it essentially necessary to adopt every possible precaution to prevent this Useful Supply of Dollars from being Exported, or Carried out of the Colony, I gave immediate Direction for Constructing a Machine here for the purpose of Stamping, Milling and Cutting a piece out of the Center of each Dollar, previous to my circulating this Specie in the Colony. Intending that each Dollar, and the small piece Cut out of the Center of each, should have the Value thereof, respectively, and the Name of the Colony stamped on it. The Value I determined on giving to the Dollar was Five Shillings Sterling, and fifteen pence to the small piece Cut out of the Center of each Dollar.
The strategy of mutilating each coin, which had been used in various other countries at that time, effectively caused traders to spurn it. Macquarie's master stroke had created two coins out of one. The outer ring, which he called the Colonial dollar, had a new value of five shillings. The inner portion, called the Dump, was valued at fifteen pence.
Macquarie's initiative dramatically increased the number of coins in circulation and distinguished them as local currency, therefore deterring their removal from the colony on departing trading ships. Not only did the coins solve the immediate currency problem, but Macquarie also made a substantial profit in the process. He had paid four shillings and nine pence for each silver dollar, but the new face values totalled six shillings and three pence.
To create the first truly indigenous Australian coin, Macquarie employed the services of William Henshall (or Hershall) to cut and restrike the coins. A recently freed convict, Henshall had the right credentials for the job: coming from Birmingham, England's centre of illicit coining, where he had worked as a cutler, whitesmith (worker in white metals) and silversmith, he had been sentenced to transportation for forging coins. He had completed the seven-year sentence only the previous year.
An interesting sidelight concerns the reverse of the coin which shows the pillars of Hercules (the Straits of Gibraltar) which have a ribbon woven around them. The 'S' swirl of the ribbon through the pillars is still seen in the modern-day version of our dollar sign - $.
Macquarie's new currency and rum policy, in combination with a number of other factors - the expansion of the settlement, the growth of businesses other than farming and the growing number of free settlers and emancipists - turned the tide against the rum standard. He effectively broke the exploitative system and introduced into colonial life his own even-handed attitudes towards convicts. Setting the scene for social development in Australia, Macquarie fostered reform and emancipation amongst convicts, recognising that by the sheer force of their numbers, they and their descendants held the key to the future.
The Dollar Experiment
The benefits of Macquarie's holey dollar to the New South Wales economy would have been even greater had they not been hoarded. By 1820, the Bank of New South Wales reported that it held 16,680 holey dollars and 5,900 dumps.
Even so, Holey Dollars and Dumps continued as the most common coinage used in the colony until 1823, when Governor Brisbane issued a number of proclamations in an attempt to establish a Dollar standard. He recalled the coins and fixed the Spanish Dollar as equivalent to Five Shillings. The holey dollar and dump were marked down to four shillings and one shilling respectively, and then re-issued.
Brisbane was nearly 150 years ahead of his time when he tried to make the dollar the major unit of currency in New South Wales. From 1823 to 1826 all official script and bank issues were given in dollars. The system was soon undermined when substantial numbers of uncut Spanish dollars found their way to the colony, as traders found it profitable to buy the coins overseas for around 4/4 each and sell them at 5/-.
The experiment was short-lived. The English Parliament passed the Sterling Silver Money Act in 1826, which officially ended the dollar standard. The conversion back to the sterling standard created severe problems for the fledgling Bank of New South Wales. Were it not for Government backing of its note issues, it is doubtful that the Bank would have survived.
Official English Coinage
The tide turned back in favour of the English denominations with the importation of nearly £100,000 worth of English coins in 1824 and 1825. On 7th August, 1829, Governor Darling issued a General Order under which, on 30th September, 1829, the status of legal tender would be dropped from the Holey dollar and the Dump. Most were immediately swapped for circulating English silver coins. Although they continued to circulate in Tasmania until 1849, few have survived until today.
It is estimated that only about 350 Holey dollars (296 are known) and about 1,500 Dumps remain. Their scarcity and historic merit make them a well deserved Australian rarity. The Holey dollar and its smaller counterpart, the Dump, are now among the most prized of the early colonial issues.
Darling's proclamation also directed that Government transactions would not be conducted with foreign coins - English coinage was officially the preferred medium of exchange. The period from the mid-1830s until the introduction of Australia's Commonwealth coins in 1910 is regarded as a sort of 'Dark Ages' by most collectors of Australia's official coinage. Apart from the token issues and the series of gold coins of the era, little interest is taken in the official English issues, even though it was a period of immense interest and innovation.
For eighty-five years, Australia received all of its silver and official copper issues from England. British sovereigns and half sovereigns also mingled with the home product. A steady flow of coins had come to the colony following the establishment in 1810-12 of the new London mint on Tower Hill, fitted with Boulton-Watt steam powered presses.
It was during this period that the famous St. George and the Dragon sovereign first appeared; and when the Godless florin was struck. It was also a time when designers were criticised for such designs as that of the jubilee of Queen Victoria, which made the stern looking monarch appear comical under a tiny crown that seemed about to topple off.
Other design errors of the time caused far more serious consequences. The reverse design of the jubilee sixpence closely resembled the shield reverse sovereign. Soon gilt-dipped sixpences were being passed for their golden counterparts. Bright shiny farthings were also passed or mistaken for half sovereigns and from 1897, the surfaces were deliberately darkened before issue to arrest this problem.
But of all the changes, the one which took place in 1860 is probably the most important. The heavy 'cartwheel' copper coins of the early colonial days were replaced by lighter denominations made from bronze. The alloy was one devised in France for the making of church bells and contained copper, zinc and tin. This metal combination and the size of the coins were to be adopted in the Australian issue of 1911.
In April, 1864, the Sydney Times reported:
The notion that the old pennies of the reign of George III, known as the rim pennies, were more valuable than the ordinary ones, is we are informed, quite erroneous, as the intrinsic value is now little more than half the nominal or circulating price, and nothing like the quantity of coins can be used by the Mint in the new issue. The rim pennies were originally 16 pieces to the pound. Reduced by friction, they now run about 18 to the pound. In the course of a short time, the tender of the old coins will be declared illegal, and can only be taken at the price of old copper, which is now about 11d per pound.'
Traders Tokens
Traders tokens were unofficial coin-like issues struck by a variety of merchants to overcome a severe small-change shortage of English coins caused largely by the population explosion which occurred during the gold rushes of the early 1850's. The majority of the tokens were issued between 1855 and 1865. Prior to this, many merchants who had been hampered in their trading by the shortage were often forced to pay a premium for whatever change they could get.
As well as being a convenient means of exchange, the tokens were also an excellent vehicle for advertising particular businesses and products. Everyone from doctors to ironmongers issued tokens and although the authorities frowned on the practice to some degree, a blind eye was turned due to the serious needs of the time.
Thomas Stokes, who arrived in Melbourne about 1854, was responsible for the production of more tokens than any other maker. In 1857, he acquired the plant Taylor had prepared for the 'Kangaroo Office' and which failed to get started. It is difficult, if not impossible, to determine what is the work of Taylor and what is the work of Stokes with dies prepared by Taylor. Dies bearing Stokes' name commence in 1862, but in the next year tokens were declared illegal in Victoria.
Production continued for issuers in other Colonies, and Stokes produced the majority of tokens used in New Zealand where they were issued until 1881. Thomas Stokes died in 1910.
Many of Stokes token dies survive, which has proved to be a mixed blessing as there have been many re-strikes from them up until 1964, in both old and new combinations. Some of the dies are in the hands of Stokes (Australasia) Ltd, in Melbourne and others have been presented to Museums. The exact numbers and locations of all dies is unknown.
Colonial Banknotes
Coverage of Australia's early currency would be incomplete without a reference to the banknotes issued by private banks. The Bank of New South Wales, Australia's first public company, was founded in 1817. The scheme to create the bank was undertaken by Governor Lachlan Macquarie and a group of wealthy colonial businessmen.
Many banks, with wealth invested from overseas, made the largest contribution to the fledgling economy, providing a large proportion of the capital needed for the pastoral expansion which occurred from the 1830's onwards. | http://www.australianstamp.com/coin-web/aust/earlyaus.htm | 13 |
14 | Grand Canyon Flooded to Improve Ecosystem
GRAND CANYON NATIONAL PARK, Arizona, November 22, 2004 (ENS) - The Grand Canyon was flooded on Sunday in an attempt to rebuild beaches and sandbars by moving sediment along the Colorado River. The normal sedimentation pattern of the river was forever altered in the early 1960s when the Glen Canyon Dam and Lake Powell were constructed to generate power.
Most sediment entering Grand Canyon National Park now arrives from the Paria River and upper Marble Canyon tributaries below the dam. More than a million tons of sediment has accumulated in Marble Canyon.
The high water flows were started by U.S. Geological Survey (USGS) scientists at seven on Sunday morning.
The water flowing from Lake Powell through four bypass outlet tubes at the base of the Glen Canyon dam is expected to help improve Colorado River habitat for endangered fish and help scientists learn more about the river ecosystem to help guide future management decisions.
The peak high flows will run for 60 hours at about 41,000 cubic feet per second. The goal is to stir up and redistribute sediment from the tributary rivers downstream from the dam to enlarge existing beaches and sandbars, create new ones, and distribute sediment into drainage channels.
About 800,000 tons of sediment are expected to move downstream before the high water flow ends on Thursday morning at one o'clock. "The sediment, sand, mud and silt play an important role in the ecosystem," said USGS Director Chip Groat.
Researchers from the USGS Grand Canyon Monitoring and Research Center are working with scientists and resource managers from the Bureau of Reclamation, the U.S. Fish and Wildlife Service, and the National Park Service, as well as the Arizona Game and Fish Department, and Northern Arizona University to prepare, conduct, and evaluate the experiments.
Today, when water flows will be greatest, scientists will raft down the river to observe the high water's effects. Archaeological, biological and hydrological studies will be conducted to determine the flood's effects.
The Colorado River was once filled with silt and sediment. Now, the river deposits its load of silt as it enters Lake Powell near Hite, Utah. Water released from the dam is clear and the Colorado River is muddy only when downstream tributaries contribute sediment.
As the habitat has changed, so have plant and animal species. Native fish, unable to survive in the colder water, have left the river and have been replaced by non-native species. The high water flow is expected to help displace non-native species to the benefit of endangered native fishes such as the humpback chub.
USGS scientists will be monitoring how the high flow releases affect the survival of a population of young humpback chub in the Grand Canyon near the confluence of the Little Colorado River.
Non-native rainbow trout, a predatory species, are a resource for anglers in the first 15 miles below Glen Canyon Dam, down to Lees Ferry. Surveys to determine the relative abundance of trout were just completed by the Arizona Game and Fish Department. These surveys will be repeated in mid-December to determine the effect of the high flows on trout populations and trout diet.
The water released during the experiment will not change the amount of water to be released over the course of the 2005 Water Year. The Annual Operating Plan calls for releasing approximately 8.23 million acre-feet of water from Glen Canyon Dam. That water is sent down river and captured in Lake Mead for use by the Lower Colorado River Basin States. The test flows are factored into that annual volume.
Flows later in the year will be adjusted downward to factor in the additional water released this week.
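A back-of-the-envelope conversion (a sketch based only on the figures quoted in this article) shows why the downward adjustment is modest relative to the annual plan:

```python
# Volume of a 60-hour release at the quoted peak rate, in acre-feet.
CFS = 41_000                      # cubic feet per second
HOURS = 60
CUBIC_FT_PER_ACRE_FT = 43_560

volume_acre_ft = CFS * HOURS * 3_600 / CUBIC_FT_PER_ACRE_FT
print(f"{volume_acre_ft:,.0f} acre-feet")   # ~203,000 acre-feet

ANNUAL_PLAN_ACRE_FT = 8_230_000   # 8.23 million acre-feet
print(f"{volume_acre_ft / ANNUAL_PLAN_ACRE_FT:.1%} of the annual release")  # ~2.5%
```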
EPA Files Suit to Keep Anacostia River Free of Raw Sewage
WASHINGTON, DC, November 22, 2004 (ENS) - The U.S. Environmental Protection Agency (EPA) has filed a lawsuit against the Washington Suburban Sanitary Commission (WSSC) for violating the Clean Water Act by permitting raw sewage to flow into streams and rivers in Maryland's Montgomery and Prince George's counties. Discharges of raw sewage are illegal under the Clean Water Act.
The lawsuit was welcomed by an alliance of conservation groups that threatened a similar suit two months ago. On September 22, four conservation groups - the Natural Resources Defense Council (NRDC), the Anacostia Watershed Society, the Audubon Naturalist Society and Friends of Sligo Creek - announced their intent to sue WSSC for illegally allowing sewer overflows, polluting the Anacostia River and its tributaries, and endangering public health.
The federal agency's action prevents the conservation groups from filing their own lawsuit, but they have the option to intervene in the federal suit on behalf of their members.
"We are pleased that our threat to sue WSSC finally prompted the EPA to do its job to stop the sewer authority from allowing raw sewage to contaminate our streams, streets, and parks," said Nancy Stoner, director of NRDC's Clean Water Project.
"We will be watching closely to make sure that the government is truly serious about pursuing this case, that the problem is fixed, and that public health is protected."
The conservation groups are urging the Commission to overhaul its sewer collection and pipeline system and establish procedures to monitor and prevent overflows.
According to WSSC's reports to Maryland's Department of the Environment, from January 2001 through July of this year, WSSC's sewer system experienced 445 overflows that dumped more than 90 million gallons of raw sewage into the river.
WSSC's system includes approximately 640 pipe stream crossings and hundreds of miles of sewer pipes that run alongside Maryland rivers and streams. The sewer pipes are more than 50 years old, and many are broken, decaying and exposed.
The Anacostia Watershed Society, which has been documenting WSSC sewer system problems, estimates that there are hundreds of miles of broken and separated pipeline that may be leaking sewage into Maryland's ground water.
"The public is at risk for contracting such waterborne illnesses as gastroenteritis, which includes vomiting and diarrhea, and hepatitis," said Stoner. Boaters on the Anacostia River in Maryland have contracted skin infections on their hands and bodies after coming into contact with the water. And when sewers back up, local homeowners wind up with basements filled with sewage, which is a threat to their health.
Soybean Rust Found in Five States
BELTSVILLE, Maryland
, November 22, 2004 (ENS) - Soybean rust, a destructive fungus that slashes soybean yields, has now been found in five southern states, the U.S. Department of Agriculture (USDA) laboratory in Beltsville, Maryland confirmed on Friday.
Soybean rust has been detected in Louisiana, Alabama, Georgia, Florida, and Mississippi, the USDA lab said.
Florida, which grows about 11,000 acres of soybeans, is the latest state to hear a rust diagnosis. Florida Agriculture Commissioner Charles Bronson said the fungus has been found in samples collected from an experimental test plot managed by the University of Florida/Institute of Food & Agricultural Sciences in Quincy, Florida.
Florida extension agents were prompted to check their soybean test plots after Louisiana State University reported that soybean rust had been found in its extension service test plots.
Pathologists strongly suspect that Hurricane Ivan, which hit the panhandle of Florida in mid-September, is responsible for the spread of the disease from South America.
Severe outbreaks in the last few years in South America have heightened concern for the spread of the disease to North American soybean growers.
The soybean rust pathogen, Phakopsora pachyrhizi, which is easily spread through windborne spores, is a fungus that causes small lesions on the foliage and pods of soybeans and several other legume hosts, including lima beans. Soybean rust can reduce yields by 50 percent or more.
Soybean rust also infects kudzu, the invasive nuisance weed that has spread throughout Florida and serves as a reservoir for the soybean rust pathogen. Forage legumes, such as yellow sweet clover, also serve as a refuge for soybean rust in the off season.
The impact of the fungus this season is expected to be minimal because most soybeans have been harvested.
The Florida Agriculture Department is working jointly with the University of Florida/IFAS and the USDA to immediately determine the extent of the disease, coordinate diagnostic activities, and conduct training of surveyors and growers for accurate detection of the disease.
Current management strategies include emphasis on early detection and timely fungicide applications. Varieties of soybean that are rust resistant may become available.
The USDA says a coordinated approach will be required by all soybean producing states to effectively manage this disease.
Farmers' Guide to Navigating the Legal Minefield of GMOs
PITTSBORO, North Carolina
, November 22, 2004 (ENS) - The commercial production of genetically modified organisms (GMOs) "has created a legal minefield for American farmers and requires that farmers be particularly sure footed," says the "Farmers' Guide to GMOs," just released by the Farmers' Legal Action Group (FLAG) and Rural Advancement Foundation International-USA (RAFI-USA).
Co-author and attorney David Moeller of FLAG says that whether farmers grow genetically modified crops, conventional crops, or are certified organic, the use of GMOs in commercial agriculture can affect operations and have costly legal ramifications.
"After almost a decade of commercial production, we have reached that point," Moeller said, "where every farmer has a stake and has to be fully aware of the legal ramifications. No farmer should buy seed for next season without having a grasp of the information contained in this Guide."
Co-author Michael Sligh of RAFI, said, "The problems GMOs are creating for farmers are getting increasingly complex. We at RAFI felt it was time to invest in a collaborative effort to inform all farmers of the risks and legal liabilities involved and help them protect their self interests."
Contamination of organic or conventional crops is an ever-present risk. "In a world of widespread production of GMO crops, what one farmer plants may seriously affect all of his neighbors' crops. Certain crops, such as corn and canola, cross pollinate, causing genetic material to migrate," Moeller said.
"Farmers may be unable to market contaminated non-GMO crops, and GMO growers may face liability for unintentional contamination of their neighbors' crops."
Development and marketing of genetically engineered crops is concentrated in a few biotechnology companies - Monsanto, DuPont, Syngenta and Aventis - which control most of the technology and the resulting seed and chemical markets.
Moeller said farmers assume significant obligations and legal liabilities when they sign GMO contracts. "Common obligations include how and where to plant, including creating 'refuges' of non pest-resistant varieties; giving up the right to save seed; opening up their fields and all records, including filings usually subject to the Privacy Act, to inspections; and agreeing to specified remedies if the farmer violates the agreement."
FLAG is a nonprofit law center dedicated to providing legal services to family farmers and their rural communities to help keep family farmers on the land.
In most cases, saving seed is prohibited for GMOs and there are stiff penalties for saving seed from a GM crop.
A recent U.S. Supreme Court case limited a statutory seed saving exemption, and a Canadian case ruled that a farmer could not save seed from a crop contaminated with GMO technology. "Farmers may not save seed containing patented genes resulting from accidental cross pollination from a neighboring GMO crop or any other source," Sligh said.
Farmers who sign a technology agreement have little recourse if the company asks to inspect their fields. Where there is no contract, farmers should seek legal counsel and require the company to show cause. In every case when samples are demanded, farmers should make sure an identical independent sample is taken and analyzed, Moeller said.
For conventional and organic farmers who want to keep their crops free of engineered genes, selection of uncontaminated seeds, planting at a distance from GMO crops, creating buffer areas, and meticulous cleaning of equipment and storage areas are all important.
Moeller counsels farmers to avoid making broad statements of non-GMO warranty and to emphasize efforts made to prevent contamination beginning, of course, with the statement that seed has been certified GMO free. Organic farmers risk losing their certification through contamination with transgenic characteristics.
Recent research on the costs and benefits of GMOs shows that pesticide use has increased on herbicide tolerant crops. Sligh says this is due primarily to farmers' reliance on a single herbicide - glyphosate, trademarked Roundup - that must be sprayed in increasing amounts to keep up with the shift in weed populations toward more difficult-to-control species and the development of herbicide resistance in certain weeds.
Read the Farmers' Guide to GMOs at: www.flaginc.com and www.rafiusa.org.
Northern Spotted Owl Still Threatened With Extinction
WASHINGTON, DC
, November 22, 2004 (ENS) - The Northern spotted owl continues to warrant the protection of the Endangered Species Act as a threatened species, the U.S. Fish and Wildlife Service has concluded after a formal five year status review of the species.
The review found that the risks faced by the species when it was first listed, such as habitat loss on federal lands, have been reduced due to the success of the Northwest Forest Plan and other management actions. Still, habitat loss continues, especially on private lands, and wildfires appear to be removing habitat at an increasing rate.
The species' overall population in Washington, Oregon and California continues to decline and new potential threats have emerged that need more research - fire, competition from barred owls, and West Nile virus.
The steepest declines were documented in British Columbia, Washington, and northern Oregon - about 50 percent of the geographic range of the northern spotted owl. This area supports about 25 percent of all known northern spotted owl activity centers, and contains more than 25 percent of all northern spotted owl habitat, most of which is federally managed.
The Service conducted its review following an April 2002 lawsuit filed by the Western Council of Industrial Workers, which represents about 10,000 workers in the forest products industry. The council claims that protection afforded to the owl limits its members' logging activities.
The Service chose an independent contractor, Sustainable Ecosystems Institute (SEI), to review, analyze and summarize all scientific and demographic information about the northern spotted owl that has become available since it was listed on June 26, 1990.
SEI convened a panel of experts who, with a staff of scientists and outside experts, reviewed thousands of pages of data and reports over a period of 10 months. SEI held four public meetings to gather more information and to air preliminary findings.
SEI's report, "Scientific Evaluation of the Status of the Northern Spotted Owl," provided the primary biological basis for the conclusions of the five year review. The report made no recommendation on the listing classification of the owl.
The Service then convened a panel of seven agency managers, assisted by species experts, who met to review the SEI report and other information in the context of federal policy and guidelines.
"We can celebrate the success we've had in reducing habitat loss on federal lands, but at the same time we must recognize that there are new risks out there that could present an even greater threat to the species," said Dave Allen, director of the Service's Pacific Region. "Our conclusion is that while the species is still threatened it does not need to be elevated to endangered status."
The five year review can be found on the U.S. Fish and Wildlife Service Pacific Region website at: http://pacific.fws.gov/ecoservices/endangered/recovery/5yearcomplete.html.
Ohio EPA Calls Public Hearing on Ashland Chemical Mess
ASHLAND, Ohio
, November 22, 2004 (ENS) - The Ashland Specialty Chemical Company at 1745 Cottage St. in Ashland, Ohio has contaminated the soil and groundwater around its 21.5 acre site to the extent that the Ohio Environmental Protection Agency (Ohio EPA) is recommending restrictions be imposed to prevent public exposure.
Deed restrictions would be placed on the property to limit future use of the site to commercial and industrial purposes. Restrictions also would prohibit the use of ground water and restrict excavating soil more than five feet below the surface without Ohio EPA approval, the agency said last week.
Contaminants at the site include toluene, ethylbenzene, xylenes, chloroform, chloride, dichloroethene and trichloroethene.
The state EPA has scheduled a public information session followed by a formal hearing at 7 pm on December 1, to answer questions and accept comments about a plan to clean up the Ashland Chemical site.
The western part of the property contains the main plant, which includes a building with manufacturing and warehousing operations, an aboveground raw materials tank farm, tanker loading area, an enclosed drum storage area, outdoor drum storage area and a 12,000 gallon aboveground fuel oil storage tank. The eastern area includes two buildings containing adhesive products and warehousing operations.
Previous cleanup activities at the site included upgrading the tank farm to a concrete structure to prevent chemicals from contacting the soil, closing leach beds and sealing floor drains inside the building.
Potential contaminant sources include two soil piles created during the tank farm upgrade and other soils near the tank farm and leach beds.
Ashland Specialty Chemical is a division of Ashland (NYSE: ASH), a transportation construction, chemical, and petroleum company. A Fortune 500 company, Ashland has sales and operations throughout the United States and in more than 120 countries around the world. Company operations include four wholly owned divisions: Ashland Paving And Construction, Ashland Distribution, Ashland Specialty Chemical and Valvoline.
A copy of the Ohio EPA's preferred cleanup plan has been made available for review at the Ashland Public Library, 224 Claremont St., Ashland. The preferred plan and related documents can be reviewed at Ohio EPA's Northwest District Office by calling 419-352-8461 for an appointment.
Ohio EPA will accept written comments on the plan through the close of business December 10. Anyone may submit written comments on the preferred plan by writing to Ghassan Tafla, Site Coordinator, Ohio EPA Northwest District Office, 347 North Dunbridge Road, Bowling Green, Ohio 43402. Comments also can be faxed to Tafla at 419-352-8468 or e-mailed to him at [email protected].
The public meeting will be held at Ashland City Council Chambers, 206 Claremont Ave. During the hearing, the public can submit comments for the record regarding Ohio EPA's recommended cleanup plan. These comments will then be considered before a final cleanup design is chosen.
Ohio EPA will consider all comments received, then will issue a final plan to "assure the site is cleaned and maintained in a manner that protects human health and the environment," the agency said.
Super Value Fined for New York Underground Storage Tanks
SPRING VALLEY, New York
, November 22, 2004 (ENS) - Super Value Incorporated, the Spring Valley, New York-based owner of numerous gas stations throughout New York, New Jersey and Pennsylvania, has agreed to pay a penalty of $132,500 for violations of federal underground storage tank regulations at 12 of its New York facilities.
The U.S. Environmental Protection Agency (EPA) announced the agreement on violations that occurred at Super Value owned stations in Monsey, Spring Valley, West Haverstraw, Chester, Yonkers, Middletown, Brewster, New City and Stony Point.
"In areas like central New York where many people particularly those in more rural communities get their drinking water from wells in their backyards, it's essential that tank owners follow EPA's regulations to the letter," said Jane Kenny, EPA regional administrator.
"Gas leaking from underground tanks can contaminate residential wells. These situations are preventable and inexcusable," Kenny said.
As a result of information provided by the company at the EPA's request, the agency determined that 12 gas stations were operating tanks in violation of the Resource Conservation and Recovery Act. The violations included failing to upgrade the tanks by the December 22, 1998 deadline and failing to check for leaks.
Underground storage tanks range in capacity from a few hundred to 50,000 or more gallons, and are used to store gasoline, diesel, heating oil and other fuels, waste oil and hazardous substances at gas stations, marinas, government facilities and large industrial sites.
Leaks from tanks often contaminate the soil around the tanks, and can cause unhealthy gasoline vapors to settle into the basements of private homes and apartment buildings.
Underground storage tanks have historically been the nation's number-one source of ground water contamination, with over 30,000 leaks and spills from tanks reported annually. EPA and state underground storage tank regulations were put in place to prevent releases of petroleum, and, if a release does occur, to ensure that it is addressed immediately.
In addition to the financial penalty, Super Value is required to submit proof to the EPA of compliance with all release detection and reporting regulations at all of its New York facilities. Super Value has also agreed to develop and implement a training course on compliance for all its employees responsible for the operation and maintenance of tanks. Failure to comply with these requirements may result in additional penalties.
U.S. EPA Helps Retrofit Chinese Buses with Clean Diesel
WASHINGTON, DC
, November 22, 2004 (ENS) - The U.S. Environmental Protection Agency (EPA), China's State Environmental Protection Administration, the Beijing Environmental Protection Bureau and other organizations have begun a project to retrofit a fleet of buses and trucks in China with clean diesel technology.
Environmental impacts of diesel exhaust emissions include contributions to ozone formation and acid rain. In Beijing alone, close to 1,000 vehicles are being added to the roads each day.
The project parallels the 34 diesel retrofit projects that have been completed with EPA assistance or are currently underway across the United States.
The EPA is committing $250,000, plus staff work hours, to this demonstration project and other collaborative efforts to reduce emissions of particle pollution and other diesel emissions in China.
"We will share cleaner emissions control technologies and fuels with China as part of EPA's commitment to a cleaner global environment," said EPA Administrator Mike Leavitt. "It helps them and it helps us."
Fine particulate matter and other emissions from older diesel powered trucks and buses contribute to air pollution in Beijing and throughout China and pose serious public health concerns.
Because of the increasing number of vehicles on China's roads, emissions and air pollution are also increasing. By using cleaner fuel and new technologies, which can be installed rapidly and inexpensively on existing vehicles, this retrofit demonstration project is expected to reduce particulate emissions and other air pollutants in an existing diesel vehicle fleet by 40 percent.
As a member of the global Partnership for Clean Fuels and Vehicles, the United States is assisting developing countries to cut emissions from diesel trucks and buses. The EPA established a retrofit partnership for 20 buses in Mexico in June, and the agency is working on similar projects with Chile, India and Thailand.
| http://www.ens-newswire.com/ens/nov2004/2004-11-22-09.html | 13
16 | With inferential statistics, you are trying to reach conclusions that extend beyond the immediate data alone. For instance, we use inferential statistics to try to infer from the sample data what the population might think. Or, we use inferential statistics to make judgments of the probability that an observed difference between groups is a dependable one or one that might have happened by chance in this study. Thus, we use inferential statistics to make inferences from our data to more general conditions; we use descriptive statistics simply to describe what's going on in our data.
Here, I concentrate on inferential statistics that are useful in experimental and quasi-experimental research design or in program outcome evaluation. Perhaps one of the simplest inferential tests is used when you want to compare the average performance of two groups on a single measure to see if there is a difference. You might want to know whether eighth-grade boys and girls differ in math test scores or whether a program group differs on the outcome measure from a control group. Whenever you wish to compare the average performance between two groups, you should consider the t-test for differences between groups.
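For readers who want to try this, here is a minimal sketch of the two-group comparison in Python using SciPy's independent-samples t-test; the scores below are simulated purely for illustration.

```python
# Minimal sketch of a two-group t-test; all data are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
boys = rng.normal(loc=72, scale=10, size=40)   # hypothetical math scores
girls = rng.normal(loc=75, scale=10, size=40)  # hypothetical math scores

t_stat, p_value = stats.ttest_ind(boys, girls)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# A small p-value suggests the observed mean difference is unlikely
# to have happened by chance alone.
```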
Most of the major inferential statistics come from a general family of statistical models known as the General Linear Model. This includes the t-test, Analysis of Variance (ANOVA), Analysis of Covariance (ANCOVA), regression analysis, and many of the multivariate methods like factor analysis, multidimensional scaling, cluster analysis, discriminant function analysis, and so on. Given the importance of the General Linear Model, it's a good idea for any serious social researcher to become familiar with its workings. The discussion of the General Linear Model here is very elementary and only considers the simplest straight-line model. However, it will get you familiar with the idea of the linear model and help prepare you for the more complex analyses described below.
One of the keys to understanding how groups are compared is embodied in the notion of the "dummy" variable. The name doesn't suggest that we are using variables that aren't very smart or, even worse, that the analyst who uses them is a "dummy"! Perhaps these variables would be better described as "proxy" variables. Essentially a dummy variable is one that uses discrete numbers, usually 0 and 1, to represent different groups in your study. Dummy variables are a simple idea that enables some pretty complicated things to happen. For instance, by including a simple dummy variable in a model, I can model two separate lines (one for each treatment group) with a single equation. To see how this works, check out the discussion on dummy variables.
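The single-equation idea can be sketched in a few lines of code. In the model y = b0 + b1*x + b2*z, the dummy z is 0 for the control group and 1 for the program group, so the fitted b2 is the vertical distance between the two parallel lines. Everything below (the coefficients, the simulated data) is invented for illustration.

```python
# Sketch of a dummy-variable regression; data are simulated.
import numpy as np

rng = np.random.default_rng(1)
n = 50
x = rng.uniform(0, 10, 2 * n)                  # e.g., pretest scores
z = np.repeat([0.0, 1.0], n)                   # dummy: 0 = control, 1 = program
y = 2.0 + 1.5 * x + 4.0 * z + rng.normal(0, 1, 2 * n)  # true group effect = 4

X = np.column_stack([np.ones_like(x), x, z])   # design matrix [1, x, z]
b, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"intercept = {b[0]:.2f}, slope = {b[1]:.2f}, group effect = {b[2]:.2f}")
# One equation, two lines: setting z to 0 or 1 picks out each group's line.
```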
One of the most important analyses in program outcome evaluations involves comparing the program and non-program group on the outcome variable or variables. How we do this depends on the research design we use. Research designs are divided into two major types: experimental and quasi-experimental. Because the analyses differ for each, they are presented separately.
Experimental Analysis. The simple two-group posttest-only randomized experiment is usually analyzed with the simple t-test or one-way ANOVA. The factorial experimental designs are usually analyzed with the Analysis of Variance (ANOVA) Model. Randomized Block Designs use a special form of ANOVA blocking model that uses dummy-coded variables to represent the blocks. The Analysis of Covariance Experimental Design uses, not surprisingly, the Analysis of Covariance statistical model.
Quasi-Experimental Analysis. The quasi-experimental designs differ from the experimental ones in that they don't use random assignment to assign units (e.g., people) to program groups. The lack of random assignment in these designs tends to complicate their analysis considerably. For example, to analyze the Nonequivalent Groups Design (NEGD) we have to adjust the pretest scores for measurement error in what is often called a Reliability-Corrected Analysis of Covariance model. In the Regression-Discontinuity Design, we need to be especially concerned about curvilinearity and model misspecification. Consequently, we tend to use a conservative analysis approach that is based on polynomial regression that starts by overfitting the likely true function and then reducing the model based on the results. The Regression Point Displacement Design has only a single treated unit. Nevertheless, the analysis of the RPD design is based directly on the traditional ANCOVA model.
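The "overfit, then reduce" strategy mentioned for the Regression-Discontinuity Design can be illustrated with ordinary polynomial regression. The sketch below is not a full RD analysis; the cutoff, data, and true effect are all invented, and the point is only that the treatment-effect estimate should stay stable as superfluous higher-order terms are dropped.

```python
# Rough sketch of "overfit the likely true function, then reduce."
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(-5, 5, 200)                    # assignment variable
z = (x >= 0).astype(float)                     # treated at or above the cutoff
y = 1.0 + 0.8 * x + 3.0 * z + rng.normal(0, 1, x.size)  # true effect = 3

for order in (4, 3, 2, 1):                     # start overfitted, then simplify
    X = np.column_stack([x**k for k in range(order + 1)] + [z])
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    print(f"polynomial order {order}: effect estimate = {b[-1]:.2f}")
# A stable estimate across orders suggests the simpler model is adequate.
```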
When you've investigated these various analytic models, you'll see that they all come from the same family -- the General Linear Model. An understanding of that model will go a long way to introducing you to the intricacies of data analysis in applied and social research contexts.
Copyright ©2006, William M.K. Trochim, All Rights Reserved
Last Revised: 10/20/2006 | http://www.socialresearchmethods.net/kb/statinf.htm | 13 |
36 | In 2012, a United States household of four (two children) was considered poor if its income was below approximately $23,000 (this does not include income from anti-poverty programs) (http://www.census.gov/hhes/www/poverty/data/threshld/index.html).
Section 5: The Gold Standard
Macroeconomics - Unit 7
Characteristics of a Gold Standard System
A gold standard is a system in which a certain fixed amount of a country's currency is legally exchangeable for gold. Because the ratio of gold to the money supply is fixed, the quantity of money can grow only as fast as the supply of gold. Because of the difficulty of mining and acquiring gold, gold supply growth is typically limited to 1 or 2% per year. If the government adheres to a pure gold standard, the money supply will grow by only 1 or 2% as well.
Properly implementing a pure gold standard provides a better guarantee that inflation remains low or non-existent for many years to come. It is, therefore, a step in the right direction, compared to the system we currently have.
According to Andrew Bernstein (The Capitalist Manifesto, Bernstein A., 2005, p. 374):
"An international gold standard is mankind's primary protection against arbitrary expansion of the money supply by the politicians. Because gold is relatively rare in nature, and its mining generally involves laborious and expensive work, the money supply grows only gradually. The technological progress of free men leads to an increase in the supply of goods that generally exceeds the increase in the supply of gold."
George Reisman in Capitalism notes that "the result would be that prices would show a tendency to fall from year to year ...this is actually what happened in the nineteenth century, in the generation preceding the discovery of the California gold fields, and again, in the generation from 1873 to 1896, that is, during the Inventive Period." (Capitalism, Reisman, p. 107)
A gold standard has its disadvantages. We don't always have total control over the supply of gold in the world. Occasionally, the supply of gold varies by more than 1 or 2%. During these years, we may experience instability. Sometimes, countries manipulate the supply of gold to try to create unnatural swings in the price of gold.
In an ideal world, we do not need a gold standard. If a central bank (in the United States, the Federal Reserve) is disciplined, on its own, to keep the money supply constant, we will accomplish the same goals as being on a gold standard, without the disadvantages of the occasional instability that accompanies a gold standard. The only fluctuations in the value of the money we will experience will be due to free market fluctuations in money demand and supply. However, they are generally short-lived in nature. In the long run, a constant money system will lead to steadily falling prices that are beneficial to the long term economic health of the country.
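The falling-price claim can be made concrete with a back-of-the-envelope calculation. The sketch below assumes the quantity-theory identity MV = PQ, constant velocity, and 3% annual output growth; the identity and all numbers are illustrative assumptions, not figures from the text.

```python
# Back-of-the-envelope: constant money plus growing output implies
# a falling price level, assuming the quantity identity M*V = P*Q.
M = 1.0   # money supply, held constant (assumed)
V = 1.0   # velocity of money, assumed constant
for year in range(0, 31, 10):
    Q = 1.03 ** year              # real output, growing 3% per year (assumed)
    P = M * V / Q                 # price level implied by M*V = P*Q
    print(f"year {year:2d}: price level = {P:.2f}")
# The price level drifts down to about 0.41 after 30 years: each unit
# of money buys more as production grows against a fixed money stock.
```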
Benefits of a Constant Money Supply System
A constant (invariable) money supply system is one in which the Federal Reserve or the central bank of a country holds the money supply constant at all times. This is desirable because it eliminates the disadvantages associated with inflation. A constant money supply does not discourage spending or production. We do not need to increase our money supply in order to encourage production. Greater production takes place because people have a natural human tendency to work and produce in order to satisfy basic human needs (food and shelter) and to progress and better themselves. Increased production leads to increased purchasing power. Increased purchasing power leads to increased wealth, a more comfortable lifestyle, more leisure time, and a higher overall standard of living.
In the above section, we make the assumptions that hard work and innovation are rewarded and lead to economic growth, even if the money supply stays constant. You may note that in some countries there is very little economic activity and no economic growth, and that this would not change even in a constant money supply system. Keep in mind that a constant money supply is a condition for improved economic health. However, other conditions must be met, as well. These conditions are listed and described in Section 3 of Unit 1 (protection of private property, elimination of corruption, etc.).
See: Reisman, G. (1996). Capitalism: A Treatise on Economics. Ottawa, IL: Jameson Books.
Last Updated on Friday, 28 December 2012 09:15 | http://www.inflateyourmind.com/index.php?option=com_content&view=article&id=54:section-3-united-states-federal-government-expenditures&catid=7:unit-7&Itemid=81 | 13
18 | Instructor/speaker: Prof. Walter Lewin
So last lecture was arguably the most important of all my lectures.
We saw how a changing magnetic field can produce a current, an induced electric field, an induced EMF.
And Faraday expressed that in his famous law, his famous equation which we see there on the blackboard.
You select a closed loop in your circuit.
Any loop is OK.
You attach an open surface to that closed loop.
Any open surface is OK.
And you then get an EMF in the loop, and that's the time derivative of the magnetic flux through that surface.
And the minus sign indicates that the induced current itself produces a magnetic flux that opposes the flux change, and that we refer to as Lenz's Law.
Today, I will expand on this a lot further.
So let's start with a conducting loop and a magnetic field.
This is a conducting loop.
Let the dimensions be Y, X and let- I have a uniform magnetic field.
Magnetic field B is like so.
And I choose as the perpendicular vector to my surface, this is the surface that I attach to that closed loop, I choose it pointing up.
And so the angle between dA and B, say theta, but B is uniform.
So the flux, phi B, is defined as the integral of B dot dA, over this open surface.
Flux is a scalar.
It's plus or it's minus or it's 0.
Flux has no direction.
So the flux in this case would be XY, which is the area of this loop since the magnetic field is uniform.
That's a very easy integral and then I get the magnetic field B, and then I get the cosine of the angle.
So now according to Faraday, it is the time derivative of this quantity that determines the EMF.
And, you can do that in several ways.
You can have dB/dT, the change in the magnetic field.
This is the area A of the loop.
You can change the area.
You can have a dA/dt.
But you can also change theta.
You can have a d theta/dt.
And I will look at those today.
This number here, the way I have chosen my dA, is a positive number.
If somehow this number increases in positive value, the induced current that is going to run will try to create a magnetic field to oppose the change.
So in that case if the flux, which is now positive, is getting larger positive, then the current that's going to run will be in this direction.
That's Lenz for you.
So it creates by itself, this current will create a magnetic field in this direction.
And if the magnetic flux, which is now positive the way I've defined it, were decreasing, then the current would go the other way around.
Last time, I did several demonstrations whereby we changed B.
We had dB/dT's.
And there was one particular demonstration that blew your mind and that you will tell your grandchildren about and that you will always remember, I hope.
Today, I'm going to change theta and I'm going to change the area, which will also give me then induced EMF's and therefore induced currents into a closed conducting loop.
So let me make another drawing of the closed conducting loop.
This has length Y and width X, and I'm going to rotate this.
My idea is that you can see this three-dimensionally.
I'm going to rotate this about this axis with angular frequency omega.
Omega is 2 pi divided by the period.
The period is the time of one rotation.
Normally we choose for that capital T.
I don't want to do that today because T can confuse you with Tesla.
And so I'm going to rotate this around so the angle theta that you have there, theta then becomes theta 0 plus omega T, going back to 8.01.
And I choose this theta 0 such that at T 0, I choose my theta to be 0, and so I have nothing to do with theta 0.
So what now is the magnetic flux?
This is my loop.
I have to commit myself to a surface.
Well, I will just choose this flat surface, just like I did there.
I chose that flat surface.
I'm free to choose any surface, why not taking the flat one.
And so the flux through that flat surface is then the area, which is X times Y, that's the area of this loop.
And then I have the magnetic field.
And then I have cosine omega T.
Maxwell tells me it's not the flux that matters.
It is the change in the flux that matters.
OK, so d phi/dt.
I've got the A, the area, I've got the magnetic field.
An omega pops out, and I get a sine of omega T and I get a minus sign.
Normally I don't care about minus signs, because I'm only interested in the magnitude of the induced EMF.
I always know in which direction the current will flow, I really do, because I know Lenz's law.
So you should never have too many hang-ups on those minus signs, but since I'm getting a minus sign out of this now here, it would be a little foolish not to put a minus here and make this into a plus because that, then, according to Faraday is immediately the EMF and that EMF is changing with time because you have this sine omega T in here.
And so the current that is going to flow, the induced current, which will also be time-dependent, is the EMF divided by the resistance in the loop, and this is the total resistance of that entire network.
There could be light bulbs in there, there could be resistances in there.
It's the total resistance.
And this current, when I rotate this loop, is going to alternate in a sinusoidal fashion.
And we call that alternating current, AC.
That's what's coming out of the wall, AC.
Suppose this loop was double, and what I mean by double is the following, that it works like this.
Follow my picture closely.
I will go slowly.
It's like this, like this, like this, so, back, and I close it here, so it's one closed loop, but I have two windings.
I have to attach a surface to this closed loop.
Faraday insists I attach an open surface to this closed loop.
What would it look like?
Well, I advise you to take that, dip it in soap, and look at it, and what you will see then, because the soap will attach everywhere to the closed loop, you're going to see one surface.
It's not two separate surfaces.
You don't have two separate loops.
It's one surface but sort of two layers.
One is lower and the other one comes on top.
And so, the magnetic flux will double now, because you're going to see that this magnetic field penetrates both this soap film and the one that is below, and so you get twice the EMF and if you have N windings in one closed loop, capital N, then the EMF that you get would be N times larger and you can make N 1000.
There is no problem with that.
I'm going to do a demonstration for you whereby I'm going to use the earth's magnetic field and a loop that you see here that has 42 windings.
So my capital N is 42.
Not just two like here, but 42.
And it is circular.
It has a radius.
I think it's about thirty centimeters.
Here you have it.
It's about thirty centimeters.
So the area, pi r squared, which is my capital A, is about 0.28 square meters.
You may want to check that.
I use the Earth's magnetic field, which is about half a Gauss, so that's about 5 times 10 to the -5 Tesla, if we work in SI units.
And I'm going to rotate it around with a period, period of about 1 second.
That means omega, 2 pi divided by the period, is then about 6 radians per second.
2 pi -- I call that 6 for now.
And so what is the EMF that I'm going to get when I rotate it once around per second?
Well, the EMF will change as a function of time.
We're going to get 42, that's N.
We're going to get A, that is 0.28.
We're going to get B, that is 5 times 10 to the -5, and then we're going to get omega, that is 6, and then we get this sine of 6 T.
You see the equation there.
The only difference is we have a capital N out here because we have N windings in the closed loop.
And this number here in front of the sine 6 T, you should check that, is about 3.5 millivolts.
3.5 times 10 to the -3 times the sine of 6 T, and that now is in volts.
So you get an alternating EMF, positive, negative, and the maximum value that you would get is 3.5 millivolts.
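As a quick numeric check of these figures (the lecture invites you to verify them), here is a short Python sketch of peak EMF = N A B omega; the radius, field, and rotation rate are the round numbers quoted above.

```python
# Check of the rotating-coil numbers: peak EMF = N * A * B * omega.
import math

N = 42                      # windings
r = 0.30                    # coil radius in meters, about thirty centimeters
A = math.pi * r**2          # area, about 0.28 square meters
B = 5e-5                    # Earth's field, about half a gauss, in tesla
omega = 2 * math.pi / 1.0   # one rotation per second

peak_emf = N * A * B * omega
print(f"A = {A:.3f} m^2, peak EMF = {peak_emf * 1e3:.1f} mV")
# Gives about 3.7 mV; rounding omega down to 6 rad/s, as in the lecture,
# gives the quoted 3.5 mV.
```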
If I look at the EMF as a function of time, it would be something like this.
And from here to here, would then be 1 second if I really rotated around in 1 second.
And so the current, the induced EMF, according to Ohm's Law, is always the induced current times the resistance of the whole loop, so the induced current will also have this shape, of course.
And how high that is depends on how large R is.
The EMF is independent of capital R.
The EMF follows exclusively from those numbers.
It's the current that depends on what the resistance is.
Suppose now I rotate twice as fast.
I double omega.
Two things are changing now.
For one thing, that the full period now goes from here to here, only in half a second.
But there's something else that changes.
The EMF now doubles, because look at my equation.
It's hiding behind the blackboard, I think.
There is an omega in there.
It's linearly proportional to omega, because it's d phi/dt that matters.
See, the omega pops out, and so you now get double the EMF, so the 3.5 millivolts maximum would become 7, and so if I try to make a drawing of that twice as high here, twice as low here, then you would get something like this, and so this omega is now twice this one.
You get double the maximum value of the EMF.
I'm going to show that here.
I'm going to improve on my lights.
You see there a current meter which is sign sensitive, can go to the right, can go to the left.
And I'm going to rotate this loop.
When you rotate a loop in a magnetic field, you can even rotate it in such a way that you get no EMF.
I can show that to you easily.
If this is the loop, and if somehow the magnetic field came in like this, if you rotated this loop now around this axis, there would never be an EMF, because the dA and B would always be perpendicular to each other, so there's never any flux going through this system.
No flux change.
But of course, if you rotate it around this direction, it would be fine.
So think about that.
Don't fall in that trap.
You can rotate in such a way that there is no flux change.
We don't have that problem at all because the magnetic field here on earth, in Boston, doesn't come straight from heaven down, but it comes rather steep, so there's never any problem here.
I don't have to worry about that.
So here is that loop, 42 windings.
The scale there is in microamperes, so if you want to you can calculate what the resistance of the loop is when I rotate, but that's really not my objective.
I want you to see that when I rotate it, that you get an alternating current.
Very modest, because I rotate very slowly.
Now I rotate faster, and it is proportional to omega, and so if I rotate faster you get a much larger maximum induced current.
A larger EMF, a larger current.
I don't know how fast I can go.
This is about as fast as I can go.
Gets almost up to 4 microamperes maximum, and so we are producing here AC, alternating current.
We have slipping contact here so that the system doesn't break, and we could put a light bulb here somewhere in this line and then the light bulb may glow.
In United States, what comes out of the wall is 60 Hertz.
So that means that the current through a light bulb becomes zero 120 times per second.
You go through zero 120 times per second if you have 60 Hertz.
Does it mean that 120 times per second there is no light from the light bulb?
No, it doesn't mean that because filaments get hot and so they still glow even when the current is 0.
But they don't cool that fast.
If you take a fluorescent bulb, then indeed, fluorescent tube goes completely off and on, 120 times per second, and therefore you can use them very nicely as stroboscopes, but of course the frequency is fixed.
You can't change the frequency.
It's 120 Hertz.
So now you're getting the idea of an electric generator, or what we call, if you want to, a dynamo, which produces AC.
You have a turbine, and the turbine rotates conducting loops in magnetic fields, and that according to Faraday will then produce your EMF.
And that runs our economy.
You have a permanent magnet and you rotate conducting loops, windings, in that magnetic field.
The higher your magnetic field, the higher the EMF.
The faster you rotate, the higher the EMF.
The more windings you have, the higher the EMF.
And the larger the area of your loops, the higher the EMF.
As you can see on the equation that I keep hiding, but that's where it is.
In the United States we have 60 Hertz as I mentioned, and we are committed to a maximum voltage coming out, that is the maximum value that you get from your alternating voltage, of 110 times the square root of 2 volts, and we call that 110 volts.
In Europe, we have 50 hertz and the maximum voltage there in the oscillation is 220 times the square root of 2.
You cannot change omega and go faster somewhere where you generate this electricity, because that would have major consequences.
Number one, the EMF that comes out of the wall would go up, so you might blow your television, your circuits.
But besides that, you would change also the frequency of the alternating current, and there are many systems that run in such a way that they're locked into that frequency, for instance, many electric clocks and certainly record players, if you still have one, are locked into the 60 Hertz and so if you were to increase omega your record player would go around faster and your clocks would go faster.
A long time ago, when I came over from Europe, I brought my record player with me.
The record player requires 220 volts, so I bought a transformer here to that, 110 volts at my home would become 220.
That was fine.
And so the record player was happy.
It was running.
But it ran twenty percent too fast because I had overlooked that there are 60 Hertz here and 50 Hertz in Europe.
It was going a little bit too fast, and you know what that means when it goes too fast -- it starts to sound very crazy so you can't even hear the music, and that's exactly what happened with my record player.
So if we look at a power station, as we discussed earlier in this course, and let us suppose, to get some numbers, that the maximum EMF that the power station produces, let's say, is 300 kilovolts which it puts on the line.
And let's say we have loops that have an area of about 1 square meter, and that they use magnetic fields which are let's say half a Tesla.
It's by no means unreasonable numbers.
And if now you want 60 Hertz frequency, so your frequency F, 60 Hertz, so your omega is about 6 times higher, 2 pi higher.
It's about 360 radians per second.
If now you have about 1700 windings, and you can check that at home, there you get your 300 kilovolts.
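You can indeed check that at home; here is the arithmetic in a few lines of Python, using the round numbers just given.

```python
# Check of the power-station figures: peak EMF = N * A * B * omega,
# so the number of windings is N = EMF / (A * B * omega).
emf_peak = 300e3   # 300 kilovolts on the line
A = 1.0            # loop area in square meters
B = 0.5            # magnetic field in tesla
omega = 360.0      # rad/s, roughly 2 * pi * 60 Hz

N = emf_peak / (A * B * omega)
print(f"N = {N:.0f} windings")   # about 1700, as stated in the lecture
```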
Power is induced EMF times current, and with Ohm's Law you can replace E by IR, and so you get I squared R.
This is joules per second, and so someone has to do work.
Someone has to put in the energy, for which you need perhaps fossil fuel, have to burn oil or coal to keep the turbines going, or nuclear energy, or waterfalls, or winds.
But something gotta keep those windings going, to keep our economy going.
A typical power station in this country produces about 1000 megawatts. It is about 1000 times a million joules per second.
It is about 1000 times a million joules per second.
I have here a generator which is run by manpower, and for this I need a strong man.
Who wants to volunteer?
You look very strong, there.
Ah, you don't want to look at me now.
Every morning we talk a little bit, but now you didn't see me.
This is a power generator, magnetic field.
You see the magnet here.
And there are current loops, windings, and when you crank this you turn those windings into these magnetic fields.
There's a light bulb here, 20 watts, and this gentleman is go- what is your name?
That's almost my last name.
Can you start turning and see whether you can produce 20 watts?
Put your foot on the -- yeah, yeah, keep going.
Ah, man, a little better! Keep going! That's not 20 watts yet! Are you sure you had a good breakfast this morning?
He's producing 20- roughly 20 joules per second now.
Will you stop a minute?
We have 6 light bulbs here.
Naveen, be my guest.
Man, where is Superman?
I see nothing! 120 joules per second, he doesn't even come close! Keep going, man, keep going.
You want me to stop the whole [inaudible], keep going.
Forget it! Forget it.
You tried, and that's all that matters.
But you see how difficult it is to produce 120 joules per second.
Now, think about it, when you run your 100 watt light bulb at home, and you do that for 10 hours, that is 1 kilowatt hour.
That costs you only 10 cents.
Would you run that for 10 hours for 10 cents?
You can't even do it, man! I'll show you something.
I do a lot of mountaineering, and in the mountains you want a light that always works.
When you need it the most, your batteries are flat, so you always have with you a dynamo.
This is my dynamo, hand-powered.
You see that?
That is Superman for you! This is a 120-watt light bulb! And I can keep it going all the time.
I can do better for you.
I have a radio here.
And this radio has a little generator.
Magnetic field, constant magnet, permanent magnet, and windings which you turn around, and when I do that I do work, and I generate an EMF.
I charge batteries.
And then I can play this radio.
[radio voice] I don't know about that.
And it's designed in such a way that if you turn just for a minute that you have several hours that you can play the radio.
It's quite amazing.
Now, we're going to change the area.
So far we've changed theta.
Now we're going to change the area.
I have again a conducting loop here.
But now I have a crossbar here which I can move.
I can move it with a velocity V in this direction, or I can move it to the left.
Let this be L and let the length be X.
My surface that I'm going to choose, I always have to commit to an open surface, is a flat surface.
And I'll make life very simple for all of us, let's assume that the magnetic field going straight up.
Let my dA, it's perpendicular to the surface, B straight up, B and dA are in the same direction now.
Makes my life simple.
And so what is the flux now, going through my surface?
Well, that's the area, which is L X, times the magnetic field, which I will assume is uniform throughout this surface.
So as simple as you can have it.
Faraday says, "I don't care what the magnetic flux is! I want to know how that magnetic flux is changing." All right, OK, Mister Faraday.
d phi/dt equals L times B times dx/dt.
But dx/dt is my velocity, and so I get here L times B times the speed.
dx/dt is the velocity.
And this now is the magnitude of the EMF.
Notice I don't care about minus signs.
I just want to know how large the EMF is in terms of magnitude.
I always know the direction, because I know if I move this to the right that the flux is positive, the way I have chosen my dA, and as I move it to the right that flux is increasing and so I know that the current is going to run like this, which then creates a magnetic field that opposes the change.
And if I go in the other direction with the velocity, then of course the current will reverse.
Phi L X B, I can live with that.
d phi/dt, I can put a B here, if you like that, to remind you that we're dealing with magnetic fluxes, L B V.
If I look here at this rod, try to make you see three dimensionally this rod is coming straight out of the blackboard.
Then the current is now coming to you.
The magnetic field is pointing straight up, and so remember that the Lorentz force is always in the direction of I cross B, is in this direction.
That means the Lorentz force, FL, which in this case is the current times the length of this bar times B, is the force that I have to overcome if I pull it to the right, because that force is to the left, so the force of Walter Lewin is the same in magnitude but in this direction.
I have to overcome the force, the Lorentz force, in this direction.
And so it's clear that I have to do work.
I have a force in this direction and I move it in this direction, and so I do positive work.
What happens with that work, well, that comes out in the form of heat in the resistance of this conductor.
I'm creating an EMF.
A current is going to flow, and the power is the EMF times the current, I squared R.
It comes out in the form of heat.
If I change the direction when I push in, velocity is now in this direction, then clearly the current is going to change direction.
And so when I push in, the Lorentz force will also flip over and so the force for me will flip over, so again I have to do positive work.
There's no such thing as a free lunch, no matter what I do.
Whether I pull this way or push in, I always have to do positive work and that work is always converted then to heat, in the resistance of that loop.
So the work that I do, let me express it in terms of- of power.
The power that I generate is my force, dot product with my velocity and remember from 8.01, the work that I do is force over a little element dx.
But power is work per unit time, so the dx/dt becomes velocity.
And my force and my velocity are always in the same direction when I push they're in this direction, and when I pull they're in this direction.
I always do positive work.
And so the power that I generate is my force.
That's the magnitude of my force, which is I L B times the velocity.
But that must also be the EMF times the current, and notice now that the EMF therefore is L times B times V.
And so now I have shown you that the EMF is exactly what I found before in terms of magnitude but now I have not used Faraday's Law.
This is purely a derivation based off the work that I do, and the work per unit time.
So it's interesting that you can also think of it that way.
Let me check my equations.
E I, I R squared, I can live with that.
Power, force dotted with the velocity.
I L B V, this is the magnitude of the EMF, and that's fine.
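To make the energy bookkeeping explicit, here is a small sketch with invented numbers: the mechanical power F times v supplied to the rod equals the electrical power EMF times I dissipated in the loop.

```python
# Energy balance for the sliding rod: EMF = B * L * v, and the
# mechanical power F * v equals the electrical power EMF * I = I^2 * R.
B = 0.5    # magnetic field, tesla (invented)
L = 0.2    # rod length, meters (invented)
v = 3.0    # speed, meters per second (invented)
R = 0.1    # total loop resistance, ohms (invented)

emf = B * L * v        # magnitude of the induced EMF
I = emf / R            # induced current, by Ohm's law
F = I * L * B          # Lorentz force that must be overcome
print(f"EMF = {emf:.2f} V, I = {I:.1f} A")
print(f"mechanical power = {F * v:.2f} W, electrical power = {emf * I:.2f} W")
# Both come out the same: there is no free lunch, as the lecture says.
```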
If I have a conducting disk, solid disk, and I move that, I try to move it through a magnetic field, north pole, south pole.
This is the magnetic field.
It's a little weaker here, little weaker there.
I move this in.
Then there comes a time when this disk is here that magnetic field lines go through this portion.
That means the magnetic flux through this surface is changing.
Lenz doesn't like that.
Faraday doesn't like that.
And so what's going to happen, the current is going to go around now like this.
It's not so easy to precisely determine how that current exactly flows.
But this current will be, seen from above, clockwise, so that it produces a magnetic field in this direction to oppose the change in magnetic flux.
And we call these currents eddy currents.
The eddy current produces heat in here.
The heat, in joules per second, is the power E times I.
I squared R always comes down to the same, so this disk will heat up a little bit.
The resistance now is the resistance there.
And that means that the disk will slow down.
At the expense of kinetic energy, heat is produced, and it won't go as fast through this field as it would if there were no field.
And we call that magnetic braking.
And you can easily convince yourself, which you should do at home, that if you look at the current right here coming out of the blackboard and you calculate the Lorentz force right there, you will see that the Lorentz force is in this direction.
It's pushing it out.
It opposes the motion.
And I can demonstrate that to you.
I have here a pendulum.
The pendulum is a conducting copper plate like so, which I'm going to swing between magnetic poles which are here.
Going to swing it in this direction.
In fact, I have two pendulums, one whereby this is solid copper, and I have another one whereby it is slotted, like teeth.
If I'm going to oscillate this one in a magnetic field, you're going to get current there, eddy currents, sometimes clockwise sometimes counterclockwise depending upon how the magnetic flux through that surface is changing.
Whether it moves into the magnetic field or whether it moves out of the magnetic field, it will always oppose its motion.
And so it will damp, you will see that.
And it's at the expense of kinetic energy, heat will be produced in this copper.
If you do it with something like this, the damping will be substantially less.
Not 0, but substantially less, because now if there is an EMF that wants to drive a current, this current has to go through this opening which is air, which has a huge resistance, and remember, power is E times I.
And it's I squared R, and if the current is extremely low because the resistance is so absurdly high, then you don't dissipate much power, and so there's not much damping, and I can show that to you.
By the way, this damping, this magnetic damping, is used sometimes for scales that you weigh yourself on, so that they don't oscillate for too long; it damps very quickly.
So you're going to see the oscillations there, and it's going to be a little dark but that's the best way that I can make you see it.
Turn on the power.
So you see there, the loop -- I'll give you a little light.
And first I will oscillate it without any magnetic field.
I can power this magnet because it has solenoids.
So we'll just oscillate it, no magnetic fields.
Give you a feeling how it oscillates.
What you see on the left is the reflection, by the way, against the magnetic poles.
So this gives you an idea of how it oscillates.
And now I will turn on the magnetic field, now.
Just likes going into mud.
I'll do it again.
Oh, hitting the magnetic poles.
We don't want that.
Now, amazing, isn't it?
And it doesn't matter whether it goes in or whether it goes out.
And now I will use the one with the teeth, and you will see there is damping, but it's substantially less, so this is without magnetic fields.
And now with, now.
You can see there's damping, but it's nowhere nearly as much as there was on the one that didn't have teeth.
I have here a remarkable example of how our economy is run.
I have there some windings, not just some.
We don't even know how many; thousands of turns of copper wire going around, going around, going around, going around.
It's one wire, and then there is a light bulb in that loop.
And here is a magnet.
We don't know the strength, but I would say it's not more than a kilogauss, probably a little less.
And when I move this between these poles, magnetic field let's say is going in this direction.
I don't know whether it's in this or that.
I don't know the color code.
But there is a magnetic field going through here, so there's a change in the magnetic flux through this surface.
Very crazy surface.
If there are 1000 wires, this surface goes 1000 times like this, remember?
And then there is going to be an induced EMF, and there's going to be an induced current and this light will glow a little.
If I go in very slowly, you'll just see teeny weeny little light.
If I go very fast, then the magnetic flux change is high, high EMF, lot of light.
So I'll make it dark, darker, so that you can see that.
Oh, we don't want this.
In fact, we don't need that display at all.
So, if you can see me, I have it now and I'm going to bring it in the magnetic poles and I go very slowly.
I do it now.
I pull out, a little bit of light, I go in, a little bit of light.
I'm right in now, holding it steady, nothing happens.
Because there's no flux change.
Magnetic field is very strong now through these loops.
Faraday doesn't care about how strong it is.
He only cares about the change.
I pull it out, a little bit of light.
Put it in, a little bit of light.
Whether I pull in or whether I pull out doesn't matter.
If I do it very fast, I may be able to generate so much current that the bulb may even blow.
I'll try that, because I know you like the idea of breaking things.
We all do.
You're not alone.
Let's see whether I managed.
Yes, I did.
It's broken now.
So you got something for your money, didn't you?
That runs our economy.
Windings, conducting windings that are being moved forcefully through magnetic fields.
Faraday was once interviewed by reporters when he came up with this law, and they said to him, "So what?
So fine, so you moved a winding through a magnetic field and so you get a little bit of electricity?
So what?" And his answer was, "Some day you will tax it."
And he was right.
He had vision.
The reporters didn't.
Part of life.
I can show you another striking example of magnetic braking.
I have here a magnet which I can also power with solenoids, and I have here two rings.
One ring, which is complete in the sense that it's like so, a conducting ring.
I drop it through the magnetic field and as the flux is changing the eddy current will flow in such a direction that it will oppose the change, and so it could either be in this direction or in this direction.
I don't know.
But it will flow to oppose the change.
And so as it enters the magnetic field, when the flux is increasing, it will be damped.
When it is in the magnetic field and the flux is not changing very much anymore there will be no damping, but when it comes out of the magnetic field the flux is changing again through the surface.
It will be damped again, and you can see that.
And then I will throw through there another ring which is the same dimension but this ring has an opening here.
Air, the resistance is huge.
So the current that is going to flow, this eddy current, is way lower because the resistance is so high. There is hardly any power dissipation because I is so low, so there is hardly any heat produced at the expense of kinetic energy, and so there is hardly any damping.
There is no force, no strong force, that opposes it.
And I can show you both.
And for this I need the DC power on again, and we're going to project it there on the wall.
I have to wait and see that I get my carbon arc up.
There it comes.
So we're going to project this slot which is the opening between the pole shoes on the wall there, light off, light off, all off.
And you see it there.
This is that magnet.
And here comes the ring.
This ring is going to be decelerated heavily when it goes in.
Oh, small detail.
I forgot to turn the power on.
[inaudible] There we go.
Power goes on now.
Actually, you see now -- I did that purposely -- you see now how fast it should go if there is no magnetic field, and now there is a magnetic field.
Now, did you notice these three phases?
You get damping, and then when it is right in the magnetic field, when there is very little flux change, then it picks up speed again and then it slows down again.
Watch it again.
Now, the one with the slot.
Now, once more, the one without the slot.
All of that is the result of eddy currents; all of that is the result of Faraday's law.
Heat is produced at the expense of kinetic energy.
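If you want to play with those three phases numerically, here is a minimal simulation sketch (not the actual demo hardware; the ring parameters and the Gaussian field profile below are invented for illustration). The braking force k*v comes from setting the dissipated power I^2 R equal to the rate at which kinetic energy is lost:

    import math

    # Invented parameters: a small conducting ring falling through a magnet gap.
    m, g = 0.05, 9.81            # ring mass (kg), gravitational acceleration (m/s^2)
    A, R = 1.0e-3, 1.0e-3        # loop area (m^2) and resistance (ohm); a slotted
                                 # ring would have a huge R and almost no braking
    B0, zc, w = 1.0, 0.5, 0.05   # peak field (T), center (m), width (m) of field region

    def dBdz(z):
        # Gradient of a Gaussian field profile B(z) = B0 * exp(-((z - zc)/w)**2);
        # it vanishes at the center zc, where the flux is momentarily constant.
        return B0 * math.exp(-((z - zc) / w) ** 2) * (-2.0 * (z - zc) / w ** 2)

    z, v, dt = 0.0, 0.0, 1.0e-5
    samples = []
    while z < 1.0:
        # Faraday: emf = -A * dB/dz * v; current I = emf / R; the power I**2 * R
        # is paid for by kinetic energy, i.e. a braking force k * v (Lenz's law).
        k = (A * dBdz(z)) ** 2 / R
        v += (g - k * v / m) * dt
        z += v * dt
        samples.append((z, v))

    # Sample the speed entering, inside, and leaving the field region; k is
    # large on the way in and out (braking) and vanishes near the center,
    # where the ring picks up speed again.
    for zq in (0.44, 0.50, 0.56):
        zs, vs = min(samples, key=lambda p: abs(p[0] - zq))
        print(f"z = {zs:.2f} m: v = {vs:.2f} m/s")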
So if I summarize: when we create an induced EMF and we run a current, we either have to change the magnetic field in time, or we have to change the area in time, or we have to change the angle theta, but we must make a change in the magnetic flux through an open surface.
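Written out for the simple case of a uniform field B and a flat loop of area A whose normal makes an angle \theta with the field (a sketch of the bookkeeping, not new physics):

    \Phi_B = B A \cos\theta

    \varepsilon = -\frac{d\Phi_B}{dt} = -\left( \frac{dB}{dt} A \cos\theta + B \frac{dA}{dt} \cos\theta - B A \sin\theta \, \frac{d\theta}{dt} \right)

The three terms are exactly the three options: change the field in time, change the area in time (the sliding crossbar), or change the angle in time (the rotating coils of a dynamo).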
And the energy that is dissipated must come from somewhere.
When you rotate the coils, when you power your dynamo, you have to do work.
When you move the crossbar around, you have to do work.
When you move the coil as I did there, in between the magnetic poles to make the light glow, you have to do work.
You always experience a force that is against the direction of your motion, which is another manifestation of Lenz's Law.
And thank goodness it is that way, because if it were the other way around, our universe could not exist, and I'll give you an example.
Suppose we have a growing magnetic field somewhere.
And this growing magnetic field creates an EMF, and suppose that EMF supports the growth.
Then the EMF would produce a stronger magnetic field, and that keeps the EMF going in exactly the same direction, and so the B field would become even stronger and you get a runaway process.
The situation would get out of control.
It would also be a violation of the conservation of energy, and thank goodness physics is the way it is, because if it weren't that way you and I wouldn't be here.
We couldn't even exist.
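In symbols, that runaway would look like this (a sketch, with \alpha standing for some positive constant set by the geometry):

    \frac{dB}{dt} = +\alpha B \;\;\Rightarrow\;\; B(t) = B_0 \, e^{\alpha t}

The field, the current, and the dissipated heat would all grow exponentially out of nothing. Lenz's minus sign in \varepsilon = -d\Phi_B/dt flips the feedback from support to opposition, so a disturbance decays instead of exploding.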
See you Wednesday.
High School Lesson Plans for Business Classes
The Importance of Global Cooperation
This high school curriculum seeks to actively involve students in exploring and constructing an informed understanding of global
cooperation by studying the role of the International Monetary Fund (IMF). The activities are designed to
include a visit — or virtual visit — to the
IMF Center's exhibition,
Money Matters: The Importance of Global Cooperation.
They focus on the history, mission, structure and function
of the IMF, as well as its past and continuing contribution to the economic stability of nations and the living standards of individuals.
Note to the teacher: The curriculum includes activities suitable for high school students enrolled in world and American history, geography,
economics, and business courses. The curriculum begins with general activities, which can stand alone as an introduction to the IMF
and/or prepare students for a visit to the
IMF Center. (See II below.) These are followed by activities specific to students' courses
of study. (See III, IV, and V below.) Teachers may choose from among the activities to satisfy classroom and field-trip needs and
time constraints. Objectives and procedures are easily adaptable to the skill and knowledge level of students. Sections I and VI use
concept maps as assessment tools to measure students' entry knowledge before starting the curriculum and final understandings following its completion.
Students will be able to:
- Explain the role of the IMF as a facilitator of global cooperation:
- How the IMF functions as a cooperative international organization.
- How the IMF facilitates international trade.
- How the IMF strengthens its members' economies.
- Discuss the adaptations over time made by the IMF.
- Describe the interplay among sociocultural, political, and economic forces, and the impact of these forces on nations and individuals.
- Identify the essential mechanisms for productive cooperation when working with others (e.g., negotiating, compromising,
seeking consensus, and managing conflicts).
- Initial Assessment: Concept Map
Note to the teacher: The curriculum begins with a measure of students' entry knowledge, using concept maps as the assessment tool.
Concept maps provide a quick read of students' prior knowledge, e.g., misconceptions, familiarity with relevant vocabulary. They also
serve to bring to the foreground both content and organization of current knowledge and attitudes, readying the student for what is to come.
- With the class as a whole, the teacher models the drawing of a concept map of the term "money" by writing
it on the blackboard and asking, "What does this term mean?" As students respond, they and the teacher begin to map
and make connections among related concepts.
- Students draw individual concept maps of the term "International Monetary Fund" or "IMF." After
general discussion, the teacher collects the signed maps to be used as an assessment measure by teachers and students on
completion of the curriculum.
- Pre-Visit Activities: General Introduction
Note to the teacher: The first three activities provide students with the following: a general introduction to the IMF; practice in
the processes of "reading" images; experience conducting web-based research. These activities are designed to increase both
teacher and student awareness of competency and gaps in knowledge. Teachers may decide the number of class periods required for these
activities. Students should maintain a folder of materials to be drawn on throughout the three-part curriculum. Following completion
of this section, the lesson plans are tailored to specific courses.
Teacher Materials: Selected images from the exhibit, including the IMF logo,
"Who's Got the Gold?", "Anybody Have Any Suggestions?", "I Don't Even Understand the Old System",
"During Transition to a Market Economy, Fasten Seat Belts"; IMF video,
Millennium: Out of the Ashes.
Student Materials: Appendix A: Pre-Visit Materials, including 1.
Executive Board Room; 2.
Glossary of Terms; and 3.
"Researching the IMF" Worksheet.
- Advance Organizer
- The teacher shows an image of the IMF logo with the olive branch, followed by a brainstorming discussion of the
meaning of the symbols.
- The teacher shows exhibition images and leads a discussion of students' understanding or misunderstanding of
the IMF. (Examples of possible responses/misconceptions: oversees the free exchange of currencies to its member
nations, loans money, creates jobs, rebuilds cities.)
- Introduction to the IMF
- The teacher prepares students for viewing the "Millennium: Out of the Ashes" portion of the Millennium video by
asking them to think about the following:
- What are the goals of the IMF?
- How is the organization structured to achieve its goals?
- The teacher shows this video "Millennium: Out of the Ashes"
and facilitates a discussion during and/or following the video using the suggested questions below:
- What are the purpose and ultimate goals of the IMF? How does the IMF logo represent the ultimate goals?
- When was it founded? What is significant about the date?
- What are the major differences between the IMF and the World Bank?
- How many members sit on the IMF Executive Board? How does the Board make decisions?
- Where do Executive Board members get information on which to base their decisions?
- What do you know now about the IMF that you didn't know before?
- The teacher distributes and previews Appendix A 1, 2, and 3. Students use the IMF web site and search
additional Internet sites to complete research on the mission, structure, and work of the IMF.
Note to the teacher: Include IMF Websites (www.imf.org):
"What is the IMF?"
"IMF At A Glance,"
and/or "Chronology," and sites addressing
current IMF activities and issues. See latest speech of the Managing Director
(www.imf.org/cgi-shl/create_x.pl?mds). In Appendix A 4
("Researching the IMF"), the teacher may select from the list of questions or adapt them as appropriate.
- Discussion of findings
The students report on the outcome of their web-based research.
- Pre-Visit Activities: Business Curriculum
Note to the teacher: The following curriculum activities, (III A and B) complete the preparation for the visit to the IMF Center. The
teacher previews processes and materials to be used by students at the Center, addresses the logistics for the visit, and introduces the
overarching theme: "Countries and the IMF: International partnerships in the transition to a free market economy."
Students will be able to:
Student Materials: Appendix A4:
Image Analysis Worksheet (Exhibit Area 6 Cartoon-"During transition to market economy,
fasten seat belt"); (Exhibit Area 3 Photograph -"Five cigarettes for an egg");
Appendix A5: Chart - Changing to a Free Market Economy;
Appendix B: "At the IMF Center": Exhibit Worksheet.
- "read" images;
- add new meaning to terminology introduced in the previous assignment;
- explain the give and take of an IMF/member nation partnership.
- Individually or in pairs, students use the Image Analysis Worksheet (Appendix A 4) to "read" one photograph
and one political cartoon relating to a country's transition to a market economy and an international monetary system.
- The teacher facilitates a class discussion of the process of "reading" images as a means of interpreting
- The teacher divides the class into 5 or 6 groups (3-5 students) for conducting research on specific countries
experiencing problems in changing to a market economy. (The exhibit includes information on the following: Brazil; Korea; Mexico;
Poland; African countries.) Groups choose the country they will research. The teacher previews the Chart (Appendix A5) and the "At the IMF Center" Exhibit Worksheet (Appendix B). Students initiate country research
prior to visiting the IMF Center.
Recording information on Chart 1, students document the following aspects of an IMF/member nation partnership:
(Note to Teachers: Suggested resources for this activity include:
http://www.imf.org (country page);
- What problems existed in this country?
- What assistance did the country seek?
- What risks did the country take?
- Who made compromises?
- What type(s) of assistance was (were) received?
- What were the results?
- Visit to the IMF Center
Note to the teacher: The visit to the Center provides direct experience for learning about the IMF and its role in fostering global
cooperation. The activities continue to build understanding of the overarching theme-"Countries and the IMF:
International partnerships in the transition to a free market economy."
An IMF representative is on hand to provide information and answer questions. The visit is organized to minimize crowding by assigning
groups to two parts of the IMF Center—exhibit areas and mini-theater. The visit takes approximately 1½ hours. Guided by
"At the IMF Center" Exhibit Worksheet (Appendix B), students will be able to:
Student Materials: Folder containing Appendices A and B, notepaper, pencils.
- "read" images;
- broaden points of view, getting perspectives of both the IMF and member nations;
- hypothesize on the role of international companies in the transition.
An IMF representative welcomes and orients students to the Center, and remains as a consultant to students during the visit.
- IMF Center Assignments:
Students use materials in Appendices A5 (Chart) and B ("At the IMF": Exhibit Worksheet) to guide their activities.
Half of the students explore the exhibit
"Money Matters: The Importance of Global Cooperation,"
while the other half visits the mini-theater and views case study videos about African countries and Korea. After 45 minutes, students switch activities.
- Post-Visit Activities
Note to the teacher: These activities synthesize students' understanding of the IMF's role as a facilitator of global cooperation and the
role of international companies investing in developing countries. Students will be able to:
Student Materials: Appendices A and B, folders of cumulative materials, including assignments.
- Explain how IMF assistance contributes to a developing country's transition to a free market economy;
- Describe the process used by the IMF in providing assistance to a member nation;
- Hypothesize areas of research necessary for assessing an investment by the IMF or a company in a developing country;
- Identify the risks and benefits from the country's perspective;
- Identify the risks and benefits from a business's perspective;
- Discuss a multinational business's social responsibility.
- Students regroup in the classroom, so that every country is represented in each new group. Using the Chart (Appendix A 5),
students compare their findings and identify similarities and differences among the countries. Using the format of the Chart,
students record findings on a transparency to be shared with the class.
- Continuing in their small groups, students brainstorm the research required by the IMF prior to assisting a member nation.
- Final Class Discussion
Note to the teacher: This conversation uses the overarching theme, "Countries and the IMF: International partnerships
in the transition to a free market economy" to integrate students' new understanding of global cooperation. They may identify
such research areas as: natural resources; economic system; IMF relationship; monetary unit; legal system; Gross National Product
(GNP); Gross Domestic Product (GDP); infrastructure; competitive advantage; per capita income; currency value compared to US dollar.
The teacher facilitates a brainstorming session. Students take the perspective of a CEO of an international company. They identify
the steps and information required for sound investment decisions in a developing country. They produce a list of research criteria.
- Final Assignment
Note to the teacher: The final assignment provides a record of the students' new understanding of the IMF's role in fostering
global cooperation and the role of international companies investing in developing countries. The teacher may specify content and
length of the essay.
- The teacher provides a scenario emphasizing the human needs approach to economic development:
"You are the CEO of a multinational company interested in investing in a developing country. Assess the
various opportunities and risks of doing business in the country you have studied."
- Using the list of research criteria generated in the final class discussion above, students research and prepare a report
to a board of directors in response to the scenario assignment.
- Final Assessment: Concept Map
Note to the teacher: The curriculum ends with a measure of the students' new understanding. By comparing the initial and final
concept maps, both the teacher and the student are able to assess the growth of knowledge.
Student Materials: Initial concept map.
The teacher distributes the students' initial concept maps. The students draw a final concept map, with "IMF" at the Center.
Money Matters Curriculum Table of Contents
By Ken Myers
It is well-known that children who read well experience greater progress in their academic studies. However, literature is also a valuable tool for teaching and reinforcing positive social skills that can help keep children on the right track when it comes to behavior. In fact, the power of literature is so strong that many juvenile correction systems are implementing required reading as an alternative to other types of punishment. Because literature has the potential to inspire positive change in children, parents and other adults who work with youths may want to try a few of the following ideas to begin seeing the effects of literature on a child's social and emotional development.
1. Create a ritual. Children thrive on routine. This is especially true for children who come from rough backgrounds or who have been forced to overcome significant challenges. Younger children may benefit from having a set bedtime story ritual, while older children can find a regular reading schedule calming. This way, there is a portion of the day set aside that they can depend upon always being the same.
2. Use a book to approach a difficult issue. Working with children can lead to a need for some difficult conversations. Often, adults and children may struggle with ways to bring up particularly challenging topics. For this reason, books are often the perfect way to introduce specific topics for conversation. Through literature, you can seamlessly ease into topics such as divorce, death, and abuse.
3. Explore a common interest. For many children, bonding is a difficult process. However, when a child shares a common interest with an adult, the child is more likely to trust the adult for advice. This can be especially vital for juveniles to make progress towards their goals for better behavior. For this reason, try finding a common interest that you and your child can explore through reading specific literature and books.
4. Make a memory book. When children attempt to learn how to make better decisions, you can help them learn how to focus on the positive aspects of their lives. In these instances, encourage children to create their own literature. By making memory books, children develop powerful resources to track the positive changes occurring in their lives. In a group setting, each member can choose to create a page that everyone can read.
5. Extend reading through activities. Children learn best when they actively participate in an experience. For this reason, extend a literary assignment to include a physical activity. For example, a child who reads a sports-themed book may then enjoy taking part in a real-life game. This can reinforce the concepts the child learned in the story, such as the importance of teamwork.
When children read books, they are able to enter into a world where learning can take place regarding a variety of subjects. Not only is literature an excellent tool for teaching academics, but it is also a valuable resource for helping children learn positive social skills that will enable them to make better decisions. This is especially true for children who may not have had positive role models in the past. Literature should be an important part of any child’s life and supported through the efforts of adults who are dedicated to ensuring the child will have the best opportunities for success.
Ken Myers is the editor in chief and a frequent contributor at http://www.gonannies.com/. Ken writes about the duties and responsibilities of nannies. You can reach him at [email protected].
Image: Frederick Noronha on flickr.com
By Mary Bell
Reading is definitely an escape from stress. It offers readers an alternative world and an imagination beyond recognition. It also provides information and fresh insights on recent and past issues that affect people of all statures. The relationship between readers and writers creates an ongoing cycle of demand and supply, yet some are not aware of their rights as producers and consumers.
Readers have rights too. Big or small, a bookworm can always be pressured into reading materials that he or she might not really want to entertain or acknowledge. Below is a list of the rights of an avid reader. Knowing them might not only help readers choose what to read, but also help them understand why and how to read. These may be obvious guidelines, but they will still help those who are not yet aware of their rights.
1. The right to not read.
Like any other consumers, readers can choose what to read and what not to read. You are not obliged to view materials that may be offensive or that do not pertain to your field of interest.
2. The right to skip pages.
A reader may skip the pages of any book, magazine, leaflet, or handbook he/she buys. Perhaps the reader is not entertained or satisfied with the contents of a page, or has already read it.
3. The right to not finish.
Whether due to boredom or lack of interest, a reader may choose not to finish a certain reading material. He/she can always put a book back on the shelf if it no longer holds his/her interest.
4. The right to reread.
Obviously, readers have the right to read a book over and over again. Whether for research or pure entertainment, the bookworm has the right to read his/her books as many times as he/she wants.
5. The right to escapism.
The reader has the right to turn the book into an escape from reality. Whatever topic it may be, he/she is privileged to venture into another world through the pages of a book.
6. The right to read anywhere.
Readers need not worry about where they read their favorite books, as long as they are not offending anyone.
7. The right to browse.
Readers have the right to browse through a book before purchasing it. This gives them a preview of the book's contents and may help them develop an interest in a certain topic.
8. The right to read out loud.
A person is entitled to read out loud unless an area or institution prohibits noise. Try reading out loud in your room, kitchen, bathroom, or wherever you want. It helps bring out the emotions of the material you are reading.
9. The right to write about what you read.
Book lovers are entitled to be writers too. They can write anything about the books they are reading as well as give reviews and insights on its content.
From a writer's point of view, creating a masterpiece takes a lot of time and effort. Writers are usually criticized for how they craft their storylines and what content they put into their hard-bound volumes. If you are interested in becoming a writer, you should know your rights and should not be afraid to assert them while doing your work. Below are the rights of writers and journalists. May these lines be helpful to you and your work.
1. The right to be reflective.
Every writer has the right to reflect on what he/she is experiencing at the time. Whether it is a happy or painful experience, writers have the right to stop and reflect on the issues they are interested in writing about.
2. The right to choose a personally important topic.
A writer has every right to write about an issue that affects him or her most. By giving insights on a certain topic, writers may express their feelings, whether favorable or not, toward a certain issue.
3. The right to go “off topic.”
Writers may choose to explore other topics that may still be related to the issue they are writing about. This gives new ideas and insights to the readers as well as aspiring bloggers and writers.
4. The right to personalize the writing process.
Every writer has the right to be recognized for his/her writing style. Remember, no two writers have the same style; if they did, that would be plagiarism.
5. The right to write badly.
Being imperfect beings, writers are also allowed to make mistakes. That's why they draft their works, so that they can edit them before publishing.
6. The right to “see” others write.
A writer has the right to observe other writers. This is essential for their work and may help them finish a book or article that they are currently working on.
7. The right to be assessed well.
Writers have the right to choose their review panel in order to ensure a sense of fairness.
8. The right to go beyond formula.
Writers have the right to go beyond the traditional style of writing in order to create interesting and unique topics and storylines that capture the eyes and hearts of readers.
9. The right to find your own voice.
Writers have the right to find their own unique writing style in order to catch readers' attention. Nothing prohibits a writer from becoming unique and creating his/her own voice.
These are just some simple and obvious privileges of writers and readers. We should be aware of our rights and make sure to assert them whenever we feel violated or offended.
Mary Bell is a law and business blogger. She is a freelance lawyer and a full-time mother of two wonderful kids. You will likely find her writing about related subjects and writing for companies like BailBondsDirect.com, which has been in the bail bond industry since 1999. She has recently blogged about bail bonds.
By Jeffrey Roe
Many people who intend to become librarians have strong memories associated with their school libraries and the people who worked in them. Those memories are likely what draws some librarians back to primary school, where they work to foster and promote literacy, learning, and, simply, a love of books. Others opt to go into research, working in high-profile special collections with fragile documents full of unique information or of particular significance to history.
Few library students probably envision working in a prison library as their ideal place of employment. Contrary to what you might think, working as a prison librarian isn’t a maligned path so much as an overlooked one; it’s simply not a job on most people’s radar. This is unfortunate, as working in a prison library offers librarians a unique environment, one that is proactive in promoting education, literacy, and civic engagement, among other ideals closely related to the mission of libraries everywhere.
Becoming a prison librarian isn’t particularly difficult. As with all professional libraries, prison librarians must have a degree in library science, generally at the master’s level (MLS). Experience working in a civilian library (such as a school or public library) is also generally required. Some experience working in corrections is also ideal, but not required; it’s simply a good idea to understand the constraints that prison puts upon both the incarcerated and those who serve them. You could accomplish this by volunteering at a prison.
It’s important to understand what a library is to someone who’s been incarcerated: It is a place where inmates escape from the drudgery of day-to-day life, where they learn to improve their literacy, write letters, watch instructional videos and so much more. Prison libraries don’t differ much from public libraries in terms of content, though some do have dedicated legal sections. Prison libraries even sometimes host book clubs! Library services can be integrated with other services for the incarcerated, like visitation.
Prison libraries, like public libraries, suffer at the whims of state finances, but differ from their public counterparts in other significant ways. The Internet is often unavailable to inmates or librarians; when it is available to librarians, it is only during hours when inmates are not present. Prison librarians also act as corrections officers, taking on the responsibility of supervising both the inmates working in the library and those using its services. Generally, inmates tend to treat librarians with a degree of respect, since the services the library provides offer prisoners a respite from prison life and a way to better themselves and their situation. Prisoners who engage in educational programs, such as library services, tend to stay out of prison upon release at a higher rate than those without access to such programs. Just another reason to consider becoming a prison librarian.
Jeffrey Roe is the community manager for the University of Southern California’s Rossier School of Education. USC Rossier Online provides current teachers and those working on becoming a teacher with the opportunity to earn a masters in education completely online. In his free time, Jeff enjoys attending concerts and developing his talents as a videomaker.
By Colin Ollson
If you sat your child down and announced that today you were going to give little Jacob or Emma a lesson in compassion, what do you think his or her reaction would be? More than likely, it would not be squeals of delight and a question about whether there would be a quiz at the end. Whether children realize it or not, learning how to be compassionate toward others is something they can start developing when they are quite young. The five books listed here are great choices to help them learn that lesson without making them feel as if they are in school.
Milton’s Secret by Eckhart Tolle
This book, written for 4-8 year-olds, focuses on a young boy who is worried about the possibility of encountering a bully at school. Children learn compassion for the child who may be a target, and discovering this book with their parents can start a discussion about the bigger issue of bullying: why some children (and adults) behave that way, and how it makes the target of this type of behavior feel.
Another theme of this book is that we must learn to take each moment as it comes, without worrying about the future. This idea of being fully present in the here and now is one which will benefit a youngster as he or she grows into adulthood.
The Giving Tree by Shel Silverstein
The idea behind this beloved story is a very simple one. The main character is a tree which simply gave everything it had to a boy out of love, from simple things like shade to keep him cool in hot weather to larger requests like a place to build a tree house. Children aged 4-8 will learn that giving out of love is the right thing to do.
Unexpected Treasures by Victoria Osteen
Author Victoria Osteen explores the theme that being kind to other people is the right thing to do, even when circumstances are difficult. In this story, Pirate Fred and Curly Beard are rescued from a sinking ship by Captain Jon and First Mate Sue. The rescued pirates are grumpy at first, but learn about friendship and sharing as the story moves on. This story is a good choice for children between the ages of 3-7.
The Ant Bully by John Nickle
The Ant Bully is a story about a bully having the tables turned on him when he finds out how his actions affect others. This story, which is a good choice for children aged four and up, focuses on Lucas, a kid who is taunted by a bigger child and who takes out his frustration by turning his squirt gun on an ant colony.
The ants use a magical green potion to shrink Lucas down to their size and sentence him to hard labor. He learns his lesson while living among the ants, and children will learn that treating someone else badly because of the actions of a bully is not a way to show compassion for others.
The Recess Queen by Laura Huliska-Beith
This is another story which would be appropriate for children ages four and up. Its plot focuses on Mean Jean, who simply was the Recess Queen. No one on the playground did anything unless Jean told them it was all right to do it. She ruled the roost until one day a new girl came to school and everything changed.
Katie Sue was not intimidated by Mean Jean. She asked Mean Jean to jump rope with her instead. This simple act of friendship (and compassion) made the difference in the story, and it is an effective way to teach children that reaching out to others can defuse a situation.
When you are exploring these five books that make kids more compassionate with the young people who mean the most to you, don’t forget to ask questions about their experiences as you read the story. The book can be a wonderful starting point for this ongoing life lesson.
Colin is an in-house copywriter at http://www.essaypedia.com/. He specializes in writing custom research papers and essays on history and the arts.
By Tam Neville
After lunch the group heard a presentation on “Research: Does it work?” led by Ron P. Corbett Jr. He began by saying that evidence-based practices are used in many settings.
Is there empirical support for what you do?
Is it having the effects you want on the people you work with?
All in Changing Lives Through Literature believe that it does change lives. A recidivism study has recently been done at UMass/Boston by retired professor Taylor Stoehr; Russell Schutt, Professor of Sociology; and Xiaogang Deng, Associate Professor and faculty member of the Criminal Justice Program. The study showed a reduction in offending for CLTL graduates.
Do we have the ability to help people reduce offending sometimes or altogether?
There was an experimental group and a control group. We looked at behavior 18 months before CLTL and 18 months after CLTL. There were 600 participants in the study. There was a 60% drop in offending for CLTL participants and a 16% drop for others. Both the number and severity of incidents were reduced. Also, the participants worked with a parole officer and took one other program (such as substance abuse, batterers, etc.).
What is it about Changing Lives that leads to a reduction in offending? What is the link between graduates of the program and those who offend less? Stoehr reports on this study:
“This group was larger than the Jarjoura/Rogers study and ran for a longer time. We had five jurisdictions: New Bedford, Lynn/Lowell, Dorchester, and two smaller courts. We had a larger range of information.
For the probationers, someone was paying attention to them. This is what was missing from their lives. In the Dorchester men’s class we have big groups so we break them up into smaller groups. Once in a class discussion, we had five guys who were great talkers, all talking at once. Then one held up his hand and said, “This is what our problem is, we don’t listen, we just talk.” Moments like this begin to happen in the third class. The process is unpredictable. You let go of controls. In Dorchester we don’t stick so hard to the text. The main thing is what happens in the classroom.
In the Dorchester program, we have a set of questions that we work with that go in a sequence. For example: What does it take to grow up? Does anybody ever learn things in school? And towards the end of the semester – What does it take to hit bottom? The questions get bigger and bigger.
In mid-semester we ask, “What is your evaluation of street smarts?” By this time there is trust. On street smarts – almost all are proud of their street smarts. The staff has a different view: street smarts prevent you from learning anything new. Many students cling to street smarts. The most important thing about Changing Lives is that people belong to a community that has the same concerns that they have. We have so little of that in America – where does that happen in your life? That makes a huge difference in what you do with your life.”
Books bring universality. A student realized, “I’m not the only one with this problem.” Through books students learn how to fight with words, not fists. They build a community together.
Reading is a cognitive behavior intervention – it makes thinking more flexible and more expansive, more empathetic.
The program boosts self-esteem too. To have a conversation with a judge can boost a student’s confidence. A student completes an assignment, voices an opinion, and is listened to.
Judge Kane said, “We’ve had the program for 20 years and there has never been a scary incident in these years. We get gratitude from our students.”
Judge Dever said, “People come into the program looking at life subjectively. In this program, through literature, they start looking at life objectively. This changes their ability to communicate. This then may help them with job interviews, things they thought were unattainable.
Reading slows you down – you have to find a quiet place and be by yourself. This is new for them – it leads to self-reflection.”
Stoehr talked about juveniles, saying, "They don't have a place to go with no noise and they're full of hormones. Think of something you can do at the meeting, very short things (maybe rap), something that gives them a little challenge at the moment."
Teresa Owens (PO, Taunton Division) said, “CLTL gives them a safe setting. One thing that always came out of the Dorchester women’s class was the question of choices. Were there other choices I could have made? Or, you can go to someone else to ask and say ‘I don’t know what to do.’ Also, people in class were accountable to each other in terms of doing the reading, homework, etc.”
CLTL is a team experience. When people have a chance to reflect on choices, this is their time, a time they can actually think. They don’t have that luxury in their lives. In CLTL they learn that there are more options, more choices.
Professor Waxler said, “We collectively make a community. The activity is primarily verbal. Reading brings engagement with narrative – you see that you are connected to other people. The story that I just read is my story too. Then discussion with everyone sitting around a table, there’s an open relationship between our experience and narrative. Story gives us meaning and helps us put ourselves in someone else’s shoes.”
To begin the final session, a probation officer new to the program spoke using herself as an example. She said, “Say I want to start a program. How do I get a judge involved, a facilitator, and probation officers?”
Judge Kane answered her on the matter of judicial involvement. “At least have someone who will let you run the program. You will need a judge’s support to get POs behind it. Having a judge is very important.”
Jean Trounstine added, “Get a judge talking to a judge. This will increase the chance of their going to class. You then have to go out and find facilitators.”
Someone else commented, “You have to get the judge to commit to an incentive if CLTL is not a condition of probation.”
The question of incentive: Outcomes are more positive where a court can create incentives such as six months off probation period, discount on supervision fees, etc. This information is in the literature and on the website. Dee Kennedy pointed out that, “Many students start off by saying, ‘I never would have taken this without the time off’ but by graduation, their attitude has changed.”
To find a facilitator, ask Jean Flanagan. Jean Trounstine added, "Try to find a facilitator who has a connection with a school. It's good to have a school as a place to meet. Call an English department. We can help you – you don't need to do this in a vacuum."
Ideally, a university campus is the best place to hold a class. The students get a taste of college life and it makes them proud to go to a college campus. This is especially important with juveniles.
To start a class, ask probation officers to recruit students from among their probationers. Myrna Thornquist (PO, Waltham District Court) advised, “I check a person out – do they like to read? What is their education? In the beginning I don’t tell them what I’m thinking – that they would be a good candidate. I do a little research on a person. Then, are they interested? Sometimes it takes 6-12 months to be sure of someone as a candidate.”
On books, Jean Trounstine said, “We give the students the books, they don’t buy them and the facilitator is reimbursed for these. We also encourage every student to get a library card.”
How many students should be in a class? We have had classes with 5 or with 13. Taylor Stoehr said, “One day we had 50. We split into two groups, then used small groups of 4 to 5.”
Any staff has to be regular, and it's important that all the staff agree on the class ground rules. If there is an issue, it can sometimes be talked about afterwards. For the most part the students are told: be sober and straight, do your homework, and be on time.
For the graduation ceremony, the Lynn/Lowell programs hold graduation in the court house during the first session. Those in the dock witness graduation. The graduates receive books and a certificate. It’s a day for celebration.
This meeting was a very successful one, and we now have several courts that are interested in starting a program. We need facilitators. If you, or anyone you know, would like to facilitate a Changing Lives program, please get in touch with Jean Trounstine at: [email protected]
By Tam Neville
This program is a great experiment about what democracy can mean. All masks, roles, hierarchies, fall away. There is a moment of beauty. In a class we have the voice, the breath of human beings, the flow of the human heart.
Dr. Robert P. Waxler
On May 10, 2012, Judges, probation officers, and facilitators of the Changing Lives Through Literature program met at the Worcester Law Library. The purpose of the meeting was to assist potential participants in starting new programs. There were many new faces in the room and familiar faces too. Despite losing our funding in 2008, we are still going strong with ten programs running in Massachusetts and hopefully, with gatherings like this one, more will follow.
The day began with a presentation of the history of the Changing Lives Through Literature Program led by Hon. Robert J. Kane and Dr. Robert P. Waxler. Judge Kane talked briefly about the first CLTL class that took place in New Bedford with a group of men, all of whom had serious convictions. The idea was to try the new program on the toughest candidates. If it worked on them, that meant the program was sound.
Judge Kane said the program works because “the act of reading and writing allows people to learn, to learn to listen instead of just reacting.”
All programs have autonomy. Dorchester may use just one text, supplemented with stories, Roxbury may use poems, and another program may use film.
Classes democratically respond to works of literature and this dialogue leaves a deposit in everyone. Judge Kane said, “This was dramatically illustrated by a man with a rough history that we had as a student. He was scared and wanted to stir something up. We gave this turbulent student a different point of view that gave him the chance to reflect. I saw him the other day – he gave me a smile and handshake. This student got a different view of a judge. We, in turn, learn to drop any facile notion of what brings an offender into court. Changing Lives brings me energy and a sense of curiosity. CLTL is a vocation. I’d like to thank Ron Corbett whose great support gives us renewed spirit for the future of the program.”
Next, Prof. Waxler spoke about the program's history and its implications.
"The center of the program is literature. Literature is one tool we have that can keep people human. Every time we walk into a class we have that possibility. Our program has a different effect than an anger management or a job-hunting class. The program began in 1991 with those who had a major offence. We saw how the men in this first class changed. Watching them walk on campus – after 6-7 weeks they looked different, they looked much more like the other students."
An independent study (the Jarjoura/Rogers study) was done and was helpful in the beginning of the program. It demonstrated that CLTL graduates had a lower rate of recidivism: 45% of the control group re-offended, while only 18% of the CLTL group did.
Not only do the students change, but probation officers and judges change as well. Judge Dever said, "It has been the joy of my judgeship."
Waxler continued, "CLTL is a movement, not an organization or institution. We have 12 states that are involved: Massachusetts, Rhode Island, Connecticut, Maine, New York, Virginia, Pennsylvania, Florida, Kansas, Texas, Arizona, California, and one program in Great Britain. The goal is to have a program in every state, every court. We have three books written about the program, a website, and a blog.
I think the program works because people get excited about reading. Thinking and self-reflection (through the process of reading) can be more exciting than dealing drugs. After the third session one of our roughest students said ‘I never thought I would find anything as exciting as being out on the street selling drugs – but I have.’ Reading and being able to come in and engage in discussion with PO’s, other students, and a judge, was inspirational for him.
This program is a great experiment about what democracy can mean. All masks, roles, hierarchies, fall away. There is a moment of beauty. In a class we have the voice, the breath of human beings, the flow of the human heart. People find their own voice and also participate in a communal voice. Many people are stuck in a perpetual present, repeating the same behavior. As Franz Kafka said, literature can break through that frozen sea within us. When that happens through narrative you feel a stirring of desire. You see the future, remember parts of the past, and break out of the prison of the present moment.
I will tell you about one night in class, when we were reading The Sea-Wolf by Jack London. The hero is a tough guy, but with some narcissistic elements. He believes that might makes right and is stuck in this, can't move off his own center. In the midst of discussion one student said, 'I used to be just like Wolf Larsen.' He recognized himself but was also saying 'I am now free of that personality.' Stories can open things up. People are always more extraordinary than the stereotypes. People in the program feel they are not good people. They are down-and-out and believe others see them this way. As we read we see something different – complex human beings – and the students realize that they have that complexity."
The second session of the day, led by Jean Trounstine, was on program modeling, or how to teach a particular book or story. The discussion was based on Toni Cade Bambara’s short story “The Lesson.”
Trounstine began by asking, “What’s the lesson and who learns it?”
One participant said that Miss Moore exposed kids from a poor neighborhood to the outside world. She took them to F.A.O. Schwarz, and here they began to learn about a larger world. Here there were new toys with high prices. The children learned that such things existed and about the inequality in the world.
Sylvia was one of the strongest characters of the story. She learns what she didn’t want to see and she says – “Why am I feeling ashamed when I walk into this store?” She didn’t fit in – she felt, “They are better than I am.” In her own world she ruled the roost. The story shows the limitations of poverty and how it’s difficult for people to see beyond it. Sugar expresses the inequality, “You know Miss Moore, I don’t think all of us here eat as much in one year as that sailboat costs.” Miss Moore is a radical in her own way. She was trying to show children that these inequalities exist and that you can work with them.
What was Sylvia's world view before she goes to F.A.O. Schwarz? Sylvia's view is, "My world's ok, don't rock the boat," a predictable response. Now she has to look at a bigger picture, and this "rocks Sylvia's boat."
Sylvia is angry because of her background. This is connected to our own classes and the question of how to draw students out of anger.
When they first go into the store, the children feel, “White people, crazy, wearing fur coats in the summer. But if everything you see glorifies a certain standard of living . . .” The children are frustrated by Miss Moore who says “Where we are is who we are.” She challenges them with the question of how to change this.
Do you like or dislike Miss Moore? She challenges them not with words or morals but by letting them have their own experience. Miss Moore doesn’t care if the children like her. The kids have a grudging respect for her. She is confrontational and persistent.
Taylor Stoehr asked, “What do you do with that anger? You have to learn this yourself. The lesson for us in this story is that the best you can do is open up the world. There is an analogy between Miss Moore and what we do in this program. In CLTL students are self-obsessed but without any self-esteem.”
Jean Trounstine said, “Let’s focus on what I would do with this in a CLTL class. You’re in a room with chairs in a circle. This is a good story to use at the beginning of semester. No one knows anyone. I have everyone read the story together. The students get over any fear of not understanding. Then I ask, ‘What did you get out of the story?’ Then we would start a discussion. It’s important not to instruct, but to choose a story good enough to make them think.”
Waxler added, “I’ve used this in a regular college classroom. Why does Miss Moore have to put it right in their faces – that they are poor? We are left with questions. Unlike other disciplines, literature doesn’t work for solutions.”
Ron Corbett asked, "Is it important that the characters have some characteristics that students have?" Trounstine answered, "I always pick things I think students will relate to. We used Their Eyes Were Watching God by Zora Neale Hurston. Once they come to class, they see the book differently."
By Brittany Allcorn
When I was younger, all of the children from my neighborhood would normally gather together to play outside, but on one particular day my friends were too busy (having more fun than me!) to come outside. As I was sitting outside on the sidewalk, playing with a twig or some other earthly thing I had made into a "toy," a woman came up to me and asked me if I liked to read. At that time, I really hadn't thought about whether or not I liked to read, but I hesitantly said yes anyway. This kind woman, someone whom I had never met before, asked me if I would like to borrow a book. Of course I said yes. I was so bored that I would accept anything to get me away from the boredom of the sweltering, friendless day. The book she lent me was The Lion, the Witch and the Wardrobe.
It was fantastic! After immersing myself in the world of the endearing characters of the novel, I could answer the question "Do you like to read" with an affirmative yes. Through this book I learned that I didn't have to be sitting on the sidewalk on a sweltering day playing with a stick; I could be sitting in a horse-drawn carriage on a bone-chilling day with the White Witch. I learned that I could experience whole other worlds and whole other lives. I could be anyone and anywhere I wanted to be. Not only did reading offer me new experiences, but it also offered me new friends. For me, characters aren't just words on a page. They are real people with desires and emotions. They are people with whom I can sympathize and to whom I can feel connected.
Having taken a course on literacy in the classroom with Maureen Hall, I have learned that the experiences I had while reading are deeply embedded in the deep reading process. Deep reading involves readers making a connection to the text in both an imaginative and an emotional way. Readers who go beyond the literal meaning of literature and "map" their experiences onto the text are experiencing deep reading.
Robert Waxler and Maureen Hall in their book, Transforming Literacy: Changing Lives through Reading and Writing, explain that, “literature, filled with ambiguity, always opens itself to the reader, calls to the reader, encouraging and demanding that the reader participate in the making of its ongoing meaning.” Narratives have a literal meaning that all readers can understand, but they can also be manipulated by individual readers who develop their own meaning and interpretation of a text based on their own experiences. The meaning readers develop from a text is important because it leads to a better understanding of the self.
I also learned several great ways teachers can incorporate the deep reading process in their classrooms. My personal favorite technique is provoking discussions through questions. These questions should be open-ended, with no right or wrong answers, because these are the types of questions that really get students thinking. Questioning not only guides readers to meaning making, but it can also allow students to make more connections between the characters and their own lives. Questioning is a great practice because it can be done at any stage of the reading process and can lead to better understanding, development, and epiphany of the self.
As Waxler and Hall explain, questioning, or the act of conversation, can spark the desire for, “students [to] wrestle with the story… [and] struggle to make meaning out of their personal and collective experience.” Discussing the text develops a community of learners who are able to learn and grow with one another by sharing their ideas. Understanding the self leads to a deeper understanding of the interconnectedness humans share and the vulnerability every individual has. This recognition allows for deeper discussions and thus a deeper understanding of a text and of the self.
Another important way for a teacher to accurately implement the deep reading process in his or her classroom is by experiencing deep reading firsthand. In order to better understand the experiences of deep reading, teachers should also have this experience. If a teacher has never experienced the process of deep reading he or she will not understand what his or her students are going through and will not know how to encourage students to participate in the process of meaning making that develops from deep reading. When appropriate, teachers can share some of their experiences with their students in order to make a stronger community of learners who feel comfortable enough to discuss their ideas and feelings about the text with not only their peers, but also with the teacher.
Through gaining knowledge of the deep reading process I have learned that my experience as a child reading C. S. Lewis's work really had an impact on me and that I can share this experience with my own future students. By taking the journey along with the characters in The Lion, the Witch and the Wardrobe and by participating in the experience of deep reading as a child, I learned that I disagreed with the White Witch's methods of power and with Edmund's original alliance with the witch and betrayal of his brothers and sisters, but more importantly I learned about the power of family and the strength of love and kindness. I was able to make connections between the events in the book and my own life. Without the experience of deep reading I wouldn't have been able to make these connections and learn from the story. Deep reading has a powerful impact on the individual reader. It can start at any age and can blossom into a better understanding of not only the self, but also of all humanity.
By Tara Knoll
Alternative sentencing works to treat offenders as individuals. A crucial part of this function is the acknowledgment that female offenders are different in significant ways from their male counterparts. By observing both the predominantly male New Bedford CLTL program and the all-female Lynn-Lowell CLTL program, I found that, rather than equate male and female offenders, CLTL calls upon their different pathways to crime and gender-specific needs to structure programs that address the different issues they face. That isn't to say that CLTL's division of gender in these programs makes essentialist claims, but rather that CLTL acknowledges that, realistically, male and female offenders often face different issues in prison and in reentry in terms of life circumstances and risk factors.
A recent report on programming needs for women offenders by the National Institute of Justice emphasizes, “Women offenders have needs different from those of men, stemming in part from their disproportionate victimization from sexual or physical abuse and their responsibility for children.” Further, according to the Center for Effective Public Policy’s 2010 “Reentry Considerations for Women,” it is critical to consider that “women [offenders] have different communication styles than men….” Robert Waxler and Jean Trounstine prefer to have all-male and all-female participants, respectively, in the classroom. While there are several co-ed CLTL groups, Waxler explained to me, “There are single gender groups primarily. For me, I find that it’s helpful to have an all male group in terms of picking which stories to use and anticipating lines of inquiry.”
When I observed the New Bedford session in which the participants discussed Russell Banks' Affliction, I found that the session reflected the New Bedford program's focus on male-centered issues. Affliction is in many ways a text about what it means to be male. Its time frame is deer-hunting season, "an ancient male rite," its male characters are rugged and violent, and most importantly, the protagonist, Wade, is "very good at being male in this world." During the discussion, the participants considered what it means to be a father and a man, a discussion prompted in part by the protagonist's own earnest question, "What do men do?" As the narrator reaches the conclusion of the story, he reflects, "[O]ur stories, Wade's and mine, describe the lives of boys and men for thousands of years, boys who were beaten by their fathers, whose capacity for love and trust was crippled almost at birth…." The participants identified with the narrative in part because the narrator solicited their identification, encouraging them to realize that the issues they face—a troubled relationship with violence or a broken childhood—have been faced by men just like them throughout history. The narrator's "I" becomes "we" as he observes the difficult nature of "how we absent ourselves from the tradition of male violence."
But in order for the narrator’s “we” to include “boys and men for thousands of years,” he must exclude the girls and women who have been afflicted by the same violence, whose “capacity for love and trust” was also “crippled almost at birth.” In a male-dominated criminal justice system, female offenders are often overlooked in studies and in policy development. Although women represent a smaller proportion of the offending population than men, the increase in the female offender population in prisons is alarming. According to a report commissioned by the Institute on Women and Criminal Justice, female imprisonment increased 757 percent between 1977 and 2004, surpassing that of men in all 50 states. One-fifth of men are incarcerated for drug offenses, compared to one-third of women. According to the National Institute of Corrections, incarcerated women, on average, are survivors of physical and/or sexual abuse, have multiple physical and mental health problems, and are the single parents of children, accounting for nearly 250,000 children whose mothers are in prison.
“When people talk about offenders and even about this program, so many times it’s not discussed in terms of women,” Trounstine told me. “People imagine it’s more serious if we discuss these things in terms of men. Men are the ‘real’ offenders, the ‘serious’ ones. If a [female offender] is reading a book it’s just a way to appease her.” According to Trounstine, incorporating men into the all-female Lynn-Lowell sessions would certainly add diverse perspectives, but the women feel most comfortable communicating in a room of only women. That is, apart from Judge Dever, whose presence in the room seems to help the women gain confidence. I observed the sixth and penultimate session of the Lynn-Lowell program, and it was clear that the classroom at Middlesex Community College was a comfortable space. The women conversed playfully with Trounstine, whom they called Jean, and Judge Dever, or “Judge D.” “I want them to feel relaxed and safe in this room,” Trounstine explained to me. “At the same time, I want them to see me as a role model.”
During the session, the participants discussed Zora Neale Hurston’s Their Eyes Were Watching God, a mainstay of Trounstine’s syllabus. Trounstine launched the discussion with a question many participants had asked or hinted at during “the Go-Round,” a pre-discussion exercise in which each participant expresses her thoughts on a provided question: does Tea Cake, a young and charming suitor, really love the novel’s protagonist, Janie? The participants used this question as a lens to discuss romance, courtship, hardship, and physical abuse. One participant introduced a significant complication to Tea Cake’s alleged love for Janie. “If he loves her, why does he beat her?” she asked. Trounstine later explained to me, “The male-female relationship is huge for them, and they often talk about the men in their lives through the characters.” According to the Center for Effective Public Policy, “the criminal experiences of women are often best understood in the context of unhealthy relationships (e.g., a male partner who encourages substance abuse or prostitution).” The participants turned to a passage in which Tea Cake beats Janie and engaged in a difficult discussion that exposed the complexity of the relation between love and abuse. They struggled to reconcile Tea Cake’s charming behavior with his violence.
After the discussion had progressed, one participant expressed a striking claim: "Janie moves from object to subject of her own life." Her transition is a remarkable one. The participants recognized that a love story can be a tool to talk about something else; at the same time, however, they explored the human qualities of this narrative device. One participant pointed out, "Janie shapes herself around Tea Cake." By discussing Janie's voice and her silence during her three marriages, the participants investigated what it means to shape oneself around a significant other, or a family member, or a source of addiction.
Toward the end of the discussion, one participant asserted, "I admired Janie in the end. For not settling, and standing up and being a strong person." The desire for a transition from object to subject that the participants observed in the texts reflects the participants' own positions in reentering society after their experience in prison or with the criminal justice system in general. Like Janie, CLTL participants struggle to be perceived as a person instead of as a project to be shaped or a failure to be scorned. Deep engagement with and discussion of literature provides a forum for former offenders to express and question this struggle. In CLTL, the text selection and discussion are structured in a way that recognizes the similarities inherent in offenders' experiences with the criminal justice system while simultaneously acknowledging that gender matters.
Tara Knoll, a student at Princeton, is a regular contributor to the Changing Lives Through Literature blog and is currently working on her senior thesis.
by Sarah Fudin, c/o the University of Southern California
In celebration of National March Into Literacy Month, the MAT@USC has created a fun and informative infographic entitled, “The Most Loved Children’s Books”. In it, they have recounted their favorite books as a way to celebrate children’s literature throughout the years.
Changing Lives Through Literature is committed to promoting access to literature, not just for children but for the world at large. We live in a society where social and economic inequality has become a norm. This disparity has unfortunately burdened society with a vicious cycle of crime and incarceration that only we have the power to break. CLTL seeks to end this cycle by taking whatever means necessary to ensure that individuals in the prison system are making productive use of their time through literacy development. However, our efforts do not end there. Preventive measures must also be pursued by taking the time to effectively communicate the power of reading to the youth of the nation.
Books can unlock a wealth of opportunities for individuals, and gaining access to literature at an early age allows us to tap into our youths’ potential as they grow up. Join us as we celebrate the value of books and how they contribute to changing lives!
Sarah Fudin works for the University of Southern California's Rossier School of Education. For more information on becoming a teacher through USC Rossier Online, visit Become a Teacher.
by: Tara Knoll
When I participated in the last session of Professor Waxler’s fall-cycle ’11 Changing Lives Through Literature program, held at the University of Massachusetts, Dartmouth campus, the participants discussed Russell Banks’ Affliction before their graduation from the program.
Though I had observed Waxler's program before, I was still surprised by the participants' emphatic reactions to the text. One participant pointed to a moment in which a child suffers from his parent's mistakes, explaining that the scene "burns a hole in me." Another expressed his frustration with one of the characters: "he had all the time in the world to help his brother, but he didn't." I was equally surprised to find my own reactions to the text both reinforced and called into question by the discussion, as various participants introduced perspectives and questions that hadn't occurred to me in my reading of the novel.
Affliction is a story about storytelling. Ostensibly the story of Wade Whitehouse, a troubled, middle-aged, part-time cop in a small New Hampshire town who struggles with alcoholism, depression, and the loss of his family, Affliction is told from the perspective of Wade's younger brother, Rolfe. Rolfe has long since escaped the small town of Lawford and his dysfunctional family, while Wade remains. Rolfe tells the story of how Wade tries to reassemble his life one November; "all he really wanted" was "to be a good father," and to be a "good man."
Despite his good intentions, however, Wade gets caught up in an obsessive search for the truth regarding a hunting accident; a search that propels his internal and external dissolution and ends with his committing several tragic crimes and, subsequently, disappearing. Banks subverts our expectations when the pseudo-detective figure degenerates into the criminal whom he seeks. But this story isn’t all about Wade, and Rolfe concedes: “Oh, I know that in telling Wade’s story here I am telling my own as well….”
Though I found myself identifying with Rolfe as the narratorial voice and the character more removed from the action, many participants strongly identified with Wade and disliked Rolfe. "Wade's just trying to be a good parent. I can feel for what that's like," one participant explained. Another elaborated, "Wade is a better person than Rolfe. At least he stayed to battle his problems, at least he tried to make things better." Others admired Wade, in a sense, as "from the beginning he never wants to be part of someone else's story," while still others "feel bad for him" because "he wants so bad to be a good dad but doesn't know how because of how he's raised." My initial view of Rolfe as a character was intensely complicated by our discussion. Did he break free from, or abandon, his family? Was his leaving courageous, or cowardly? Now I'm not so sure.
The identification with, and sympathy for, Wade that marked the discussion provoked questions of inevitability. Professor Waxler asked, “Is Wade destined for a kind of victimage?” One participant pointed out that his impulsiveness predisposed him to certain problems: “Wade doesn’t think, he just reacts.”
What is the root of Wade’s dilemma? As Waxler observed, our discussion mimicked Wade’s obsession: the more we think about it, the more difficult it is to grasp the truth. Since the book is written from Rolfe’s hindsight perspective, it was tempting for me to view Wade’s downfall as unavoidable. Waxler’s prompts and the participants’ observations both took this issue up and caused me to question it.
An inescapable extension of the question of the origin of Wade's unraveling connects Wade with his violent and alcoholic father. Several participants emphasized that Wade "is trying to be a good son to his father" even after he has become an adult, yet he is, frighteningly, "becoming like his father." In a powerful scene, Wade erupts in anger against his daughter. When the heat subsides and his daughter has left, Wade notices his father standing alongside the decrepit house—observing, grinning. Rolfe imagines his father's thoughts: "the son finally had turned out to be a man just like the father."
Waxler notes that this scene illustrates a disconcerting reformulation, even distortion, of the parental blessing. Is Banks suggesting that at some level a child still wants the acknowledgment and blessing of the parent, even if at the same time he hates everything that the parent represents? The question is a troubling one. While I found myself reflecting on my role as a daughter, many participants connected their roles as son or daughter with their roles as a parent.
Judge Kane argued against this suggestion of inevitability vis-à-vis generational inheritance. "A blood tie can be broken," the judge asserted. "You can find your own place, be your own person—it wasn't inevitable." As we all struggled to pinpoint what caused Wade's life to fall apart and whether or not his ultimate downfall could have been avoided, many participants called attention to the way language works in the novel to achieve this sense of spiraling out of control in tandem with Wade.
There is a pointed madness in Banks’ deliberateness, and several participants wondered if Wade was really losing his mind. I hadn’t considered this interpretation before, and the hints of it in the text underscore the richness of Banks’ language. One participant highlighted the effectiveness of the language: “I felt crazy after reading it.” Several others felt uneasy with “how long it took [Banks] to get to the point.” The lengthy paragraphs of description and the incisively illustrative quality of Banks’ writing frustrated some, pleased others, and seemed to engender in all of us a building sense that “things are going to be okay…but then something collapses.” Banks’ postmodern exploration of time and chaotic transitions between geographical and temporal locales added to the sense of confusion, inviting us to identify with Wade. “It can be difficult to follow,” one participant noted. “You’re always jumping to someplace else.”
At the heart of this jumping is Rolfe's control as narrator. Throughout the discussion, each one of us either reflected on or voiced aloud the recurrent question—how do we really know any of this? Slightly defensive and earnest in his explanation as to his seeming omniscience, Rolfe responds to the hypothetical question of the reader by asserting, "I do not, in the conventional sense, know many of these things. I am not making them up, however. I am imagining them." Rolfe establishes a new dimension of storytelling, yet his narration implicitly wrests control away from Wade.
“Wade is pushing through and trying to make his own story, but he doesn’t have the power, and no one will let him,” one participant contended. I stopped writing as this observation was made. I was struck by how it really gets to the heart of the struggle faced by each participant who is working their way through the court system and trying to reassert and reformulate their place outside of their docket number designation or charges faced. As we all explored our relationship as readers to the story, it became clear that Rolfe used the narration of another’s story as a means to better understand his own.
Narration's ability to channel self-awareness extends past the realm of writing, in Rolfe's case, and applies to readers, too. Just as Rolfe narrates Wade's story in order to comprehend his own, so too did everyone present at the Changing Lives discussion appropriate the stories in the novel as a means to approach broader issues of society and all of our roles within it. Whether the participants identified with aspects of the story—the difficulties of the court system, of being a parent, of starting over—or recognized their distance from it, the power of narrative was universal. As one participant observed, "You can be rich, have a good life, whatever, and one thing could make it spiral out of control." Both the novel and the discussion produced a humanizing, universalizing effect.
While some saw the conclusion of Affliction as hopeful, perhaps an indication of Rolfe’s ability to “exorcise” Wade’s story (which is his own “ghost life”) and move on, the language itself is unclear. Rolfe tells us that, unless Wade is caught, “The story will be over. Except that I continue.” We don’t know whether Rolfe continues in a brave sense, reasserting his agency and control over his own life, or in a despondently cyclical sense, obsessing over Wade’s past and his own culpability. In taking up the story of Affliction, we as readers ascribe our own meaning to Rolfe’s ambiguous declaration. In so doing, we realize Wade’s unfulfilled desire to assert, as stated by one participant, “It’s my story. I’ve got control.” | http://cltlblog.wordpress.com/category/teaching/ | 13 |
31 | A balanced budget occurs when an entity's spending matches its revenue during a period of time. An excess of revenue over spending is referred to as a budget surplus, and an excess of spending over revenue is referred to as a budget deficit. On average, over time, healthy governments run balanced budgets. Persistent government deficits lead to a crisis of confidence in the ability of a government to pay back its debt.
Indeed, membership in the European Union requires countries to limit their budget deficits to 3 per cent of GDP. Unfortunately, in 2009 these limits were exceeded by Greece, Portugal, Ireland, Spain, Italy and the UK.
Persistent government deficits cause government debt to accumulate, producing a high and rising ratio of government debt to GDP (referred to as a country's debt-to-GDP ratio). From 1980 until 2010 the debt-to-GDP ratio in the US rose from 26.1% to 62.2%, without any clear need for spending to exceed revenue during this entire time period.
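The arithmetic behind these definitions is simple enough to sketch in a few lines of Python. The starting and ending debt-to-GDP figures below are the US numbers quoted above; the assumed annual deficit and the flat GDP are purely illustrative, chosen so the toy trajectory lands near the 2010 ratio.

# Sketch of the budget identities described above; sample figures are illustrative.

def budget_balance(revenue: float, spending: float) -> float:
    """Positive means a surplus, negative means a deficit."""
    return revenue - spending

def debt_to_gdp(debt: float, gdp: float) -> float:
    """Debt-to-GDP ratio, expressed as a percentage."""
    return 100.0 * debt / gdp

# Each year's deficit adds to the stock of debt.
gdp = 100.0                 # hold GDP flat for simplicity
debt = 26.1                 # start at the 1980 US ratio of 26.1%
for year in range(1980, 2011):
    # assumed average annual deficit of 1.2 (spending 21.2 vs revenue 20.0)
    debt -= budget_balance(revenue=20.0, spending=21.2)
print(round(debt_to_gdp(debt, gdp), 1))  # ~63.3, close to the 62.2% of 2010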
Governments have sometimes run budget deficits to avoid high tax rates in the face of uncertain spending and revenue. Typically, a government will run a budget deficit during periods in which spending is unusually high relative to revenue, such as times of war or recession.
For instance, to finance World War II, the US ran a large budget deficit so that its debt as a fraction of GDP rose from 44.2% in 1940 to 108.6% in 1946. This debt-to-GDP ratio fell to 26.1% by 1980. In the UK, public debt rose to 238% of GDP in 1947 and fell to 42% by 1980. | http://lexicon.ft.com/Term?term=budget-balance | 13 |
15 | The Greenland ice sheet covers roughly 85% of the land surface of the island and rises to an average height of 2.3 km (1.4 miles). The immense weight of the ice sheet has pushed the center of the island roughly 300 meters (1000 ft) below sea level. The icy expanse of Greenland, like the rest of the Arctic, not only represents an important climatological indicator, it also is critical to future global climate. Were all of Greenland's ice to melt, global sea level would rise 7 meters (23 feet). Greenland's ice sheet is slowly melting due to warming temperatures, and there is great concern that this melting will accelerate and contribute to sea level rise of several feet later this century.
Apart from its potential effect on global sea level rise, Greenland has important regulatory effects on world climate, through its impact on ocean circulation, regional atmospheric circulation, and global heat transfer. The oceanic conveyor belt, or Meridional Overturning Circulation (MOC), is driven by differences in density between salty and fresh water. Greenland's glacial melting dumps fresh water into the ocean, thereby affecting the balance of fresh to salt water and the MOC as a whole. In addition to its effects on the ocean conveyor belt, Greenland also regulates atmospheric temperatures, which affect not only the climate of the island itself, but much of the region as well. Greenland's icy interior functions topographically much like mountain ranges do on land, cooling warm air as it rises to pass over the ice. Changes to the icy ranges of Greenland will, therefore, affect climatic processes in the region.
Warming temperatures in Greenland and the Arctic will also affect the global climate. The greater the difference in temperature between two places, the faster the heat moves from the warmer to the colder region. As the polar regions become warmer, the temperature difference between the equator and the poles shrinks, making equatorial heat move much more slowly to the poles. However, the flow of heat through the atmosphere from the equator to the poles is what powers global atmospheric circulation. If that flow changes, the path of the jet stream will also be altered, resulting in new storm tracks and precipitation patterns. While some regions will benefit from more favorable rains, others will experience increased drought and water availability problems.
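The proportionality described here can be made concrete with a toy linear flux model in Python, where poleward heat flow scales with the equator-to-pole temperature difference; the coefficient and the sample temperatures are arbitrary, purely for illustration.

# Toy model: poleward heat flux proportional to the equator-pole
# temperature difference. The coefficient k is arbitrary.
k = 2.0

def poleward_flux(t_equator: float, t_pole: float) -> float:
    return k * (t_equator - t_pole)

print(poleward_flux(30.0, -20.0))  # 100.0 with today's larger gradient
print(poleward_flux(30.0, -12.0))  # 84.0: warm the pole 8 degrees, flux falls 16%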
Scientists were uncertain until recent years if there had been an increase or decrease in the total amount of water stored in the Greenland ice sheet during the 1990s and early 2000s. However, almost all studies now agree that Greenland is losing mass due to melting, the calving action of glaciers, and sublimation of ice into water vapor. Warmer temperatures in the region have brought increased precipitation to Greenland, and part of the lost mass has been offset by increased snowfall. Determining whether Greenland is melting or gaining mass is difficult, since there are a small number of weather stations on the island. Satellite data can examine the entire island, but has only been available since the early 1990s, making determination of long term trends difficult. According to the 2007 IPCC report (see Figure 4.18), Greenland may have gained as much as 25 gigatons of ice per year between 1961 and 2003, or lost as much as 60 gigatons per year during that period. Between 1993 and 2003, Greenland lost between 50 and 100 gigatons of ice per year, and had even higher rates of loss between 2003 and 2005.
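A rule of thumb helps put such gigaton figures in perspective: the world ocean covers about 3.61 x 10^14 square meters and a gigaton of meltwater occupies about 10^9 cubic meters, so roughly 360 gigatons of lost land ice raise global sea level by one millimeter. A minimal Python sketch, assuming only those two round numbers:

# Rough conversion from land-ice loss to global sea-level rise.
OCEAN_AREA_M2 = 3.61e14   # approximate ocean surface area
M3_PER_GT = 1.0e9         # volume of one gigaton of meltwater

def gigatons_to_mm_per_year(gt_per_year: float) -> float:
    """Millimeters of global sea-level rise per year for a given
    rate of land-ice loss in gigatons per year."""
    return 1000.0 * gt_per_year * M3_PER_GT / OCEAN_AREA_M2

for rate in (50, 100):   # the 1993-2003 loss estimates cited above
    print(rate, "Gt/yr ->", round(gigatons_to_mm_per_year(rate), 2), "mm/yr")
# 50 Gt/yr -> 0.14 mm/yr; 100 Gt/yr -> 0.28 mm/yr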
Despite uncertainty about the mass balance of the Greenland ice sheet, the interior of Greenland is gaining mass (Figure 1). Scientists deduced this by analyzing data collected by European Space Agency satellites that use laser-based altimeters to measure the height of the ice sheet. The sheet has gained an average of 5 cm (2 inches) of snow per year since 1992.1 This is not a sign that the Greenland ice sheet is healthy, though. The extra precipitation falling on Greenland is a result of warming in the region, since there is greater evaporation of water from the oceans and more water vapor present in the atmosphere at warmer temperatures.
In addition to increasing precipitation in the interior, higher temperatures have extended the summer melt season in many places in Greenland. The coastal zones are experiencing the largest increase. In 2005, these areas had up to 20 extra days of melt each year.2 In 2007, this increased to 25-30 melt days above average. Additionally, the trend extended further inland, affecting a larger area of the southern part of the island than in the past (Figure 2). The amount of snow melt at elevations above 2000 meters set a record in 2007.
|Figure 2. This image shows the difference between the number of melt days in 2007 and the average number of melt days during the period 1988-2006. Image credit: NASA/Earth Observatory.|
The ice sheet is losing volume from its edges. Glaciers appear to be losing ice at a much faster rate than predicted, and the sides of the ice sheet are losing mass much more quickly than it can accumulate in the interior. This loss is partially due to the increase in runoff from melting caused by warm-weather events, a process called dynamic thinning. Dynamic thinning does not affect all glaciers equally—glaciers lying on top of bedrock that smoothly slopes toward the sea are most strongly affected. Scientists at NASA are currently mapping the topography beneath key glaciers to aid efforts to predict which glaciers are located on slopes conducive to dynamic thinning.3
Dynamic thinning is, in a way, a positive feedback loop. When it gets warm enough, the surface snow and ice begin to thaw. The melt water either pools or flows in rivers along the surface, or begins flowing under the snow that covers the ice of the sheet. In the process, it flows into small cracks, enlarging them as it moves towards the bottom of the ice sheet. The amount of melt water traveling through these fissures varies greatly. Waleed Abdalati, head of NASA Goddard Space Flight Center's Cryospheric Sciences Branch, mentioned that "for the first few weeks, the melt water sounds like a peaceful stream. Soon it takes on the menacing roar of a rushing river."4
As surface melt increases, it collects into rivers that carry it to turquoise blue pools or plunge it into crevasses or ice tunnels called moulins or glacier mills. Moulins, like the one in Figure 3, can extend downwards hundreds of meters, reaching the base of the glacier, or the water can flow within the glacier. Wherever the water ends up, moulins can affect both the melting rate and the velocity of a glacier. The streams bring surface heat in the form of water down through the glacier to the bottom of the ice sheet. Once the water reaches the bottom of the glacier, it acts as lubrication for the glacier, which then gains speed as it flows downhill towards the sea. Thus, a little melting can have a large effect.
A multinational study that used data collected by a NASA altimeter in its model showed that the combination of dynamic thinning and increased melting from warm weather caused a 35% increase in the annual rate of ice loss in 2003, compared to the period 1993 to 1999.5 Luthcke et al. (2006) reported that the acceleration in the rate at which a number of Greenland's largest glaciers are sliding toward the sea was greater than expected. At the same time, the terminus of the glaciers actually appeared to be retreating due to the rapid melt rates affecting the leading edge.6 In 2007, which also saw record low sea ice extent in the Arctic, an unprecedented rate of glacial discharge occurred, and melting at the top of the ice sheet approached 150% of its average rate, also a record.7 These are much higher rates than predicted in the most recent IPCC report, which was based on data from 2005.
Greenland's Kangerdlugssuaq, one of the world's fastest moving glaciers, shows evidence of this trend. About 4% of Greenland's ice flows through Kangerdlugssuaq into the Atlantic. In 2001, the glacier advanced toward the sea at roughly 5 km (3 miles) per year. In 2005, however, it had quickened its pace to 14 km (9 miles) per year, and the forward edge had retreated by 10 km (6 miles), its largest retreat on record.8 High melt rates also affected lower altitude areas in 2007, up 30% from average. Although this isn't record breaking, it is the fifth highest melt rate on record, according to a 2007 NASA study.9 What is causing all of the recent melting? Well, a big part of the melting is caused by increased sea surface temperatures (SSTs) in the region. Between 1990 and 2011, SSTs warmed by 1-2°C (1.8-3.6°F) over an extensive region of the waters surrounding Southern Greenland (Figure 4). Much of this increase in SSTs is due to the drastic decline of Arctic sea ice in recent years. Regions near the coast of Greenland that used to see wintertime sea ice are now ice free year-round. With Arctic sea ice expected to continue to decline and possibly disappear in summer later this century, expect SSTs to continue to increase around Greenland.
The Greenland ice sheet has experienced conditions as warm as those today in the past. Chylek et al. (2007)10 found that the area experiencing melting in west Greenland was even greater in the 1930s and 1940s than during the 2000s. Bjork et al. (2012) found that glaciers that terminated on land in Greenland underwent a more rapid retreat in the 1930s than in the 2000s, whereas marine-terminating glaciers and ice-sheet glaciers retreated more rapidly during the warming of the 2000s. Lowell et al. (2007)11 found organic remains in eastern Greenland that had just been exposed by melting ice, and dated these remains to between A.D. 800 and 1014. Thus, this portion of Greenland was ice-free about 1000 years ago, and temperatures were presumably similar to today's. Erik the Red took advantage of this warm period to establish the first Norse settlements in Greenland around 950 A.D. However, the climate cooled after 1200 A.D., and the Norse settlements disappeared by 1550.
Greenlanders Experience the Visible Effects of a Changing Climate
The population of Greenland is trying to deal with the effects of a warming planet. A changing climate has many benefits in the eyes of some Greenlanders. For instance, many new fisheries are open now that ocean temperatures are rising. Less ice means safer seas and also allows boats to fish closer to their own shores. In addition, changes in ocean temperature have expanded the ranges of many fish north to the waters around Greenland. Many fishermen are eagerly anticipating huge hauls of cod, which returned to Greenland's waters in 2006. Additionally, halibut, a stock traditionally found in the area, are increasing in size and, therefore, commercial value.
The warming has also made the climate in parts of Greenland similar to that of Northern Europe. Farming on the island, which was previously limited, is now increasing due to a longer growing season and opening of new land to cultivation. Potato and radish crops had bumper years even as far north as Nuuk, only 185 miles from the Arctic Circle. The 2007 growing season marks the first time that Greenlanders have been able to grow broccoli. Carrots and cauliflower are also available from local farms, although all three crops still can't be produced in large enough quantities to feed the population without help from sources in Denmark. A growing number of sheep and cows have appeared, also taking advantage of a longer summer season.
Greenland is experiencing increasing attention for its recently accessible mineral resources, such as gold, oil, diamonds, and gas. There is so much interest in Greenland's mineral resources from abroad that Greenland might be able to develop a sustainable economy, leading to full sovereignty in the near future.
While warmer temperatures have allowed for increased opportunities in fishing and agriculture, there are negative impacts as well. For those who depend on the ice for transportation and platforms for subsistence hunting and fishing, warmer weather is not always welcome. Longer summers mean that there is a shorter amount of time when sea ice is stable enough to allow passage by sled dog or snowmobile. Many communities in Greenland, especially the Inuit communities of the North, have no roads connecting them to the rest of the island. They are dependent upon being able to travel over sea ice. Longer melt seasons and warmer temperatures make the ice precarious for most travelers. A bigger fear is that increased access for boats allowed by melting sea ice will overload fisheries that are not able to handle the stress of heavier harvesting.12
The sheer volume of water created by the melting of the Greenland ice sheet would cause global sea level to rise 7 meters (23 ft) in total.13 During the warm period before the most recent ice age, 120,000 years ago, roughly half of the Greenland ice sheet melted. This melting, plus the melting of other smaller Arctic ice fields, is thought to have caused 2.2-3.4 meters of the 4-6 meter sea level rise observed during that period.14 Temperatures in Greenland are predicted to rise 3°C by 2100, to levels similar to those present during that warm period 120,000 years ago. At those temperatures, a chain of positive feedbacks would lead to the inevitable melting of the ice sheet. As the top of the ice sheet melts, it exposes the underlying layers and lowers the elevation of the ice surface, exposing it to even warmer air, since temperatures are cold at high elevations and warmer at low elevations. A 4-6 meter rise in global sea level similar to that observed 120,000 years ago would probably result. However, the 2007 IPCC report expects melting of the Greenland ice sheet to occur over about a 1,000 year period, delaying much of the expected sea level rise for many centuries. This means that gradually, over centuries, cities such as London, New York, Shanghai, Boston, and Los Angeles will flood, Florida will be mostly underwater, and countries such as Bangladesh and the Maldives will disappear under the sea. Currently, over one-third of the global population lives in or near a coastal zone. Rising sea levels will dislocate many of them. Additionally, coastal zones are sites of incredible economic and agricultural activity, which would also be negatively affected by higher sea levels. The global scale of these impacts would be "staggering".15 Higher sea levels will also cause increased erosion, salt water intrusion, and storm surge damage in coastal areas, in addition to a loss of barrier formations such as islands, sand bars, and reefs that would normally protect coastal zones from battering by waves and wind.
Greenland's ice isn't going to be melting completely and catastrophically flooding low-lying areas of the earth in the next few decades. Between 1993 and 2012, sea level rose at about 3.1 mm per year (1.2 inches per decade). However, the risk later this century needs to be taken seriously. Because of their complexity, many of the processes governing ice melt and formation (especially sea ice) are not incorporated fully into the models that we currently have. This means that agreed-upon estimates of sea-level rise could be too low. In the latest IPCC document released in November 2007, the group acknowledges that their estimated range of sea level rise by 2100 of 0.18-0.59 meters (0.6-1.9 feet) does not "provide an upper bound for sea level rise," and that uncertainties in changes in ice sheet flow could lead to higher sea level rises.16 The results of coral fossil studies presented by Rohling et al. (2007)13 showed that sea levels rose 1.6 meters (5.3 feet) per century 120,000 years ago, and the climate of that period may be similar to what will be observed in 2100. Greenland's contribution to global sea level rise doubled in the five years ending in 2007 (Stearns and Hamilton, 2007)17, and was responsible for approximately 10-15% of the global annual sea level rise in 2007 (IPCC, 2007, Figure 4.18). By 2012, this percentage had risen to 20-25%, and melting ice from Greenland was thought to cause about 0.7 mm/year of global sea level rise, and was on pace to double in ten years (Sasgen et al., 2012). A study of glacier speeds between 2000 and 2010 (Moon et al., 2012) determined that Greenland's contribution to sea level rise would likely be less than 9.3 cm by 2100, based on the glacier speeds measured in 2000-2010. They stated: "Earlier research (Pfeffer et al., 2008) used a kinematic approach to estimate upper bounds of 0.8 to 2.0 m for 21st-century sea level rise. In Greenland, this work assumed ice-sheet-wide doubling of glacier speeds (low-end scenario) or an order of magnitude increase in speeds (high-end scenario) from 2000 to 2010. Our wide sampling of actual 2000 to 2010 changes shows that glacier acceleration across the ice sheet remains far below these estimates, suggesting that sea level rise associated with Greenland glacier dynamics remains well below the low-end scenario (9.3 cm by 2100) at present. Continued acceleration, however, may cause sea level rise to approach the low-end limit by this century's end. Our sampling of a large population of glaciers, many of which have sustained considerable thinning and retreat, suggests little potential for the type of widespread extreme (i.e., order of magnitude) acceleration represented in the high-end scenario (46.7 cm by 2100). Our result is consistent with findings from recent numerical flow models (Price et al., 2011)."
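A back-of-the-envelope Python sketch shows what is at stake in those numbers; the 0.7 mm/year starting rate and the doubling-per-decade pace come from the studies cited above, while the two scenarios themselves are ours, purely for illustration.

# Toy projection of Greenland's sea-level contribution from 2012 onward.
START_RATE_MM = 0.7   # Greenland's contribution in 2012, mm per year

def cumulative_rise_mm(years: int, doubling_per_decade: bool) -> float:
    total = 0.0
    for t in range(years):
        rate = START_RATE_MM * 2 ** (t / 10) if doubling_per_decade else START_RATE_MM
        total += rate
    return total

print(round(cumulative_rise_mm(88, False), 1))  # ~61.6 mm by 2100 at a flat rate
print(round(cumulative_rise_mm(30, True), 1))   # ~68.3 mm after only 30 years of doubling
# A flat rate stays below Moon et al.'s 9.3 cm low-end scenario; sustained
# doubling overshoots it quickly, which is why continued acceleration matters.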
Edalin Michael and Jeff Masters, Weather Underground
|Figure 4. Change in sea surface temperature (SST) using the NOAA Extended SST data set averaged over the year 1990, compared to 2011. SSTs have warmed by as much as 2°C (3.6° F) along the west coast of Greenland, due to major declines in sea ice. Image credit: NOAA Earth System Research Laboratory.|
Header: Image of the eastern Greenland coast during the 2007 summer thaw. Modified from NASA Earth Observatory.
1 European Space Agency, "ERS altimeter survey shows growth of Greenland Ice Sheet interior," November 4, 2005.
2 Haven, K., "Greenland's Ice Island Alarm: A New Perspective," August 28, 2007.
3 Mentioned in "NASA Satellites Yield Best-ever Antarctic Maps," Science Daily, January 26, 2006. Check out the progress yourself at the NASA and JPL's Glaciers and Ice Sheets In a Changing Climate website (click here for the Greenland specific site).
4 Haven, K., "Greenland's Ice Island Alarm: Introduction," August 28, 2007.
5 Krabill, W. et al., "Greenland Ice Sheet: Increased coastal thinning," Geophysical Research Letters, 31, L2440, December 28, 2004; NASA, "Greenland's Ice Thinning More Rapidly at Edges," December 15, 2004.
6 NASA, "Greenland Ice Sheet on a Downward Slide," October 19, 2006; Luthcke, S. B., H. J. Zwally, W. Abdalati, D. D. Rowlands, R. D. Ray, R. S. Nerem, F. G. Lemoine, J. J. McCarthy, and D. S. Chinn, "Recent Greenland Ice Mass Loss by Drainage System from Satellite Gravity Observations," Science, 314 (5803), November 24, 2006, 1286.
7 Tedesco, M., "A New Record in 2007 for Melting in Greenland," EOS, 88:39, 2007, 383.
8 Rignot, E. and P. Kanagaratnam, "Changes in the Velocity Structure of the Greenland Ice Sheet," Science, 311 (5763), February 17, 2006, 986.
9 Tedesco, M., "A New Record in 2007 for Melting in Greenland," EOS, 88:39, 2007, 383.
10 Chylek, P., M. McCabe, M.K. Dubey, and J. Dozier, 2007, "Remote sensing of Greenland ice sheet using multispectral near-infrared and visible radiances", J. Geophysical Res. 112, D24S20, doi:10.1029/2007JD008742.
11 Lowell, et al., 2007, "Organic Remains from the Istorvet Ice Cap, Liverpool Land, East Greenland: A Record of Late Holocene Climate Change," Eos Trans. AGU, 88(52), Fall Meet. Suppl., Abstract C13A-04.
12 Struck, D., "Icy Island Warms to Climate Change: Greenlanders Exploit Gifts From Nature While Facing New Hardships," Washington Post, June 7, 2007.
13 Intergovernmental Panel on Climate Change, Fourth Assessment Report, Climate Change 2007: Synthesis Report Summary for Policymakers, 8.
14 Otto-Bliesner, et al., "Simulating Arctic Climate Warmthand Icefield Retreat in the Last Interglaciation", Science 24 March 2006:Vol. 311. no. 5768, pp. 1751 - 1753 DOI: 10.1126/science.1120808.
15 Haven, K., "Greenland's Ice Island Alarm: Why Does the Greenland Ice Sheet Matter?," August 28, 2007.
16 Intergovernmental Panel on Climate Change, Fourth Assessment Report, Climate Change 2007: Synthesis Report Summary for Policymakers, 8.
17 Stearns, L. A., and G. S. Hamilton, 2007, New States of Behavior: Current Status of Outlet Glaciers in Southeast Greenland and the Potential for Similar Changes Elsewhere, Eos Trans. AGU, 88(52), Fall Meet. Suppl., Abstract C13A-06.
18 Bjork et al., 2012, An aerial view of 80 years of climate-related glacier fluctuations in southeast Greenland, Nature Geoscience 5, 427-432, doi:10.1038/ngeo1481.
19 Moon et al., 2012, 21st-Century Evolution of Greenland Outlet Glacier Velocities, Science, 4 May 2012: Vol. 336 no. 6081 pp. 576-578, DOI: 10.1126/science.1219985.
20 Sasgen et al., 2012, Timing and origin of recent regional ice-mass loss in Greenland, Earth and Planetary Science Letters, 333-334, 1 June 2012, Pages 293-303.
21 Pfeffer, W.T., J.T. Harper, and S. O'Neel, 2008, "Kinematic Constraints on Glacier Contributions to 21st-Century Sea-Level Rise", Science 321 no. 5894, pp. 1340-1343, 5 September 2008, DOI: 10.1126/science.1159099.
22 Price, S.F., A.J. Payne, I.M. Howat, and B.E. Smith, 2011, Committed sea-level rise for the next century from Greenland ice sheet dynamics during the past decade, Proc. Natl. Acad. Sci. U.S.A. 108, 8978 (2011).
Wikipedia article on moulins
Wikipedia article on Greenland
Greenland Home Rule Homepage
International Glaciological Society
National Snow and Ice Data Center
NASA Earth Observatory
Duval-Smith, A., "Arctic booms as climate change melts polar ice cap," Guardian Unlimited, November 27, 2005.
Johannessen, O. M., K. Khvorostovsky, M. W. Miles, L. P. Bobylev, "Recent Ice-Sheet Growth in the Interior of Greenland," Science, Published Online October 20, 2005. | http://italian.wunderground.com/climate/greenland.asp | 13 |
25 | A healthy weight is what your body naturally weighs when you consistently eat a nutritious diet and balance the calories you eat with the physical activity you do. Weight, however, is only one measure of your health. People who are thin but don't exercise or eat nutritiously aren't necessarily healthy. Likewise, a person who is overweight may be healthy if he or she eats healthfully and exercises regularly.
Obesity results from the excessive accumulation of fat that exceeds the body's skeletal and physical standards. According to the National Institutes of Health (NIH), 97 million Americans (more than one-third of the adult population) are overweight or obese today. An estimated 5 to 10 million of those are considered morbidly obese.
Morbid obesity is a chronic disease, meaning that its symptoms build slowly over an extended period of time. It is typically defined as being 100 lbs. or more over ideal body weight or having a Body Mass Index (BMI) of 40 or higher. According to the NIH Consensus Report, morbid obesity is a serious disease and must be treated as such.
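Body Mass Index itself is just weight divided by height squared (kg/m^2; multiply by 703 when working in pounds and inches). A minimal Python sketch using the standard NIH classification bands; the function names are ours, not from any clinical source.

# BMI = weight (kg) / height (m)^2; cutoffs follow the standard NIH bands.
def bmi(weight_kg: float, height_m: float) -> float:
    return weight_kg / height_m ** 2

def classify(b: float) -> str:
    if b < 18.5:
        return "underweight"
    if b < 25:
        return "normal"
    if b < 30:
        return "overweight"
    if b < 40:
        return "obese"
    return "morbidly obese"   # the BMI-of-40-or-higher threshold cited above

b = bmi(130, 1.75)               # e.g., 130 kg at 1.75 m
print(round(b, 1), classify(b))  # 42.4 morbidly obese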
Obesity becomes "morbid" when it reaches the point of significantly increasing the risk of one or more obesity-related health conditions or serious diseases (also known as co-morbidities) that can result either in significant physical disability or even death.
Causes of Morbid Obesity
The reasons for obesity are multiple and complex. Contrary to conventional wisdom, it is not simply a result of overeating. Studies have demonstrated that dieting and exercise programs have a limited ability to provide effective long-term relief for morbid obesity. Research has shown that in many cases there is a significant underlying cause of morbid obesity, including, but not limited to:
- Genetics. Studies show that your genes play an important role in your tendency to gain excess weight. Adopted children, for example, show no correlation with the body weight of their adoptive parents, but an 80 percent correlation with their genetic parents, whom they have never met. We probably have several genes directly related to weight. Just as some genes determine eye color or height, others affect our appetite, our ability to feel full or satisfied, our metabolism, our fat-storing ability, and even our natural activity levels.
- Environment. Environmental and genetic factors are closely intertwined. If you have a genetic predisposition toward obesity, then the modern American lifestyle and environment may make controlling weight more difficult. Fast food, long days sitting at a desk, and suburban neighborhoods that require cars all magnify hereditary factors such as metabolism and efficient fat storage.
- Metabolism. We used to think of weight gain or loss only as a function of calories ingested and burned, but now we know the equation isn't that simple. Researchers talk about the "set point" theory, a sort of thermostat in the brain that makes people resistant to either weight gain or loss. Try overriding the set point by drastically cutting calories and your brain responds by lowering metabolism and slowing activity. You then gain back any weight you lost.
- Medical Conditions. Medical conditions, such as hypothyroidism, as well as eating disorders, can also cause weight gain. That's why it's important that you work with your doctor to make sure you do not have a condition that should be treated with medication and counseling.
Obesity-Related Health Conditions (Co-morbidities)
Whether alone or in combination, these health conditions are commonly associated with morbid obesity. Your doctor can provide you with a more detailed list:
- Type 2 Diabetes. Obese individuals develop a resistance to insulin, which regulates blood sugar levels. Over time, the resulting high blood sugar can cause serious damage to the body.
- High blood pressure/Heart disease. Excess body weight strains the ability of the heart to function properly. Resulting hypertension (high blood pressure) can cause strokes, as well as inflict significant heart and kidney damage.
- Osteoarthritis of weight-bearing joints. Additional weight placed on joints, particularly knees and hips, results in rapid wear and tear, along with pain caused by inflammation. Similarly, bones and muscles of the back are constantly strained, resulting in disk problems, pain and decreased mobility.
- Sleep apnea/Respiratory problems. Fat deposits in the tongue and neck can cause intermittent obstruction of the air passage. Because the obstruction is increased when sleeping on your back, you may find yourself waking frequently to reposition yourself. The resulting loss of sleep often results in daytime drowsiness and headaches.
- Gastroesophageal reflux/Heartburn. Stomach acid seldom causes any problem when it stays there. When acid escapes into the esophagus through a weak or overloaded valve at the top of the stomach, the result is called gastroesophageal reflux. "Heartburn" and acid indigestion are common symptoms.
- Depression. Seriously overweight people face constant emotional challenges: repeated failure with dieting, disapproval from family and friends, sneers and remarks from strangers. They often experience discrimination at work, cannot fit comfortably in theater seats or ride in a bus or plane.
- Infertility. Morbidly obese women have a diminished ability to get pregnant. Those who do have a higher risk of miscarriage.
- Menstrual irregularities. Morbidly obese individuals often experience disruptions of the menstrual cycle, including interruption of the menstrual cycle, abnormal menstrual flow and increased pain associated with the menstrual cycle.
- Urinary stress incontinence. A large, heavy abdomen and relaxation of the pelvic muscles, often associated with childbirth, may cause the valve on the urinary bladder to be weakened, leading to leakage of urine with coughing, sneezing, or laughing.
Weight Loss Surgery
As the advantages of weight-loss surgery become more apparent, it is being prescribed by more physicians than ever as a viable treatment for patients with morbid obesity. And while there are risks associated with any major surgery, including weight-loss surgery, in many cases, the risks from not having the surgery may be greater. In most cases, weight loss (bariatric) surgery is recommended by your physician when:
- All other attempts at weight loss have failed
- Health conditions (co-morbidities) have created a medical need for surgery
- Surgery seems to be the only option to achieve necessary weight loss
- The patient is physically and mentally stable enough for major surgery
Sarasota Memorial performs two bariatric surgery procedures that are recognized and approved by the American Society for Bariatric Surgery and the National Institutes of Health.
- Roux-en-Y Gastric Bypass
- Adjustable Gastric Band, utilizing the Lap Band® and the Realize Adjustable Band®.
Roux-en-Y Gastric Bypass
Roux-en-Y gastric bypass is the current gold standard procedure for weight loss surgery. It is one of the most frequently performed weight loss procedures in the United States.
In this procedure, stapling creates a small (15 to 20cc) stomach pouch. The remainder of the stomach is not removed, but is completely stapled shut and divided from the stomach pouch. The outlet from this newly formed pouch empties directly into the lower portion of the jejunum, thus bypassing calorie absorption. This is done by dividing the small intestine just beyond the duodenum for the purpose of bringing it up and constructing a connection with the newly formed stomach pouch. The other end is connected into the side of the Roux limb of the intestine creating the "Y" shape that gives the technique its name. The length of either segment of the intestine can be increased to produce lower or higher levels of malabsorption.
Patrick Fitzgerald, MD and John Nora, MD perform the Roux-en-Y procedure at Sarasota Memorial Hospital.
- Higher Average Weight Loss. Patients generally lose more weight after the Roux-en-Y procedure than after other procedures or pure dietary restriction methods. One year after surgery, patients' weight loss can average 77% of excess body weight.
- Maintained weight loss. Studies show that after 10 to 14 years, 50-60% of excess body weight loss has been maintained by most patients.
- Reduced or Resolved Co-morbidities. A study of 500 patients showed that 96% of certain associated health conditions known as co-morbidity factors, including back pain, sleep apnea, high blood pressure, diabetes and depression, were improved or completely resolved.
All of the following deficiencies can be managed through proper diet and vitamin supplements. Poor absorption of iron and calcium can result in a predisposition to iron deficiency anemia. Women already at risk for osteoporosis after menopause should be aware of the potential for heightened bone calcium loss. Some patients can experience metabolic bone disease resulting in bone pain, loss of height, humped back and fractures of the ribs and hip bones.
Other negative side effects of gastric bypass surgery can include chronic anemia due to Vitamin B12 deficiency (manageable with Vitamin B12 pills or injections). A condition known as "dumping syndrome" can occur as the result of rapid emptying of stomach contents into the small intestine (sometimes triggered when too much sugar or large amounts of food are consumed). Not generally considered a serious risk, it can be extremely unpleasant and can include nausea, weakness, sweating, faintness and, on occasion, diarrhea after eating. Some patients are unable to eat any form of sweets after surgery.
Adjustable Gastric Banding
At Sarasota Memorial, the gastric band procedure is available to patients using either the LAP BAND® by Allergan, Inc., or the REALIZE® Personal Banding System from Ethicon Endo-Surgery, Inc.
Both gastric bands serve to reduce stomach capacity and restrict the amount of food that can be consumed at one time. In both cases, this minimally invasive procedure does not require stomach cutting and stapling or gastrointestinal re-routing to bypass normal digestion.
In gastric banding surgery, an implanted medical device, a silicone ring, is placed around the upper part of the stomach and filled with saline on its inner surface. This creates a new, smaller stomach pouch that can hold only a small amount of food, so the food storage area in the stomach is reduced. The band also controls the stoma (stomach outlet) between the new upper pouch and the lower part of the stomach. When the stomach is smaller, you feel full faster, while the food moves more slowly between your upper and lower stomach as it is digested. As a result, you eat less and lose weight.
During this procedure, surgeons usually use laparoscopic techniques to wrap the band around the patient’s stomach. A narrow camera is passed through a port so the surgeon can view the operative site on a nearby video monitor. Like a wristwatch, the band is fastened around the upper stomach to create the new stomach pouch that limits and controls the amount of food you eat. The band is then locked securely in a ring around the stomach.
- Minimally Invasive. Performed laparoscopically with no gastrointestinal rerouting, band surgeries are considered the safest, least invasive, and least painful of all weight-loss surgeries.
- Quick Recovery. Like most minimally invasive surgeries, hospital stays are shorter and patients recover quickly.
- Adjustable. The band’s diameter can be modified to meet the patient’s individual needs. Pregnant patients can expand their band to accommodate a growing fetus, while patients who aren’t experiencing significant weight loss can have their bands tightened. If for any reason the band needs to be removed, the stomach generally returns to its original form.
Risks specific to this surgery include infection, spleen bleeding or injury, gastric perforation (a tear in the stomach wall), and access port leakage. Beyond surgical risks, most patients experienced at least one side effect during recovery. Common side effects include nausea and vomiting, heartburn, abdominal pain, and slippage of the band.
Scott Stevens, MD performs the adjustable gastric banding procedure at Sarasota Memorial Hospital.
16 | The Gulf of Mexico has a leaky bottom. Each year, about 15 million gallons of oil and other hydrocarbons ooze out of fissures in the seabed called "oil seeps." Unlike the gusher unleashed by the failure of the Deepwater Horizon well, the leaks from the seeps are small, distributed widely across the floor of the Gulf and don't harm the underwater ecosystem.
An oceanographer at Florida State University, Ian MacDonald, uses images from special cameras and satellite photos to chart the oil seeps and to estimate how much oil is being discharged. Using satellite data, he was among the first scientists to question BP's estimates of how much oil was escaping from its damaged well.
FSU professor Ian MacDonald is an expert in methane hydrates, which form when methane and water, under pressure, combine to create an ice-like substance. MacDonald says the Gulf of Mexico "is one of the few places where you can actually see hydrates exposed on the seafloor."
[Photo: Ray Stanyard]
MacDonald is also an expert in what happens to the oil and gas that leaks from the seeps, including the formation of substances called methane hydrates.
He explains that methane, the gas that's the main component in the natural gas we use to cook and heat, is produced naturally in the Earth's crust. Typically, methane remains trapped deep underground. In many coastal areas, however, some of the gas leaks out along with the oil from the seeps.
In shallow, warmer waters, the methane rises through the water and dissipates into the atmosphere. At depth, however, cold temperatures and high pressure trap some of the methane in a web of water molecules on or just under the seafloor, creating ice-like substances called methane hydrates. MacDonald says the hydrates look like frozen snow or hard white ice; if more oil is mixed in, the hydrate deposits look yellow to dark brown.
The Gulf, MacDonald says, "is one of the few places where you can actually see hydrates exposed on the seafloor."
Scientists have understood the basic chemistry of methane ice, as hydrates are also called, since the 1800s. But it wasn't until recently that they began to appreciate how much methane hydrate is distributed in coastal ocean sediments and permafrost around the world — and there are very compelling reasons for them to want to learn more about where it forms and how much there is.
"If we can produce gas from the deep ocean and replace coal, then that's a real win for the planet," says Ian MacDonald, an expert in methane hydrates.
Scientists were taking measurements in the Gulf — coincidentally not far from the Deepwater Horizon explosion — in 2009, looking for the flow of methane from deepwater hydrates. [Photo: SRI]
The other side of that coin is that while burning methane emits much less carbon dioxide than burning gasoline or coal, methane itself is a potent greenhouse gas. Some fear that even without any methane-mining, current climate trends — rising temperatures in the permafrost, for example — could release masses of methane into the atmosphere, bringing huge climate changes. Some scientists believe — there is no consensus on the topic — that massive methane releases were responsible for mass extinctions millions of years ago that have been revealed by fossil and geologic studies.
Methane hydrates are found all over the world, both undersea in coastal regions and onshore in permafrost. If only 1% of the methane in the North Slope area of Alaska can be recovered, "the nation would more than double its natural gas resources," according to the USGS.
For both energy and climate-related reasons, the federal government has been studying hydrates for some time. Since 2008, the Department of Energy's Methane Hydrates Program has funded at least 34 methane hydrate projects. That year, for example, the Department of Energy spent $2 million on five projects, including assessments of how to measure how much methane hydrate there is, how best to find it and whether it can be safely and economically extracted for use as fuel. With DOE assistance, ConocoPhillips and BP Exploration Alaska are creating test wells in Alaska this year and in 2011 to explore how to extract methane from methane hydrates. A regional government in Alaska also is working on a methane hydrate project.
In 2007 and again in late 2009, MacDonald was among the scientists on a research ship in the Gulf that took measurements in an area called Mississippi Canyon 118, within just a few miles of the Deepwater Horizon well. The scientists, who of course had no idea the well would explode a few months later, ended up with a trove of data that's being used to compare the Gulf's water quality pre- and post-spill.
What they were looking for at the time, however, was the flow of methane into — and from — areas where hydrates are known to form. The research venture was part of a program called Hyflux that is funded by the DOE, which expects commercial production of methane from methane hydrates by 2015 from arctic deposits and by 2025 from deposits on the seafloor in the Gulf and elsewhere. Meanwhile, countries such as Korea, Japan and India that have few or no oil resources "are looking very closely at their gas hydrates," MacDonald says, and have cooperated with U.S. scientists and the U.S. government on research. "China has a very well funded center" investigating hydrate production.
According to the Department of Energy, the most promising technology for producing methane from hydrates is simply finding a way to depressurize the hydrate and then capturing the released methane. A major challenge in extracting methane is for the extraction method not to destabilize the larger deposit. MacDonald says the pinnacle of hydrate research involves a method of pushing carbon dioxide into a hydrate in such a way that it would shove the methane out while preserving the hydrate. The approach, he says, would be both elegant and ecologically beneficial: The hydrate's solid structure itself would remain intact, even as methane was removed for use as an energy source; meanwhile, the swap would create a "sink" where CO2, a greenhouse gas, could be stored. "That's the Holy Grail," he says.
For MacDonald, the wisdom of exploiting hydrates is clear: "We do need to be concerned about methane as a driver of greenhouse gas,'' he says. "But if we can produce gas from the deep ocean and replace coal, then that's a real win for the planet. Burning methane is better than burning coal or oil. If we could stop burning coal altogether, it could buy us some time to reduce CO2 emissions."
The ultimate challenge hydrates pose is to our political will to make good, rational decisions, MacDonald says. Weighing all the considerations, he sees the offshore production of gas — plentiful in waters near Florida — as preferable to drilling for oil, for example, and much preferable to blowing the tops off mountains in West Virginia and Kentucky to mine coal.
But he cautions that the Deepwater Horizon spill has created an awareness that the world — particularly the exotic realms of the deep ocean — is "more complicated than we imagined" in terms of our ability to set processes and events in motion "that we can't lasso or contain." Framing decisions constantly as a battle between our ecology and the economy is counterproductive, he says. "We need to make difficult choices with an eye to the future and an eye to the real costs that they incur. As we now see, the real costs of oil production in the ocean are far greater than we've been paying, particularly if we're paying for the cure rather than prevention, as with BP."
"In that context," MacDonald says, "gas production in the ocean, if done with rigorous regulatory oversight, remains a viable option."
Methane hydrates represent "the Earth's biggest potential source of hydrocarbon energy," according to the U.S. Department of Energy — more carbon, possibly, than is contained in all the other known coal, gas and oil on the planet. [Photo courtesy Ian MacDonald]
The advent of mechanisation and the spread of more specialist forms of farming helped change both the nature of work and household structures. By 1800 the earning of wages had become increasingly important for the survival of working class families. The process of manufacture moved outside the home, though the transition was never total. Earlier forms of domestic production, in clothing, toy-making and now even computer services, are still visible today. Employment was as diverse as the locations for that employment. It would be difficult to overestimate the importance of work in working class life. Work helped determine two fundamental elements of working class existence: the ways in which workers spent many, if not most, of their waking hours; and the amounts of money they had at their disposal. Work also determined most other aspects of working class life: the standards of living they enjoyed; standards of health; the type of housing they lived in; the nature of family and neighbourhood life; the ways in which leisure time was spent and the social, political and other values that were adopted.
The swing away from domestic forms of production can be roughly explained by three developments: the growth of population, the extension of enclosure with a consequent reduction in demand for rural labour and the advent of mechanised production boosting productivity and fostering the growth of new towns and cities. The result was a change in the structure of the labour market.
- The enclosure of common lands had a profound impact on the livelihood of rural workers and their families. It led to a contraction of resources for many workers and a greater reliance on earnings. The spread of enclosure pushed rural labourers on to the labour market in a search for work that was made the more frenzied by falling farm prices and wages between 1815 and 1835, in the aftermath of the Napoleonic Wars. The result of the growth in labour supply and agricultural depression was the collapse of farm service in the south and east of the country. It had been customary for farm workers to be hired for a year, to enter service in another household and to live with another family, receiving food, clothes, board and a small annual wage in return for work, only living out when they wished to marry.
- Added to this was the development of factory-based textile production that had a profound effect on the other source of earned income for rural workers: outwork. Different parts of the country were associated with different types of product: lace-making round Nottingham, stocking-knitting in Leicester, spinning and weaving of cotton and wool in Lancashire and Yorkshire. The appearance of the mills damaged the status and security of some very skilled branches of outwork. Many rural households found themselves thrown into poverty as such work became increasingly scarce and available only at pitifully low rates of pay. The fate of the handloom weavers, stocking-frame knitters and silk weavers in the 1830s and 1840s, all reflected the impact of technological change on the distribution of work. Textiles were not the only industry to experience such structural changes. In both town and country, mechanisation had a marked impact on a wide variety of employments and the position of some skilled workers was undermined while the demand for new skills grew.
Urban workers had always been more reliant on the cash nexus [wages] than their rural counterparts. Pre-industrial towns had tended to be commercial centres [markets] rather than centres of manufacture, and employment there had been more specialised than elsewhere. Small units of production, in which skilled artisans worked to provide local services and goods rather than commodities for export, operated largely on a domestic basis, though frequently under the control of the craft guilds. These stipulated modes of recruitment and training and the quality of products, and founded the vocabulary of the rights of 'legal' or 'society' men working in 'legal' shops that permeated craft unions in the nineteenth century. The nineteenth century saw the position of the skilled urban artisan increasingly under threat from semi-skilled and less well-trained workers.
The Elizabethan Statute of Artificers [or Apprentices] 1563 provided a legal framework of craft regulation but had fallen into abeyance long before its apprenticeship clauses were repealed in 1811. Under the old system of apprenticeship, the pupil was formally indentured at 14-16 and joined a master's house for a period traditionally specified as seven years before being recognised as a journeyman, qualified to practise the trade. It was also usual for journeymen to 'live in', entitled to bed, board and wages in return for work, only moving out on marriage. Often journeymen tramped the country in search of work, in part to extend their experience and knowledge of their trade but also to escape increasingly uncertain employment prospects in their immediate locality. To become a master the journeyman had to produce his 'masterpiece', demonstrating his mastery of the skills of the specific trade. From the early nineteenth century fewer apprentices were completing their indentures and journeymen's wages were falling, both signs that employers were no longer concerned to hire only men who had served their time. This led to a dilution of the labour force and an increased blurring of the boundaries between 'society' and 'non-society' men, a situation made worse by the mechanisation of production, which required fewer skills than handwork.
The nature of training for skilled work changed; apprenticeships were shortened and concentrated on specific skills rather than on an extensive understanding of all aspects of production. Lads worked alongside journeymen rather than being attached to a master's household, with various adverse results:
- The new system bore heavily on apprentices' families, who frequently still paid for indentures while the apprentice lived at home and could expect little or no wages for his efforts until his time was served.
- The old stipulated ratios between journeymen and boys were increasingly ignored and apprentices became a cheap alternative for adult labour thus depressing the adult labour market.
- Such developments were resented by the journeymen expected to train recruits, souring relations and often making training uncooperative.
- The fate of boys was instant dismissal as soon as they were old enough to command an adult rate.
Such practices were more common during depressed times. This abuse of apprenticeship provoked sporadic industrial disputes as skilled workers tried to protect their position and to prevent their trade from being flooded [or diluted] by excess labour.
At the same time, new mechanised processes facilitated cheaper forms of bulk production. As a result the market became saturated with semi-skilled workers, who knew something of the trade but did not possess the full range of skills expected of the qualified man. Henry Mayhew, chronicling London's labour market in the 1840s, contrasted the position of the 'honourable' tradesman with the 'slop' workers whose wages and product undercut old recognised prices and reduced job security long assumed to belong to the man with an established craft.
The most obvious impact of industrialisation was found in the more intense and strictly disciplined nature of work in those industries transformed by the new technology: textiles, coal-mining, metal-processing and engineering. Early mills were manned by convict and pauper labour [mostly children] because the regularity of work was alien to the adult population used to a greater degree of autonomy in conducting their working lives. The higher wages available in factories provided insufficient compensation for this loss of 'freedom'. Impoverished handloom weavers would send their daughters to work on the power looms but resisted the prospect themselves. Hours in the early factories were probably no longer than those in the domestic trades but what made it far less acceptable was the mind-crushing tedium of the work involved, the loss of public feast days and holidays and, for middle class commentators, the physical consequences of long hours and the appalling conditions in the factory towns.
Labour market conditions in the nineteenth century make it quite impossible to draw clear distinctions between the employed, the unemployed, the underemployed, the self-employed and the economically inactive. Subcontracting was rife, notably in the clothing trade, where middlemen 'sweated' domestic women to earn a profit. The 'slop' end of the fashion and furnishing trades competed frantically for such orders as were available, at almost any price. Casualism became more visible towards 1900 as cities spread in size. Short-term engagements and casual employment were particularly associated with the docks and the construction industries.
Variations in standards of living, wages and working conditions were at least as great in towns as in the countryside. Average urban wages were certainly higher but so were rent and food so that urban dwellers were not necessarily better off than their rural counterparts. Women's wages were invariably well below those of men and families dependent on a sole female wage earner were among the poorest of the urban population. Jobs guaranteeing a regular weekly wage, with little cyclical unemployment, were rare, highly prized and jealously guarded. Cyclical unemployment was the norm for most workers and was a major factor in the urban labour market and in turn had a significant impact on standards of living, quality of housing and the residential areas to which people could aspire.
The urban population was organised in hierarchical terms, largely in terms of levels of skill:
At the base of the urban labour hierarchy were the genuinely casual workers who formed a residual labour force that was often entered on initial migration to a town when no other work was available. Such work as hawking and street trading, scavenging, street entertainment, prostitution and some casual labouring and domestic work fell into this category. Below these were begging and poor relief.
- Casual trades were largely concentrated in large cities, especially London, and the number fluctuated considerably.
- Very low and irregular incomes condemned families dependent on casual work to rooms in slums, but in London they would emerge from the rookeries of St Giles to sell their goods in the City or in middle class residential districts.
- Large numbers of street traders in prosperous middle class areas caused antagonism and sometimes fear so that the police were often called to control street trading activities helping to reinforce middle class stereotypes of a dirty and dangerous sub-class that should be confined to the slums.
Above the casual street traders was a whole range of unskilled, mainly casual occupations in which workers were hired for a few hours at a time and could be laid off for long periods without notice. These included labourers in the building trades, in sugar houses and other factories, carters, shipyard workers and especially dockers. All towns had such workers, but they were especially important in port cities such as London, Liverpool and Bristol and in industries like coal mining or clothing that had a partly seasonal market.
- Precise numbers involved in casual work are impossible to determine. In Liverpool over 22 per cent of the employed population in 1871 were general, dock or warehouse labourers, many casual. When in work Liverpool dockers earned high wages, ranging from 27s for quay porters to 42s for a stevedore but few maintained such earnings for any length of time and in a bad week many earned only a few shillings.
- Conditions changed little between 1850 and 1914. They were frequently in debt and regularly pawned clothes. In good times they would eat meat or fish but normally their diet consisted largely of bread, margarine and tea. Illness or industrial injury [common in dangerous dockland working conditions] would have led to financial disaster.
- Casual workers needed to live close to their workplace since employment was often allocated on a first-come, first-served basis. Liverpool dockers mostly lived close to the docks and this limited their housing choice to old, insanitary but affordable accommodation.
Factories provided more regular employment after 1830, as did public services such as the railway companies and many commercial organisations. Skilled manual labour was relatively privileged: a Lancashire skilled cotton spinner earned 27-30s per week in 1835 and a skilled iron foundry worker up to 40s. In coal mining, skilled underground workers earned good wages, and those in key jobs such as shot-firing, putting, hewing and shaft sinking usually had regular employment, although this often meant moving from colliery to colliery and between coalfields.
- Textile towns like Manchester, Bradford and Leeds and metal and engineering centres such as Sheffield and the Black Country tended to suffer less from poverty from irregular earnings than cities like Glasgow, Cardiff, Liverpool or London.
- Skilled engineering trades were amongst the earliest to unionise, along with artisans and craftsmen, particularly in London and northern industrial towns. They protected their interests jealously and, despite some dilution in their position, they commanded higher wages and regular employment. This conferred many advantages: renting a decent terrace house in the suburbs thus avoiding the squalor of Victorian slums but with a long walk to work or the use of the 'workmen's trains'.
After 1850 the number of workers in white-collar occupations increased and a lower middle class emerged among the petit-bourgeoisie of small shopkeepers and the white-collar salaried occupations of clerks, commercial travellers and schoolteachers. White-collar employment increased from 2.5 per cent of the employed population in 1851 to 5.5 per cent by 1891.
- Such employment was found in all towns but especially in commercial and financial centres such as Glasgow, Manchester, Liverpool and Bristol. White-collar workers were a diverse group: insurance and bank clerks commanded the highest incomes of over £3 per week and the greatest prestige; in contrast railway clerks often earned little more than skilled manual workers but had greater security of employment. White-collar employees certainly perceived themselves, and were perceived by others, to be in a secure and privileged position.
- White-collar workers could afford not only a decent terrace house but also, especially after 1880 when the suburban railway and tram networks were established, the cost of commuting over longer distances by public transport.
- Despite long hours of work for clerks and shopkeepers, their occupations were less hazardous than most factory employment and, with more regular incomes and better housing, they were more likely to enjoy good health than most industrial workers.
Women were employed in all categories of work, and in textile districts female factory employment was very significant. Single women often entered domestic service, but married women who needed to supplement a low male wage, or widows supporting several children, were severely limited in choice. Away from the textile districts most found work as domestic cleaners, laundry workers, or in sewing, dressmaking, boot and shoemaking and other trades carried on either in the home or in small workshops. Wages were always low, with piece rates producing incomes ranging from 5s to 20s per week.
- The proportion of women in industry declined from the 1890s, except in unskilled and some semi-skilled work but their role in higher professional, shop and clerical work increased.
- The telephone and typewriter revolution from the 1880s saw the army of male clerks replaced by female office workers.
- The revolution in retailing provided additional employment for women and by 1911 one-third of all shop assistants were female.
The number of women in commerce and many industries increased between 1891 and 1951, but the proportion of women in paid employment hardly changed and remained around 35 per cent. But the characteristics of female employment changed substantially. Before 1914 domestic service was still the overwhelming source of employment for women and girls, though the clothing and textile trades employed more women than men. Women, however, were also beginning to infiltrate the lower grade clerical and service occupations. In 1901 13 per cent of clerks were women, but by 1911 this had risen to 21 per cent, though the higher clerical grades remained almost exclusively male. Nevertheless the employment status of women remained inferior to that of men: in 1911 52.1 per cent of women occupied semi-skilled or unskilled jobs compared to 40.6 per cent of men.
The major restructuring of the British economy brought significant changes in the working conditions and operation of the labour market after 1890. Women played an increasingly important role in the workforce, new technology and machinery created different jobs demanding new and often less individually crafted skills. Older workers, particularly in heavy industries, often found it difficult to adjust to new work practices. The years 1890-1914 were a transitional period that retained many of the characteristics of the nineteenth century economy whilst signs of the new work patterns of the inter-war years began to develop.
John Benson The Working Class in Britain 1850-1939, Longman, 1989, pp.9-38 is the best introduction to this issue. Patrick Joyce (ed.) The historical meanings of work, CUP, 1987 is an excellent collection containing a seminal introduction by the editor. Patrick Joyce 'Work' in F.M.L. Thompson (ed.) The Cambridge Social History of Britain 1750-1950: volume 2 People and their environment, CUP, 1990, pp.131-194 is a short summary of recent research.
See Duncan Bythell The Handloom Weavers, CUP, 1969 and The Sweated Trades, Batsford, 1978 for a detailed discussion of this issue.
See E.J. Hobsbawm 'The tramping artisan' in his Labouring Men, Weidenfeld, 1964, pp.34-63 and E.P.Thompson The Making of the English Working Class, Gollancz, 1963, Penguin, 1968 and 'Time, Work-Discipline and Industrial Capitalism', first published in Past and Present, no.38 [December 1967], reprinted in Customs in Common, Merlin, 1991, pp.352-403.
Henry Mayhew London Labour and the London Poor, 1861-2, 4 volumes, New York, 1968 and E.P.Thompson and E. Yeo (eds.) The Unknown Mayhew: Selections from the Morning Chronicle 1849-50, Penguin, 1971 provide evidence for the 1850s and should be used in conjunction with the six volumes of his The Morning Chronicle Survey of Labour and the Poor, 1849-50, Caliban, 1980. Anne Humpherys Travels into the Poor Man's Country: The Work of Henry Mayhew, University of Georgia Press, 1977 is the most recent biography.
On this see Elizabeth Roberts Women's Work 1840-1940, Macmillan, 1988.
For a classification of the labouring population up to 1850 see Richard Brown Society and Economy in Modern Britain 1700-1850, Routledge, 1991, pp.323-328.
On the emergence of trade unions see Henry Pelling A history of Trade Unionism, Penguin, 5th., ed., 1990, Ben Pimlott and Chris Cook (eds.) Trade Unions in British Politics: The First 250 Years, Longman, 2nd., ed., 1991 and the more specific John Rule (ed.) British Trade Unions 1750-1850: The Formative Years, Longman, 1988. | http://richardjohnbr.blogspot.com/2008/05/work-in-victorian-britain.html | 13 |
Within each tabbed topic area, you will find suggested videos, websites, and readings related to that topic. We hope you find these resources helpful. If you have suggestions for resources related to these topics that you’d like to share with other teachers, please let us know and we will add them here!
Climate Change South America
In layman’s language appropriate for kids as well as adults, discusses signs of climate change in different regions of South America, the role of the Amazon basin to the health of our planet, and environmental concerns such as deforestation that can have major impacts on the changing climate. (10:57)
The Páramos: Climate change threatens a fragile ecosystem in the Andes
The páramo is a high mountain ecosystem in South America’s Andes rich with biodiversity and an important source of water for millions of people. This video from the International Research Institute for Climate and Society discusses how climate change is impacting this fragile environment. (4:21)
National Geographic Global Warming Video Collection
A collection of about 20 short videos that examine causes, effects, and potential solutions to global climate change. The first video in the series, “A Way Forward: Facing Climate Change,” provides an overview of this global issue. 7:43
Discovery Channel Global Warming and Climate Change Video Collection
A collection of nine short videos looking at the impacts of climate change in different parts of the world and on different animal species. Go to news.discovery.com to see the full collection.
Bolivia’s Glaciers Offer Climate Change Clues
Scientists at the world’s highest atmospheric monitoring station in Bolivia are studying the impact of climate change on the region. 2:04
Series of videos on climate change indicators with related lesson plans for secondary school teachers. Produced by the National Earth Science Teachers Association and Windows to the Universe in conjunction with NBC Learn and the National Science Foundation. View the secondary school lessons that accompany the video set at www.windows2universe.org.
Hot Planet? – BBC Documentary
A recent hour-long BBC documentary that explores the world’s leading climate scientists’ vision of the planet’s future. Fast-paced, engaging resource that is likely to elicit discussion among middle- and high-school-aged kids in particular. 59:22
Journalist Simeon Tegel: Reporting on Climate Change in Latin America
British journalist Simeon Tegel outlines the environmental issues spanning Latin America due to climate change in the region. Video is recent, from August 2012. 4:41
Global Warning: Early Warnings on Adaptation
In this video, several leaders of indigenous peoples’ organizations, represented in the Arctic Council, share their thoughts and concerns about the changes in their lifestyles brought on by the changing climate. Produced by the European Environment Agency.
State of the Planet’s Oceans: Retreat of the South American Glaciers
Hosted by Matt Damon, this video is part of Journey To Planet Earth, a current PBS series that explores the fragile relationship between people and the world they inhabit. 1:59
The Nansen Conference 2011 – Climate Change and Displacement
A short video from the Nansen Conference on Climate Change and Displacement, the first large-scale conference on climate change and displacement, which took place in Oslo, Norway, June 5-7, 2011. 2:30
European Environment Agency Climate Change Videos
A series of videos on climate change specific to climate change impacts in Europe, available from the European Environment Agency. To view the full set, go to www.eea.europa.eu/themes/climate/multimedia.
The Arctic Ice Is Melting Faster Than Expected
From February 2011. Professor Stefan Rahmstorf, Potsdam Institute for Climate Impact Research, Germany, explains that sea ice is melting faster and the sea level is rising faster than expected by the Intergovernmental Panel on Climate Change (IPCC) in 2007. He argues that we’re running out of time and that it’s necessary to limit global warming to less than two degrees to prevent critical changes. 2:11
Arctic Changes: The Big Picture
From March 2010. Clear explanations with graphics, appropriate for K-12 students. Over recent decades, the Arctic has been the fastest-warming region on the planet. This video tells the story of how it has been changing, as seen from satellites above, and submarines below, touring through years of hard research — in three minutes. Some of the key findings: sea ice is thinning even faster than it is shrinking in area, and Greenland has been shedding ice at an accelerating pace — with consequences for sea level. 3:06
Global Warming: It’s All About Carbon
A series of five short animated videos (3-4 minutes each) explaining the chemistry behind climate change. Produced by National Public Radio.
NASA: Global Climate Change
Winner of a Webby Award for Best Science Site in 2011, this site provides climate change visualizations and statistics, news features, educator resources, and much more.
Everything You Need to Know about Climate Change
Interactive graphic that provides a guide to global warming, from science and politics to economics and technology.
UK-based; has nice step-by-step intro to climate change, as well as lessons and activities.
Climate Hot Map: Global Warming Effects Around the World
Interactive map to explore some of the effects of global warming around the world. This site also includes information about the causes of global warming, the impacts of global warming on different ecosystems and on people worldwide, and suggested solutions, which can be viewed continent by continent.
NPR Climate Connections: A Global Journey
Explore global warming issues using an interactive map that looks at how climate changes people and how people change climate around the world.
NASA Climate Kids
Includes resources/activities for kids and teachers alike.
It’s Getting Hot in Here
An interactive graphic on climate change through history.
The Adopt a Negotiator Project
Tracking international efforts to deal with climate change.
An independent, nonprofit journalism and research organization with articles and media focused on helping people understand how climate change connects to them.
The Climsave Project
Developing an interactive web-based tool to assess climate change impacts and vulnerabilities for European nations.
SustainUS is a nonprofit, nonpartisan organization of young people advancing sustainable development and youth empowerment in the United States.
Our World 2.0
The Our World 2.0 web magazine shares the ideas and actions of citizens around the world who are transforming lives for the better. This magazine, produced by the United Nations University Media Centre, shares these insights through video briefs, articles, debates, photo essays, and public events.
A social networking site of sorts where individuals share recorded observations from the natural world (many captured via mobile devices). Thousands of species have been captured by camera and plotted with google maps on the website, creating a database that visitors can sort through by location, species, or observer. The “Species” and “Projects” links will probably be most of interest to schools.
Environmental Literacy Council
Has resources for educators along with information about a variety of topics (climate change, energy, water, environment and society, etc.) geared toward students.
Yale Environment 360
Has a great collection of articles, links, and videos related to environmental issues, climate, energy, oceans, sustainability, water, etc., and you can view topics by continent as well as overall.
United Nations Environment Programme
The mission of UNEP is “to provide leadership and encourage partnership in caring for the environment by inspiring, informing, and enabling nations and peoples to improve their quality of life without compromising that of future generations.”
Earthtimes Encyclopaedia of Environmental Issues
This website includes encyclopedia-like entries with text, images, and links on many of the most popular environmental terms. The site also has news articles and blog entries about a variety of topics related to the environment.
Vital Climate Change Graphics for Latin America and the Caribbean 2010 from the United Nations Environment Programme (UNEP)
Climate Change in Latin America, a report from the European Commission
Arctic Warming Unlocking A Fabled Waterway
(Jackie Northam, National Public Radio, August 15, 2011) First in a six-part series examining what’s at stake, who stands to win and lose, and how Arctic warming and melting sea ice could alter the global dynamic.
Arctic Melts Faster Than IPCC’s Forecasts
(Irene Quaile, Deutsche Welle, June 17, 2011)
Arctic Ice Melt Could Pause in Coming Decades
(National Science Foundation, August 11, 2011)
Arctic Shortcut Beckons Shippers as Ice Thaws
(New York Times, September 10, 2009, by Andrew E. Kramer and Andrew C. Revkin)
Ships Take to Arctic Ocean as Sea Ice Melts: Journey Times between Europe and China Can Be Reduced by Half
(MSNBC, September 28, 2010) Includes a video, “Melting Arctic Ocean.”
Global Climate Change Indicators (National Oceanic and Atmospheric Administration, National Climatic Data Center)
Hot Spots Where Heatwaves Could Pose Greater Health Risk
(ScienceDaily, June 12, 2010) Heatwaves could especially pose an increased health risk this century in Southern European river valleys and along the Mediterranean coast, a study by two scientists from ETH Zurich has revealed.
Melting Ice Caps Open Up Arctic for ‘White Gold Rush’
(Terry Macalister, The Guardian, July 4, 2011) As rising temperatures expose more land for exploration, prospectors are rushing to the far north in the hope of carving out a new mineral frontier.
Thawing Arctic Opens Up New Shipping Routes on the ‘Roof of the World’
(Terry Macalister, The Guardian, July 5, 2011) An increasing amount of seaborne traffic is moving along a new Siberian coastal route, cutting journey time and boosting trade prospects.
Arctic Report Card: Update for 2010
BBC – Wild South America
Short introduction to some of the important natural resources and wonders of South America. (2:37)
Andes to Amazon in HD from BBC Motion Gallery
This video has no narration, but presents a stunning series of images from across South America. (2:37)
Biodiversity in Latin America
Latin America and the Caribbean is home to 34% of the world’s plant species and 27% of its mammals, making it one of the world’s biodiversity superpowers. (3:01)
National Geographic Education: South America
Introduction to South America with a variety of educational resources centered on the continent and the countries located there.
Geographia: Latin America
Provides an introduction to the history, culture, and geography of seven South America countries (Argentina, Brazil, Chile, Ecuador, Guyana, Peru, and Venezuela), including details about their major cities and geographical features.
Latin American School and Educational Resources
A website for middle and high school teachers and students who are learning about Latin America in social sciences and humanities classes.
World Bank: Latin America & Caribbean
An extensive site covering development in Latin America and the Caribbean, including educator resources.
Worldwise Schools: Latin America & the Caribbean
Classroom resources based on Peace Corps volunteer experiences.
Latin American Travelogues
The goal of this project is to create a digital collection of Latin American travel accounts written in the 16th-19th centuries.
The Mighty Amazon & River Dolphins -Wild South America – BBC
Running 4,000 miles from the Andes to the ocean, the Amazon carries a fifth of all the river water on the planet. 4:11
Brief introduction to the Amazon, with some beautiful footage. 1:42
Amazon: Land of the Flooded Forest
An hour-long National Geographic special covering the Amazon river and rainforest in depth. 56:48
The Amazon Road: Paving Paradise for Progress?
NPR coverage of a transcontinental highway under construction in Peru and Brazil that is bringing the prospects of economic opportunity and environmental ruin to some of the most remote places on the planet. 4:51
Amazon Gold Mine
National Geographic’s Wild Chronicles goes on expedition with the World Wildlife Fund to one of the most remote parts of the Amazon rainforest of Brazil, in order to find and stop a destructive gold mining operation hidden deep in the forest’s interior. 4:57
A beautiful and comprehensive guide to the Amazon basin from the World Wildlife Fund.
Amazon 2012 Protected Areas and Indigenous Territories
A detailed pdf map showing indigenous territories and protected national areas in the Amazon basin. Also includes text information about the Amazon region.
The Amazon Rainforest
An introduction to the Amazon Rainforest from the Amazon Center for Environmental Education and Research (ACEER) Foundation.
Children of the Jaguar
Winner of the 2012 Best Documentary at the National Geographic All Roads Film Festival, this film details the struggles of the Sarayaku indigenous community of Ecuador as they battled oil companies and the government of Ecuador starting in 2002. The Sarayaku won two major victories in 2012: “in April, for the first time in their history, the government of Ecuador acknowledged responsibility for illegally licensing an oil company to do business on indigenous territory without the community’s consent; and in July the Interamerican Court of Human Rights (ICHR) ruled that the government must consult with indigenous communities prior to such enterprises and to pay for physical and ‘moral’ damages to the community” (Kearns, 2012). This link goes to a trailer for the film. (2:31)
South American Indigenous People
The International Museum of Cultures discusses South American Indigenous people, their culture, pottery, hunting methods using the Blow Gun, and furniture. The people specifically discussed are the Waorani people from Ecuador, previously known as the Auca. 7:26
Tradition and Land: Shuar people, deforestation and mining
This project aims to document and voice changes and threats to lands, cultures and plant medicine traditions of native peoples from North to South America. 10:50
The Ashaninka, A Threatened Way of Life – Survival International
The Ashaninka are one of the largest indigenous groups in South America, their ancestral homelands ranging from Brazil to Peru. Today, a large communal reserve set aside for the Ashaninka is under threat from the proposed Pakitzapango dam, which would displace some 10,000 Ashaninka. The dam is part of a large set of hydroelectric projects planned between the Brazilian and Peruvian governments – without any prior consultation with the Ashaninka. Bowing to recent pressure from indigenous groups, development of one other dam in the project, the Tambo-40, has already been halted. The Pakitzapango dam on Peru’s Ene River is currently on hold, though the project has not yet been withdrawn. Survival International has collected these images of the Ashaninka and their threatened homeland. 9:16
Quechua – Histoire d’un Peuple
This video is in French, and presents a history of the Quechua people. Beautifully filmed. (6:39)
Mapuche Landscapes – Arauco
Teaser of documentary on Mapuche people and the Maqui berry. Shot in Southern Chile. The Lake Region. (2:53)
Professional Development Opportunities
Teaching and Learning for a Sustainable Future is a UNESCO programme for the United Nations Decade of Education for Sustainable Development. It provides professional development for student teachers, teachers, curriculum developers, education policy makers, and authors of educational materials.
Green Education Foundation (GEF) offers online courses in sustainability to educators, students at the high school and undergraduate level, and professionals. In addition to courses, GEF offers a certificate in sustainability concepts for professional development or academic credit.
The Mars Science Lab rover, Curiosity, carries the most advanced payload of scientific instrumentation ever used on the surface of Mars. Curiosity - and its team of scientists and engineers - is tasked with investigating whether conditions at its landing site have been favorable for microbial life and for preserving clues in the rocks about possible past life. Curiosity will NOT be looking for life; its instruments are not designed to find life. Curiosity will be looking for clues that point to the possibility that life could have existed in the past. Like its predecessors, Curiosity will land in a region with exposed minerals that formed in wet environments. All of Curiosity’s scientific instruments will work in concert to determine the landing region’s potential for life. In particular, ChemCam will tell mission scientists what the rocks are made of in the rover’s landing region. One of ChemCam’s primary objectives is to determine the compositions of rocks and soil and to identify samples that would be of great interest to scientists for analysis by other instruments onboard Curiosity. Knowing a rock’s composition gives scientists clues as to the environment in which the rock formed. ChemCam will set its sights on the rocks in its landing region, looking for the chemical evidence that water was once abundant there.
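ChemCam reads rock compositions by laser-induced breakdown spectroscopy (LIBS): a laser pulse vaporizes a pinpoint of rock, and the wavelengths of light emitted by the glowing plasma reveal which elements are present. The sketch below illustrates only the line-matching idea; the reference wavelengths are approximate textbook values, and a real pipeline works from calibrated spectra and far larger line databases.

```python
# Minimal sketch of the element-identification step in a LIBS analysis:
# match observed emission peaks to a small reference line list.
# Wavelengths (nm) are approximate textbook values for illustration.

REFERENCE_LINES = {
    "Na": [589.0, 589.6],
    "Ca": [393.4, 396.8],
    "K":  [766.5, 769.9],
    "H":  [656.3],
}

def identify_elements(peaks_nm, tolerance_nm=0.5):
    """Return elements with at least one reference line near an observed peak."""
    found = set()
    for element, lines in REFERENCE_LINES.items():
        for line in lines:
            if any(abs(peak - line) <= tolerance_nm for peak in peaks_nm):
                found.add(element)
    return sorted(found)

# Hypothetical peak list from a single laser shot:
print(identify_elements([393.5, 396.7, 589.1]))  # -> ['Ca', 'Na']
```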
Early Fascination and Robotic Exploration of Mars
Prior to the mid-20th century, many people, including scientists and authors, believed Mars was a world capable of supporting life. Authors wrote tales of Martians visiting Earth (Figure 1), sometimes peacefully, but not peacefully in many stories. Early telescopic observations of Mars showed dark surface features that changed over time (Figure 2). Some interpreted these changing patterns as vegetation growing and dying with the seasons on Mars. The first close-up views of Mars from the Mariner 4 spacecraft in 1965 (Figure 3) shattered that belief and other fanciful ideas like Martian plots to destroy humanity.
Figure 1. Artwork for the book "The War of the Worlds" from a 1906 Belgian edition by the Brazilian artist Henrique Alvim Corréa.
Figure 2. Ground-based, telescopic image of Mars showing changes in both dark patterns and the northern polar cap. (Images of Mars are inverted.) Credit: Lowell Observatory
Figure 3. One of the first close-up images of Mars returned by the Mariner 4 spacecraft in 1965. The image shows craters – circular depressions – on the Martian surface. Credit: NASA
Images returned by Mariner 4 revealed a heavily cratered surface with no signs of liquid water or life of any kind. Mariner 4 showed us a Mars that looked similar to our Moon. Two more missions, Mariner 6 and Mariner 7, flew by Mars in 1969 revealing more of the same terrain (Figures 4 and 5) seen by Mariner 4.
Figure 4. Mariner 6 image of the Martian surface. Combined, the Mariner 6 and 7 missions returned only 198 images. Credit: NASA
Figure 5. Mariner 7 image of Mars’ southern polar cap. Temperature measurements from both missions showed the southern cap to be made of carbon dioxide ice. Credit: NASA
In 1971, Mariner 9 became the first spacecraft to orbit the Red Planet. Like its predecessors, Mariner 9 did not find any water or life. However, images from the spacecraft did give scientists their first look at what appeared to be ancient river beds (Figure 6).
Figure 6. Mariner 9 image of ancient, dried-up river beds. Credit: NASA
With the discovery of ancient river beds, the prospects for life on Mars, if even ancient life, reentered the imaginations of scientists and the public, alike. If Mars once had liquid water flowing across its surface, is it also possible life had existed on the surface? In 1976, the twin Viking landers (Figures 7 and 8) touched down on Mars, in two separate locations, while their orbiters remained above Mars, snapping more photos showing evidence for a wet Mars in the past (Figure 9). Fitted with biological experiments, scientists hoped the landers would find evidence of life in the Martian soil. Many scientists believe the results of these experiments do not support evidence for life. The Viking 1 lander proved to be the most robust of the mission’s fleet of spacecraft, ceasing communications with engineers in 1982. It would be 14 years before robotic explorers would return to Mars.
Figure 7. Site of the Viking 1 lander as seen by the lander’s camera. Credit: NASA
Figure 8. Site of the Viking 2 lander as seen by the lander’s camera. Credit: NASA
Figure 9. Long, winding channel in the Martian surface imaged by the Viking 1 orbiter. This channel was carved by water early in Mars history. A detail of the area in the white box can be seen in Figure 11. Credit: NASA
The early years of Mars exploration revealed a planet with characteristics that challenged preconceptions held by many people for many years. However, evidence of liquid water in the past, and the possibility of even simple life, continued to tantalize scientists and stir the public imagination. The science fiction of years past began to seem not so fantastic.
Follow the Water
After the loss of the Mars Observer orbiter in 1992, Mars exploration was given a shot of adrenaline in the latter half of the decade with the successful arrival of the Mars Global Surveyor (MGS) orbiter in 1996 (Figure 10).
Figure 10. Artist’s conception of the Mars Global Surveyor (MGS) spacecraft. MGS forever changed scientists’ understanding of Mars with its high-resolution imagery of the Martian surface. Communications with MGS ceased in November 2006. Credit: NASA
MGS picked up where the Viking orbiters left off. MGS gave scientists the most detailed images of the Martian surface to date, finding more evidence for a watery past (Figure 11), and evidence of relatively recent activity of liquid water. MGS spotted gullies carved into crater walls (Figure 12). On Earth, gullies typically form from liquid water carving through material as it flows downhill. This finding was significant because later analysis revealed many of the gullies formed relatively recently, possibly within the past few million years. Yes, a few million years is still old. So, why is this a big deal? In the early history of Mars, the atmosphere must have been thicker, providing warmer conditions that would allow liquid water to exist at the surface. This can explain the numerous geologic features on the surface that resemble similar features formed by liquid water on Earth. Currently, Mars has a very thin atmosphere and extremely cold temperatures, conditions that have prevailed for at least the past 3.5 billion years! Under such conditions, water should not exist on the surface as a liquid. If, however, it is possible for liquid water to currently exist, near or on the surface, maybe life currently exists, near or on the surface.
Figure 11. Detail from the Viking 1 Orbiter image in Figure 9. The dry bed of a smaller channel can be seen in the upper-right corner of the image. Credit: NASA/JPL/MSSS
Figure 12. Gullies on the wall of a Martian crater. On Earth, gullies form from water running downhill, wearing away surface material and transporting it downhill leaving an exposed, open channel. Credit: NASA/JPL/MSSS
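The claim above that liquid water should not currently exist at the surface can be made concrete with one comparison: Mars' mean surface pressure sits at roughly the triple-point pressure of water, below which liquid water is not stable at any temperature. A quick check, with pressures approximate (Mars' actual surface pressure varies with season and elevation):

```python
# Why liquid water is unstable on today's Martian surface: mean surface
# pressure sits at (or below) water's triple-point pressure, so ice tends
# to sublimate and any liquid tends to boil away. Values approximate.

TRIPLE_POINT_PA = 611.7        # water triple-point pressure, Pa
MARS_MEAN_SURFACE_PA = 600.0   # ~6 mbar; varies with season and elevation
EARTH_SEA_LEVEL_PA = 101_325.0

for name, p in [("Mars (mean)", MARS_MEAN_SURFACE_PA),
                ("Earth (sea level)", EARTH_SEA_LEVEL_PA)]:
    print(f"{name}: {p:,.0f} Pa -> stable liquid water possible: {p > TRIPLE_POINT_PA}")
```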
With the mounting geologic evidence that water once flowed across the surface of Mars, NASA adopted an exploration strategy titled “Follow the Water” for the Mars exploration program:
Following the water begins with an understanding of the current environment on Mars. We want to explore observed features like dry riverbeds, ice in the polar caps and rock types that only form when water is present. We want to look for hot springs, hydrothermal vents or subsurface water reserves. We want to understand if ancient Mars once held a vast ocean in the northern hemisphere as some scientists believe and how Mars may have transitioned from a more watery environment to the dry and dusty climate it has today. Searching for these answers means delving into the planet's geologic and climate history to find out how, when and why Mars underwent dramatic changes to become the forbidding, yet promising, planet we observe today.
- from the Mars Exploration Program Website
The “Follow the Water” strategy has guided every NASA mission to Mars since the year 2000. Water is the common thread that runs through the objectives of the Mars exploration program (Figure 13). The Mars Science Lab is no exception.
Figure 13. Water is the common thread that ties together the objectives of NASA’s Mars Exploration Program. Credit: NASA
Get Down, Get Dirty
The stunning images of the surface of Mars sent back by Mars Global Surveyor, and its predecessors, show convincing large-scale, geologic evidence for past, maybe present, liquid water on Mars. The 2005 Mars Reconnaissance Orbiter has followed MGS with even more incredibly detailed imagery from orbit (Figure 14).
Figure 14. Mars Reconnaissance Orbiter image taken by the Hi-Resolution Imaging Science Experiment (HiRISE) camera. This image shows blocks of bright, layered rock embedded in darker material that are thought to have been deposited by a giant flood. Credit: NASA/JPL/UA
Geologic evidence for water from orbiters is just one piece of the puzzle. Scientists are also looking for chemical evidence. The best way to obtain chemical evidence is to get on the ground and get dirty. Since 1976, NASA has successfully landed six spacecraft on the surface of Mars. These spacecraft have dug up soil and ground up rock looking for the chemical traces of water. The first two were the Viking landers followed twenty years later by the 1996 Mars Pathfinder mission (Figure 15) which featured the first rover on the Martian surface. The Mars Exploration Rovers (MER) Spirit and Opportunity (Figure 16) landed on Mars in 2004. The most recent landing on Mars was by the Mars Phoenix lander (Figure 17). Phoenix touched down on the northern arctic plains of Mars in 2008 at the farthest northern point of any spacecraft to date.
Figure 15. The Sojourner rover examines the rock “Yogi” near the Mars Pathfinder lander. Sojourner, the first rover to operate on Mars, is about the size of a microwave. Credit: NASA/JPL
Figure 16. Artist’s concept of the Mars Exploration Rovers. MER rovers are much larger than Sojourner. The twin “robot geologists” are each about the size of a golf cart. Credit: NASA
Figure 17. Engineers at Lockheed Martin in Denver, CO assemble the Phoenix lander. The Phoenix spacecraft was originally built for the canceled 2001 Mars Surveyor lander. Credit: Lockheed Martin
Pathfinder landed in a region believed to have been altered by a catastrophic flood early in Mars history. Some of the rocks at the Pathfinder landing site show physical signs of alteration by water, but the lander's relatively crude chemical analyses could not confirm alteration by water. The MER rovers hit the jackpot in the search for chemical evidence of the presence of water. Like Pathfinder, the landing sites for both MER rovers were chosen because both sites were believed to have once held abundant liquid water. Chemical analyses of “blueberries” (Figure 18) at the Opportunity landing site reveal they were formed in an environment with abundant water.
Figure 18. “Blueberries” near the Opportunity landing site. “Blueberries” are actually small, spherical concretions of the mineral hematite. On Earth hematite forms in the presence of large amounts of water, but can also be produced volcanically. Credit: NASA/JPL/Cornell/USGS
The abundance of sulfur and the mineral jarosite at the Opportunity landing site are also tell-tale indicators of an environment that was once drenched in liquid water. The Mars Phoenix lander also hit the jackpot at its landing site north of the Martian Arctic Circle. Phoenix touched water! Ice a little below the surface, to be exact, but this was somewhat expected. Data from the 2001 Mars Odyssey orbiter (Figure 19) suggested vast quantities of ice existed just below the surface of Mars in the arctic regions surrounding the northern polar cap.
Figure 19. This map shows relative concentrations of hydrogen in the subsurface of the Martian arctic. The abundance of hydrogen in this area led scientists to believe the hydrogen is locked up in ice. This map was the basis for the 2007 Mars Phoenix mission. Credit: NASA/JPL/UA
Mission scientists knew the ice was there, however, they were not sure how deep it was. Luckily, it turned out to be right below, and in some places right at, the surface (Figure 20).
Figure 20. The Martian surface below the Mars Phoenix lander. The white material at the center of this image was determined to be ice, possibly uncovered by the lander’s descent thrusters. The image was taken by the robotic arm camera mounted on the end of the eight-foot-long robotic arm as it peered below the deck after landing. Credit: NASA/JPL/UA/Max Planck Institute
Was the ice at the Phoenix landing site ever liquid? The detection of calcium carbonate in the soil led mission scientists to conclude that the site had been wet or damp sometime in the geologic past. The chemical perchlorate was also identified at the landing site. This is significant because perchlorate can lower the melting temperature of water which could allow small amounts of liquid water to form on the surface today.
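Perchlorate lowers the melting point of water through ordinary colligative freezing-point depression, the same effect as road salt. The sketch below uses the ideal dilute-solution formula, which badly underestimates the effect at the high concentrations relevant on Mars (concentrated perchlorate brines can stay liquid down to roughly -70 °C), and it assumes magnesium perchlorate, one candidate form of the salt, purely for illustration:

```python
# Ideal colligative freezing-point depression: dT = i * Kf * m.
# A first-order sketch only -- it underestimates the effect for the
# concentrated brines relevant on Mars. Magnesium perchlorate is
# assumed here as one candidate salt (it dissociates into ~3 ions).

KF_WATER = 1.86  # K * kg / mol, cryoscopic constant of water

def freezing_point_c(molality: float, vant_hoff_i: float) -> float:
    """Approximate freezing point (deg C) of a dilute aqueous solution."""
    return -vant_hoff_i * KF_WATER * molality

for m in (0.5, 1.0, 2.0):
    print(f"{m} mol/kg Mg(ClO4)2 -> freezes near {freezing_point_c(m, 3):.1f} C")
# -> about -2.8, -5.6 and -11.2 C in the ideal limit; real concentrated
#    perchlorate brines remain liquid far below that.
```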
MSL and ChemCam will Continue to Follow the Water
Did Mars ever have an environment capable of supporting life? Can it still support life? The missions of the past have returned exciting data supporting the existence of a water-soaked Mars in the past, and future missions are expected to do the same. ChemCam supports the Mars Science Lab mission, standing on the shoulders of giants as scientists continue to Follow the Water.
Politics of Spain
The politics of Spain take place under the framework established by the constitution of 1978. Spain is established as a social and democratic state, wherein the national sovereignty is vested in the people, from whom the powers of the State emanate.
The form of government in Spain is a parliamentary monarchy, that is, a social, representative, democratic, constitutional monarchy in which the monarch is the head of state and the prime minister — whose official title is "president of the Government" — is the head of government. Executive power is exercised by the government, which comprises the prime minister, the deputy prime ministers, and the other ministers, who collectively form the Cabinet, or Council of Ministers. Legislative power is vested in the Cortes Generales (General Courts), a bicameral parliament constituted by the Congress of Deputies and the Senate. The judiciary is independent of the executive and the legislature, administering justice in the name of the King through judges and magistrates. The Supreme Court of Spain is the highest court in the nation, with jurisdiction over all Spanish territory and supreme in all matters except constitutional ones, which fall to a separate court, the Constitutional Court.
Spain's political system is a multi-party system, but since the 1990s two parties have been predominant: the Spanish Socialist Workers' Party (PSOE) and the People's Party (PP). Regional parties, mainly the Basque Nationalist Party (EAJ-PNV) from the Basque Country, and Convergence and Union (CiU) and the Socialists' Party of Catalonia (PSC) from Catalonia, have also played key roles in Spanish politics. Members of the Congress of Deputies are elected through proportional representation, and the government is formed by the party or coalition that has the confidence of the Congress, usually the party with the largest number of seats. Since the Spanish transition to democracy there have been no coalition governments; when a party has failed to obtain an absolute majority, minority governments have been formed.
Regional government functions under a system known as the state of autonomies, a highly decentralized system of administration based on asymmetrical devolution to the "nationalities and regions" that constitute the nation, and in which the nation, via the central government, retains full sovereignty. Exercising the right to self-government granted by the constitution, the "nationalities and regions" have been constituted as 17 autonomous communities and two autonomous cities. The form of government of each autonomous community and autonomous city is also based on a parliamentary system, in which executive power is vested in a "president" and a Council of Ministers elected by and responsible to a unicameral legislative assembly.
The Crown
The King and his functions
The Spanish monarch, currently Juan Carlos I, is the head of the Spanish State, the symbol of its unity and permanence, who arbitrates and moderates the regular functioning of government institutions and assumes the highest representation of Spain in international relations, especially with the nations of its historical community. His title is King of Spain, although he can use all other titles of the Crown. The Crown, as a symbol of the nation's unity, has a two-fold function. First, it represents the unity of the State in the organic separation of powers; hence the King appoints the prime minister and summons and dissolves the Parliament, among other responsibilities. Secondly, it represents the Spanish State as a whole in relation to the autonomous communities, whose rights he is constitutionally bound to respect.
The King is proclaimed by the Cortes Generales (the Parliament) and must take an oath to carry out his duties faithfully, to obey the constitution and all laws and ensure they are obeyed, and to respect the rights of the citizens as well as those of the autonomous communities.
According to the Constitution, it is incumbent upon the King: to sanction and promulgate laws; to summon and dissolve the Cortes Generales (the Parliament) and to call elections; to call a referendum under the circumstances provided in the constitution; to propose a candidate for prime minister, and to appoint or remove him from office, as well as the other ministers; to issue the decrees agreed upon by the Council of Ministers; to confer civil and military positions, and to award honors and distinctions; to be informed of the affairs of the State, presiding over the meetings of the Council of Ministers whenever opportune; to exercise supreme command of the Spanish Armed Forces; to exercise the right to grant pardons in accordance with the law; and to exercise the High Patronage of the Royal Academies. All ambassadors and other diplomatic representatives are accredited by him, and foreign representatives in Spain are accredited to him. He also expresses the State's assent to entering into international commitments through treaties, and he declares war or makes peace following the authorization of the Cortes Generales.
In practical terms, his duties are mostly ceremonial, and the constitutional provisions are worded in such a way as to make clear the strictly neutral and apolitical nature of his role. In fact, the Fathers of the Constitution made careful use of the expression "it is incumbent upon the King", deliberately omitting terms such as "powers", "faculties" or "competences", thus eliminating any notion of monarchical prerogatives within the parliamentary monarchy. In the same way, the King does not exercise the aforementioned functions at his own discretion; all of them are framed, limited or exercised "according to the constitution and laws", or at the request of the executive or with the authorization of the legislature.
The king is the commander-in-chief of the Spanish Armed Forces, but has only symbolic, rather than actual, authority over the Spanish military. Nonetheless, the king's functions as commander-in-chief and symbol of national unity have been exercised, most notably during the military coup of 23 February 1981, when King Juan Carlos I addressed the country on national television in military uniform, denouncing the coup and urging the maintenance of the law and the continuance of the democratically elected government, thus defusing the uprising.
Line of succession
The Spanish constitution, promulgated in 1978, established explicitly that Juan Carlos I is the legitimate heir of the historical dynasty. This statement served two purposes. First, it established that the position of the King emanates from the constitution, the source from which its existence is democratically legitimized. Secondly, it reaffirmed the dynastic legitimacy of the person of Juan Carlos I, not so much to end old dynastic struggles (namely those historically embraced by the Carlist movement) as because of the renunciation of all rights of succession that his father, Juan de Borbón y Battenberg, made in 1977.
The constitution also establishes that the monarchy is hereditary following a "regular order of primogeniture and representation: the earlier line shall precede the later; within the same line, the closer degree shall precede the more distant; within the same degree, male shall precede female; and within the same sex, the older shall precede the younger". In practice this means that the Crown passes to the firstborn, who has preference over his siblings and cousins; women can accede to the throne only if they have no brothers; and the "regular order of representation" means that grandchildren of a deceased King have preference over his parents, uncles or siblings. Finally, if all possible rightful lines of primogeniture and representation have been exhausted, the Cortes Generales will select a successor in the way that best suits the interests of Spain. The heir apparent, the Crown Prince, holds the title of Prince of Asturias. The current Crown Prince is Felipe de Borbón.
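Read procedurally, these rules define a lexicographic ordering of the candidates. The Python sketch below is purely illustrative: the tuple encoding (line, degree of kinship, sex, birth order) is an assumption of the example and not anything the constitution specifies, though the resulting order matches the actual succession, with Felipe preceding his older sisters.

```python
# Illustrative encoding only: lower values take precedence at each position.
# (line, degree, sex, birth_order): earlier line first, closer degree first,
# male (0) before female (1) under the 1978 text, and older sibling first.
candidates = [
    ("Elena",    (1, 1, 1, 0)),  # child of the monarch, female, first-born
    ("Cristina", (1, 1, 1, 1)),  # child of the monarch, female, second-born
    ("Felipe",   (1, 1, 0, 2)),  # child of the monarch, male, third-born
]

order = sorted(candidates, key=lambda c: c[1])
print([name for name, _ in order])  # ['Felipe', 'Elena', 'Cristina']
```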
The Cortes Generales
Legislative power is vested in the Spanish Parliament, the Cortes Generales (literally "General Courts", but rarely translated; "Cortes" has been the historical and constitutional name used since medieval times, and the qualifier "General", added in the 1978 constitution, signals the nation-wide character of the Parliament, since the legislatures of some autonomous communities are also labeled "Cortes"). The Cortes Generales are the supreme representative body of the Spanish people. The legislature is bicameral, composed of the Congress of Deputies (Spanish: Congreso de los Diputados) and the Senate (Spanish: Senado). The Cortes Generales exercise the legislative power of the State, approving the budget and controlling the actions of the government. As in most parliamentary systems, more legislative power is vested in the lower chamber, the Congress of Deputies. The Speaker of Congress, known as the "president of the Congress of Deputies", presides over joint sessions of the Cortes Generales.
The chambers of the Cortes Generales meet in separate precincts and carry out their duties separately, except for certain important functions, for which they meet in joint session. Such functions include the elaboration of laws proposed by the executive ("the Government"), by one of the chambers, by an autonomous community, or through popular initiative, and the approval or amendment of the nation's budget proposed by the prime minister.
The Congress of Deputies
The Congress of Deputies must comprise a minimum of 300 and a maximum of 400 deputies (members of parliament; currently 350), elected by universal, free, equal, direct and secret suffrage to four-year terms or until the dissolution of the Cortes Generales. The voting system is proportional representation with closed party lists following the D'Hondt method, in which each province forms a constituency, or electoral circumscription, and must be assigned a minimum of 2 deputies; the autonomous cities of Ceuta and Melilla are each assigned one deputy.
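The D'Hondt method is a highest-averages procedure: each seat in turn goes to the party with the highest quotient votes/(seats already won + 1). A minimal Python sketch for a single hypothetical constituency follows; the party names and vote totals are invented for the example, and real allocations also apply the 3% provincial threshold discussed under "Electoral process" below.

```python
def dhondt(votes, seats):
    """Allocate `seats` among parties using the D'Hondt highest-averages method.
    `votes` maps party name -> vote count; returns party name -> seats won."""
    allocation = {party: 0 for party in votes}
    for _ in range(seats):
        # The next seat goes to the party with the highest current quotient,
        # votes / (seats already won + 1).
        winner = max(votes, key=lambda p: votes[p] / (allocation[p] + 1))
        allocation[winner] += 1
    return allocation

# Hypothetical 5-seat constituency (names and totals invented):
print(dhondt({"A": 168000, "B": 104000, "C": 72000, "D": 24000}, 5))
# -> {'A': 3, 'B': 1, 'C': 1, 'D': 0}
```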
The Congress of Deputies can initiate legislation, and it also has the power to ratify or reject the decree laws adopted by the executive. It also elects, via a vote of investiture, the prime minister (the "president of the Government"), before he or she can be formally sworn into office by the King. The Congress of Deputies may adopt a motion of censure, whereby it can vote out the prime minister by an absolute majority. On the other hand, the prime minister may at any time request a vote of confidence from the Congress of Deputies; if he or she fails to obtain it, the Government must resign.
The upper chamber is the Senate, nominally the chamber of territorial representation. Four senators are elected for each province, with the exception of the insular provinces, in which the number of senators varies: three senators are elected for each of the three major islands (Gran Canaria, Mallorca and Tenerife) and one senator for Ibiza-Formentera, Menorca, Fuerteventura, La Gomera, El Hierro, Lanzarote and La Palma. The autonomous cities of Ceuta and Melilla each elect two senators.
In addition, the legislative assembly of each autonomous community designates one senator, plus another for each additional one million inhabitants; this designation must follow proportional representation. For the 2011 elections, this system produced 266 senators, 208 of whom were elected and 58 designated by the autonomous communities. Senators serve four-year terms or until the dissolution of the Cortes Generales. Even though the constitution explicitly refers to the Senate as the chamber of territorial representation, as the numbers above show, only about one-fifth of the senators actually represent the autonomous communities. Since the constitution allowed for the creation of autonomous communities but the process itself was embryonic (the communities were formed after the promulgation of the constitution, and the outcome was unpredictable), the constituent assembly chose the province as the basis for territorial representation.
The Senate has less power than the Congress of Deputies: it can veto legislation, but its veto can be overturned by an absolute majority of the Congress of Deputies. Its only exclusive power concerns the autonomous communities, thus in a way performing a function in line with its nature of "territorial representation". By an overall majority, the Senate is the institution that authorizes the Government to adopt measures to enforce an autonomous community's compliance with its constitutional duties when it has failed to do so. Since the constitution of 1978 came into effect, however, this has never occurred.
The Government and the Council of Ministers
At the national level, executive power in Spain is exercised only by "the Government" (the King is the head of state, but the constitution does not attribute any executive faculties to him). The Government is composed of a prime minister, known as the "president of the Government" (Spanish: presidente del gobierno), one or more deputy prime ministers, known as "vice-presidents of the Government" (Spanish: vicepresidentes del gobierno), and all other ministers. The collegiate body formed by the prime minister, the deputy prime ministers and all other ministers is called the Council of Ministers. The Government is in charge of both domestic and foreign policy, as well as defense and economic policy. As of 21 December 2011, the prime minister of Spain is Mariano Rajoy, president of the People's Party, who was the leader of the opposition from 2004 to 2011.
The constitution establishes that after elections the King, after consulting with all political groups represented in the Congress of Deputies, proposes a candidate for the "presidency of the Government", or prime ministership, through the Speaker of Congress. The candidate then presents the political program of his or her government and requests the Congress's confidence. If the Congress grants it by absolute majority, the King nominates him formally as "president of the Government"; if the candidate fails to obtain an absolute majority, the Congress votes again 48 hours later, in which case a simple majority suffices. If he or she fails again, the King presents other candidates until one gains confidence. If after two months no candidate has obtained it, the King dissolves the Cortes Generales and calls new elections with the endorsement of the Speaker of Congress. In practice, the candidate has been the leader of the party that obtained the largest number of seats in the Congress. Since the constitution of 1978 came into effect there have been no coalition governments; even when the party with the largest number of seats has failed to obtain an absolute majority, the party in government has relied on the support of minority parties to gain confidence and to approve the State's budgets.
After the candidate obtains the confidence of the Congress of Deputies, he is appointed prime minister by the King in a ceremony of inauguration, in which he is sworn in at the Audience Hall of the Palace of Zarzuela, the residence of the King, in the presence of the Major Notary of the Kingdom. The candidate takes the oath of office over an open copy of the Constitution next to a Holy Bible. The oath used is: "I swear/promise to faithfully carry out the duties of the position of president of the Government with loyalty to the King; to obey and enforce the Constitution as the fundamental law of the State; and to keep secret the deliberations of the Council of Ministers".
The prime minister proposes the deputy prime ministers and the other ministers, who are then appointed by the King. The number of ministries and the scope of each one's competences are established by the prime minister. Ministries are usually created to cover one or several related sectors of government for administrative purposes. Once formed, the Government meets as the "Council of Ministers", usually every Friday at the Palace of Moncloa in Madrid, the official residence of the prime minister, who presides over the meetings, although exceptionally they can be held in any other Spanish city. Also exceptionally, the meeting can be presided over by the King, at the request of the prime minister, in which case the Council informs the King of the State's affairs.
As of December 2011, the government consists of one deputy prime minister, Soraya Sáenz de Santamaría, and 12 ministries:
- Ministry of Foreign Affairs (Ministerio de Asuntos Exteriores y Cooperación) - José Manuel García-Margallo y Marfil
- Ministry of Justice (Ministerio de Justicia) - Alberto Ruiz-Gallardón
- Ministry of Defense (Ministerio de Defensa) - Pedro Morenés Eulate
- Ministry of Finance (Ministerio de Hacienda y Administraciones Públicas) - Cristóbal Montoro Romero
- Ministry of Home Affairs (Ministerio del Interior) - Jorge Fernández Díaz
- Ministry of Development (Ministerio de Fomento) - Ana María Pastor Julián
- Ministry of Education (Ministerio de Educación, Cultura y Deporte) - José Ignacio Wert Ortega
- Ministry of Employment and Social Security (Ministerio de Empleo y Seguridad Social) - María Fátima Báñez García
- Ministry of Industry, Energy and Tourism (Ministerio de Industria, Energía y Turismo) - José Manuel Soria López
- Ministry of Agriculture, Nutrition and Environment (Ministerio de Agricultura, Alimentación y Medio Ambiente) - Miguel Arias Cañete
- Ministry of Economy (Ministerio de Economía y Competitividad) - Luis de Guindos Jurado
- Ministry of Health (Ministerio de Sanidad, Servicios Sociales e Igualdad) - Ana Mato
The Council of State
The constitution also established the Council of State, the supreme advisory council to the Spanish government. Though the body has existed intermittently since medieval times, its current composition and the nature of its work are defined in the constitution and in subsequent laws, the most recent from 2004. It is currently composed of a president, nominated by the Council of Ministers; several ex officio councilors (former prime ministers of Spain; the directors or presidents of the Royal Spanish Academy, the Royal Academy of Jurisprudence and Legislation, the Royal Academy of History and the Social and Economic Council; the Attorney General of the State; the Chief of Staff; the governor of the Bank of Spain; the Director of the Juridical Service of the State; and the presidents of the General Commission of Codification and Law); several permanent councilors, appointed by decree; and no more than ten elected councilors, in addition to the Council's Secretary General. The Council of State serves only as an advisory body, giving non-binding opinions upon request and proposing alternative solutions to the problems presented to it.
The Judiciary
The Judiciary in Spain is composed of judges and magistrates who administer justice in the King's name. It comprises different courts depending on the jurisdictional order and the matter to be judged. The highest-ranking court is the Supreme Court (Spanish: Tribunal Supremo), with jurisdiction over all of Spain, superior in all matters except constitutional guarantees. The Supreme Court is headed by a president, nominated by the King on the proposal of the General Council of the Judiciary. That institution is the governing body of the Judiciary, composed of the president of the Supreme Court and twenty members appointed by the King for five-year terms: twelve judges and magistrates of all judicial categories, four members nominated by the Congress of Deputies and four by the Senate, elected in both cases by three-fifths of their respective members. They are to be elected from among lawyers and jurists of acknowledged competence with over 15 years of professional experience.
The Constitutional Court (Spanish: Tribunal Constitucional) has jurisdiction over all of Spain and is competent to hear appeals against the alleged unconstitutionality of laws and regulations having the force of law, as well as individual appeals for protection (recursos de amparo) against violations of the rights and liberties granted by the constitution. It consists of 12 members appointed by the King: 4 proposed by the Congress of Deputies by three-fifths of its members, 4 proposed by the Senate by the same majority, 2 proposed by the executive and 2 proposed by the General Council of the Judiciary. They are to be renowned magistrates, prosecutors, university professors, public officials or lawyers, all of them jurists of recognized competence or standing with more than 15 years of professional experience.
Regional government
The second article of the constitution declares that the Spanish nation is the common and indivisible homeland of all Spaniards, made up of nationalities and regions whose right to self-government the constitution recognizes and guarantees. Since the constitution of 1978 came into effect, these nationalities and regions have progressively acceded to self-government and been constituted as 17 autonomous communities. In addition, two autonomous cities were constituted on the coast of North Africa. This administrative and political territorial division is known as the "State of Autonomies". Though highly decentralized, Spain is not a federation, since the nation, as represented in the central institutions of government, retains full sovereignty.
The State, that is, the central government, has progressively and asymmetrically devolved or transferred power and competences to the autonomous communities after the constitution of 1978 came into effect. Each autonomous community is governed by a set of institutions established in its own Statute of Autonomy. The Statute of Autonomy is the basic organic institutional law, approved by the legislature of the community itself as well as by the Cortes Generales, the Spanish Parliament. The Statutes of Autonomy establish the name of the community according to its historical identity; the delimitation of its territory; the name, organization and seat of the autonomous institutions of government; and the competences that they assume and the foundations for their devolution or transfer from the central government.
All autonomous communities have a parliamentary form of government, with a clear separation of powers. Their legislatures represent the people of the community, exercising legislative power within the limits set forth in the constitution of Spain and the degree of devolution that the community has attained. Even though the central government has progressively transferred roughly the same amount of competences to all communities, devolution is still asymmetrical. More power was devolved to the so-called "historical nationalities" — the Basque Country, Catalonia and Galicia. (Other communities chose afterwards to identify themselves as nationalities as well). The Basque Country, Catalonia and Navarre have their own police forces (Ertzaintza, Mossos d'Esquadra and the Chartered Police respectively) while the National Police Corps operates in the rest of the autonomous communities. On the other hand, two communities (the Basque Country and Navarre) are "communities of chartered regime", that is, they have full fiscal autonomy, whereas the rest are "communities of common regime", with limited fiscal powers (the majority of their taxes are administered centrally and redistributed among them all for fiscal equalization).
The names of the executive government and the legislature vary between communities. Some institutions are restored historical bodies of government of the previous kingdoms or regional entities within the Spanish crown — like the Generalitat of Catalonia — while others are entirely new creations. In some, both the executive and the legislature, though constituting two separate institutions, are collectively identified with a specific name. It should be noted, though, that a specific denomination may not refer to the same branch of government in all communities; for example, "Junta" may refer to the executive office in some communities, to the legislature in others, or to the collective name of all branches of government in others.
The two autonomous cities have more limited competences. Executive power is exercised by a president, who is also the mayor of the city. Likewise, limited legislative power is vested in a local Assembly whose deputies are also the city councilors.
The constitution also guarantees a certain degree of autonomy to two other political entities: the provinces of Spain (subdivisions of the autonomous communities) and the municipalities (subdivisions of the provinces). If a community comprises a single province, the institutions of government of the community replace those of the province. In the other communities, provincial government is held by Provincial Deputations or Councils. With the creation of the autonomous communities, deputations have lost much of their power and have a very limited scope of action, with the exception of the Basque Country, where provinces are known as "historical territories" and their bodies of government retain more faculties. Except in the Basque Country, members of the Provincial Deputations are indirectly elected by citizens according to the results of the municipal elections, and all of their members must be councilors of a town or city in the province. In the Basque Country, direct elections do take place.
Spanish municipal administration is highly homogeneous; most municipalities have the same faculties, such as managing the municipal police, traffic enforcement, urban planning and development, social services, collecting municipal taxes, and ensuring civil defense. In most municipalities, citizens elect the municipal council, which is responsible for electing the mayor, who then appoints a board of governors or councilors from his party or coalition. The only exceptions are municipalities with under 50 inhabitants, which operate as an open council, with a directly elected mayor and an assembly of neighbors. Municipal elections are held every four years on the same date for all municipalities in Spain. Councilors are allotted using the D'Hondt method for proportional representation, with the exception of municipalities with under 100 inhabitants, where block voting is used instead. The number of councilors is determined by the population of the municipality, the smallest municipalities having 5 and the largest (Madrid) having 55.
Political parties
Spain is a multi-party constitutional parliamentary democracy. According to the constitution, political parties are the expression of political pluralism, contributing to the formation and expression of the will of the people, and are an essential instrument of political participation. Their internal structure and functioning must be democratic. The Law of Political Parties of 1978 provides them with public funding based on the number of seats held in the Cortes Generales and the number of votes received. Since the mid-1980s, two parties have dominated the national political landscape: the Spanish Socialist Workers' Party (Spanish: Partido Socialista Obrero Español) and the People's Party (Spanish: Partido Popular).
The Spanish Socialist Workers' Party (PSOE) is a social-democratic, center-left political party. It was founded in 1879 by Pablo Iglesias, originally as a Marxist party for the working class, and later evolved towards social democracy. Outlawed during Franco's dictatorship, it regained legal status during the Spanish transition to democracy, a period in which, under the leadership of Felipe González, it officially renounced Marxism. It played a key role during the transition and in the Constituent Assembly that wrote Spain's current constitution. It governed Spain from 1982 to 1996 under the prime ministership of Felipe González, during which time the party adopted a socio-liberal economic policy. It governed again from 2004 to 2011 under the prime ministership of José Luis Rodríguez Zapatero.
The People's Party (PP) is a conservative centre-right party that took its current name in 1989, replacing the previous People's Alliance, a more conservative party founded in 1976 by seven former ministers of the Franco regime. In its refoundation it incorporated the Liberal Party and the majority of the Christian democrats, and in 2005 it absorbed the Democratic and Social Center Party. It governed Spain under the prime ministership of José María Aznar from 1996 to 2004, and has been the party in government since December 2011, headed by Mariano Rajoy.
Other parties or coalitions represented in the Cortes Generales after the 20 November 2011 election are:
- Convergence and Union (Catalan: Convergència i Unió, CiU)
- Socialists' Party of Catalonia (Catalan: Partit dels Socialistes de Catalunya, PSC)
- Plural Left (Spanish: Izquierda Plural, IP); a coalition of several left-wing parties, among which the largest party is the United Left (Spanish: Izquierda Unida, IU)
- Amaiur, a coalition of Basque nationalist parties
- Union, Progress and Democracy (Spanish: Unión, Progreso y Democracia, UPyD)
- Basque Nationalist Party (Basque: Euzko Alderdi Jeltzalea, Spanish: Partido Nacionalista Vasco, PNV)
- Republican Left of Catalonia (Catalan: Esquerra Republicana de Catalunya, ERC)
- Galician Nationalist Bloc (Galician: Bloque Nacionalista Galego, BNG)
- Coalition of the Canarian Coalition (Spanish: Coalición Canaria) and New Canaries (Spanish: Nueva Canarias)
- Commitment Coalition (Catalan: Coalició Compromís), a coalition of Valencian parties
- Citizen's Forum
- Yes to the Future (Basque: Geroa Bai).
Electoral process
Suffrage is free and secret for all Spanish citizens aged 18 and older in all elections; residents who are citizens of other European Union countries may vote only in local municipal elections and in elections to the European Parliament.
Congress of Deputies
Elections to the Cortes Generales are held every four years, or earlier if the prime minister calls an early election. Members of the Congress of Deputies are elected through proportional representation with closed party lists, with the provinces serving as electoral districts; that is, deputies are selected from province-wide party lists. Under the current system, sparsely populated provinces are overrepresented, because they are allocated more seats than a strictly proportional distribution by population would give them.
The system not only overrepresents provinces with small populations; it also tends to favor major political parties. Despite the use of proportional representation, which in general encourages a larger number of small political parties rather than a few large ones, Spain has effectively a two-party system in which smaller and regional parties tend to be underrepresented. This is due to several factors:
- Due to the great disparity in population among provinces, even though smaller provinces are overrepresented, the total number of deputies assigned to them is still small and tends to go to one or two major parties, even if other smaller parties manage to obtain more than 3% of the votes, the minimum threshold for representation in the Congress.
- The average district magnitude (the average number of seats per constituency) is one of the lowest in Europe, owing to the large number of constituencies. A low district magnitude tends to increase the number of wasted votes (votes that cannot affect the result because they were cast for small parties that could not pass the effective threshold), and in turn increases disproportionality (the share of seats and the share of votes won by a party become less proportional). It is often regarded as the most important factor limiting the number of parties in Spain. Baldini and Pappalardo make this point by comparing Spain with the Netherlands, where the parliament is elected by proportional representation in a single national constituency; there, the parliament is much more fragmented and the number of parties much higher than in Spain.
- The D'Hondt method (a type of highest-averages method) is used to allocate the seats, which slightly favors the major parties when compared with the Sainte-Laguë method (another highest-averages method) or the usual largest-remainder methods. The use of the D'Hondt method is suggested to contribute to a certain degree, though not as much as the low number of seats per constituency, to the bipolarization of the party system (a comparison is sketched after this list).
- The 3% threshold for entering the Congress is ineffective in many provinces, where the number of seats per constituency is so low that the actual threshold is effectively much higher, so many parties cannot obtain representation despite clearing 3% in the constituency. For example, the effective threshold in a three-seat constituency is around 25%, making the nominal 3% threshold irrelevant there (this, too, is illustrated in the sketch after this list). In the largest constituencies, such as Madrid and Barcelona, where the number of seats is much higher, the 3% threshold does still eliminate the smallest parties.
- The size of the Congress (350 members) is relatively small. It is suggested by Lijphart that the small size of parliament may encourage disproportionality and so favor the large parties.
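A minimal sketch of the district-magnitude, divisor-method and effective-threshold points above (all figures invented): in a hypothetical three-seat province, party C holds 20% of the vote, far above the nominal 3% threshold, yet wins nothing under D'Hondt, while the Sainte-Laguë divisors (1, 3, 5, ...) would seat it.

```python
def highest_averages(votes, seats, divisor):
    """Generic highest-averages allocation. `divisor(k)` is the divisor
    applied to a party that has already won k seats."""
    won = {party: 0 for party in votes}
    for _ in range(seats):
        best = max(votes, key=lambda p: votes[p] / divisor(won[p]))
        won[best] += 1
    return won

dhondt_divisor = lambda k: k + 1            # divisors 1, 2, 3, ...
sainte_lague_divisor = lambda k: 2 * k + 1  # divisors 1, 3, 5, ...

# Invented three-seat province; vote shares are 46%, 34% and 20%.
votes = {"A": 46000, "B": 34000, "C": 20000}
print(highest_averages(votes, 3, dhondt_divisor))        # {'A': 2, 'B': 1, 'C': 0}
print(highest_averages(votes, 3, sainte_lague_divisor))  # {'A': 1, 'B': 1, 'C': 1}
```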
In the Senate, each province, with the exception of the insular provinces, selects four senators using block voting: voters cast ballots for up to three candidates, and the four candidates with the largest number of votes are elected. The number of senators selected for the islands varies with their size, from three to one. A similar block-voting procedure is used to select the three senators of each of the three major islands, whereas the senators of the smaller islands or groups of islands are elected by plurality. In addition, the legislative assembly of each autonomous community designates one senator, plus another for each additional one million inhabitants.
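A rough sketch of the block-voting tally for a peninsular province follows; the candidate names and ballots are invented, and ties are broken here simply by insertion order rather than by anything the electoral law specifies.

```python
from collections import Counter

def senate_block_vote(ballots, seats=4):
    """Tally block-voting ballots (each ballot names up to three candidates,
    one vote per name) and return the `seats` candidates with the most votes."""
    tally = Counter(name for ballot in ballots for name in ballot)
    return [name for name, _ in tally.most_common(seats)]

# Invented ballots for a hypothetical province:
ballots = [
    ("Alba", "Bravo", "Cano"),
    ("Alba", "Bravo", "Diez"),
    ("Alba", "Cano", "Ega"),
    ("Bravo", "Diez", "Ega"),
    ("Alba", "Bravo", "Cano"),
]
print(senate_block_vote(ballots))  # ['Alba', 'Bravo', 'Cano', 'Diez']
```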
Electoral participation
Electoral participation, which is not compulsory, has traditionally been high, peaking just after democracy was restored in the late 1970s, falling during the 1980s, and trending upwards again in the 1990s. Since then, the abstention rate has ranged from around one-fifth to nearly one-third of the electorate.
Recent historical political developments
The end of the Spanish Civil War put an end to the Second Spanish Republic (1931–1939), after which a dictatorial regime headed by General Francisco Franco was established. In 1947, in the Law of Succession of the Head of State, one of the eight Fundamental Laws of his regime, Franco decreed that Spain was a monarchy with a vacant throne, that he was the head of State as general and caudillo of Spain, and that he would propose, when he deemed it opportune, his successor, who would bear the title of King or Regent of Spain. Even though Juan of Bourbon, the legitimate heir to the monarchy, opposed the law, Franco met him in 1948, and they agreed that his son Juan Carlos, then 10 years old and living in Rome, would finish his education in Spain according to the "principles" of the Francoist movement. In 1969, Franco finally designated Juan Carlos as his successor, with the title "Prince of Spain", bypassing his father Juan of Bourbon.
Francisco Franco died on 20 November 1975, and Juan Carlos was proclaimed King of Spain by the Spanish Cortes, the non-elected assembly that operated during Franco's regime. Even though Juan Carlos I had sworn allegiance to the "National Movement", the sole legal party of the regime, he expressed his support for a transformation of the Spanish political system as soon as he took office. Such an endeavor was neither easy nor simple: the opposition to the regime had to ensure that nobody in its ranks turned to extremism, and the Army had to resist the temptation to intervene to restore the "Movement".
In 1976 the King designated Adolfo Suárez as prime minister ("president of the Government"), with the task of convincing the regime to dismantle itself and to call elections to a Constituent Assembly. Suárez accomplished both tasks, and the first democratically elected Constituent Cortes since the Second Spanish Republic met in 1977. In 1978 a new democratic constitution was promulgated and approved by referendum. The constitution declared Spain a constitutional parliamentary monarchy with H.M. King Juan Carlos I as head of state. Spain's transformation from an authoritarian regime to a successful modern democracy was a remarkable achievement, creating a model emulated by other countries undergoing similar transitions.
Adolfo Suárez headed the Government of Spain from 1977 to 1981 as the leader of the Union of the Democratic Center party. He resigned on 29 January 1981, and on 23 February 1981, the day the Congress of Deputies was to designate a new prime minister, rebel elements of the Civil Guard seized the Cortes Generales in a failed coup that ended the following day. The great majority of the military remained loyal to the King, who used his personal and constitutional authority as commander-in-chief of the Spanish Armed Forces to defuse the uprising and save the constitution by addressing the country on television.
In October 1982, the Spanish Socialist Workers' Party, led by Felipe González, swept both the Congress of Deputies and the Senate, winning an absolute majority in both chambers of the Cortes Generales. González headed the Government of Spain for the next 13 years, during which Spain joined NATO and the European Community.
The government also created new social laws and undertook large-scale infrastructure projects, expanding the educational system and establishing a welfare state. Although traditionally affiliated with one of Spain's major trade unions, the General Union of Workers (UGT), the PSOE distanced itself from the unions in an effort to improve Spain's competitiveness in preparation for admission to the EC and for further economic integration with Europe afterwards. Following a policy of liberalization, González's government closed state corporations under the state holding company, the National Industry Institute (INI), and downsized the coal, iron and steel industries. The PSOE implemented the single-market policies of the Single European Act and domestic policies consistent with the Maastricht Treaty EMU criteria. The country modernized and developed economically in this period, closing the gap with other European Community members, and underwent a significant cultural shift towards a tolerant, open, contemporary society.
In March 1996, José María Aznar, of the People's Party, obtained a relative majority in Congress. Aznar moved to further liberalize the economy, with a program of complete privatization of state-owned enterprises, labor market reform and other policies designed to increase competition in selected markets. Aznar liberalized the energy sector, national telecommunications and television broadcasting networks. To ensure a successful outcome of this liberalization, the government set up the Competition Defense Court (Spanish: Tribunal de Defensa de la Competencia), an antitrust regulatory body entrusted with restraining monopolistic practices. During Aznar's government Spain qualified for the Economic and Monetary Union of the European Union and adopted the euro, replacing the peseta, in 2002. Spain participated, along with the United States and other NATO allies, in military operations in the former Yugoslavia, and Spanish armed forces and police personnel were included in the international peacekeeping forces in Bosnia and Herzegovina and Kosovo. Having obtained an absolute majority in the 2000 elections, Aznar headed the Government until 2004. Aznar supported transatlantic relations with the United States and participated in the War on Terrorism and the invasion of Iraq. In 2004 he decided not to run again as the People's Party candidate and proposed Mariano Rajoy, who had been a minister in his government, as his successor as leader of the party.
In the aftermath of the terrorist bomb attacks in Madrid, which occurred just three days before the elections, the Spanish Socialist Workers' Party won a surprising victory. Its leader, José Luis Rodríguez Zapatero, headed the Government from 2004 to 2011, winning a second term in 2008. Under a policy of gender equality, his was the first Spanish Government to have the same number of male and female members in the Council of Ministers. During the first four years of his prime ministership the economy continued to expand rapidly and the government ran budget surpluses. His government brought socially liberal changes to Spain, promoting women's rights, changing the abortion law and legalizing same-sex marriage, and tried to make the State more secular. The economic crisis of 2008 took a heavy toll on Spain's economy, which had been highly dependent on construction since the boom of the late 1990s and early 2000s. When the international financial crisis hit, the construction industry collapsed along with property values, and several banks and cajas (savings banks) needed rescuing or consolidation. Economic growth slowed sharply and unemployment soared to over 20%, levels not seen since the late 1990s. With counter-cyclical policies applied at the beginning of the crisis and the ensuing drop in State revenues, government finances fell into deficit. During an 18-month period from 2010 to 2011, the government adopted severe austerity measures, cutting spending and laying off workers.
In March 2011, Rodríguez Zapatero announced his decision not to lead the Socialist Party in the coming elections, which he called ahead of schedule for 20 November 2011. The People's Party, which presented Mariano Rajoy as its candidate for the third time, won a decisive victory, obtaining an absolute majority in the Congress of Deputies. Alfredo Pérez Rubalcaba, first deputy prime minister during Rodríguez Zapatero's government and the Socialist candidate in 2011, was elected secretary general of his party in 2012 and is now the leader of the opposition in Parliament.
Key political issues
The nationality debate
Spanish political developments since the early twentieth century have been marked by the existence of peripheral nationalisms and the debate over whether Spain can be viewed as a plurinational State. Spain is a diverse country with contrasting regions showing varying economic and social structures, as well as different languages and historical, political and cultural traditions. Peripheral nationalist movements have been present mainly in the Basque Country, Catalonia and Galicia, some advocating a special recognition of their "national identity" within the Spanish state and others their right to self-determination or independence.
The Constituent Assembly in 1978 struck a balance between the opposing views of centralism, inherited from Franco's regime, and of those who viewed Spain as a "nation of nations". In its second article, the constitution recognizes the Spanish nation as the common and indivisible homeland of all Spaniards, made up of nationalities and regions. In practice, and as the term came to be used in Spanish jurisprudence, "nationalities" refers to those regions or autonomous communities with a strong historically constituted sense of identity or a recognized historical cultural identity, as part of the indivisible Spanish nation. This recognition, and the process of devolution within the "State of Autonomies", have legitimated the Spanish state among the "nationalities", and many of their citizens feel content with the current status quo. Nonetheless, tensions between peripheral nationalism and centralism continue, with some nationalist parties still advocating recognition of the other "nations" of the Spanish Kingdom or a peaceful process towards self-determination.
The Government of Spain has been involved in a long-running campaign against Basque Fatherland and Liberty (ETA), an armed secessionist organization founded in 1959 in opposition to Franco and dedicated to promoting Basque independence, ultimately through violent means, though violence was not originally part of its methods. Its members consider it a guerrilla organization, but internationally it is considered a terrorist organization. Although the government of the Basque Country does not condone any kind of violence, the two governments' different approaches to the separatist movement are a source of tension between the central and Basque administrations.
Initially, ETA targeted primarily Spanish security forces, military personnel and Spanish Government officials. As the security forces and prominent politicians improved their own security, ETA increasingly focused its attacks on the tourist season (scaring tourists away was seen as a way of putting pressure on the government, given the sector's importance to the economy) and on local government officials in the Basque Country. The group carried out numerous bombings against Spanish Government facilities and economic targets, including a car bomb assassination attempt on then-opposition leader Aznar in 1995, in which his armored car was destroyed but he was unhurt. The Spanish Government attributes over 800 deaths to ETA during its campaign of terrorism.
On 17 May 2005, all the parties in the Congress of Deputies except the PP passed the Government's motion approving the opening of peace talks with ETA, without making political concessions and with the requirement that it give up its weapons. PSOE, CiU, ERC, PNV, IU-ICV, CC and the mixed group (BNG, CHA, EA and NB) supported it with a total of 192 votes, while the 147 PP parliamentarians objected. ETA declared a "permanent cease-fire" that came into force on 24 March 2006 and was broken by the Barajas T4 international airport bombing on 30 December 2006. In the years leading up to the cease-fire, the government had had more success in controlling ETA, due in part to increased security cooperation with the French authorities.
Spain has also contended with a Marxist resistance group commonly known as GRAPO (First of October Anti-Fascist Resistance Groups), an urban guerrilla organization founded in Vigo, Galicia, that seeks to overthrow the Spanish Government and establish a Marxist-Leninist state. It opposes Spanish participation in NATO and the U.S. presence in Spain, and has a long history of assassinations, bombings, bank robberies and kidnappings, mostly against Spanish interests, during the 1970s and 1980s.
In a June 2000 communiqué following the explosion of two small devices in Barcelona, GRAPO claimed responsibility for several attacks throughout Spain during the previous year, including two failed armored car robberies, one of which left two security officers dead, and four bombings of political party offices during the 1999-2000 election campaign. In 2002, Spanish authorities succeeded in hampering the organization's activities through sweeping arrests, including some of the group's leadership. GRAPO is no longer capable of maintaining the operational capability it once enjoyed; most of its members are either in jail or abroad.
International organization participation
Spain is a member of AfDB, AsDB, Australia Group, BIS, CCC, CE, CERN, EAPC, EBRD, ECE, ECLAC, EIB, EMU, ESA, EU, FAO, IADB, IAEA, IBRD, ICAO, ICC, ICFTU, ICRM, IDA, IEA, IFAD, IFC, IFRCS, IHO, ILO, IMF, IMO, Inmarsat, Intelsat, Interpol, IOC, IOM (observer), ISO, ITU, LAIA (observer), NATO, NEA, NSG, OAS (observer), OECD, OPCW, OSCE, PCA, United Nations, UNCTAD, UNESCO, UNHCR, UNIDO, UNMIBH, UNMIK, UNTAET, UNU, UPU, WCL, WEU, WHO, WIPO, WMO, WToO, WTrO, Zangger Committee
- The Community of Madrid was detached from Castile-La Mancha to form a distinct autonomous community in the nation's interest, since its capital, Madrid, is also the capital of the Spanish nation and the seat of the State's institutions of government. It is therefore referred to neither as a region nor as a nationality in its Statute of Autonomy.
- Navarra acceded to self-government through the "reintegration" and "improvement" of its medieval charters, whereby it had some autonomy to manage its internal affairs. | http://en.wikipedia.org/wiki/Politics_of_the_Canary_Islands | 13
20 | THE PLAGUE OF ATHENS - 430-427/425 BC
March 28th, 2007, 05:42 PM
Plague of Athens
From Wikipedia, the free encyclopedia
The Plague of Athens was a devastating epidemic which hit the city-state of Athens in ancient Greece during the second year of the Peloponnesian War (430 BC), when an Athenian victory still seemed within reach. It is believed to have entered Athens through Piraeus, the city's port and sole source of food and supplies. The city-state of Sparta, and much of the eastern Mediterranean, was also struck by the disease. The plague returned twice more, in 429 BC and in the winter of 427/6 BC.
Sparta and its allies, with the exception of Corinth, were almost exclusively land-based powers, able to summon large land armies which were very nearly unbeatable. Under the direction of Pericles, the Athenians retreated behind the city walls of Athens, hoping to keep the Spartans at bay while the superior Athenian navy harassed Spartan troop transports and cut off supply lines. Unfortunately, the strategy also added many people from the countryside to an already well-populated city, and people from parts of Athens lying outside the city wall moved into the more protected central area. As a result, Athens became a breeding ground for disease.
In his History of the Peloponnesian War, the contemporary historian Thucydides described the coming of an epidemic disease which began in Ethiopia, passed through Egypt and Libya, and then reached the Greek world. The epidemic broke out in the overcrowded city. Athens lost perhaps one third of the people sheltered within its walls. The sight of the burning funeral pyres of Athens caused the Spartan army to withdraw for fear of the disease. The plague killed many of Athens's infantry, some expert seamen and their leader, Pericles, who died during one of the secondary outbreaks in 429 BC. After the death of Pericles, Athens was led by a succession of incompetent or weak leaders. According to Thucydides, it was not until 415 BC that the Athenian population had recovered sufficiently to mount the disastrous Sicilian Expedition.
Modern historians disagree on whether the plague was a critical factor in the loss of the war. However, it is generally agreed that the loss of this war may have paved the way for the success of the Macedonians and, ultimately, the Romans.
Accounts of the Athenian plague graphically describe the social consequences of an epidemic. Thucydides' account clearly details the complete disappearance of social mores during the time of the plague. The impact of disease on social and religious behavior was also documented during the worldwide pandemic best known as the Black Death.
Fear of the law
Thucydides stated that people ceased fearing the law since they felt they were already living under a death sentence. Likewise people started spending money indiscriminately. Many felt they would not live long enough to enjoy the fruits of wise investment, while some of the poor unexpectedly became wealthy by inheriting the property of their relatives. It is also recorded that people refused to behave honourably because most did not expect to live long enough to enjoy a good reputation for it.
Role of women
The plague changed the role of women in Athenian society. The women were temporarily liberated from the strict bounds of Athenian custom. The plague forced Athens to appoint a magistrate called gynaikonomos to control the behaviour of women.
Care for the sick and dead
Another reason for the lack of honorable behavior was the sheer contagiousness of the illness. Those who tended to the ill were most vulnerable to catching the disease. This meant that many people died alone because no one was willing to risk caring for them. Especially poignant are descriptions of how people were not cared for due to the overwhelming numbers of sick and dying. People were simply left to die in buildings or on the streets, and the dead were heaped on top of each other, left to rot or shoved into mass graves. There were cases where those carrying the dead would come across an already burning funeral pyre. They would dump a new body on it and walk away. Others appropriated prepared pyres so as to have enough fuel to cremate their own dead. Those lucky enough to survive the plague developed an immunity, and so became the main caretakers of those who later fell ill.
A mass grave and nearly 1,000 tombs, dated to between 430 and 426 BC, have been found just outside Athens' ancient Kerameikos cemetery. The mass grave was bordered by a low wall that seems to have protected the cemetery from a wetland. Excavated during 1994-95, the shaft-shaped grave may have contained a total of 240 individuals, at least ten of them children. Skeletons in the graves were randomly placed, with no layers of soil between them.
Excavator Efi Baziotopoulou-Valavani, of the Third Ephoreia (Directorate) of Antiquities, reported that "(t)he mass grave did not have a monumental character. The offerings we found consisted of common, even cheap, burial vessels; black-finished ones, some small red-figured, as well as white lekythoi (oil flasks) of the second half of the fifth century B.C. The bodies were placed in the pit within a day or two. These [factors] point to a mass burial in a state of panic, quite possibly due to a plague."
The plague also caused religious strife. Since the disease struck the virtuous and sinful alike, people felt abandoned by the gods and refused to worship them. The temples themselves were sites of great misery, as refugees from the Athenian countryside had been forced to find accommodation in the temples. Soon the sacred buildings were filled with the dead and dying. The Athenians pointed to the plague as evidence that the gods favoured Sparta and this was supported by an oracle that said that Apollo himself (the god of medicine) would fight for Sparta if they fought with all their might. An earlier oracle had stated that "War with the Dorians [Spartans] comes and at the same time death".
Thucydides was skeptical of these conclusions and believed that people were simply being superstitious. He relied upon the prevailing medical theory of the day, Hippocratic theory, and strove to gather evidence through direct observation. He noted that birds and animals that ate plague-infested carcasses died as a result, leading him to conclude that the disease had a natural rather than supernatural cause.
Thucydides himself suffered the illness, and survived. He was therefore able to accurately describe the symptoms of the disease within his history of the war.
"As a rule, however, there was no ostensible cause; but people in good health were all of a sudden attacked by violent heats in the head, and redness and inflammation in the eyes, the inward parts, such as the throat or tongue, becoming bloody and emitting an unnatural and fetid breath."
"These symptoms were followed by sneezing and hoarseness, after which the pain soon reached the chest, and produced a hard cough. When it fixed in the stomach, it upset it; and discharges of bile of every kind named by physicians ensued, accompanied by very great distress."
"In most cases also an ineffectual retching followed, producing violent spasms, which in some cases ceased soon after, in others much later."
"Externally the body was not very hot to the touch, nor pale in its appearance, but reddish, livid, and breaking out into small pustules and ulcers. But internally it burned so that the patient could not bear to have on him clothing or linen even of the very lightest description; or indeed to be otherwise than stark naked. What they would have liked best would have been to throw themselves into cold water; as indeed was done by some of the neglected sick, who plunged into the rain-tanks in their agonies of unquenchable thirst; though it made no difference whether they drank little or much."
"Besides this, the miserable feeling of not being able to rest or sleep never ceased to torment them. The body meanwhile did not waste away so long as the distemper was at its height, but held out to a marvel against its ravages; so that when they succumbed, as in most cases, on the seventh or eighth day to the internal inflammation, they had still some strength in them. But if they passed this stage, and the disease descended further into the bowels, inducing a violent ulceration there accompanied by severe diarrhea, this brought on a weakness which was generally fatal." "For the disorder first settled in the head, ran its course from thence through the whole of the body, and even where it did not prove mortal, it still left its mark on the extremities; for it settled in the privy parts, the fingers and the toes, and many escaped with the loss of these, some too with that of their eyes. Others again were seized with an entire loss of memory on their first recovery, and did not know either themselves or their friends."
Translation by M.I. Finley The Viking Portable Greek Historians, pp. 274-275:
Cause of the plague
Historians have long tried to identify the disease behind the Plague of Athens. The disease has traditionally been considered an outbreak of the bubonic plague in its many forms, but reconsiderations of the reported symptoms and epidemiology have led scholars to advance alternative explanations. These include typhus, smallpox, measles, and toxic shock syndrome. Others have suggested anthrax, trampled up from the soil by the thousands of stressed refugees or concentrated livestock held within the walls. Based upon descriptive comparisons with recent outbreaks in Africa, Ebola or a related viral hemorrhagic fever has also been considered.
Given the possibility that symptoms of a known disease may have mutated over time, or that the plague was caused by a disease which no longer exists, the exact nature of the Athenian plague may never be known. Due to crowding caused by the influx of refugees into the city, inadequate food and water supplies, and the increase in insects, lice, rats and waste, conditions would have encouraged more than one disease in the outbreak. However, the use of more modern science is revealing clues.
In January 1999, the University of Maryland devoted its fifth annual medical conference, dedicated to notorious case histories, to the Plague of Athens. The participants concluded that the disease that killed the Greeks and their military and political leader, Pericles, was typhus. "Epidemic typhus fever is the best explanation," said Dr. David Durack, consulting professor of medicine at Duke University. "It hits hardest in times of war and privation, it has about 20 percent mortality, it kills the victim after about seven days, and it sometimes causes a striking complication: gangrene of the tips of the fingers and toes. The Plague of Athens had all these features." In typhus cases, progressive dehydration, debilitation and cardiovascular collapse ultimately cause the patient's death.
This medical opinion is supported by the opinion of A. W. Gomme, an important researcher and interpreter of Thucydides' history, who also believed typhus was the cause of the epidemic. This opinion is expressed in his monumental work "A Historical Commentary on Thucydides", completed after Gomme's death by A. Andrewes and K. J. Dover. Angelos Vlachos (Άγγελος Βλάχος), a member of the Academy of Athens and a diplomat, in his "Remarks on Thucydides" (in Greek: Παρατηρήσεις στο Θουκυδίδη, 1992, Volume I, pages 177-178) acknowledges and supports Gomme's opinion: "Today, according to Gomme, it is generally acceptable that it was typhus" ("Σήμερα, όπως γράφει ο Gomme, έχει γίνει από όλους παραδεκτό ότι ήταν τύφος").
A different answer came from a recent DNA study on teeth from an ancient Greek burial pit. The study, led by Manolis Papagrigorakis of the University of Athens, found DNA sequences similar to those of the organism that causes typhoid fever. Symptoms generally associated with typhoid resemble Thucydides' description. They include:
a high fever from 39 °C to 40 °C (103 °F to 104 °F) that rises slowly;
bradycardia (slow heart rate);
myalgia (muscle pain);
lack of appetite;
in some cases, a rash of flat, rose-colored spots called "rose spots";
extreme symptoms such as intestinal perforation or hemorrhage, delusions and confusion are also possible.
Other scientists have disputed the findings, citing serious methodological flaws in the dental pulp-derived DNA study. In addition, as the disease is most commonly transmitted through poor hygiene habits and poor public sanitation, it is an unlikely cause of a widespread plague emerging in Africa and moving into the Greek city-states, as reported by Thucydides.
March 28th, 2007, 05:59 PM
The Thucydides Syndrome: Ebola Déjà Vu? (or Ebola Reemergent?)
To the Editor:
The plague of Athens (430-427/425 B.C.) persists as one of the great medical mysteries of antiquity. Sometimes termed “the Thucydides syndrome” for the evocative narrative provided by that contemporary observer, the plague of Athens has been the subject of conjecture for centuries. In an unprecedented, devastating 3-year appearance, the disease marked the end of the Age of Pericles in Athens and, as much as the war with Sparta, it may have hastened the end of the Golden Age of Greece. Understood by Thucydides to have its origin “in Ethiopia beyond Egypt, it next descended into Egypt and Libya” and then “suddenly fell upon” Athens’ walled port Piraeus and then the city itself; there it ravaged the densely packed wartime populace of citizens, allies, and refugees. Thucydides, himself a surviving victim, notes that the year had been “especially free of disease” and describes the following major findings: After its “abrupt onset, persons in good health were seized first with strong fevers, redness and burning of the eyes, and the inside of the mouth, both the throat and tongue, immediately was bloody-looking and expelled an unusually foul breath. Following these came sneezing, hoarseness . . . a powerful cough . . . and every kind of bilious vomiting . . . and in most cases an empty heaving ensued that produced a strong spasm that ended quickly or lasted quite a while.” The flesh, although neither especially hot nor pale, was “reddish, livid, and budding out in small blisters and ulcers.” Subject to unquenchable thirst, victims suffered such high temperatures as to reject even the lightest coverings. Most perished “on the ninth or seventh day . . . with some strength still left or many later died of weakness once the sickness passed down into the bowels, where the ulceration became violent and extreme diarrhea simultaneously laid hold (2.49).” Those who survived became immune, but those who vainly attended or even visited the sick fell victim.
By comparison, a modern case definition of Ebola virus infection notes sudden onset, fever, headache, and pharyngitis, followed by cough, vomiting, diarrhea, maculopapular rash, and hemorrhagic diathesis, with a case-fatality rate of 50% to 90%, death typically occurring in the second week of the disease.
Disease among health-care providers and care givers has been a prominent feature. In a review of the 1995 Ebola outbreak in Zaire, the Centers for Disease Control and Prevention reports that the most frequent initial symptoms were fever (94%), diarrhea (80%), and severe weakness (74%), with dysphagia and clinical signs of bleeding also frequently present. Symptomatic hiccups were also reported in 15% of patients.
During the plague of Athens, Thucydides may have made the same unusual clinical observation. The phrase lugx kene, which we have translated as “empty heaving,” lacks an exact parallel in the ancient Greek corpus. Alone, lugx means either “hiccups” or “retching” and is infrequently used, even by the medical writers. Although contexts usually dictate “retching,” we note unambiguous “hiccups” in Plato’s Symposium (185C). In his thorough commentary on the Thucydides passage, the classicist D. L. Page remarks: “Hiccoughs is misleading, unless it is enlarged to include retching.” Regarding “empty, unproductive retching [he] has noted no exact parallel . . . in the [writings of the] doctors, but . . . tenesmus comes very close to it”. A CD-ROM search of Mandell, Bennett, and Dolin discloses no reference to either “hiccups” or “singultus” in the description of any disease entity.
The profile of the ancient disease is remarkably similar to that of the recent outbreaks in Sudan and Zaire and offers another solution to Thucydides’ ancient puzzle. A Nilotic source for a pathogen in the Piraeus, the busy maritime hub of the Delian League (Athens’ de facto Aegean empire), is clearly plausible. PCR examination of contemporaneous skeletal and archaeozoological remains might test this hypothesis against the 29 or more prior theories.
March 28th, 2007, 10:12 PM
How many estimated deaths worldwide?
I couldn't find it. Why are they not addressing this question, why not speculate about it?
March 29th, 2007, 02:38 AM
Originally Released: January, 1999
Plague of Athens: Another medical mystery solved at University of Maryland
Another medical mystery -- the Plague of Athens, which contributed to the end of the Golden Age of Greece -- may have been solved at the fifth annual medical conference dedicated to notorious case histories of the past.
It was probably typhus fever that killed the Greeks and their military and political leader, Pericles, according to "detectives" David Durack, M.B., D.Phil., and Robert Littman, M.Litt., Ph.D.
Each year since 1995, the University of Maryland School of Medicine and the Veterans Affairs Maryland Health Care System have held a special historical "clinicopathologic conference," an exercise in which the history of an unnamed patient's illness is presented to an experienced clinician for discussion in an academic setting. This method teaches medical students and residents how experienced clinicians would approach a difficult or challenging case.
"We present an unusual modern case on a weekly basis, but once a year we stray from our modern lives and discuss a historical figure," said Philip A. Mackowiak, M.D., professor and vice chair of medicine at the University of Maryland School of Medicine and director of medical care at the VA Maryland Health Care System. "Over the past four years we have discussed the deaths of Edgar Allan Poe, Alexander the Great, and Ludwig van Beethoven, as well as the mental health of General George A. Custer." This year, the historical figure was Pericles, the military and political leader of the Golden Age of Athens.
For the first time, the conference was broadcast over the World Wide Web, with the help of Condor Technology Solutions, Inc. and SOBO Video Productions, Inc., as an interactive forum for hundreds of medical investigators around the world. It was broadcast from the School of Medicine's Davidge Hall, the oldest building in the United States continuously used for medical education, dating to 1812.
Dr. Durack is consulting professor of medicine at Duke University, where he had been chief of the Division of Infectious Diseases before becoming vice president, medical affairs at Becton Dickinson Microbiology Systems. Dr. Littman is Professor of Classical Languages at the University of Hawaii, Manoa. His expertise is in ancient medicine and the social and political history of Athens.
Both scholars doubt previous theories that the Plague of Athens was caused by ebola, bubonic plague, dengue fever, influenza or measles because the symptoms described in ancient historical records do not match those diseases. Despite evidence that it was typhus fever spread either by lice or by air, Dr. Durack and Dr. Littman do not rule out the possibility the Plague of Athens was caused by something else.
"Epidemic typhus fever is the best explanation," said Dr. Durack. "It hits hardest in times of war and privation, it has about 20 percent mortality, it kills the victim after about seven days, and it sometimes causes a striking complication: gangrene of the tips of the fingers and toes. The Plague of Athens had all these features."
Dr. Durack explains: "The Plague of Athens is a medical and historical classic, which has fascinated doctors and historians for centuries. Even if we can never be absolutely sure what caused the plague, the story is still relevant today because we continue to experience the outbreak of new emerging infectious diseases. The Plague of Athens can give us insights on how to respond to AIDS, Legionnaire's Disease, drug-resistant organisms, toxic shock, hantavirus infections and other emerging diseases."
Dr. Littman elaborates: "Plagues are a recurring phenomenon in human history and they are something that is a constant fear of mankind: being struck by an unknown disease. This plague was of tremendous importance because it signaled the downfall of the Golden Age of Athens, caused the death of Pericles and 25 percent of the population, weakened Athens at the beginning of its 27-year war with Sparta and became the first medical outbreak so thoroughly recorded by historians."
"This kind of medical detective work on famous cases is interesting and even fun, yet it is no more important than the kind of intensive scrutiny that goes on every day for every patient at an academic medical center like Maryland," said R. Michael Benitez, M.D., assistant professor of cardiology and co-founder with Dr. Mackowiak of the event.
Here are the facts of the case given to Dr. Durack and Dr. Littman for their diagnosis:
A 65-year-old man is seen because of fever, headache, sore throat and vomiting. He had been in excellent health until approximately one week earlier, when he noted a sudden onset of a headache, ocular erythema and halitosis. On the third day of his illness, he began sneezing and coughing, and noted bilateral pleuritic chest pain. On the sixth day of his illness, the patient began projectile vomiting productive of dark bilious fluid.
At this time, he complained of fever so intense that he would not allow himself to be covered with even the lightest clothing. He also complained repeatedly of insatiable thirst. Although he drank copious amounts of water, he obtained little relief from his thirst, at least in part, because of persistent vomiting. The patient has had no prior serious illnesses. He drinks wine in moderation and does not use tobacco. He is taking no medications and has no known allergies.
The patient is a resident of Athens, Greece, where he has lived his entire life, except for short excursions throughout the eastern Mediterranean. His early years were spent in the military where he rose to the rank of commanding general of the armed forces. In recent years he has devoted himself to politics.
The patient is married. Both of his children by this marriage, sons aged 30 and 25 years, have died recently of illnesses similar to the patient's. Another son (by his mistress), aged 10 years, is alive and well. The patient's father died in battle at 47 years of age. The condition of his mother is unknown. He has a brother and a sister. His sister recently died in her mid-60s of an illness similar to the patient's. The condition of his brother, who is also approximately 60 years of age, is unknown.
An illness similar to the patient's has afflicted large numbers of his fellow residents of Athens. The epidemic began roughly a year prior to the onset of the patient's illness, one year after the outbreak of hostilities with a neighboring city state. Interestingly, although enemy forces have besieged Athens continuously during this period, their troops appear not to have been affected by the illness raging within the city proper. Refugees entering the city from the surrounding countryside, however, have been quickly affected.
The disease attacks all age groups and socioeconomic strata, with the highest attack rates occurring among physicians and other care givers. The illness, which is reported to have originated in sub Saharan Africa, had not been seen in Athens prior to the current epidemic. It is believed to have entered Athens through Piraeus, the city's port. In addition to Athens, much of the eastern Mediterranean is now afflicted with the disease.
The current epidemic has waxed and waned since its appearance without apparent seasonality. Of those who have contracted the disease, approximately a quarter have died. Persons recovering exhibit immunity to further attacks of the disease. Unfortunately, such persons are sometimes permanently disabled by residua of the disease, such as encephalopathy, blindness, and/or distal necrosis of extremities. Although there have been reports of dogs and birds dying after feeding on the corpses of those succumbing to the illness, these reports are unsubstantiated.
The patient is alert and oriented but extremely weak. He appears well nourished, although moderately dehydrated. The pulse is rapid and thready. Respirations are deep. The patient complains of an intense fever, and yet his skin is moist and normothermic to the touch. The head is dolichocephalic. The conjunctivae are injected. The oropharynx is red, inflamed and covered with clotted blood. The breath is fetid. Diffuse rales, rhonchi and wheezes are heard throughout both lungs. There is a generalized, erythematous, maculopapular rash.
Supportive therapy consisting of cool baths is administered without relief. On the 9th day of illness, the patient develops profuse diarrhea, which, unfortunately, is not examined for blood or inflammatory cells. Progressive dehydration and debilitation ensue. Cardiovascular collapse occurs on the 11th day of illness, and the patient dies.
March 29th, 2007, 03:21 AM
THE PLAGUE IN ATHENS DURING THE PELOPONNESIAN WAR
HISTORICAL BACKGROUND TO THUCYDIDES' DESCRIPTION OF THE PLAGUE
In the early fifth century, the Greeks, apparently against all odds, managed to defeat the numerically far superior forces of the expansive Persian empire in two invasions, in 490 (the battle of Marathon), and again in 480. This sobering experience led a number of Greek cities to join together with Athens in a sea league for the dual purpose of punishing the hubris of the Persians and gaining some recompense for the destruction of the war. Over time, however, Athens turned this league into an instrument of its own imperial power, enforcing its will upon its allies, now become subjects, and openly appropriating the funds of the league for the creation of monuments of imperial splendor (notably, the Parthenon). This naturally provided a focal point for the jealousies and rivalries of the various Greek poleis, and especially for the Spartans, the acknowledged masters of infantry (hoplite) warfare. The result was an extended war, lasting from 431 to 404 BCE, that pitted the hoplite forces of the Peloponnesus, Sparta and its allies, against the maritime superiority of Athens and its allies.
Thucydides is our primary source for this war. He was an upper-class Athenian and lived through the war (or nearly through it -- it is unclear when he died, but he left his work unfinished). While serving as general he was exiled for coming late to an engagement, and as a result he spent much of the war in exile in the northern Aegean where his family had land -- the same territory in which the doctors who composed the Epidemics were traveling. He was highly aware of the intellectual currents of the time, and both medicine and rhetoric have influenced his presentation of the war.
According to Thucydides, at first enthusiasm for the war was high. Large numbers of young men on both sides who had no experience of war saw it as an adventure and a potential source of profit. But even the first year of the war brought losses and hardship to the Athenians, much of it caused by the radical strategy advocated by the Athenians' current political leader, Pericles, to rely mainly on Athenian naval supremacy: bring all the people in Attica into the city and abandon the outlying countryside to destruction by the Spartans, relying upon the navy to supply the city with food and other necessities that would be carried through the fortified corridor from the port of the Piraeus into the city itself (the Long Walls).
In the winter following the first year of the war, morale had fallen considerably in Athens. It was at the year's public funeral (held annually for men who had fallen in battle in the course of the year) that Pericles pronounced the famous funeral oration that is so often quoted as summing up the greatness of Periclean Athens (Thuc.2.34-46). Pericles' speech was an encomium on Athenian democracy and it provided the high point of Thucydides' account of the war. It is immediately and dramatically followed in his account by the description of the plague which struck the city in the following summer, as the Spartans again invaded Attica. Crowded together in the city as the result of Pericles' strategy, the Athenians fell victim to the virulent sickness that was spreading throughout the eastern Mediterranean. People died in large numbers, and no preventive measures or remedies were of any avail. It has been estimated that a quarter, and perhaps even a third, of the population was lost. The plague returned twice more, in 429 and 427/6, and Pericles himself died during this time, probably as a result of the disease.
By 415 the military rolls were full again (Thuc. 6.26), but the thirty-plus generation that filled offices and provided leadership had not yet been replenished.
Thucydides himself suffered from the plague and recovered; thus he was an eyewitness to the catastrophe (might this have affected his reportage of it?). His expressed intention was not to suggest causes or to identify the illness, but to provide as complete and accurate a description as possible so that the illness could be recognized should it ever recur in the future (in this he showed the influence of the Hippocratic emphasis on prognosis). But the reader cannot be unaware of the dramatic contrast to the idealism that had just been expressed in the Funeral Oration. Thucydides lived in an era in which rhetoric was a highly praised and widely practiced skill, and its effect on his work can often be noticed. Unfortunately, none of our other sources mentions the outbreak, and we cannot confirm his account directly. While it is true that the lack of other notices in literature or archaeological evidence such as mass graves is somewhat puzzling, nevertheless, Thucydides was writing for an audience that included many who had lived through the events themselves, so that we cannot suspect outright invention on his part.
IS RETRODIAGNOSIS POSSIBLE? WHAT WAS THE PLAGUE?
Ironically, despite Thucydides' detailed description, modern scholars are still not able to agree on the identity of the disease. It was clearly not the bubonic plague of the Black Death in the 14th century, for the characteristic symptom of the bubo is not found in Thucydides' description. Other candidates that have been suggested are measles, typhus, ergotism, and even toxic shock syndrome as a complication of influenza. The case for typhus seems strongest both epidemiologically -- the age group is similar -- and from the standpoint of the symptoms. Typhus is characterized by fever and a rash, gangrene of the extremities occurs, it is known as a "doctors' disease" from its frequent incidence among care-givers, it confers immunity, and patients during a typhus epidemic in the First World War were reported to have jumped into water tanks to alleviate extreme thirst. But the fit is not exact. The rash is difficult to identify on the basis of Thucydides' description (modern medical texts often employ pictures to differentiate rashes), and the state of mental confusion may not fit Thucydides' description. In the long run, all such attempts at identification may be futile, however. Diseases develop and change over time, and it may be, as A. J. Holladay and J. C. F. Poole argue (Classical Quarterly 29 (1979) 299ff.), that the plague of the 5th century no longer exists today in a recognizable form. In the course of their argument they provide a full bibliography for the various candidates up to that time. New suggestions continue to be made: toxic shock complicated by influenza: A. D. Langmuir et al., "The Thucydides Syndrome," New England Journal of Medicine (1985) 1027-30; Marburg-Ebola fevers: G. D. Scarrow, "The Athenian Plague. A possible diagnosis," Ancient History Bulletin 11 (1988) 4-8. Holladay and Poole credit Thucydides for first recognizing the factor of contagion; for another view on this issue, see J. Solomon, "Thucydides and the recognition of contagion," Maia 37 (1985) 121ff.; on the intellectual effects of the plague, see J. Mikalson, "Religion and the plague in Athens 431-427 BC," Greek, Roman and Byzantine Studies 10 (1982) 217ff.
Thucydides' emphasis on the social and moral effects of the Athenian plague may be augmented by studies of the effects of the Black Death in Europe (for example, Millard Meiss, Painting in Florence and Siena after the Black Death, 1978). Perhaps a third of the population died, and a large number of these were sudden and untimely deaths, occurring indifferently to those of both good and bad character. Appeals to the gods were fruitless. Normal expectations were upset as distant relatives of the wealthy suddenly found themselves the possessors of unexpected fortunes, and the normal pool of aristocratic candidates for political office was swept away. (For example, both of Pericles' legitimate sons died, and he made a special plea to set aside the citizenship law, which he himself had sponsored in 451, so that his son by the Milesian Aspasia could be declared a citizen.)
March 30th, 2007, 03:21 PM
The Athenian Plague: 430 B.C. - 426 B.C.
By William Sutherland
As the Peloponnesian War (431-404 B.C.) loomed with the worsening of the cold war between Athens and Lacedæmonia (Sparta), an ancient oracle was said to have provided a warning to Athens and inspiration to Lacedæmonia: “A Dorian war shall come and with it death…” When the god was asked whether they (Lacedæmonia) should go to war, he answered that “if they put their might into it, victory would be theirs…” At the time Athens was in its golden age (479-431 B.C.) under the enlightened leadership of Pericles (495-429 B.C.), who had introduced the world’s first form of democracy, under which individual rights, literature and the arts thrived.
According to Thucydides (460-400 B.C.), an Athenian general, political critic and historian, enthusiasm and support for the Peloponnesian War among Athenians “was high” when the conflict erupted. Many, especially the young, “saw it as an adventure and a potential source of profit.” However, support and enthusiasm for the war quickly waned when Athens was hit by misfortune (the Peloponnesians led by Lacedæmonia invaded Attica committing some of the “worst ravages”) and the plague that decimated much of the City’s population.
As the Attica countryside was overrun in April 430 B.C., Athenians, following Pericles’ instructions – “bring all the people… into the city” – took shelter in “parts… that were not built over and in the temples and chapels of the heroes… and other such places as were always kept closed” including the Pelasgian citadel (just south of the Acropolis) where residence “had been forbidden by a… Pythian oracle which [read]: ‘Leave the Pelasgian parcel desolate, Woe worth the day that men inhabit it!’” The Attica countryside was abandoned to Lacedæmonian destruction, which targeted “not merely [Athenian] corn and fruits, but even the garden vegetables near the city, [which] were rooted up and destroyed” as Athenians placed sole reliance upon the supremacy of their navy to provide “food and other necessities.” As crowds packed within Athens’ confines, the city’s existing “sanitation and drainage” infrastructure could not accommodate the bloated population, creating “appalling” conditions on top of those left in the wake of the 431-430 B.C. winter, as described by Greek historian Diodorus Siculus (90-30 B.C.):
As a result of heavy rains… the ground had become soaked with water, and many low-lying regions, having received a vast amount of water, turned into shallow pools and held stagnant water, very much as marshy regions do; and when these waters became warm in the summer and grew putrid, thick foul [vapors] were formed, which, rising up in fumes, corrupted the surrounding air, the very thing which may be seen taking place in marshy grounds which are by nature pestilential.
In addition, the immune systems of Athenians were also compromised due to the lack of quality food within the City. “Contributing to the disease was the bad character of the food available; for the crops which were raised that year were altogether watery and their natural quality was corrupted,” Diodorus Siculus stated. In short, the situation was optimal for the outbreak of a deadly epidemic.
“Not many days after [the arrival of the Peloponnesians] in Attica the plague… began to show itself among the Athenians. It was said that it had broken out in many places previously in the neighborhood of Lemnos and elsewhere; …first… it is said in the parts of Ethiopia above Egypt, and thence descended into Egypt and Libya and into most of the king’s country [as well as in parts of the Persian empire]… but a pestilence of such extent and mortality was nowhere remembered. Suddenly falling upon Athens, it first attacked the population in Piræus – which was the occasion of their saying that the Peloponnesians had poisoned the reservoirs, there being as yet no wells there – and afterwards appeared in the upper city, when the deaths became much more frequent.” The plague attacked all regardless of “class, sex, or age,” Thucydides wrote.
As the outbreak began, physicians, including Hippocrates (460-377 B.C.), often referred to as the “Father of Medicine,” and priests rushed to the aid of the stricken. Yet their efforts were futile. Thucydides recounted their heroic efforts – “Neither were the physicians at first of any service, ignorant as they were of the proper way to treat it, but they died themselves the most thickly, as they visited the sick most often; nor did any human art succeed any better. Supplications in the temples, divinations, and so forth were found equally futile, till the overwhelming nature of the disaster at last put a stop to them altogether [when it was shown that ‘the oracles had no useful advice to offer’ and prayers went unanswered].”
Per Diodorus Siculus, “Athenians… ascribed the causes of their misfortune to [Apollo, a] deity. Consequently, acting upon the command of a certain oracle, they purified the island of Delos, which was sacred to [him] and had been defiled, as men thought, by the burial there of the dead. Digging up, therefore, all the graves on Delos, they transferred the remains to the island of Rheneia, as it is called, which lies near Delos. They also passed a law that neither birth nor burial should be allowed on Delos. And they also celebrated the festival assembly, the Delia, which had been held in former days but had not been observed for a long time.” Yet the plague continued unchecked, leading to panic and great despair.
With medical efforts and “the usual remedies” administered in Athens to no avail, and the plague spreading north, the Thessalians grew fearful. “No remedy was found that could be used as a specific; for what did good in one case did harm in another.” Out of desperation they urged Hippocrates to return to Thessaly with promises of unlimited riches, as recounted by Hippocrates’ son in the “Speech of the Envoy.”
In the time in which the plague was running through the barbarian land north of the Illyrians and Pæonians, when the evil reached that area, the kings of those peoples sent to Thessaly after my father [Hippocrates] because of his reputation as a physician, which, being a true one, had managed to go everywhere. He had lived in Thessaly previously and had a dwelling there then. They summoned him to help, saying that they were not going to send gold and silver and other possessions for him to have, but that he could carry away all that he wanted when he had come to help. And he made inquiry what kind of disturbances there were, area by area, in heat and winds and mist and other things that produce unusual conditions. When he had gotten everyone’s information he told them to go back, pretending that he was unable to go to their country. But as quickly as he could he arranged to announce to the Thessalians by what means they could contrive protection against the evil that was coming.
Hippocrates had good reason to avoid Thessaly. “Physicians were among the first to die, since they contracted the disease from its earliest victims. …the mortality among [physicians] was unusually high, because they most frequently came into contact with the disease.”
When the plague began, despite word of similar outbreaks in North Africa, Persia and Rome, the latter in about 446 B.C., it was still unexpected by Athenians. “That year then is admitted to have been otherwise unprecedentedly free from sickness; and such few cases as occurred all eventuated in this. As a rule, however, there was no ostensible cause; but people in good health were all of a sudden attacked by violent heats in the head, and redness and inflammation in the eyes, the inward parts, such as the throat or tongue, becoming bloody and emitting an unnatural and fetid breath,” Thucydides began. “These symptoms were followed by sneezing and hoarseness, after which the pain soon reached the chest, and produced a hard cough. When it fixed in the stomach, it upset it; and discharges of bile of every kind… ensued, accompanied by very great distress. In most cases… an ineffectual retching followed, producing violent spasms, which in some cases ceased soon after, in others much later. Externally the body was not very hot to the touch, nor pale in its appearance, but reddish, livid, and breaking out into small pustules and ulcers. But internally it burned so that the patient could not bear to have on him clothing or linen even of the very lightest description… What they would have liked best would have been to throw themselves into cold water; as indeed was done by some of the neglected sick, who plunged into the rain tanks in their agonies of unquenchable thirst… though it made no difference whether they drank little or much. Besides this, the miserable feeling of not being able to rest or sleep never ceased to torment them. The body meanwhile did not waste away so long as the distemper was at its height, but held out to a marvel against its ravages; so that when they succumbed, as in most cases, on the seventh or eighth day to the internal inflammation, they had still some strength in them. But if they passed this stage, and the disease descended further into the bowels, inducing a violent ulceration there accompanied by severe diarrhea, this brought on a weakness, which was generally fatal. For the disorder first settled in the head, ran its course from thence through the whole of the body, and even where it did not prove mortal, it still left its mark on the extremities; for it settled in the privy parts, the fingers and the toes, and [even the] eyes,” he added. Generally, even though there were survivors, including Thucydides, as well as some who “were seized with an entire loss of memory on their first recovery, and did not know either themselves or their friends,” the disease was fatal. “Seven to nine days the disease lasted, and when it passed it left behind it a terrible weakness, so that many perished of exhaustion.”
To compound matters, Athenian soldiers were also hindered by the outbreak as Diodorus Siculus wrote – “As for the Athenians, they could not venture to meet [the Lacedæmonians] in a pitched battle, and being confined as they were within the walls, found themselves involved in an emergency caused by the plague; for since a vast multitude of people of every description had streamed together into the city, there was good reason for their falling victim to diseases as they did, because of the cramped quarters, breathing air which had become polluted.” As an indicator of the plague’s severity and the adverse impact it had on the Athenian military, Pericles had “started with 150 triremes (ancient ships utilizing three banks of oars and sails for mobility) and a large number of hoplites and horsemen” to attack the Peloponnesus states when it initially broke out. After being joined by plague-infected reinforcements, this Athenian force returned a few years later “in a pitiable condition” having suffered a great loss of life.
Before long Athenian morale had fallen sharply. In an attempt to boost his peoples’ sagging spirits and restore the confidence they had lost, Pericles spoke about the City’s greatness during the annual “public funeral” that was held to honor her war dead.
“Our form of government does not enter into rivalry with the institutions of others. We do not copy our neighbors, but are an example to them. …we are called a democracy, for the administration is in the hands of the many and not the few,” the Athenian leader declared. “There is no exclusiveness in our public life… we are lovers of the beautiful, yet simple in our tastes, and we cultivate the mind without loss of manliness. Wealth we employ, not for talk and ostentation, but when there is a real use for it. To avow poverty with us is no disgrace; the true disgrace is in doing nothing to avoid it. An Athenian citizen does not neglect the state because he takes care of his own household; and even those of us who are engaged in business have a very fair idea of politics,” he added before addressing the courage of the City’s defenders who had fallen in battle. “Methinks that a death such as theirs… gives the true measure of a man’s worth; it may be the first revelation of his virtues… And when the moment came they were minded to resist and suffer, rather than to fly and save their lives; …on the battle-field their feet stood fast, and in an instant, at the height of their fortune, they passed away from the scene, not of their fear, but of their glory. Such was the end of these men; they were worthy of Athens.”
Yet the epidemic was too great for Athenians to bear, which was made even worse by the hot summer as described by Diodorus Siculus – “the etesian winds… by which normally most of the heat in the summer is cooled failed to blow; and when the heat intensified and the air grew fiery, the bodies of the inhabitants, being without anything to cool them, wasted away.”
Social order collapsed as many abandoned the dead along with their sick friends and family since “strong and weak constitutions proved equally incapable of resistance...” To Thucydides, this was the worst part of the epidemic – “By far the most terrible feature in the malady was the dejection which ensued when any one felt himself sickening, for the despair into which they instantly fell took away their power of resistance, and left them a much easier prey to the disorder; besides which, there was the awful spectacle of men dying like sheep, though having caught the infection in nursing each other. …On the one hand, if they were afraid to visit each other, they perished from neglect; indeed many houses were emptied… for want of a nurse: on the other, if they ventured to do so, death was the consequence.”
At the same time, as mentioned earlier, many suffering from the effects of the plague threw themselves into cisterns and water tanks – “…all the illnesses which prevailed at the time were found to be accompanied by fever, the cause of which was the excessive heat. And this was the reason why most of the sick threw themselves into the cisterns and springs in their craving to cool their bodies,” Diodorus Siculus added. Some even amputated extremities such as fingers and toes in a desperate attempt to survive. “[N]umerous unburied bodies were left lying here and there.”
Per Thucydides, “The bodies of dying men lay one upon another, and half-dead creatures reeled about the streets and gathered round all the fountains in their longing for water. The sacred places also in which they had quartered themselves were full of corpses… for as the disaster passed all bounds, men, not knowing what was to become of them, became utterly careless of everything… All burial rites before in use were entirely upset, and they buried the bodies as best they could… [Wood used for pyres, became scarce.] Sometimes getting the start of those who raised a pile, they threw their own dead body upon the stranger’s pyre and ignited it… Fear of the gods or law there was none to restrain them… No one expected to be brought to trial for his offenses, but each felt that a far severer sentence had been already passed upon all.” Even beasts and birds of prey avoided the dead – “All the birds and beasts that prey upon human bodies, either abstained from touching them, or died after tasting them. In proof of this, it was noticed that birds of this kind actually disappeared; they were not about the bodies, or indeed to be seen at all,” Thucydides wrote.
With no one certain of survival, since it seemed that everyone fell ill regardless of the precautions they took (“Athenians avoided each other but perished anyway”), most ignored the “moans of the dying” as they “hastened to gratify their tastes, and abandoned themselves to the greatest moral depravity.”
Per Thucydides, “Men now coolly ventured on what they had formerly done in a corner, and not just as they pleased, seeing the rapid transitions produced by persons in prosperity suddenly dying and those who before had nothing succeeding to their property. So they resolved to spend quickly and enjoy themselves, regarding their lives and riches as alike things of a day. Perseverance in what men called honor was popular with none, it was so uncertain whether they would be spared to attain the object; but it was settled that present enjoyment, and all that contributed to it, was both honorable and useful. Fear of gods or law of man there was none to restrain them. As for the first, they judged it to be just the same whether they worshipped them or not, as they saw all alike perishing; and for the last, no one expected to live to be brought to trial for his offenses, but each felt that a far severer sentence had been already passed upon them all and hung ever over their heads, and before this fell it was only reasonable to enjoy life a little.”
At the same time, with 25% of the City’s population dead, the people turned on their leader. They blamed Pericles, whom they viewed as “the author of the war” for the outbreak (because of his strategy of bringing everyone within the City’s walls even though he “had had no [viable] alternative… since it would have been suicidal to engage the larger and better-trained [Lacedæmonian] infantry” in the Attica countryside) and even urged capitulating to Lacedæmonia’s demands. According to Diodorus Siculus, “Athenians, now that the trees of their countryside had been cut down (by the Lacedæmonians who ravaged their lands) and the plague was carrying off great numbers, were plunged into despondency and became angry with Pericles…” This emboldened Pericles’ political opponents, Kleon, Simmias, and Lakratidas, to bring suit against him on frivolous grounds of “mismanagement of public funds.”
When addressing the charges, Pericles spoke with determination, offering no apologies – “I was expecting this outburst of indignation; the causes of it are not unknown to me… I allow that for men who are in prosperity and free to choose it is great folly to make war. But when they must either submit and at once surrender independence, or strike and be free, then he who shuns and not he who meets the danger is deserving of blame. For my own part, I am the same man and stand where I did. But you are changed; for you have been driven by misfortune to recall the consent which you gave when you were yet unhurt, and to think that my advice was wrong because your own characters are weak… Anything which is sudden and unexpected and utterly beyond calculation, such a disaster for instance as this plague coming upon other misfortunes, enthralls the spirit of a man.” As he spoke to the Athenian Ecclesia, Pericles still urged courage and strength while appealing for understanding – “…being the citizens of a great city and educated in a temper of greatness, you should not succumb to calamities however overwhelming, or darken the luster of your fame… You must not be led away by the advice of such citizens as these [Pericles’ accusers], nor be angry with me; for the resolution in favor of war was your own as much as mine. What if the enemy has come and done what he was certain to do when you refused to yield? What too if the plague followed? That was an unexpected blow… I am well aware that your hatred of me is aggravated by it. But how unjustly…”
By then the anger was so strong that Pericles’ defense fell on deaf ears. He was fined between 15 and 80 talents and removed from power. Afterwards, per Telemachus Timayenis, Pericles “calmly submitted to this terrible trial, his physical nature now succumbed to the most frightful sufferings. The pestilence, which spared no one, carried away many of his best friends and many of his relatives, including [his first wife], his sister and his sons Xanthippus and Paralus. He who had so many times insisted upon courage and fortitude in his fellow citizens, and had shown himself worthy of his words, when he saw his dear son Paralus dead, and had drawn near in order to place a wreath on that beloved head, could not restrain himself, and, for the first time in his life, wept bitterly.” He also held the same warm regard for his close circle of friends, whom he also mourned as they fell victim to the plague, demonstrating that “behind his almost icy reserve there was a warm and affectionate heart.”
However, by September 430 B.C., Athenians had had a change of heart, “overcome with remorse,” especially when they “saw how much inferior were his successors.” They elected Pericles, who had also begun to suffer from the effects of the plague, back to his former office of “Strategos.” However, only the persuasion of his closest friends convinced Pericles to again “take the helm of affairs,” which he then used to gain the permission of Athenians to bypass the citizenship law he had enacted in 451 B.C. and grant his “illegitimate” son, whom he loved to his last breath, Athenian citizenship. Pericles had requested an exception because this surviving son had been born to his mistress, a beautiful, educated Milesian woman, Aspasia (470 B.C.-410 B.C.), who had defied the stereotype of the day by taking advantage of her non-Athenian status to become “a great writer… and philosopher.”
Afterwards, with Pericles back in charge, the war appeared to go well. The siege of Potidæa, triggered by a popular revolt against Athens, came to an end in January 429 B.C. when the Athenian military allowed its inhabitants to depart for neighboring states. Athens laid siege to Platæa two months later (which ultimately surrendered in 427 B.C.) while Admiral Phormio brought the City a “remarkable victory” in the Corinthian Gulf, after engaging with only 20 ships a Peloponnesian force of almost three times that number as it attempted to “wrest Acarnania from the Athenian alliance.” It also helped that in 429 B.C., Lacedæmonian forces, unlike in 430 B.C. and every year afterwards, refused to enter and ravage Attica because “the condition of the plague-stricken city made approach [too] dangerous.”
By this time, Pericles’ devoted “service [to] his country was approaching its end” as his life slowly wasted away from the affects of the fever he was suffering from the plague. “He was dying” in sorrow because his “house had been left desolate by the plague” with the deaths of the aforementioned family members and many relatives.
Then as he lay dying, slipping in and out of consciousness, Pericles, according to an account by Mestrius Plutarchus, known as “Plutarch” (c. A.D. 46-127), a Greek historian and biographer, “roused himself from the slumber… he had fallen” to scold his friends who spoke “of the victories that he had gained, the power that he had held, and his nobleness of character,” stating that “these were not his chief titles to fame.” He was proudest of Athens’ democratic system of government; he was a man who disliked all but necessary wars, held humanity in the highest esteem, and harbored a “complete absence of vindictiveness.”
When he passed away at 64 in the autumn of 429 B.C., Pericles was the essence of Athens – a great statesman and general, “a man of action, a philosopher, [and] a lover of art” who had “lived an austere life” never “adopting the tactics of a demagogue,” so much so that in the words of Arthur Grant, “it may be doubted indeed whether any great popular leader ever had so little recourse to flattery.”
Pericles’ death, though, did not bring an end to the plague. It lingered for another three years resulting in an incalculable loss of life, leaving tens of thousands dead. By the time the plague finally lifted in 426 B.C., a third of the Athenian population had perished and the Delian confederacy headed by Athens was crumbling, sparked by the Lacedæmonian capture of Lesbos in 428 B.C. that left Chios the last independent member of the Athenian alliance. Amidst the great loss of life and chaos, Athenian “women were temporarily liberated from the strict bounds of [the City’s] custom” so that they could perform vital functions previously carried out by men. A magistrate called “gynaikonomos” was appointed to supervise their activities.
However, by this time, the very City that Pericles loved, was also nearing its end as “normal expectations were upset as distant relatives of the wealthy suddenly found themselves the possessors of unexpected fortunes, and the normal pool of aristocratic candidates for political office was swept away.” Accordingly, despite the replenishment of Athens’ military by 415 B.C., the City lacked vision and competent leadership to bring victory. In August 405 B.C., Athens suffered a crushing defeat at the hands of Lacedæmonian admiral Lysandros, who “captured most of [her] fleet’s triremes.” With the City’s fate sealed by this devastating loss, “Athens was forced to capitulate. Lysandros immediately tore down the Long Wall and the walls around Piræus” before handing power over to a proxy government.
Yet, Pericles proved prophetic when he declared that the memory of Athens’ “glory will always survive. So long as the literature of Greece calls forth admiration, and so long as the pillars of the Parthenon remain upon the Acropolis” the spirit of Pericles and Athens lives as symbols of democracy and the Hellenic golden age.
While the history and devastating effects of the Athenian plague have been known for more than 2000 years, it was not until 1994 that the disease, which consisted of “headaches, conjunctivitis, a rash which covered the body, and fever” with victims suffering from extremely painful stomach cramps and coughing up blood “followed by vomiting and ‘ineffectual retching,’” could be retrospectively and thoroughly investigated. It was proven to be typhoid based on DNA collected from the teeth of “at least 150 bodies, including those of infants” that had been piled hastily and haphazardly one on top of the other in a mass grave that also contained “a small number of [funerary] vases” dating back to 430 to 429 B.C. “deep beneath Kerameikos cemetery.”
When the mass grave, which lay among close to 1,000 tombs and may have held 240 bodies, including those of ten children, “randomly placed with no layers of soil between them,” was discovered during excavation work for a subway station, Efi Baziotopoulou-Valavani immediately knew that there was something different about it since it “did not have a monumental character. The offerings we found consisted of common, even cheap, burial vessels; black-finished ones, some small red-figured, as well as white lekythoi (oil flasks) of the second half of the fifth century B.C.,” she stated in describing the grave. “The bodies were placed in the pit within a day or two. These [factors] point to a mass burial in a state of panic, quite possibly due to a plague.”
When conducting their tests, “Manolis Papagrigorakis and his colleagues at the University of Athens” selected “three random teeth samples… and extracted the pulp,” which “can store pathogens and other information about the body for centuries,” and tested them for a range of bacteria – “bubonic plague, typhus, anthrax, tuberculosis, cowpox and cat-scratch disease before finding a match in Salmonella enterica serovar Typhi – the bacteria responsible for typhoid fever.” To guard against possible “false results,” the team also tested “two modern teeth” for the same pathogens.
Based on the test results made possible by recent advances in technology – namely “molecular biology tools (DNA PCR and sequencing techniques) which can provide retrospective diagnoses” – and on historical accounts, especially those of Thucydides and Diodorus Siculus, the mystery has been solved. “Typhoid fever – transmitted by contaminated food or water – [caused the] fever, rash and diarrhea,” while the “quick onset” was due to the “possible evolution of typhoid fever over time.”
William Sutherland is a published poet and writer. He is the author of three books, "Poetry, Prayers & Haiku" (1999), "Russian Spring" (2003) and "Aaliyah Remembered: Her Life & The Person behind the Mystique" (2005) and has been published in poetry anthologies around the world. He has been featured in "Who's Who in New Poets" (1996), "The International Who's Who in Poetry" (2004), and is a member of the "International Poetry Hall of Fame."
He is also a contributor to Wikipedia, the number one online encyclopedia.
Impact of El Niño on Agriculture, Fisheries and Forestry
Impact on Cereal Production and Markets
Impact on Other Crops
Impact on Livestock and Products
Impact on Fisheries
Impact on Forestry
The Role of FAO in Mitigating the Impact of El Niño
El Niño is the name given to the occasional warming of surface waters in the central and eastern equatorial Pacific Ocean. Sea-surface winds blow from east to west towards the equator and pile warm water in the upper ocean of the western tropical Pacific near Indonesia and the Australian continent. As a result of this warm pool of water, the atmosphere is heated and conditions favourable for precipitation occur there. A weakening of the winds is the first sign that an El Niño event is underway. This is accompanied by the accumulation of unusually warm water off the coast of Ecuador and Peru with a peak around Christmas season. The fishermen who first observed it named it "El Niño" ("the Christ Child"). La Niña refers to the "cold" equivalent of El Niño.
Since early March 1997 significant warming of sea-surface temperatures in the Pacific Ocean has been observed and recognized as the beginning of an El Niño phenomenon. Such a phenomenon is known to occur every 2 to 7 years, with varying degrees of intensity and duration. It usually peaks around late December. An El Niño is often associated with important subsequent changes in temperatures and precipitation in several parts of the globe, which may affect agriculture and water resources either positively or negatively. The change in sea surface temperatures also affects natural conditions for marine ecosystems.
The last two El Niño events occurred in 1982/83, causing severe flooding and extensive weather-related damage in Latin America and drought in parts of Asia, and in 1991/92, resulting in a severe drought in Southern Africa. This year’s El Niño is regarded by various experts as one of the most severe this century, with record Pacific surface temperatures being observed. Various climate agencies around the world also indicate that the phenomenon could continue throughout 1997 and possibly extend into 1998. The worst effects of El Niño are expected to be felt over the next few months and well into 1998.
No precise quantitative association between the occurrence of El Niño and changes in agricultural production has been established, and it is difficult to forecast precisely the impact of El Niño in specific areas. In recent months FAO has been closely monitoring weather anomalies and assessing the possible effects these may have on agricultural production in various parts of the world, in order to warn about adverse situations developing and to enable preventive action.
At the global level, cereal production in 1997 is expected to be little affected by the El Niño phenomenon, despite some reduced harvests due to adverse El Niño-related weather in several countries along the equatorial belt as well as in the southern hemisphere. However, as the most intense impact of El Niño is expected from December, greatest concern is over the threat that El Niño may pose to the crops to be planted in the coming months for harvest in 1998.
Latin America is especially prone to the effects of the El Niño phenomenon. In 1982/83 El Niño resulted in severe drought and flood damage in several countries. This year, first season crops have been affected by drought in most Central American and some Caribbean countries. On average, losses are estimated at about 15 percent compared to last year’s average crops, but they have been more severe locally. In South America, wheat planting in the southern areas was affected by a wetter than normal winter season and a significant reduction in planted area is reported in Argentina and Brazil. Wheat crop yields will, however, be largely determined by the intensity of El Niño related rains in the coming months. With respect to coarse grains, sowing of the crops for harvest in 1998 is underway. Plantings in the main producing countries are expected to drop from last year’s near record levels, but as for wheat, the final outcome will depend greatly upon the weather in the coming months.
CLIMATIC ANOMALIES USUALLY ASSOCIATED WITH EL NIÑO EVENTS
Note: The top chart covers likely impacts of El Niño in the October to March period and the bottom chart covers impacts during April to September. D indicates drier conditions than normal, R stands for more rain than normal, and W indicates abnormally warm periods. The figure is based on two illustrations taken from the NOAA Pacific Marine Environmental Laboratory World Wide Web El Niño Theme Page.
In Asia, possible El Niño related effects over the past months include serious droughts in Indonesia, the Philippines and Thailand - countries known to be susceptible to the phenomenon. Other serious weather anomalies in the region, unrelated to El Niño, include severe drought in northeast China and D.P.R. Korea and floods in Pakistan. These adverse weather conditions have affected some 1997 coarse grain crops and the rice crops still to be harvested in the coming weeks. However, despite some anticipated localized cereal shortfalls, output in 1997 for the region as a whole will still be about average. The more intense impact of the current El Niño is generally expected to occur between December 1997 and March 1998. In many countries of the region, winter wheat planting for the 1998 harvest will commence soon. In some countries, the adverse weather conditions could lead to a delay in rice planting operations, resulting in a switch to early-maturing but lower-yielding varieties. Preliminary indications point towards reduced rice acreage in some of the southern hemisphere countries. As for the other areas susceptible to the weather-related impacts of El Niño, prospects will depend largely on weather conditions in the coming months.
In southern Africa, the outlook for the 1997 wheat crop currently being harvested is favourable. However, there is considerable concern over the possible adverse impact of El Niño on the 1998 coarse grain crop. Experts predict a strong possibility of poor rainfall for the planting season which is soon to start. Accordingly, most governments have prepared comprehensive contingency plans for mitigating the impact of a possible drought. The sub-region suffered a serious El Niño related drought in 1991/92.
b) Cereal markets
No major effects are expected from El Niño on the 1997 coarse grain crops. However, given the low level of global coarse grain stocks, markets for these grains, particularly maize, would be vulnerable if the anticipated adverse impacts of El Niño on next year’s crops, especially in the southern hemisphere, were to materialize. The current situation is markedly different from that during the previous major El Niño event in 1982/83 in terms of the levels of supplies and prices. The 1982 coarse grain crops were a record and carryover stocks were at very high levels, i.e. at 28 percent of utilization at the end of the season. This situation provided a cushion for the sharp reduction in the 1983 output and helped contain the sharp rise in prices during the year. By contrast, next year’s ending stocks are forecast to be very low, especially for maize, and represent only 12 percent of utilization. Thus, in view of the current tight market situation, the possibility of reduced coarse grain output in 1998 in important producing regions as a result of El Niño is cause for concern.
Although 1997 wheat production has been mostly unaffected by El Niño, the wheat market has been reacting nervously to weather reports and wheat prices have remained strong in recent weeks. This is mainly because of the speculative longer-term interest in wheat, which continues to lend support to the market. While current indications do not support a direct El Niño-induced global wheat production shortfall scenario for next year, possible indirect effects cannot be ruled out. Although the element most likely to support wheat prices in 1998/99 will again be the low level of global stocks, possible spillovers from the other commodity markets would also lend support to firmer wheat prices.
Reports about the possible impacts of El Niño have not greatly affected rice trade thus far in 1997 and prices are currently at their seasonal lows. However, until the potential effects on 1998 production become clearer, trader speculation could drive prices up during the first part of 1998. Prices in the remainder of 1998 will largely depend on crop prospects in the major exporting and importing countries. In addition, rice prices may receive support from the other related commodity markets, depending on the reaction of those markets to the El Niño phenomenon.
World cassava production in 1997 has not been greatly affected by El Niño. This is because cassava adapts better than other crops to poor soils in marginal lands, to water stress and to adverse climatic conditions, and with its deep root system it can tolerate dry weather for a longer period. Should drought conditions persist in 1998, however, production in Asia, Latin America and the Caribbean could be adversely affected, leading to upward pressure on prices of cassava and its products.

The most significant and visible effect of El Niño as far as the oil crops, oils and meals sectors are concerned is the sharp decline in South American fishmeal production and global export availabilities, which, combined with already low levels of stocks in exporting countries, is expected to lead to firm prices for fishmeal and other high-protein meals in 1998. Moreover, lower-than-average rainfall in south-east Asia is expected to reduce coconut and palm yields and, hence, production of palm oil and lauric oils (coconut and palm kernel) during 1998, possibly leading to higher and more volatile prices for these products. The market for coconut oil is expected to be particularly affected, because of the already higher prices observed over the past two seasons. Although production of certain oilseeds in some of the countries in these two regions may also be affected, no significant effects are expected overall for the oilseeds market in the United States and Europe.
The possible impact of El Niño on coffee will result primarily from the effects of drought on the Asian crop for harvest next April and of excess rainfall on the Brazilian crop, which has already fueled a rise in market prices for high quality coffee. Given the already tight stock situation for cocoa, should the effect of El Niño be similar to that of 1982/83, the global cocoa shortage would become more severe, resulting in considerable price increases. No significant impact is expected on the major tea producers or exporters. The current tight supply is due to drought (unrelated to El Niño) in Kenya and Sri Lanka, and the late start of harvesting in India due to a cold spell.
For sugar, prices in 1997/98 to date have remained steady within their normal trading range, but with a tendency to rise due to prospects for an unfavourable Asian crop, in some cases because of adverse weather conditions associated with El Niño. Prices are not expected to rise sharply in 1997/98, as stocks are adequate at present. However, should stocks be drawn down more than currently anticipated and the effects of El Niño linger into the 1998/99 production season, prices can be expected to increase beyond their recent trading range.
There is little concern at the present time over the outlook for export bananas, but production of bananas and plantains for local consumption could be adversely affected by prolonged drought. Other tropical fruits for export are unlikely to be affected as this year’s crops are already largely harvested. Any effects from El Niño would likely be observed in next year’s crops. As stocks of processed citrus are large it is unlikely that possible damage to orange production in Brazil would have much of an effect on prices at this time. Some impact on the price of grapes and other horticultural products could occur should El Niño lead to problems with the California crops or with the supply of fruit and vegetables from Chile and other suppliers of off-season products to Northern Hemisphere markets.
Global supplies of cotton and jute do not appear likely to be affected, while production of some hard fibres may be expected to decline in the event of continued drought in some southern hemisphere countries. Overall it is not expected that rubber will be greatly affected.
El Niño could bring about abnormal drought conditions in some important southern hemisphere livestock producing countries. Pasture and range conditions could deteriorate as a consequence. Delayed or scarce rainfall would boost slaughtering, especially of large ruminants which depend on pasture and range land, with increased meat output in the short-run depressing producer prices. As a result, production of hides and skins could expand. Subsequently, as pasture and range conditions recover, livestock offtake will decline as stock numbers are allowed to build up again. Poultry and pig meat production will be affected primarily by developments in feed prices.
Since the mechanism underlying El Niño episodes resides in the climatic system of the tropical Pacific Ocean, some of the most striking ecological impacts involve the ecosystems of this particular ocean region. In addition, oceanic geophysical wave phenomena promote propagation of the anomalous conditions toward the eastern boundary of the ocean and then poleward along the continental boundary regions of both the northern and southern hemispheres.
The eastern Pacific Region
Being situated at the eastern end of the "equatorial wave guide", the Peru Current region located off Peru and Ecuador receives the full force of El Niño impacts. The area off western South America is one of the major upwelling regions of the world, producing 12 to 20 percent of total world fish landings. In such upwelling regions, nutrient-rich deep waters are brought to the illuminated surface layers (i.e. upwelled) where they are available to support photosynthesis, and thus large fish populations. The Peruvian anchoveta fishery was, prior to a major stock collapse in conjunction with the El Niño of 1972-73, by far the world's largest fish harvest, with peak annual catches of over 12 million metric tons. The stock was reduced further, to its lowest level on record, in conjunction with the El Niño of 1982-83, and had been recovering since, until the development of the current El Niño.
Of the various predictions that can be made about the impact of the current El Niño on fish availabilities, perhaps the most "confident" is that the enormously important Peruvian anchoveta stock will suffer severely and may take years to recover. Other fish stocks distributed throughout the broader eastern Pacific area will also be adversely affected. There will, however, also be examples of positive impacts, e.g. the expansion of the scallop fishery off Peru in the following seasons.
Other Areas of the World
Through atmospheric teleconnections there will be impacts on fish populations outside of the eastern Pacific. For example, it is thought that the milder 1991 El Niño may have had strong detrimental effects on the fisheries and marine ecosystem off Namibia. However, as distance from the Pacific increases, the linkages become less clear, and this makes the task of separating the effects of El Niño from effects of fishing and from non-El Niño-related environmental effects more difficult. In general, unusual weather patterns may be expected in nearly all regions of the world and these may affect the complex life cycle processes of fishery resource species.
The extensive fire damage in Indonesia as a result of El Niño and the associated impacts in terms of smoke and haze, not only in Indonesia but in neighbouring countries as well, provide a strong and dramatic justification for an analysis of the magnitude of the relationship between El Niño and forestry. The implications need to be assessed and lessons learned in terms of mitigating impacts in the future to the forest resources of Indonesia and of other countries affected by El Niño.
Given the relatively long growing season of most forest resources, the climatic impacts of El Niño on trees tend to be less dramatic than those on annual agricultural crops. The greatest El Niño-related threat to trees and forests is fire - the current situation in Indonesia is a dramatic example. This can and does result in enormous losses in terms of resources, products, environmental quality and human life. Short-term climatic changes may also affect forest regeneration, both spontaneous and that assisted by man. Forest resources also comprise a vast array of non-wood forest products - of critical importance to the food supplies of local people and to their economies - and these may suffer from El Niño. Moreover, agroforestry systems, in which trees and agricultural crops are raised in symbiosis, may be affected likewise.
Forestry contributes to food security in three main ways, through the direct provision of food, through the protection of the agricultural base for food production and through the generation of employment and income. With regard to the direct provision of food, the greatest immediate impact of El Niño will likely be on the many non-wood forest foods, including nuts, tree flowers and honey, that serve as supplementary and emergency sources of nourishment. The supply of many of these products may be adversely affected by El Niño. If areas affected by El Niño are exposed to fire, the agricultural crop area may be re-usable after a single season. But if the soil and water protection provided by forests is destroyed, agricultural land may permanently lose its productivity. Another grave risk associated with El Niño is a local shortage, in areas affected by fire, of the forest fuels that are fundamental for the cooking and heating energy of most rural populations. Similarly, if forest resources are destroyed, they will no longer be able to make the vital contribution to income and employment that they currently do.
FAO has been, and continues to be, actively involved in helping countries to prepare for and respond to the adverse impact of El Niño.
FAO has assisted countries in implementing long-term preventive measures against drought and flood-related events which now facilitate preparedness and response of countries affected by El Niño. Examples of such measures promoted by FAO include:
- support to well construction and small-scale irrigation development programmes in Southern Africa and Central America;
- development of drought and cyclone-resistant cropping patterns and farming and fishing practices for South Asia, the Sahel, eastern and southern Africa and the Caribbean;
- support for the preparation of a disaster preparedness strategy for the member countries of the Intergovernmental Authority on Development in Eastern Africa and the Horn;
- the establishment of early warning systems for forestry, provision of information and direct assistance to member countries on appropriate forestry policy and planning, forest management and land use decision making, environmentally sound logging, fire control, etc.;
- support to flood prevention through integrated watershed development programmes in eroded, mountainous regions; and
- support for the design and management of strategic food security reserves.
As regards long-term preventive measures in the fishery sector, FAO has been active in building international awareness on the environmentally-induced fluctuations of fish stocks and fisheries that depend on them. Member States are advised to take a precautionary approach in their fisheries management and development plans when dealing with fish stocks known to be subject to large environmentally-induced fluctuations, such as those caused by the El Niño phenomenon in the eastern tropical Pacific and, to a lesser extent, also elsewhere.
Early Warning and Forecasting
Since March 1997 FAO has intensified the monitoring of weather developments and crop prospects in all parts of the world through its Global Information and Early Warning System (GIEWS). The System has issued two reports on the impact of El Niño on crop production in Latin America and Asia. Current focus is on southern Africa, where the 1997/98 growing season has just started. GIEWS has discussed with the WFP the possibility of launching advance emergency operations to be jointly approved by the Director-General (FAO) and the Executive Director (WFP), and of fielding FAO/WFP Crop and Food Supply Assessment Missions to southern Africa in April/May 1998, if drought conditions develop. The System's assessments provide a lead in initiating agricultural rehabilitation activities in affected countries.
In the last six months, FAO has made arrangements for assessing the essential agricultural inputs needed to restore production in four countries adversely affected by El Niño. An appeal for financial assistance to implement emergency relief, short term rehabilitation and preparedness interventions will be distributed to the international donor community. With regard to the situation in Indonesia, FAO made an official offer of assistance to the Indonesian Ministry of Forestry on 26 September. In the past, FAO has executed forestry projects in Indonesia in forest fire policy, fire suppression, education and extension.
German Confederation (infobox summary) – Established: June 8, 1815; Disestablished: August 23, 1866; Francis Joseph I, 1848–1866; today part of: Germany, among other modern states.
The German Confederation (German: Deutscher Bund) was a loose association of 39 German states in Central Europe, created by the Congress of Vienna in 1815 to coordinate the economies of separate German-speaking countries and to replace the former Holy Roman Empire.1 It acted as a buffer between the powerful states of Austria and Prussia. According to Lee, most historians have judged the Confederation to be weak and ineffective, as well as an obstacle to German nationalist aspirations. It collapsed due to the rivalry between Prussia and Austria, known as German dualism, warfare, the 1848 revolution, and the inability of the multiple members to compromise.2 It dissolved with Prussian victory in the Seven Weeks War and the establishment of the North German Confederation in 1866.
In 1848, revolutions by liberals and nationalists were a failed attempt to establish a unified German state. Talks between the German states failed in 1848, and the confederation briefly dissolved but was re-established in 1850.
The dispute between the two dominant member states of the confederation, Austria and Prussia, over which had the inherent right to rule German lands ended in favour of Prussia after the Austro-Prussian War in 1866, and the collapse of the confederation. This resulted in the creation of the North German Confederation, with a number of south German states remaining independent, although allied first with Austria (until 1867) and subsequently with Prussia (until 1871), after which they became a part of the new German state.
The War of the Third Coalition lasted from about 1803 to 1806. Following the crushing French victory under Napoleon at the Battle of Austerlitz in December 1805 and the resulting Treaty of Pressburg, the Holy Roman Empire was dissolved on 6 August 1806, when the last Holy Roman Emperor, Francis II, abdicated. Sixteen of France's allies among the German states (including Bavaria and Württemberg) had established the Confederation of the Rhine in July 1806. Following the Battle of Jena-Auerstedt of October 1806 in the War of the Fourth Coalition, various other German states, including Saxony and Westphalia, also joined the Confederation. Only Austria, Prussia, Danish Holstein, and Swedish Pomerania stayed outside the Confederation of the Rhine.
These nations would later join in the War of the Sixth Coalition from 1812 to 1814.
The original signatories of the act were:3
- Electorate of Hesse
- Grand Duchy of Hesse
- Denmark on account of Holstein
- Netherlands on account of Luxemburg
- Saxe-Weimar
- Saxe-Gotha
- Saxe-Coburg
- Saxe-Meiningen
- Saxe-Hildburghausen
- Reuss, elder line
- Reuss, younger line
To these were afterwards added:3
The German Confederation ended as a result of the Austro-Prussian War of 1866 between the constituent Confederation entities of the Austrian Empire and its allies on one side and the Kingdom of Prussia and its allies on the other. The war resulted in the Confederation being partially replaced by a North German Confederation in 1867 which included Prussia but excluded Austria and the South German states. During November 1870 the four southern states joined the North German Confederation by treaty.4
On 10 December 1870 the North German Confederation Reichstag renamed the Confederation as the German Empire and gave the title of German Emperor to the King of Prussia as President of the Confederation.5 During the Siege of Paris on 18 January 1871, King Wilhelm I of Prussia was proclaimed German Emperor in the Hall of Mirrors at the Palace of Versailles.6
- The Austrian Empire and the Kingdom of Prussia were the largest and by far the most powerful members of the Confederation. Large parts of both countries were not included in the Confederation, because they had not been part of the former Holy Roman Empire, nor had the greater parts of their armed forces been incorporated in the federal army. Each of them had one vote in the Federal Assembly.
- Three member states were ruled by foreign monarchs: the King of Denmark, the King of the Netherlands, and the King of Great Britain (until 1837) were members of the German Confederation; the first as Duke of Holstein, the second as Grand Duke of Luxembourg and Duke of Limburg, and the latter as King of Hanover. Each of them had a vote in the Federal Assembly.
- Six other greater states had one vote each in the Federal Assembly: the King of Bavaria, the King of Saxony, the King of Württemberg, the Elector of Hesse, the Grand Duke of Baden and the Grand Duke of Hesse.
- 23 smaller and tiny member states shared five votes in the Federal Assembly.
- The four free cities of Bremen, Frankfurt, Hamburg, and Lübeck shared one vote in the Federal Assembly.
Between 1806 and 1815, Napoleon organized the German states into the Confederation of the Rhine, but this collapsed after his defeats in 1812 to 1815. The German Confederation had roughly the same boundaries as the Empire at the time of the French Revolution (less what is now Belgium). The member states, drastically reduced to 39 from more than 300 (see Kleinstaaterei) under the Holy Roman Empire, were recognized as fully sovereign. The members pledged themselves to mutual defense, and jointly maintained the fortresses at Mainz, the city of Luxembourg, Rastatt, Ulm, and Landau.
During the revolution of 1848/49 the German Confederation was inactive. It was revived in 1850 under Austrian presidency, but the rivalry between Prussia and Austria grew ever more intense.
The Confederation was dissolved in 1866 after the Austro-Prussian War, and was 'succeeded' in 1866 by the Prussian-dominated North German Confederation. Unlike the German Confederation, the North German Confederation was in fact a true state. Its territory comprised the parts of the German Confederation north of the river Main, plus Prussia's eastern territories and the Duchy of Schleswig, but excluded Austria and the other southern German states.
Prussia's influence was widened by the Franco-Prussian War resulting in the proclamation of the German Empire at Versailles on 18 January 1871, which united the North German Federation with the southern German states. All the constituent states of the former German Confederation became part of the Kaiserreich in 1871, except Austria, Luxembourg, and Liechtenstein.
The late 18th century was a period of political, economic, intellectual, and cultural reforms, the Enlightenment (represented by figures such as Locke, Rousseau, Voltaire, and Adam Smith), but also involving early Romanticism, and climaxing with the French Revolution, where freedom of the individual and nation was asserted against privilege and custom. Representing a great variety of types and theories, they were largely a response to the disintegration of previous cultural patterns, coupled with new patterns of production, specifically the rise of industrial capitalism.
However, the defeat of Napoleon enabled conservative and reactionary regimes such as those of the Kingdom of Prussia, the Austrian Empire and Tsarist Russia to survive, laying the groundwork for the Congress of Vienna and the alliance that strove to oppose radical demands for change ushered in by the French Revolution. The Great Powers at the Congress of Vienna in 1815 aimed to restore Europe (as far as possible) to its pre-war conditions by combating both liberalism and nationalism and by creating barriers around France. With Austria's position on the continent now intact and ostensibly secure under its reactionary premier Klemens von Metternich, the Habsburg empire would serve as a barrier to contain the emergence of Italian and German nation-states as well, in addition to containing France. But this reactionary balance of power, aimed at blocking German and Italian nationalism on the continent, was precarious.
After Napoleon's final defeat in 1815, the surviving member states of the defunct Holy Roman Empire joined to form the German Confederation (Deutscher Bund) — a rather loose organization, especially because the two great rivals, the Austrian Empire and the Prussian kingdom, each feared domination by the other.
In Prussia the Hohenzollern rulers forged a centralized state. By the time of the Napoleonic Wars, Prussia was a socially and institutionally backward state, grounded in the virtues of its established military aristocracy (the Junkers) and stratified by rigid hierarchical lines. After 1806, Prussia's defeats by Napoleonic France highlighted the need for administrative, economic, and social reforms to improve the efficiency of the bureaucracy and encourage practical merit-based education. Inspired by the Napoleonic organization of German and Italian principalities, the reforms of Karl August von Hardenberg and Count Stein were conservative, enacted to preserve aristocratic privilege while modernizing institutions.
Outside Prussia, industrialization progressed slowly, and was held back because of political disunity, conflicts of interest between the nobility and merchants, and the continued existence of the guild system, which discouraged competition and innovation. While this kept the middle class small, affording the old order a measure of stability not seen in France, Prussia's vulnerability to Napoleon's military proved to many among the old order that a fragile, divided, and backward Germany would be easy prey for its cohesive and industrializing neighbor.
The reforms laid the foundation for Prussia's future military might by professionalizing the military and decreeing universal military conscription. In order to industrialize Prussia, working within the framework provided by the old aristocratic institutions, land reforms were enacted to break the monopoly of the Junkers on landownership, thereby also abolishing, among other things, the feudal practice of serfdom.
Although the forces unleashed by the French Revolution were seemingly under control after the Vienna Congress, the conflict between conservative forces and liberal nationalists was only deferred at best. The era until the failed 1848 revolution, in which these tensions built up, is commonly referred to as Vormärz ("pre-March"), in reference to the outbreak of riots in March 1848.
This conflict pitted the forces of the old order against those inspired by the French Revolution and the Rights of Man. The sociological breakdown of the competition was, roughly, one side engaged mostly in commerce, trade and industry, and the other side associated with landowning aristocracy or military aristocracy (the Junker) in Prussia, the Habsburg monarchy in Austria, and the conservative notables of the small princely states and city-states in Germany.
Meanwhile, demands for change from below had been fomenting since the influence of the French Revolution. Throughout the German Confederation, Austrian influence was paramount, drawing the ire of the nationalist movements. Metternich considered nationalism, especially the nationalist youth movement, the most pressing danger: German nationalism might not only repudiate Austrian dominance of the Confederation, but also stimulate nationalist sentiment within the Austrian Empire itself. In a multi-national polyglot state in which Slavs and Magyars outnumbered the Germans, the prospects of Czech, Slovak, Hungarian, Polish, Serb, or Croatian sentiment, combined with middle-class liberalism, were certainly horrifying.
The Vormärz era saw the rise of figures like August Heinrich Hoffmann von Fallersleben, Ludwig Uhland, Georg Herwegh, Heinrich Heine, Georg Büchner, Ludwig Börne and Bettina von Arnim. Father Friedrich Jahn's gymnastic associations exposed middle class German youth to nationalist and democratic ideas, which took the form of the nationalistic and liberal democratic college fraternities known as the Burschenschaften. The Wartburg Festival in 1817 celebrated Martin Luther as a proto-German nationalist, linking Lutheranism to German nationalism, and helping arouse religious sentiments for the cause of German nationhood. The festival culminated in the burning of several books and other items that symbolized reactionary attitudes. One item was a book by August von Kotzebue. In 1819, Kotzebue was accused of spying for Russia, and then murdered by a theological student, Karl Ludwig Sand, who was executed for the crime. Sand belonged to a militant nationalist faction of the Burschenschaften. Metternich used the murder as a pretext to issue the Carlsbad Decrees of 1819, which dissolved the Burschenschaften, cracked down on the liberal press, and seriously restricted academic freedom.7
German artists and intellectuals, heavily influenced by the French Revolution, turned to Romanticism. At the universities, high-powered professors developed international reputations, especially in the humanities led by history and philology, which brought a new historical perspective to the study of political history, theology, philosophy, language, and literature. With Georg Wilhelm Friedrich Hegel (1770–1831) in philosophy, Friedrich Schleiermacher (1768–1834) in theology and Leopold von Ranke (1795–1886) in history, the University of Berlin, founded in 1810, became the world's leading university. Von Ranke, for example, professionalized history and set the world standard for historiography. By the 1830s mathematics, physics, chemistry, and biology had emerged with world class science, led by Alexander von Humboldt (1769–1859) in natural science and Carl Friedrich Gauss (1777–1855) in mathematics. Young intellectuals often turned to politics, but their support for the failed Revolution of 1848 forced many into exile.8
The population of the German Confederation (excluding Austria) grew 60% from 1815 to 1865, from 21,000,000 to 34,000,000.9 The era saw the Demographic Transition take place in Germany: a transition from high birth rates and high death rates to low birth and death rates, as the country developed from a pre-industrial to a modernized agriculture and supported a fast-growing industrialized urban economic system. In previous centuries, the shortage of land meant that not everyone could marry, and marriages took place after age 25. The high birthrate was offset by a very high rate of infant mortality, plus periodic epidemics and harvest failures. After 1815, increased agricultural productivity meant a larger food supply and a decline in famines, epidemics, and malnutrition. This allowed couples to marry earlier and have more children. Arranged marriages became uncommon as young people were now allowed to choose their own marriage partners, subject to a veto by the parents. The upper and middle classes began to practice birth control, and a little later so too did the peasants.10 The population in 1800 was heavily rural,11 with only 8% of the people living in communities of 5,000 to 100,000 and another 2% living in cities of more than 100,000.
In a heavily agrarian society, land ownership played a central role. Germany's nobles, especially those in the East called Junkers, dominated not only the localities, but also the Prussian court, and especially the Prussian army. Increasingly after 1815, a centralized Prussian government based in Berlin took over the powers of the nobles, which in terms of control over the peasantry had been almost absolute. They retained control of the judicial system on their estates until 1848, as well as control of hunting and game laws. They paid no land tax until 1861 and kept their police authority until 1872, and controlled church affairs into the early 20th century. To help the nobility avoid indebtedness, Berlin set up a credit institution to provide capital loans in 1809, and extended the loan network to peasants in 1849. When the German Empire was established in 1871, the nobility controlled the army and the Navy, the bureaucracy, and the royal court; they generally set governmental policies.1213
Peasants continued to center their lives in the village, where they were members of a corporate body and helped manage the community resources and monitor community life. In the East, they were serfs who were bound permanently to parcels of land. In most of Germany, farming was handled by tenant farmers who paid rents and obligatory services to the landlord, who was typically a nobleman.14 Peasant leaders supervised the fields, ditches and grazing rights, maintained public order and morals, and supported a village court which handled minor offenses. Inside the family the patriarch made all the decisions, and tried to arrange advantageous marriages for his children. Much of the villages' communal life centered around church services and holy days. In Prussia, the peasants drew lots to choose the conscripts required by the army. The noblemen handled external relationships and politics for the villages under their control, and were not typically involved in daily activities or decisions.1516
After 1815, the urban population grew rapidly, due primarily to the influx of young people from the rural areas. Berlin grew from 172,000 people in 1800, to 826,000 in 1870; Hamburg grew from 130,000 to 290,000; Munich from 40,000 to 269,000; Breslau from 60,000 to 208,000; Dresden from 60,000 to 177,000; Königsberg from 55,000 to 112,000. Offsetting this growth, there was extensive emigration, especially to the United States. Emigration totaled 480,000 in the 1840s, 1,200,000 in the 1850s, and 780,000 in the 1860s.17
Further efforts to improve the confederation began in 1834 with the establishment of a customs union, the Zollverein. In 1834, the Prussian regime sought to stimulate wider trade advantages and industrialism by decree — a logical continuation of the program of Stein and Hardenberg less than two decades earlier. Historians have seen three Prussian goals: as a political tool to eliminate Austrian influence in Germany; as a way to improve the economies; and to strengthen Germany against potential French aggression while reducing the economic independence of smaller states.18
Inadvertently, these reforms sparked the unification movement and augmented a middle class demanding further political rights, but at the time backwardness and Prussia's fears of its stronger neighbors were greater concerns. The customs union opened up a common market, ended tariffs between states, and standardized weights, measures, and currencies within member states (excluding Austria), forming the basis of a proto-national economy.19
By 1842 the Zollverein included most German states. Within the next twenty years the output of German furnaces increased fourfold. Coal production grew rapidly as well. In turn, German industry (especially the works established by the Krupp family) introduced the steel gun, cast-steel axles, and a breech loading rifle, exemplifying Germany's successful application of technology to weaponry. Germany's security was greatly enhanced, leaving the Prussian state and the landowning aristocracy secure from outside threat. German manufacturers also produced heavily for the civilian sector. No longer would Britain supply half of Germany's needs for manufactured goods, as it did beforehand.20 However, by developing a strong industrial base, the Prussian state strengthened the middle class and thus the nationalist movement. Economic integration, especially increased national consciousness among the German states, made political unity a far likelier scenario. Germany finally began exhibiting all the features of a proto-nation.
The crucial factor enabling Prussia's conservative regime to survive the Vormärz era was a rough coalition between leading sectors of the landed upper class and the emerging commercial and manufacturing interests. Marx and Engels, in their analysis of the abortive 1848 Revolutions, defined such a coalition: "a commercial and industrial class which is too weak and dependent to take power and rule in its own right and which therefore throws itself into the arms of the landed aristocracy and the royal bureaucracy, exchanging the right to rule for the right to make money."21 Even if the commercial and industrial element is weak, it must be strong enough (or soon become strong enough) to become worthy of co-optation, and the French Revolution terrified enough perceptive elements of Prussia's Junkers for the state to be sufficiently accommodating.
While relative stability was maintained until 1848, with enough bourgeois elements still content to exchange the "right to rule for the right to make money", the landed upper class found its economic base sinking. While the Zollverein brought economic progress and helped to keep the bourgeoisie at bay for a while, it increased the ranks of the middle class swiftly - the very social base for the nationalism and liberalism that the Prussian state sought to stem.
The Zollverein was a move toward economic integration, modern industrial capitalism, and the victory of centralism over localism, quickly bringing to an end the era of guilds in the small German princely states. This led to the 1844 revolt of the Silesian Weavers, who saw their livelihood destroyed by the flood of new manufactures.
The Zollverein also weakened Austrian domination of the Confederation as economic unity increased the desire for political unity and nationalism.
News of the 1848 Revolution in Paris quickly reached discontented bourgeois liberals, republicans and more radical workingmen. The first revolutionary uprisings in Germany began in the state of Baden in March 1848. Within a few days, there were revolutionary uprisings in other states including Austria, and finally in Prussia. On 15 March 1848, the subjects of Friedrich Wilhelm IV of Prussia vented their long-repressed political aspirations in violent rioting in Berlin, while barricades were erected in the streets of Paris. King Louis-Philippe of France fled to Great Britain. Friedrich Wilhelm gave in to the popular fury, and promised a constitution, a parliament, and support for German unification. But at least his regime was still standing.2223
On 18 May the Frankfurt Parliament (Frankfurt Assembly) opened its first session, with delegates from various German states. It was immediately divided between those favoring a kleindeutsche (small German) or grossdeutsche (greater German) solution. The former favored offering the imperial crown to Prussia. The latter favored the Habsburg crown in Vienna, which would integrate Austria proper and Bohemia (but not Hungary) into the new Germany.
From May to December, the Assembly eloquently debated academic topics while conservatives swiftly moved against the reformers. As in Austria and Russia, this middle-class assertion increased authoritarian and reactionary sentiments among the landed upper class, whose economic position was declining. They turned to political levers to preserve their rule. As the Prussian army proved loyal, and the peasants were uninterested, Friedrich Wilhelm regained his confidence. The Assembly issued its Declaration of the Rights of the German people, a constitution was drawn up (excluding Austria which openly rejected the Assembly), and the leadership of the Reich was offered to Friedrich Wilhelm, who refused to "pick up a crown from the gutter". Thousands of middle class liberals fled abroad, especially to the United States.
In 1849, Friedrich Wilhelm proposed his own constitution. His document concentrated real power in the hands of the King and the upper classes, and called for a confederation of North German states (the Erfurt Union). Austria and Russia, fearing a strong, Prussian-dominated Germany, responded by pressuring Saxony and Hanover to withdraw, and forced Prussia to abandon the scheme in a treaty dubbed the "humiliation of Olmütz".
A new generation of statesmen responded to popular demands for national unity for their own ends, continuing Prussia's tradition of autocracy and reform from above. Germany found an able leader to accomplish the seemingly paradoxical task of conservative modernization. Bismarck was appointed by Wilhelm I of Prussia (the future Kaiser Wilhelm I) to circumvent the liberals in the Landtag who resisted Wilhelm's autocratic militarism. Bismarck told the Diet, "The great questions of the day are not decided by speeches and majority votes... but by blood and iron" – that is, by warfare and industrial might.24 Prussia already had a great army; it was now augmented by rapid growth of economic power.
Gradually Bismarck won over the middle class, reacting to the revolutionary sentiments expressed in 1848 by providing them with the economic opportunities for which the urban middle sectors had been fighting.25
The current countries whose territories were partly or entirely located inside the boundaries of the German Confederation of 1815–1866 are:
- Luxembourg (entire territory)
- Liechtenstein (entire territory)
- Netherlands (province of Limburg - the province joined the Confederation after 1839)
- Czech Republic (entire territory)
- Poland (West Pomeranian Voivodship, Lubusz Voivodship, Lower Silesian Voivodship, Opole Voivodship, part of Silesia)
- Belgium (German-speaking community and some other territory at the east of the province of Liège); the larger province of Luxembourg had left the Confederation at its accession to Belgium in 1839
- Italy (autonomous region of Trentino-Alto Adige/Südtirol, the Province of Trieste, most of the Province of Gorizia except the Monfalcone enclave, and the municipalities of Tarvisio, Malborghetto Valbruna, Pontebba, Aquileia, Fiumicello and Cervignano in the Province of Udine)
- Croatia (the Pazin territory in Istria county and the coastal strip between Opatija and Plomin in the Liburnia region)
- The Danish crown had been a member only in respect of its duchy of Holstein. Schleswig first joined as part of Prussia following the Second War of Schleswig (1864).
- States of the German Confederation
- History of Germany
- German Empire
- North German Confederation
- Former countries in Europe after 1815
- Lee, Loyd E. (1985). "The German Confederation and the Consolidation of State Power in the South German States, 1815–1848". Consortium on Revolutionary Europe 1750-1850: Proceedings 15: 332–346. ISSN 0093-2574.
- Heeren, Arnold Hermann Ludwig (1873), Talboys, David Alphonso, ed., A Manual of the History of the Political System of Europe and its Colonies, London: H. G. Bohn, pp. 480–481
- Case, Nelson (1902). European Constitutional History. Cincinnati: Jennings & Pye. p. 139. OCLC 608806061.
- Case 1902, pp. 139–140
- Case 1902, p. 140
- Williamson, George S. (2000). "What Killed August von Kotzebue? The Temptations of Virtue and the Political Theology of German Nationalism, 1789–1819". Journal of Modern History 72 (4): 890–943. JSTOR 318549.
- Sheehan, James J. (1989). German History: 1770–1866. New York: Oxford University Press. pp. 324–371, 802–820. ISBN 0198221207.
- Nipperdey, Thomas (1996). Germany from Napoleon to Bismarck: 1800–1866. Princeton: Princeton University Press. p. 86. ISBN 069102636X.
- Nipperdey, Thomas (1996). Germany from Napoleon to Bismarck: 1800–1866. Princeton: Princeton University Press. pp. 87–92, 99. ISBN 069102636X.
- Clapham, J. H. (1936). The Economic Development of France and Germany: 1815–1914. Cambridge: University Press. pp. 6–28.
- Weber, Eugen (1971). A Modern History of Europe. New York: Norton. p. 586. ISBN 0393099814.
- Sagarra 1977, pp. 37–55, 183–202
- The monasteries of Bavaria, which controlled 56% of the land, were broken up by the government, and sold off around 1803. Nipperdey, Thomas (1996). Germany from Napoleon to Bismarck: 1800–1866. Princeton: Princeton University Press. p. 59. ISBN 069102636X.
- Sagarra 1977, pp. 140–154
- For details on the life of a representative peasant farmer, who migrated in 1710 to Pennsylvania, see Kratz, Bernd (2008). "Hans Stauffer: A Farmer in Germany Before his Emigration to Pennsylvania". Genealogist 22 (2): 131–169.
- Nipperdey, Thomas. Germany from Napoleon to Bismarck: 1800–1866, pp. 96–97.
- Murphy, David T. (1991). "Prussian aims for the Zollverein, 1828–1833". Historian 53 (2): 285–302. doi:10.1111/j.1540-6563.1991.tb00808.x.
- W. O. Henderson, The Zollverein (1959) is the standard history in English
- William Manchester, The Arms of Krupp, 1587-1968 (1968)
- Karl Marx, Selected Works, II., "Germany: Revolution and Counter-Revolution", written mainly by Engels.
- James J. Sheehan, German History, 1770-1866 (1993), pp 656-710
- Mattheisen, Donald J. (1983). "History as Current Events: Recent Works on the German Revolution of 1848". American Historical Review 88 (5): 1219–1237. JSTOR 1904890.
- Martin Kitchen, A history of modern Germany, 1800-2000 (2006) p. 105
- Otto Pflanze, Bismarck and the Development of Germany, Vol. 1: The Period of Unification, 1815-1871 (1971)
- Westermann, Großer Atlas zur Weltgeschichte (in German, detailed maps)
- WorldStatesmen- here Germany; also links to a map on rootsweb.com
- Moore, Barrington, Jr. (1993). Social Origins of Dictatorship and Democracy. Boston: Beacon Press.
- Blackbourn, David. The Long Nineteenth Century: A History of Germany, 1780-1918 (1998)
- Blackbourn, David, and Geoff Eley. The Peculiarities of German History: Bourgeois Society and Politics in Nineteenth-Century Germany (1984)
- Brose, Eric Dorn. German History, 1789-1871: From the Holy Roman Empire to the Bismarckian Reich (1997)
- Evans, Richard J., and W. R. Lee, eds. The German Peasantry: Conflict and Community from the Eighteenth to the Twentieth Centuries (1986)
- Nipperdey, Thomas. Germany from Napoleon to Bismarck (1996), very dense coverage of every aspect of German society, economy and government
- Pflanze, Otto. Bismarck and the Development of Germany, Vol. 1: The Period of Unification, 1815-1871 (1971)
- Ramm, Agatha. Germany 1789-1919 (1967)
- Sagarra, Eda (1977). A Social History of Germany: 1648–1914. New York: Holmes & Meier. pp. 37–55, 183–202. ISBN 0841903328.
- Sagarra, Eda. Introduction to Nineteenth Century Germany (1980)
- Sheehan, James J. German History, 1770-1866 (1993), 969pp; the major survey in English
- Werner, George S. Bavaria in the German Confederation 1820-1848 (1977)
Economics Lecture Four
In Lectures Two and Three we discussed how a government price control, or price ceiling, results in shortages. For example, government controls on the cost of medical services will cause a shortage of medical services, and rationing of medical care becomes necessary to manage the shortage.
Consider this question: why doesn't the government simply order people to provide more of the medical services, so there is no shortage? If government is powerful enough to limit and control the price, which causes a shortage, then why doesn't the government increase the supply by ordering people to provide more of the services at the lower price?
Think about that for a while. The answer is in this footnote.
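To make the shortage arithmetic concrete, here is a minimal sketch in Python. The linear supply and demand curves and every number in it are illustrative assumptions, not figures from this lecture:

```python
# Hypothetical linear curves: quantity demanded falls as price rises,
# quantity supplied rises with price.
def quantity_demanded(price):
    return max(0, 100 - 2 * price)

def quantity_supplied(price):
    return max(0, -20 + 4 * price)

# Without controls the market clears where Qd == Qs:
# 100 - 2p = -20 + 4p  ->  p = 20, and 60 units change hands.
print(quantity_demanded(20), quantity_supplied(20))  # 60 60 -> no shortage

# Now impose a price ceiling below the market-clearing price.
ceiling = 10
shortage = quantity_demanded(ceiling) - quantity_supplied(ceiling)
print(shortage)  # 80 - 20 = 60 units of unmet demand -> rationing required
```

The ceiling raises the quantity buyers want while lowering the quantity sellers offer, and the gap between the two is exactly the shortage that must then be rationed.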
In this course we have already learned about the supply and demand curves, and examined the economic concept of “elasticity”. Recall that elasticity measures the sensitivity of the quantity demanded to a change in price, as we discussed in the last lecture.
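As a quick refresher, elasticity can be computed directly from two price-quantity observations. Here is a minimal sketch; the midpoint (arc) formula is one standard convention, and the sample numbers are made up for illustration:

```python
def arc_elasticity(p0, q0, p1, q1):
    # Midpoint (arc) price elasticity of demand between two observations.
    pct_change_q = (q1 - q0) / ((q0 + q1) / 2)
    pct_change_p = (p1 - p0) / ((p0 + p1) / 2)
    return pct_change_q / pct_change_p

# Hypothetical: price rises from $4 to $5 and quantity falls from 120 to 100.
print(arc_elasticity(4, 120, 5, 100))  # about -0.82, i.e. inelastic demand
```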
In this lecture we focus on other aspects of “demand”.
The Income and Substitution Effects
Recall that the “Law of Demand” is this: when the price of a good increases, its demand decreases. When the price of a good decreases, its demand increases. This is one of the most fundamental rules of economics: the Law of Demand.
The question we ask and answer now is this: why is the Law of Demand true? There are two reasons.
First, the more expensive something becomes, the less people can afford to buy it. Someone with access to a maximum of $10,000 (in wealth or loans) cannot purchase a good costing $20,000. He cannot afford it. Actually, this person probably cannot afford anything more expensive than about $5,000, because he needs to use his other money for food and shelter and clothing and transportation.
The second reason for the Law of Demand is that even people who can afford something may prefer, as the price of a good increases, to spend their money on an alternative instead. People who can afford to spend $20,000 on a luxury cruise may look at that price and decide to spend that money in a more enjoyable manner, such as renting a house at the Jersey shore one week each summer for the next 20 years.
These two reasons, or effects, have been given names in economics. They are called the income and substitution effects. They cause the important "Law of Demand." Let's examine each effect now.
The “income effect” is the effect that a change in price of a good has on a buyer's overall income. When the price of a good decreases, a buyer of the good saves money. This is the same as if he earned more money. "A penny saved is a penny earned." When the price of a good decreases by a penny, the "income effect" is as though the buyer earned an extra penny in income. For example, if you drink a gallon of milk each week and the price of that gallon decreases by 25 cents, then you have 25 cents extra to spend on something else. It is as though your income went up by 25 cents.
Remember how we discussed that an increase in income usually causes people to buy more of a good? Now that you have more income, you may want to buy more milk. Instead of drinking a gallon a week, perhaps you can now afford to drink a gallon and a quart a week. The decrease in price of milk created an income effect (increase in income), which encourages you to buy more milk.
To summarize: the income effect of a decrease in price is to allow the buyer to save more money, and some buyers will use that savings to purchase more of the good. If you went to the store to buy a new pair of socks for $5, and found they were on sale for only $2.50, then you may use that savings to buy two pairs of socks. This "income effect" of a decrease in price causes demand to increase for the good.
The second reason that the Law of Demand is true is because a decrease in price of a good also causes a “substitution effect.” Think of what happens when the price of a good increases. People start to buy substitutes instead, and so the demand for the good with the higher price decreases. Similarly, when the price of a good decreases, people want to buy more of the less expensive good instead of something else. In other words, a decrease in price of a good makes you more willing to buy it as a "substitute" for a similar good. In the milk example, its cheaper price makes it more attractive to purchase. You may want to substitute milk for the fruit juice you used to drink.
Both the “income effect” and the “substitution effect” give you an incentive to buy more of the good that decreased in price. The overall increase in quantity demanded for a good that cut its price is the sum of the income effect and the substitution effect. For a normal good, a decrease in its price causes an increase in real income (the income effect) and an increased willingness to substitute it for other goods (the substitution effect), which add together to cause an overall increase in quantity demanded. This is typical.
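Here is a small calculation sketch of how the two effects add up. The numbers and the utility function are hypothetical (a Cobb-Douglas consumer who always spends half his income on the good), chosen only to make the arithmetic easy; this is the standard Slutsky decomposition, not something taken from this lecture.

```python
# Splitting a price cut into substitution and income effects (Slutsky method).
# Hypothetical consumer: Cobb-Douglas utility, so demand for good x is
# x = 0.5 * income / price_x (half of income is always spent on x).

def demand_x(income, price_x):
    return 0.5 * income / price_x

income = 100.0
old_price, new_price = 2.0, 1.0               # the price of x falls

x_old = demand_x(income, old_price)           # 25.0 units before the price cut
x_new = demand_x(income, new_price)           # 50.0 units after the price cut

# Slutsky compensation: reduce income so the old bundle is just affordable at
# the new price; demand at that income isolates the pure substitution effect.
compensated_income = income + x_old * (new_price - old_price)   # 75.0
x_compensated = demand_x(compensated_income, new_price)         # 37.5

substitution_effect = x_compensated - x_old   # +12.5 units
income_effect = x_new - x_compensated         # +12.5 units
total_effect = x_new - x_old                  # +25.0 units: the two effects sum to the total
print(substitution_effect, income_effect, total_effect)
```

For this (normal) good, both effects are positive, so the price cut raises the quantity demanded on both counts.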
Learn these two important concepts well -- the income effect and substitution effect. In understanding concepts, it helps to restate them slightly differently until there is a full appreciation of them. The “income effect” is the change in your wealth (income) due to a change in price of something you buy, AND how that change affects what you buy. If the price of milk decreases, then the “income effect” is to make you feel like you have more income AND enable you to buy more milk. If the price of milk increases, then the “income effect” is to make you feel like you have less income, because you had to spend more on the milk, leaving you less money to spend on other things. Understand this? Reread it if necessary, and think about how a change in the price of milk affects your decisions about how much milk and other things you can buy.
The “substitution effect” is the change in substitution (one good for another) due to the change in price of one of the two goods. If the price of a good decreases, then that price change makes it more attractive to be used as a substitute for another good. If, for example, chicken sandwiches are on sale at half-price at McDonalds, then more customers are going to choose chicken sandwiches as a substitute for hamburgers. The decrease in price of chicken sandwiches has a “substitution effect” of causing more people to buy them as they move from eating hamburgers to eating the cheaper chicken sandwiches.
For Honors Students Only: Giffen Goods
Last class, we mentioned an odd type of good known as an “inferior” good. You may recall that the demand for an “inferior” good actually increases when income decreases. One example is margarine, because people with declining incomes are less able to afford butter. Bankruptcy services are also “inferior”: when people's income declines, more of them file for bankruptcy, and there is then a greater demand for bankruptcy services.
Question: If a good is inferior, does a decrease in its price cause an increase in quantity demanded?
Think about it. From the prior section, the overall change in quantity demanded is a sum of the “income effect” and the “substitution effect.” Let's look at the income effect first. Ordinarily, the “income effect” when the price decreases is that real income goes up. But for an “inferior” good, an increase in income means a decrease in demand! That’s strange. Thus for an inferior good, when the price decreases, the income effect is a decrease in demand.
However, the “substitution effect” still works to increase quantity demanded when the price falls, even for an inferior good. Because the income effect and substitution effect move in opposite directions, it is difficult to predict what will happen to their sum, which is the overall change in quantity demanded. A “Giffen good” is an inferior good for which the income effect dominates, and thus it has the unusual characteristic that a fall in the price of the good causes a fall in quantity demanded. A Giffen good violates the Law of Demand. It is difficult to think of an example.
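A quick numerical sketch makes this tug-of-war concrete. The numbers below are invented purely for illustration; they simply show how an income effect that is larger and opposite in sign can swamp the substitution effect:

```python
# Hypothetical changes in quantity demanded when the price of an
# inferior good falls. Both numbers are made up for illustration.
substitution_effect = +3   # the cheaper good gets substituted for alternatives
income_effect = -5         # inferior good: higher real income reduces demand

total_effect = substitution_effect + income_effect
print(total_effect)   # -2: quantity demanded falls as price falls (Giffen behavior)
```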
It is worth defining a “Giffen good” again, this time in terms of a price increase: A "Giffen good" is a good which has an increase in demand when its price increases. Raise the price of a Giffen good, and the demand for this good increases. (All other goods have a decrease in demand when the price of the good increases.)
A Giffen good could be an “inferior” good that responds so negatively to an increase in income that the income effect outweighs the substitution effect. In the 1895 edition of the classic “Principles of Economics,” Alfred Marshall wrote: “As Mr. Giffen has pointed out, a rise in the price of bread makes so large a drain on the resources of the poorer labouring [British spelling] families and raises so much the marginal utility of money to them, that they are forced to curtail their consumption of meat and the more expensive farinaceous foods: and, bread being still the cheapest food which they can get and will take, they consume more, and not less of it.”
Another textbook example of a “Giffen good” is the potato during the terrible Irish famine of 1846-1849. This famine caused nearly a million Irish to die of starvation, and forced nearly another million to emigrate to the United States, Britain, Canada and Australia. Economists say that the potato during the famine was a Giffen good. The potato crops failed and there was less supply of potatoes, which caused the price to increase. But the poor Irish, after paying the higher price for potatoes, had less money left to buy more expensive foods. So they had no choice but to buy even more potatoes to try to fill their stomachs! In other words, the increase in the price of the potato wiped out the savings of the Irish, who then bought even more potatoes despite the increase in price. Note that the potato could have been a Giffen good only if it was also an “inferior” good that saw a surge in demand when people had less income (less money to spend).
Other economists suggest that tortillas are a Giffen good in Mexico today. But further investigation shows that neither potatoes in Ireland nor tortillas in Mexico actually qualify as Giffen goods.
Rice and noodles are now described as Giffen goods among the poor in China. Do you believe it?
Giffen goods are exceptions to the Law of Demand ... if Giffen goods really exist! Many, including your teacher, are skeptical that there really is such a thing as a Giffen good.
Economist Thorstein Veblen suggested that a "status symbol," like a fancy car or diamond, can have an increase in demand when its price goes up. While "status symbols" certainly do exist, it is questionable whether the demand for them ever increases due to an increase in price. If it does occur, then it is a "Veblen good." Notice that a "Giffen good" is not a status symbol as a "Veblen good" would be.
Utility and Diminishing Marginal Utility
Money isn’t everything. We have many expressions for this concept. “There’s more to life than money.” “It’s only money.” “What’s your job satisfaction?” The basic point is that dollars and cents do not capture our overall happiness or satisfaction as a consumer. You may buy the most expensive music CD on the market, or watch the most popular movie, or buy the fanciest clothes, but that does not mean you will like those items the best. Often our favorite goods are not the most expensive ones.
“Utility” is a concept in economics created to include non-monetary satisfaction. “Total utility” is defined as a consumer’s overall satisfaction. “Marginal utility” is defined as the additional satisfaction of a consumer in buying an additional unit of a good.
As we discussed in Lecture One, Economics is not only about money. The economic concept of “utility” encompasses everything worthwhile, whether it has to do with money or not. Helping someone has “utility”, for example, even though it is voluntary and not for money. Charity has utility even though nothing is obtained in return.
Let’s take an example. Suppose you are on a family road trip by car out West. You left your campsite near Phoenix just after you woke up, and you’re driving through the desert to Los Angeles. You have not eaten all day. Hour by hour goes by and you do not see any place to eat.
Finally, at 4 o'clock in the afternoon, you see the golden arches of McDonalds appear on the horizon. You drive closer and the arches appear bigger. It’s not a mirage!
When you arrive, you run in and order its famous french fries. You’re famished. When the food arrives, you take your first handful of french fries. Wow, it is really satisfying to eat that first bunch of french fries with an empty stomach. Your marginal utility is extremely high. You might have even been willing to pay $5 for that first mouthful of french fries because you are so hungry. Then you eat your second handful of french fries. Your marginal utility is still high, but not quite as high as the first one. You wouldn’t have paid as much for the second handful either. By the time you finish all the french fries, the last few bites were not so satisfying. In fact, you’ve gotten sick to your stomach. The marginal utility of that last french fry was very low. Perhaps even less than zero!
You have just experienced the Law of Diminishing Marginal Utility: the marginal utility of each additional unit (e.g., each additional french fry) declines in a given period. This is similar to the "Diminishing Returns" experienced by a producer of goods.
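A tiny sketch with made-up utility numbers shows the pattern from the french-fry story: total utility keeps rising while marginal utility is positive, peaks when marginal utility reaches zero, and falls once it turns negative.

```python
# Hypothetical marginal utility of each successive handful of french fries.
marginal_utilities = [10, 7, 4, 2, 1, 0, -1]

total_utility = 0
for handful, mu in enumerate(marginal_utilities, start=1):
    total_utility += mu
    print(f"handful {handful}: marginal utility {mu}, total utility {total_utility}")
# The final handful has negative marginal utility -- the fries that made
# you sick to your stomach actually reduce your total utility.
```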
In general, the rational consumer will always try to maximize his or her total utility. How is this done? The consumer spends each dollar on the good offering the highest marginal utility per dollar, in order to maximize his total utility.
Suppose you go to a shopping mall with $80. You can buy food or clothes or anything else you find in a mall. Would you spend it all on food? Of course not. The marginal utility of your food purchases declines as you eat more. Ideally, you wouldn’t even buy enough food to fill your stomach, because you can always eat more cheaply at home. To maximize your utility, you would spend every dollar in a way that has the most marginal utility. Your first purchase would be what you want most, and then your next purchase would be your second choice, and so on. If you really want something that happens to cost $80, then you may spend all your money on that one item.
The rational consumer maximizes utility by spending each dollar in a way to maximize marginal utility for that dollar. For such a consumer, the marginal utility of every good divided by that good’s price must be equal: MUx/Px = MUy/Py = MUz/Pz, where MUx is the marginal utility of good “x” and Px is the price of good “x”. This is known as the Law of Equiproportional Marginal Benefit, also called the equimarginal principle.
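This dollar-by-dollar rule can be written as a short greedy procedure. The goods and utility numbers below are hypothetical, and every unit is priced at $1 to keep the bookkeeping simple; it is a sketch of the rule just stated, not a general-purpose optimizer.

```python
# Spend each dollar where it buys the most marginal utility (all prices $1).
budget = 5  # dollars to allocate, one at a time

# Hypothetical marginal utility of each successive unit of two goods.
mu_schedules = {
    "pizza slices": [9, 7, 4, 2, 1],
    "sodas": [5, 4, 3, 2, 1],
}
units_bought = {good: 0 for good in mu_schedules}
total_utility = 0

for _ in range(budget):
    # Pick the good whose NEXT unit has the highest marginal utility.
    best = max(mu_schedules, key=lambda g: mu_schedules[g][units_bought[g]])
    total_utility += mu_schedules[best][units_bought[best]]
    units_bought[best] += 1

print(units_bought, total_utility)  # {'pizza slices': 3, 'sodas': 2} 29
```

Notice that the shopper never exhausts one good first; he keeps switching to whichever good still offers the most extra satisfaction per dollar.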
Lack of Utility
You might notice that you, like most people, waste many hours each day on thoughts or activities that have no utility at all. Once you recognize this, you can cut down on that waste and start to focus on maximizing your utility.
"Alarmism" and "anxiety" are examples of wasted time, and loss utility. Time spent worrying can be much better spent maximizing utility, either by earning money or doing something else (such as charity) that increases your utility. Jesus may have been making the same point when He said, "Who of you by worrying can add a single hour to his life?" If something is not done for God and has no utility for you either, then why are you wasting time on it?
The economic concept of utility is helpful in combating distractions and minimizing time wasted on alarmism and anxiety. Occasionally ask yourself, what utility was achieved in the last hour of my time? Did I maximize my utility during that hour?
Now that we understand the important concept of utility, we can graph it by using an "indifference curve." In a simple case, consider having separate utilities for two different goods. The goods could be food (like two types of candy), or they could be websites (like Conservapedia and Facebook), or they could be modes of transportation (like a car or a bicycle). The point is that there is a trade-off in utility when you substitute one good for the other. Your overall utility will increase, remain the same, or decrease, when you give up some of one of the good in exchange for more of the other good.
When the quantity for one of these two goods is graphed on the x-axis, and the quantity for the other good is graphed on the y-axis, then we can draw a line that represents constant utility as we substitute one good for the other.
The indifference curve is a graph of utilities such that every point along a curve has the same amount of total utility. A person should be "indifferent" to where he is along the curve, because it shows where his total utility is constant.
Let's take a simple example. Suppose you like chocolate and peanut butter equally well, and you are "indifferent" between receiving one chocolate bar and one peanut butter candy bar. Plotting the good for a chocolate bar on the x-axis and the good for a peanut butter candy bar on the y-axis, the indifference curve will be a straight line with a negative slope of 1. Give up a chocolate bar but receive a peanut butter bar, and you're on the same indifference curve: your overall utility has not changed. But give up a chocolate bar and receive TWO peanut butter bars, and you've increased your overall utility and you've left that original indifference curve. You're better off with that deal and are not "indifferent" to it. You want the improvement in utility in the 2-for-1 deal.
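A two-line sketch captures this perfect-substitutes case. The utility function below is hypothetical but matches the story: each bar of either kind adds one unit of utility, so total utility is just the sum of the two quantities.

```python
# Perfect substitutes: one chocolate bar and one peanut butter bar are
# valued equally, so total utility is simply the sum of the two counts.
def utility(chocolate_bars, peanut_butter_bars):
    return chocolate_bars + peanut_butter_bars

# 1-for-1 trades stay on the same indifference curve:
print(utility(3, 1) == utility(2, 2) == utility(1, 3))  # True
# A 2-for-1 trade (give up 1 chocolate, gain 2 peanut butter) leaves the curve:
print(utility(2, 3) > utility(3, 1))                    # True
```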
Let’s take another example that illustrates the usefulness of an indifference curve. Suppose you are working on the homework for this course with three friends - Chris, Stephanie and Kevin. Someone says they are hungry, so you go to look for snacks. You see a half-eaten bag of potato chips and you pop a bag of popcorn. However, there is not enough food for everyone, so you have to ration who receives what.
You count 24 potato chips and 40 kernels of popcorn. Uh oh. There are four of you. On average, that’s only 6 potato chips and 10 kernels of popcorn per person. You tell everyone that.
But Chris likes potato chips more than popcorn; Stephanie prefers the opposite. To decide how to allocate the food, you ask Chris and Stephanie to draw their "indifference curves" with potato chips on the y-axis and popcorn on the x-axis. You learn from the curves that Chris is just as happy with 9 potato chips and 2 kernels of popcorn as he would be if he received 6 potato chips and 10 kernels of popcorn. Chris’s overall utility (or satisfaction) is the same in both cases. Meanwhile, Stephanie is just as happy receiving 18 kernels of popcorn and 1 chip. Fine: you give Chris 9 chips and 2 kernels, and Stephanie 1 chip and 18 kernels.
Was this worth it? Yes: now you have two extra potato chips that you would not have had by splitting everything equally. Chris and Stephanie are just as happy, and you can share the additional chips with Kevin.
The graph below is for a typical set of three indifference curves (I1, I2 and I3) for someone for two goods X and Y. Note that this graph does not compare Price and Quantity, but compares the Quantity of good X (on the x-axis) with the Quantity of good Y (on the y-axis). As one goes down a specific curve, the person is "indifferent" with giving up Quantity of good Y in exchange for more of good X. The person becomes more satisfied or happier when he shifts to an entirely new curve, as in moving from I1 to I2.
The next graph below is for two goods X and Y that are perfect substitutes for each other (like our example above of a chocolate bar and a peanut butter candy bar). In the case of perfect substitutes, the person is happy to substitute one good for the other on a one-for-one basis, and hence the slope of the curves is a perfect negative one.
“Consumer surplus” is a concept that illustrates the power of the free market as it drives down the price of goods. When we buy goods and services, most of us would pay a higher price if we had to. For example, our families would pay twice the cost of milk because we would still want to drink milk even if the price were higher. We may not buy as much milk at a higher price, but we would still buy some. We get extra value when we can buy milk at a price lower than what we would really pay if we had to.
The "consumer surplus" is the net benefit (in dollars) that a consumer obtains from buying a good. Thus (consumer surplus) = (total benefit) - (total cost). The "consumer surplus" is never negative, because people would not purchase goods or services if their total benefit is less than their total cost. They would be better off keeping their money and not making the purchase.
To illustrate how powerful the concept of the "consumer surplus" is, let’s define another term: “demand price.” That is the most someone is willing to pay for something. When you go to see a movie, there is a maximum amount you are willing to pay for a ticket. It varies for different consumers. It also depends on what the movie is.
A consumer’s demand price is his marginal benefit from obtaining the good or service (not including what he had to pay for it). You may walk out of a movie theater after seeing a movie you really liked, and conclude that it gave you $25 worth of benefit. Your marginal benefit is thus $25 from the movie (not subtracting what you paid to see it). The total benefit in the market is thus the sum of all the demand prices, which is the area under the demand curve.
The consumer surplus is the demand price (the most a consumer would pay) minus the price paid (the amount the consumer actually has to pay). Suppose you were effusive (i.e., very enthusiastic) about a particular movie, and wanted very much to see it. You were so excited that you were willing to pay $20 to see that movie. But if the theater charges you only $8, then your consumer surplus is $20 - $8 = $12.
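Scaling the movie example up to a whole market is a one-line calculation. The valuations below are invented for illustration; each buyer's surplus is his demand price minus the ticket price, and anyone whose demand price is below the ticket price simply does not buy.

```python
# Hypothetical demand prices (the most each of six moviegoers would pay).
demand_prices = [20, 15, 12, 9, 8, 5]
ticket_price = 8

surpluses = [dp - ticket_price for dp in demand_prices if dp >= ticket_price]
print(surpluses)       # [12, 7, 4, 1, 0] -- the $5 consumer stays home
print(sum(surpluses))  # 24: total consumer surplus in this tiny market
```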
Consumers stop buying a good when the demand price falls slightly below the sales price. For movies, the demand price falls the longer it keeps playing in a theater. After you’ve seen the movie once or twice, you’re not willing to pay so much to see it again. Over time people stop paying to see the same movie, and the theater stops playing it and begins showing a new movie instead.
Almost every time someone buys something, he benefits from the consumer surplus of that transaction because he would probably pay a little more than he did. If you value a chocolate candy bar at $1.05 but can buy it for $1, then you acquired extra wealth of 5 cents as your consumer surplus. You would have paid $1.05 for it, but paid only $1 and then had both the candy bar and the extra 5 cents. You became wealthier from the transaction by an amount equal to your consumer surplus. And you became fatter too!
A force perhaps even stronger than the “invisible hand,” or perhaps a variation of the invisible hand, is charity. The desire to give something of benefit to others, without demanding as much in return, is enormous. “Give and ye shall receive” is advice from the Bible. The giving may be in one form (money), and the receiving may be in another form (heavenly reward, which could be described as part of an overall "utility"). Look around, and you will notice numerous important charitable acts by others and yourself.
An immense advantage of charity is that it has no transaction costs. One person simply gives money, or goods, or services, to someone else (or to a church), and the recipient of the donation then uses it in the best way possible. A person giving $10 in a collection basket at church does not fill out any paperwork or require any commitments on how the church uses the donation. It is assumed that the church will use the money as best it can, perhaps even in an unexpected way during the following year. Pure charity is often purely efficient, with little waste in terms of transaction costs. Charitable transfers are not taxed either, so none of it is spent on harmful government programs.
America’s health care system, by far the greatest in the world, was built on a foundation of charity: people and religious organizations gave time, money and expertise to care for individuals who could never afford to pay all the costs. Education, too, was developed in this country largely through charity. In recent years, both health care and education have lost their charitable identities and been taken over in significant ways by the government, and the results of government involvement have not been positive. Costs have gone up and quality has gone down.
Most economics courses avoid charity entirely. When charity is mentioned, it is described as a minor add-on to the invisible hand of self-interest. But is this backwards? Is the invisible hand of self-interest actually a wrapper around a basic foundation of charity? These are thoughts to consider throughout this course.
The size and importance of charitable institutions in the world is immense. All religious institutions are charitable, as are nearly all private schools. Many hospitals are still charitable, including the Seventh-Day Adventist hospital in Hackettstown, New Jersey. On a trip to Orlando, I saw another Seventh-Day Adventist hospital there. Many religious organizations operate hospitals throughout the United States, just as they built most of our leading universities. Harvard, Yale, Princeton and Columbia, for example, were all built by charitable religious organizations.
The renowned Sloan-Kettering Cancer Center was built with the generosity of Alfred Sloan and Charles Kettering. They acquired wealth as senior executives of General Motors and ultimately donated their money to the cause of medicine. It has operated on a non-profit basis to this day.
The first private medical clinic in the United States was the Mayo Clinic, established by the Mayo family of physicians in 1889. Their sponsor was the Sisters of St. Francis, which built St. Mary’s Hospital in Rochester, Minnesota.
Likewise, the successful American colonies were initially more religious and charitable than commercial. Pennsylvania was founded on religious principles by the Quaker William Penn, who in turn established Philadelphia as the City of Brotherly Love. Within merely a few decades it grew to become the second most successful city in the British Empire, after London. Another successful city, Boston, was built on the religious values of Puritans.
It is primarily on the solid foundation of religious values and charity that the flower of free enterprise blossoms.
Read and, if necessary, reread the above lecture. Answer any six out of the following seven questions, and the extra credit question at the end can be answered by anyone (honors questions can also be completed):
1. A consumer's overall satisfaction is expressed in economics as his _________________.
2. Suppose you see a sleek-looking used sports car and you immediately want to buy it. You think to yourself, "I can paint that car and fix it up so it looks brand new!" You like it so much that you would work very hard for a year and save up $10,000 to buy it. You ask the owner how much he'd sell the car for, and he says $9,000. If you buy it for $9,000, then what is your "consumer surplus"? What does that concept mean?
3. Suppose your favorite hobbies are reading books and hiking, and imagine that they have the following values for marginal utility. The first hour that you hike gives you lots of utility: 10 units. But as you start to tire, you enjoy and benefit from it less. The next hour of hiking is worth only 8 units of utility (in other words, it has a marginal utility of 8 units rather than 10), and the next hour of hiking is worth only 5 units, and then 3, then 1, and then zero for the next hours, in that order. Your marginal utility for reading books does not decline so quickly. In the first hour, reading a book gives you utility of 6 units; the next hour is worth 5 units; the next hour is worth 4 units; and then 3, 2, 1 and 0. Suppose that you have 5 extra hours today. How should you spend those hours on hiking and reading in order to maximize your utility, and what will be your total utility for those 5 hours? Explain your answer.
4. Suppose you plan to buy a brand new car for $25,000. When you go to the car dealership to make your purchase, you notice that there is a car on the lot that looks brand new but no longer has the sticker price on it. The dealer says it was returned by someone after driving it only 500 miles. You like the color and ask if you can buy it. The dealer, seeing that you’re so interested, says he’ll sell it to you for the same price as a brand new car that has never been sold. You’re willing to buy it at full price, and do not mind one bit that someone else used it briefly and returned it. But you notice that other people (the “market”) would not pay full price for a returned car. Should you pay the price of a brand new car for this car that has been driven 500 miles? Explain.
5. Explain why the shape of an indifference curve for two goods that are perfect substitutes is a straight line going from the upper left down to the lower right. Extra credit: why must its slope be negative 1?
6. Describe either the "income effect" or the "substitution effect." Take your pick.
7. Charity is based on the foundation of a successful free market. Or is a successful free market based on a foundation of charity? Describe and explain which is the "cart", and which is the "horse" (in other words, which comes first or is most important, charity or the free market).
In about 300 of your own words, write on one or more of the following topics:
8. "A penny saved is a penny earned!" In fact, once taxation is taken into account, "a penny saved is almost two pennies earned!" Discuss one or both of these quotations.
9. Do you think a Giffen good really exists? Can you see any possible political bias in the claim that Giffen goods exist? Your views, please.
10. Prove the Law of Demand as simply as you can, perhaps using the assumption that the consumer always tries to maximize marginal utility.
11. Discuss the Irish Potato Famine between 1846 and 1849, and whether you think potatoes were an "inferior good" then.
Extra Credit for Anyone
- ↑ The Thirteenth Amendment to the U.S. Constitution, which was passed to ban slavery, generally prohibits compelling people to work.
- ↑ "There is no way to buy more potatoes when there are fewer potatoes. Thus, the Giffen legend concerning the great potato famine appears at best to be a misinterpretation of some observed but misunderstood phenomenon, and at worst a kind of hoax."
- ↑ Luke 12:25 (NIV).
The Tupi: explaining origin and expansions in terms of archaeology and of historical linguistics.
Interest in explaining scientifically the enormous territorial expansion of the Tupi has been an issue since 1838, now with a consensus: a common centre of origin existed, from which the Tupi fanned out, differentiating through distinct historic and cultural processes whilst keeping several common cultural features. But there is no consensus as to where the centre was located and where the routes of expansion passed.
Scholars have often asserted this hypothesis, but contributed very little scientific proof. Since 1960, archaeological (site location, radiocarbon and thermoluminescent dating) and linguistic data (glottochronology, relationships among languages) have been brought to the scene. In this article, I also intend to show:
* Enough elements now link prehistoric to historic Tupian groups, setting the ground for understanding origins, continuities, changes and/or extinction;
* Chronology can now be based on archaeological and linguistic evidence rather than on Martius', Metraux's and other speculations, which distort prehistoric events.
In his study of the Indo-European question (1987), Renfrew concluded that linguists and archaeologists had for a long time used archaeological and linguistic results acritically; it was time for methodologies integrating both approaches. The same is true of research on the Tupi. Underlying the debate are two hypotheses:
* material differentiations followed linguistic derivations;
* material and technological differentiations did not occur in isolation, but stemmed from culturally chained phenomena.
Between 1838 and 1946, the hypotheses were developed with historical and ethnographic data, and influenced by theories ranging from degenerationism to racial and geographic determinism to evolutionism. Most were based on the historic location of known Tupian peoples.
From 1946 to the present, with the publication of the Handbook of South American Indians, archaeological information was interpreted in frameworks of ecological determinism and diffusionism. During the same period, historical linguistic methods were introduced (Dyen 1956; Rodrigues 1963; 1986; Swadesh 1971; Ehret 1976; Camurn, Jr 1979a; 1979b), especially to identify the relationships among kin languages.
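Of the historical linguistic methods just cited, Swadesh's glottochronology is the most explicitly quantitative. As a rough illustration only - this is the standard Swadesh formula with its conventional retention rate, not a computation taken from this article - the time depth separating two related languages is estimated from the share of cognates they retain on a basic word list:

```python
import math

# Standard glottochronology (Swadesh): t, the time depth in millennia, is
# estimated from c, the proportion of shared cognates on a basic vocabulary
# list, and r, the assumed retention rate per millennium (about 0.86 for
# the 100-word list). Illustrative only; not a calculation from this article.

def divergence_millennia(shared_cognates, retention_rate=0.86):
    return math.log(shared_cognates) / (2 * math.log(retention_rate))

# Two related languages sharing 40% basic cognates would be placed roughly
# three millennia after their split from a common ancestor:
print(round(divergence_millennia(0.40), 1))  # ~3.0
```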
The word Tupi is used to denominate a linguistic stock that encompasses approximately 41 languages which spread, several millennia ago, throughout eastern South America (Brazil, Peru, Bolivia, Paraguay, Argentina and Uruguay). Tupi is also used to refer to the speakers of these languages. Of those 41 languages, the two most frequently mentioned since the arrival of Europeans have been Guarani and Tupinamba.(1)
Migration or expansion?
Terminology used for population shifts of the Tupi has regarded these simply as migrations (see Anthony (1990) for general principles in studying migrations). Etymologically, the term migration means a moving from one place to another, a leaving of the original region. This term is appropriate for the movement the Tupi undertook when pressed by other peoples, for instance the migrations after 1500 - regarded as escape movements from Europeans (Metraux 1927).
The term 'migration' does not cover adequately those Tupian peoples who moved in other ways, possibly for other reasons - demographic growth, the breaking-up of villages, forestry management, etc. According to archaeological studies, the Tupi held possession of their domains for long periods, expanding to new territories without abandoning old ones (Brochado 1984; Scatamacchia 1990; Noelli 1993b). Studies in ethnobiology and Native South American history demonstrate that territories under the domain of some Tupian peoples were slowly conquered, managed and tapped over a long time, an important aspect of expansion (Noelli 1993a; 1993b). The better term for these population shifts is expansion, meaning distention and spreading, a conquering of new regions without abandoning previous ones.
Martius and Metraux: defining the Tupi issue
In a lecture delivered in 1838 about 'The past and future of American humanity', Karl F. Ph. von Martius (1867 I: 1-42) proposed, for the first time, the hypothesis of a centre of origin for the Tupi; he located it between Paraguay and the south of Bolivia, the region he considered the probable gateway through which peoples from the Andes headed to the east of South America. Martius believed the expansion was recent, shortly prior to the arrival of the Europeans (before 1500), with higher cultures preceding tribal ones. Seeing Native American peoples as having gone through a continuing decadence, he deduced that several languages derived from a few original ones, by a disorganized mixture of different peoples resulting in new languages and dialects. (This argument was repeated in his thesis 'How the history of Brazil should be written': Martius 1845.)
In 1839, following Martius and using linguistic and physical criteria, as well as the geographical location of Tupian speakers, Alcides D'Orbigny suggested a region between Paraguay and Brazil as the Tupi's 'primitive homeland' (1944: 37, 368). He called all the Tupi 'Brasilio-Guarani' or simply 'Guarani'.
In 1886, Karl von den Steinen (1886: 353) proposed that the sources of the Xingu river were situated in the region 'where the geographical central radiation point of the Tupi is probably located'. Von den Steinen (1886: 323) coined the term 'Tupi-Guarani', we can infer to eliminate confusion at a time when the Tupi were called interchangeably 'Tupi' or 'Guarani' (discussion in Edelweiss 1947).
Paul Ehrenreich (1891), member of von den Steinen's second expedition to the Xingu in 1887, used linguistic and ethnographic arguments more explicit than those of his predecessors in claiming, 'all the evidence indicate that we should look for their point of exodus where these tribes are more tightly concentrated, that is, in Paraguay and its surroundings'. He understood that the 'widespread distribution of these peoples, as we can see in a map, is explained by the radiation from a centre' (Ehrenreich 1891).(2) In Ehrenreich one sees both Martius and D'Orbigny's suggested locations of an origin, and von den Steinen's central radiation. These four scientists provided the foundations for other researchers.
Wilhelm Schmidt (1913), a creator of the Kulturkreise theory, and the one who first applied it to South America, compared several cultural aspects among the Tupi and between these and the peoples belonging to other cultural groups to locate a Tupi centre of origin in the sources of the Amazon. Other authors suggested other locations: Afonso A. de Freitas (1914), the region between the sources of the Madeira river, Lake Titicaca, Beni and Araguaia rivers (critique in Baldus 1954: 251-2); Rodolfo Garcia (1922, based on Ehrenreich), the region between the Paraguay and Parana river basins; Fritz Krause (1925), the area of the Omagua and Kokama, between the Napo and Jurua rivers.
Of all these researchers, Alfred Metraux (1927; 1928; 1948a; 1948b) was the first to justify this hypothesis with systematically organized and compared elements. He was also the most quoted and the one whose hypotheses about the centre of origin and expansion routes were least contested (critiques in Brochado 1984: 331-4; Laraia 1986: 22). With remarkable immunity, his proposals about prehistoric 'migrations', made obsolete by archaeology, are still alive (Laraia 1988; Brandao 1990; Fausto 1992; Santos 1992; Porto 1992: 74-6).
Although his first exhaustive work dealt with the historical Tupian migrations(3) (Metraux 1927), it was in studying this group's material culture that Metraux (1928) advanced his hypothesis about the centre of origin. Inspired by Nordenskjold and Schmidt's comparative methods, Metraux compared material and technological elements geographically, deducing that the centre of origin was located close enough to Amazonia because the Tupi showed northern and Amazonian influences (Metraux 1928: 310). Metraux thought it unlikely that the 'primitive Tupian motherland' was located on the northern banks of the Amazon river; it should be somewhere in the Tapajos or Xingu river basins. He concluded (Metraux 1928: 312):
No important prehistoric Tupi-Guarani tribe was settled on the left banks of the Amazon river and the occupation of its [Brazilian] coast took place at a later period, thus forcing us to place the dispersion centre of the tribes of this race within the area bound in the north by the Amazon river, in the south by the Paraguay river, in the east by the Tocantins and in the west by the Madeira river.
Branislava Susnik (1975: 57), after an ethnological review as extensive as Metraux's, suggested the Colombian plains as the centre, with expansion driven by four factors: demographic growth, with the original nuclei breaking up; need for new croplands; peripheral pressure by non-Tupian groups; and collective abandonment of ecologically unsuitable areas.
Linguists also based their hypotheses on Martius, von den Steinen and Ehrenreich.
Moises Bertoni (1916; 1922) suggested a single language, Carib-Guarani, dominating Central and South America. Calling the Tupian stock 'Guarani', this author favoured an Asian origin for the Tupi, who had come to the Americas culturally formed. Bertoni (1922: 298), reproducing Max Uhle, saw the Tupi as directly influenced by the high Mexican and Central-American cultures. After comparing several languages, Paul Rivet (1924), influenced by Martius and Ehrenreich, set the centre of origin between the Paraguay and Parana rivers, at the latitude of Paraguay (endorsed by Stella 1928; Guerios 1935; Rodrigues 1945; Mason 1950). Aryon Rodrigues (1964: 103), using the lexico-statistic method and the notion that a concentration of language families suggests the centre of a protolanguage, placed it in the Guapore river region. Other linguists proposed different centres of origin: Loukotka (1929: map; 1935; 1950), between the Juruena and the Arinos rivers; Childe (1940), the sources of the Xingu and the Upper Araguaia; Migliazza (1982), between the Ji-Parana and the Aripuana, tributaries of the Madeira; Urban (1992), for the Tupian stock between the Madeira and the Xingu, closer to the sources than to the valleys, for the Tupi-Guarani family between the Madeira and the Xingu. Magalhaes (1993) merged Loukotka's proposals with Meggers's expansion routes (see below).
A third group hypothesizing the centre of origin are the archaeologists.
A first stage of archaeological research compared pottery, attempting to verify the relationship of Tupinamba and Guarani pottery to that of Amazonia (Netto 1885; Torres 1911; 1934; Linne 1925; Costa 1934; Howard 1947; 1948; Willey 1949). In 1934, Angyone Costa (1934: map VI) set the centre of origin in central Mato Grosso. Martius and Metraux's covert influences were noticeable (Lothrop 1932; Willey 1949), mainly in hypotheses of a later dispersion and a centre located in the middle Parana.
The archaeological issue was highlighted during the 1960s when PRONAPA(4) accumulated data for the development of a 'cultural sequence and for recognizing the directions of the influences, migration and diffusion' (Evans 1967: 9). From previous research premisses(5) (Meggers 1951; 1954; 1957; 1963; Meggers & Evans 1957; Silva & Meggers 1963), the programme anticipated the invention of pottery outside Amazonia, where a cultural decadence was caused by adverse environmental conditions of the tropical rain forest and a recent Tupian diffusion. A similarity with Martius' proposals was evident. During the five years of PRONAPA, three general syntheses (Brochado et al. 1969; PRONAPA 1970; Meggers 1985) were drawn, and two concerning the 'Tupiguarani' tradition (Brochado 1973; Meggers & Evans 1973).
The 'pronapians' suggested the abandonment of old ethnographic denominations for archaeological remains (Guarani & Tupinamba), proposing (Brochado et al. 1969: 10; PRONAPA 1970: 12):
After consideration of possible alternatives, it was decided to retain the label 'Tupiguarani' (but to be written as a single word) for this widely disseminated late ceramic tradition, in spite of its linguistic connotations; the term is well established in the literature, and ethnohistoric information substantiates the correlation of the protohistoric and early historic archaeological remains with speakers of Tupi and Guarani languages along most of the Brazilian Coast.
The concept of a 'Tupiguarani' tradition, based on Willey & Phillips' proposal (1958: 22), was defined ('Terminologia' 1969: 8; 1976: 146) as:
A cultural tradition characterized principally by polychromatic pottery (red and/or black on white and/or red slip), corrugated and brushed, secondary burials in urns, polished stone axes and the use of tembetas [lip plugs].
By this PRONAPA approach, the use of historic and linguistic information was to be abandoned in favour of the archaeological. Yet the 'pronapians' used models to deal with prehistoric events that were established by Martius and others without archaeological data. Archaeologists thus began to lose sight of the identity and material-culture differences recognizable among the Tupi, framing in a single category peoples historically known for their similarities as well as for their differences and oppositions.
This 'pronapian' proposal depended on the similarity in surface treatment of pottery by several Tupian peoples, including those thousands of kilometres away. So the analysis of paste composition was privileged over the relationship between the shape and the use of pots, described in profusion in the first-contact chronicles and dictionaries of the 16th and 17th centuries. By considering the whole relationship, with shape and function, the similarities and differences among the Tupian pottery can be clarified, whereas the paste is a limited marker, depending on the pottery-maker's choice or on the geological singularities of their region.
Meggers (1972: 129), using PRONAPA's results and her own proposals (Meggers 1963), defined the foot of the Andes, in Bolivia, as the origin. The following year, with Clifford Evans, and based on Metraux (1927) and Rodrigues (1958), she shifted the Tupian 'homeland' to the Amazonian plain, east of the Madeira river (border between Brazil and Bolivia), the largest concentration of Tupian linguistic families (Meggers & Evans 1973: 57; reiterated in Meggers 1975; 1976; 1982; Meggers & Evans 1978: [ILLUSTRATION FOR FIGURES 7, 8 OMITTED]; Meggers et al. 1988: [ILLUSTRATION FOR FIGURE 5 OMITTED]). Among archaeologists, Meggers was followed by Pedro I. Schmitz (1985: map 1; 1991: map 1), who based his works linguistically on Migliazza (1982).
Brochado (1973) located the sites geographically, interpreting 55 PRONAPA radiocarbon and 7 thermoluminescence dates from the Paranapanema Project in Sao Paulo to support Metraux's suggested centre of origin.
Donald Lathrap opposed Meggers's hypothesis, postulating that pottery in South America was invented in Amazonia: the proto-Tupian centre of origin was the confluence of the Madeira and Amazon rivers. He also suggested that the proto-Tupi, pressed by the Arawak, went up the Madeira and its eastern tributaries as far as the Serra dos Parecis, where derivations took place that culminated in the linguistic families of the Tupian stock (Lathrap 1970: 75-8). His hypotheses were influenced by Metraux, whom he did not quote, and, explicitly, by Rodrigues (1958). Brochado (1984), abandoning the assumptions he had used in PRONAPA, adopted and expanded Lathrap's hypotheses.
More recently, Ondemar Dias (1993) after reviewing Brochado's (1984) and Schmitz's work, and based only on information from non-Amazonian areas, situated a Tupian centre in southeastern Brazil, between the Paranapanema and the Guaratiba rivers.
Claristella Santos (1991; 1992), discussing the approaches that synthesize and relate linguistic and archaeological results (exclusively PRONAPA's), considers that at the time suggested by Rodrigues (1964) for the origin of the Tupian stock - 5000 b.p. - these peoples did not have pottery, being hunter-gatherers; so there is no unity between the linguistic and archaeological data, no historic-cultural unity at the time of the 'fundamental economic shift that took place in the cultural system of the Tupian protolanguage' (Santos 1992: 112). The pottery, its attributes and the analytical methods applied were not enough to outline elements relating them to Tupian stock.
Routes of expansion: the quest for the Tupian paths
The geographical detection of prehistoric routes depends on relating the location of archaeological sites to their dates. The historical migrations studied by Metraux (1927), on which most researchers depend, represent movements to escape European pressure (see also Fernandes 1963: 25-58). Scholars have postulated routes of expansion for which there was no proof. And recent researchers have neither taken into account the archaeological studies now available, nor recognized advances of the last 30 years. The proposition takes two forms, expansion in a south-north direction, and a radial expansion.
Martius (1867 I: 7-10) postulated that the Tupian route from Paraguay went first southwards and then towards the north of Brazil: 'probably from the region between the Uruguay and the Parana [rivers], reaching the coast of Bahia, Pernambuco and the Amazon jungle'. Martius - never quoted by the professional archaeologists of the last 38 years - appears implicitly in Meggers & Evans (1957), and in their followers. It was only Costa (1934: map VI) who cited Martius in following him.
D'Orbigny, after Martius, suggested a portion of the Tupi had moved into the Buenos Aires region, from an area located between Paraguay and Brazil; later, another portion went to the Andes (Chiriguanos). Finally, without linking the suggestions, D'Orbigny (1944: 37) concluded 'only the Guarani,(6) if we consider that their origin is the Tropic of Capricorn, migrated from the south to the north'.
Ehrenreich (1891), observing the geographical situation of the historic Tupi, proposed the 'radial dispersion' had occurred in successive waves, to the north, east and south. Following Martius, he had those from the south as moving to the north along the Atlantic coast.
Metraux (1928: 310-11), for the Guarani and Tupinamba, merged the models of radial expansion and of south-north expansion along the Atlantic coast.
From site location and radiocarbon dates, Brochado (1973) detailed a 'migration' schema for the PRONAPA regions, on the lines proposed by Metraux, with the 'Tupiguarani' expansion occurring in two 'migratory waves', one prehistoric and one after the European arrival. The first wave was represented by the Pintada Subtradition, the second by the Corrugated Subtradition. After European contact, the Corrugated Subtradition transformed into the Brushed Subtradition, another subtradition characterized in its ceramic expression by the predominance of a certain surface finish ('Terminologia' 1969: 7; 1976: 143). Afterwards, in his thesis (1984: 69-77) and at several scientific congresses, Brochado refuted completely the existence of these subtraditions: it had all resulted from confusion created by the indiscriminate mixture of Guarani and Tupinamba pottery (see also Brochado et al. 1990; Brochado & Monticelli 1994; La Salvia & Brochado 1989).
Lathrap (1970: 75-8, [ILLUSTRATION FOR FIGURE 5 OMITTED]), amalgamating archaeological, linguistic and ethnographic data (principally archaeological data), based a radial expansion on Tupi geographical distribution. This rather synthetic and deductive model influenced proposals outside the mainstream schema among researchers, inaugurating a political polarization of the discussion about the origin of pottery and agriculture inside and outside Amazonia. His field methodology, not very different from that of the 'pronapians', was driven by different theoretical conceptions.
Meggers & Evans (1973), from an origin east of the Madeira river, suggested expansion towards the south of Brazil and then to the north (Meggers 1972: 129; 1975; 1976; 1982; Meggers & Evans 1973; 1978: [ILLUSTRATION FOR FIGURES 7-8 OMITTED]; Meggers et al. 1988: [ILLUSTRATION FOR FIGURE 5 OMITTED]), without mentioning the full comparative archaeological analysis concerning the Tupi; instead, the stratigraphical sequences of the middle-lower Amazon were privileged and those outside Amazonia excluded. Although assuming an 'incapacity of lexico-statistical methods to reveal earlier locations of speakers of akin languages', Meggers & Evans (1976: 60) based arguments about Tupi expansion on historical linguistics and on the historical information analysed by Metraux (1927).
Following Lathrap, Brochado (1984: 28-39) matched internal divisions of the Tupian stock, from Proto-Tupi to historic languages and dialects, to the model of evolution and differentiation of Amazonian pottery (Lathrap 1970; Brochado & Lathrap 1980). After observing the Proto-Tupi divisions proposed by Rodrigues (1964) and Lemle (1971), he verified the correspondences, considering that material and linguistic differentiations must have been concomitant. Later, Brochado saw the need to expand regional investigations and the multidisciplinary links that ensure consistent results for each Tupian group (pers. comm. 1993).
By Brochado's (1984; 1989) hypothesis, the Proto-Tupi resulted when the makers of the Guarita Tradition pottery (of the Polychromatic Amazonian Tradition) split, somewhere in central Amazonia. Based on historical linguistic assumptions, he considered the differentiation of languages and of pottery to have resulted from the spatial-temporal splitting of the Proto-Tupi, caused by continuous demographic growth in the heart of Amazonia. This division links the Guarani to the pottery of western Amazonia, and the Tupinamba to that of eastern Amazonia. The expansion is seen as having two periods, a first alongside the principal rivers, a second colonizing the smaller tributaries.
In the case of the Guarani, colonizations followed a north-south direction, from Amazonia to the mouth of the River Plate, through the courses of the Parana, Paraguay and Uruguay rivers; there are sites from Corumba (Peixoto 1995) to Buenos Aires. To the east, the Tupinamba, leaving the mouth of the Amazon, followed the coastline as far as Sao Paulo, moving up the Atlantic rivers into the hinterlands.
Brochado (Brochado & Lathrap 1980; Brochado 1984) concluded that the Guarani pottery in the Guarita tradition lost decorative techniques - modelling, excision and incision in fine and long lines - during the southward expansions outside Amazonia, through the Madeira and Guapore rivers. Bowls with everted and thickened rims disappear; labial and medial flanges replace decoration of the Guarita tradition. New, cone-shaped pans and jars resulted from contact with pottery-makers from eastern Bolivia and Peru. This characteristic Guarani pottery - both archaeological and historical - has a complex or inflected contour, developed waist and/or horizontal segmentation; corrugated or painted, it is utilized secondarily as burial urns.
There is no archaeological record for the Tupi of the Lower Amazon. From their centre of origin, Brochado proposed, the Tupinamba shifted eastwards through the middle course of the river and, leaving its mouth, moved southwards to colonize the coastline as far as the Tropic of Capricorn. Some constituent features of Tupinamba pottery are found in the Lower Amazon and in the Marajoara style: most of the open pots, including those with oval and quadrangular mouths, and the polychromatic paint concentrated on the everted and thickened rims (features not occurring in the Madeira-Guapore and Parana-Paraguay basins). This pottery does not include most of the closed shapes, principally anthropomorphic, nor the incision, excision and modelling techniques. From comparisons between the Tupinamba and Marajoara pottery and the indication that the Tupinamba had occupied the Lower Amazon, we suggest that Marajoara pottery may derive from the Tupinamba's (Brochado & Noelli n.d.).
Comparing shape and decoration, Brochado (1984) demonstrated that the Tupinamba pottery could not have evolved and unfolded outside Amazonia, next to Paraguay, as was proposed last century. Nor was it dispersed firstly southwards and then to the north of Brazil, as suggested by Meggers: there is no material evidence of a sequence outside Amazonia, in eastern South America.
Linguistic relations published after 1984 (Rodrigues 1984-5; 1986) make it unlikely that the Tupinamba colonized the Brazilian coast and hinterlands from Paraguay to the south of Brazil and then moved towards northeastern/northern Brazil. Considered the most ancient language of the Tupi-Guarani family (Jensen 1989: 13), the Tupinamba could not have derived from the Guarani, the only Tupian-speaking pottery-makers south of Sao Paulo. Relations between Tupinamba and Kokama may explain and confirm the origin of the Tupinamba, if it can be determined whether the Kokama belongs to Tupian stock or is a Tupian language adopted by a non-Tupian people. Kokama and Tupinamba share characteristics absent from languages of the Tupi-Guarani family south of the Amazon river, in the Madeira-Tapajos, Tocantins-Araguaia and Xingu regions. This strengthens Brochado's hypothesis: the Tupinamba expansion, starting in the Lower Amazon, followed the Atlantic coastline southwards.
If Tupinamba pottery derives from the Guarani's, moving beyond the Paranapanema in a south-to-north diffusion, it changed drastically to include shapes and surface-finish techniques absent from southern Brazil. How did this occur, if constituent elements of the Tupinamba pottery originated exclusively in Amazonia?
Eliminating the fuzzy 'pronapian' concept of 'Tupiguarani', Brochado (1984) resorted to an old notion in calling this a 'Guarani Subtradition' and suggesting 'Tupinamba Subtradition' for the Tupinamba of the Brazilian coast, as well as for the other Tupi (non-Tupinamba) previously called 'Tupiguarani'. Since 1984 Brochado has proposed a 'Tupinamba Subtradition' exclusively for Tupinamba speakers, to differentiate them from the other Tupi groups. He also extends the concept of subtradition to the Asurini, Kokama, Tapirape, Munduruku, and so forth. Those peoples not using pottery should be judiciously studied: did they never produce it, or was there a loss? Brochado (pers. comm. 1990) believes it important to have a model based on up-to-date information about the Tupi; the traditional model, primarily supported by historical data, was conceived before the archaeological and linguistic evidence came to light.
Greg Urban's (1992: 92-3) expansion hypothesis, based on Rodrigues's and Lemle's studies, connects linguistic derivation more explicitly to geographical expansion. Using exclusively linguistic data, Urban divides the expansions into two successive stages, in terms of distance from the origin, according to the Rodrigues (1964) chronology.
The first stage, 3000-5000 years ago, corresponds to the early division and expansion of the Tupian stock (which Urban calls Macro-Tupi) in the centre-western region of Brazil, between the Madeira and the Xingu rivers, as far as the Amazon river, with more concentration and diversity in Rondonia. The second stage, no longer associated with the early Tupi expansion, corresponds to the geographical expansion of the Tupi-Guarani family, divided into three consecutive phases. This stage, Urban considers, occurred 2000-3000 years ago (Rodrigues 1958; 1964); he also suggests part of the expansion is probably very recent.
Arguing that the Tupi-Guarani family started its expansion 'somewhere between the Madeira and the Xingu rivers', Urban suggested that the first derivation must have occurred towards the Amazon river, through the Kokama and the Omagua, who shifted to the Amazon river. 'About the same time', the Guaiaki moved southwards, reaching Paraguay, while the Siriono moved southwestwards, as far as Bolivia. This movement was followed by Pauserna and Kawahib (Parintintin) speakers moving westwards; the Kayabi and Kamayura along the Xingu; the Xeta towards the south of Brazil; and the Tapirape, Tenetehara and, perhaps, Wayampi moving as far as Guyana, into a region close to the mouth of the Amazon (Urban 1992: 92).
The third phase took place around AD 1000, with the expansion of Chiriguano and Guarayo speakers to Bolivia, the Tapiete and Guarani to Paraguay, and the 'Kaingwa' to the region between Paraguay, Argentina and Brazil. Finally, the Tupinamba, Tupiniquin and Potiguara settled on the Brazilian coast. They were originally speakers of a single language, called 'Tupiguarani, not to be mistaken with the family which is much wider' (Urban 1992: 92).
By stating that there had been a language called Tupi-guarani, Urban revives a nomenclature resolved in the late 1940s, since when Tupi-guarani has referred to a linguistic family rather than a language (Edelweiss 1947: 39; Loukotka 1950; Rodrigues 1945; 1950; 1984-5). It is more appropriate to speak of a 'proto'-Tupi-guarani, the language from which the current languages of the Tupi-guarani family originated.
In the light of older radiocarbon dates, a derivation at about AD 1000 is incorrect. The Tupinamba and the Guarani were already occupying most of their historically known territories at least 2000 years ago. The Wayampi arrived in Guyana in the 17th century, much later than Urban suggests, migrating from the Xingu when pushed by Luso-Brazilian slave hunters (Gallois 1986: 77-85).
The chronology of Tupian expansions
Two approaches to dating are available: absolute, through radiocarbon and thermoluminescence; relative, through pottery series and glottochronology. The pottery series are not relied on here, because they do not provide accurate datings.
By the glottochronological datings of Rodrigues (1958; 1964), Proto-Tupi, the language in which originated the components of the Tupian stock, was formed around 5000 years ago, and the Tupi-guarani family some 2500 years later. The absolute dates also show that the Guarani inhabited Parana and Rio Grande do Sul at least 2000 years ago, and that the Tupinamba were in Piaui, Sao Paulo and Rio de Janeiro as early as 1800 years ago. Although published in the early 1970s, these absolute dates have not been considered by linguists in their analyses, or in their reproduction of Rodrigues' datings (Migliazza 1982; Greenberg 1987; 1992).
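A brief aside on the method, since these figures rest on it: the standard glottochronological formula (formalized in Swadesh 1971, listed in the references below) relates time depth to the share of retained basic vocabulary. The retention constant used in the worked example below is Swadesh's conventional value for the 100-word list; it is an assumption here, as the text does not state which parameters Rodrigues adopted.

% Standard glottochronological time-depth formula (after Swadesh 1971).
% t = divergence time in millennia; c = proportion of shared cognates
% on the basic-vocabulary test list; r = assumed retention rate per
% millennium (about 0.805 for the 100-word list -- an assumption here).
\[
  t = \frac{\log c}{2 \log r}
\]

Under these assumptions, two languages sharing about 11% cognates give t = log(0.11) / (2 log(0.805)), roughly 5.1 millennia - the order of magnitude of the Proto-Tupi date - while about 34% shared cognates gives roughly 2.5 millennia, matching the figure cited for the Tupi-guarani family.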
Several radiocarbon and thermoluminescent dates later than AD 1 are published for sites in the Amazon and Parana-Paraguay basins, Rio Grande do Sul, the Atlantic coast, and coastal rivers (Brochado 1973; 1984; Brochado & Lathrap 1980; Scatamacchia 1990). These are much older than was imagined by ethnographers since Martius, who envisaged a quick expansion, close to the arrival of Europeans, with the cultural uniformity of the Tupi materializing just before the breaking-up of Tupian groups towards the 16th century.
Although few compared with the number of sites, and unequally distributed in the regions occupied by the Tupi, these radiocarbon dates show that the expansion and differentiation of some peoples were not recent. They provide cause to place the expansion of the Tupiguarani family much earlier than 2500 years ago.
Three regions provide datings close to AD 1: Santa Maria, RS, about AD 150; Ivai river, PR, about AD 100; lower Tiete-SP, about AD 232; Sao Raimundo Nonato-PI, about AD 260; coast of Rio de Janeiro, about AD 300. Some of these datings are isolated; others are part of sequences which reach historic times. In regions far from the proposed centres of origin - deep southern Brazil, the northeast, coastal Rio de Janeiro - the dates attest to the antiquity of the expansions, and can be related to linguistic derivations. The few dates available for Argentina, Uruguay, Paraguay, and Bolivia are all later than the 10th century (Brochado 1984). In Peru and in neighbouring Brazilian regions, the pottery associated with the Kokama, Omagua, and Kokamiya still needs study in detail (Lathrap 1970; Myers 1990).
Other regions also yield dates close to the oldest: the Mogi-guacu river, about AD 400; the coast of Rio de Janeiro, about AD 440; Santa Maria-RS, about AD 475; the middle Ivai-PR, about AD 460 and about AD 70; the lower Tiete-SP, about AD 578, which may prove coexistence with the oldest dates. Dates closer to the present occur in several parts of eastern South America. On the southeastern and northeastern coast of Brazil we have: lower Tiete-SP, about AD 668; Curimatau-RN, about AD 800; coast of Rio de Janeiro, about AD 870; Cricare-ES, about AD 895; Guaratiba-RJ, about AD 980.
So the Tupian peoples were already spread over Brazil as early as 2000 years ago, in regions very distant from one another and from the proposed centres of origin, rendering obsolete Martius' account (repeated by many scholars) of a quick Tupian expansion shortly before the European arrival.
Many more archaeological studies have been conducted and dates obtained in southern Brazil than in Amazonia and other regions (data partially published: Brochado & Lathrap 1980; Brochado 1984; Scatamacchia 1981; 1991). The most recent research in Amazonia is yielding dates that reveal even earlier cultural phenomena - pottery, agriculture, chiefdoms - and demonstrate that some common Tupian elements are yet older.
Paraphrasing Manuela Carneiro da Cunha (1992: 11), we may say that we already know 'the extent of what we don't know' concerning Brazilian Native American (pre-)history.
Martius' hypothesis of 1838 has often been used by authors who provide neither archaeological nor historical linguistic evidence. Until the late 1950s it depended on historical evidence from the time of the European arrival onwards, and on linguistic evidence which did not verify derivations between languages. In that context, it is understandable that most researchers of Tupian peoples suggested a late expansion, at a period close to the 16th century. The dates now show that at least the Guarani and the Tupinamba were already settled in their historically known territories as early as 2000 years ago.
The corpus of all archaeological, linguistic and ethnographic information about the Tupi presents no evidence of a centre of origin outside South America, or in the 'Highlands', or below Parallel 16° South.
In the 'Lowlands', where occupation sequences are known, confronting the archaeological publications rules out Paraguay, southern Bolivia, Mato Grosso do Sul, Goias, and southern, southeastern and northeastern Brazil as a centre of origin. In the upper and main course of the Xingu, in the Araguaia, and in the upper and main course of the Tocantins, according to PRONAPABA's first investigations (Meggers et al. 1988: 288), no archaeological evidence identifies an origin there; the stratigraphical sequences instead provide clear evidence that the Tupian pottery did not evolve from previous pottery [Figure 1 omitted].
On the other hand, the Tupian archaeological evidence presents elements closely linked to the stratigraphical sequences of Central Amazonia [Figure 1 omitted], especially those classified in the Polychromatic Amazonian Tradition (Brochado 1984: 308; also Lathrap 1970; Brochado & Lathrap 1980; Roosevelt 1991a; 1991b: 98-125). Parallel to this, the linguistic data show the greatest concentration of families and Tupian languages south of the Amazon (Rodrigues 1964; 1986; Urban 1992), and traces of a very ancient linguistic connexion between the Proto-Tupian and Proto-Karib languages (Rodrigues 1985: 393-400). The largest concentration of Karib languages north of the Equator may also contribute to placing the origin of the Proto-Tupi in Amazonia (archaeological information about the Karib in Rouse 1986).
Within the huge Amazon region, a space in which the centre of origin of the Tupi may be located is bounded: on the north, by the right bank of the middle and lower Amazon; on the east, by the Tocantins; on the west, by the basins of the Madeira and lower-middle Guapore; on the south, by a line running from the middle Guapore (Parallel 12°30') as far as the Tocantins, close to the mouth of the Araguaia. These generic boundaries circumscribe a probable centre of origin somewhere within them.
The centre of origin may be in that region's western portion. The linguistic consensus sets it there, in the largest concentration of families (principally close to the Madeira-Guapore basin). The best archaeological model - complex, updated, complete, organizing more evidence - is Lathrap's and Brochado's, which points to the region by the confluence of the Madeira and Amazon rivers [Figure 2 omitted]. If Lathrap's hypothesis of the Proto-Tupi being pushed towards the south is right, an explanation follows as to why the centre of origin of pottery is far from the region where the linguistic families of the Tupi stock were formed.
Clarification of the expansion routes from that centre of origin depends on the relationship between archaeological remains and linguistic evidence for all the Tupi. It is very likely that a differentiation in pottery corresponds to each linguistic derivation, as happens between the Guarani and Tupinamba (Brochado 1984; Scatamacchia 1981; 1991), without losing the general features of what the 'pronapians' call 'Tupiguarani' pottery.
Historical information, especially after the profound demographic and cultural changes that took place after the arrival of the Europeans, cannot determine the expansion routes clearly. Menendez's (1981-2), Gallois' (1986) and Porro's (1992) studies demonstrate how the European presence changed territoriality in the Amazon region, influencing the mobility and spatial reallocation of several peoples; they also show the extinction of probable Tupi-speaking peoples. Historic research, as well as archaeological studies with a regional perspective, may also come to demonstrate changes in the spatial distribution of prehistoric peoples, explaining expansion and, whenever applicable, collapse.
Of the 41 Tupi peoples historically and archaeologically known, the most complete data are restricted to only 2, with much unknown about the material inventory of the other prehistoric peoples. We can make statements about the Guarani and Tupinamba based on empirical data, but no definitive evidence links other Tupi peoples to their prehistoric ancestors or determines the routes that took them to their historically known territories.
Of current models, Brochado's (1984) is the most complete; the only one that maps the regions where the cultural development of the Tupi was unlikely to occur, it thus delimits the most likely spaces in which expansion outside the Amazon region started. This model focuses on the Guarani and Tupinamba expansions, without encompassing the other 39 Tupi peoples [Figure 3 omitted].
The Tupinamba expanded from the lower Amazon, passing through its mouth towards the Brazilian coastline, from north to south as far as the Tropic of Capricorn. Parallel to this, other groups penetrated the interior, going upstream within the basins that flow into the Atlantic. There is no evidence in all the historically and archaeologically known Tupinamba territory of a relationship between the Tupinamba strata and those below, which proves that the Tupinamba pottery did not develop outside Amazonia.
The lack of systematic archaeological research between Rio Grande do Norte and Maranhao has led scholars to rely exclusively on the historical information systematized by Metraux (1927: 2-16) and Fernandes (1963: 33-57) about the flights of the Tupinamba towards Maranhao and Amazonas. Neither reports by 16th-century chroniclers such as Cardim (1939: 179) and Soares de Sousa (1987: 299-300) about the memory of territorial conquest by the ancestors of the Tupinamba, nor Abbeville's reports (1975: 208-9) about the flights caused by the Portuguese, are pertinent to the prehistoric expansion.
Information about the Guarani is not so problematic. Archaeologically, except for the frontier with other Tupi groups, in all the Guarani territory studied, south of Parallel 17°, there is no direct connection to evidence of earlier occupations. Linguistically, the Guarani language is closer to the Tupi-Guarani languages spoken in southern Bolivia, Paraguay and southern Brazil (except the Tupinamba). Most of these languages do not derive from the Guarani, which makes a south-north expansion unlikely. A region to be studied in detail stretches north of Parallel 17°, towards the Guapore and the western border of the Pantanal, in Bolivia.
Physical anthropological data, still to be incorporated into the issue, offer information to reconstruct parental populations and to help understand diversification, health/disease patterns, and ways of life (Salzano 1992). Some recent studies point to great genetic distances between Tupi groups in Amazonia and to the action of dispersive factors among them, which may indicate assimilated members of other populations (Salzano & Callegari-Jacques 1991). These studies can be conducted on skeletons of the same archaeological site, at the local level, or regionally.
The rhythm of the expansions did not develop in a void, isolated from other peoples. No studies deal with this issue. The Martius model lingers on in the argument that expansions were fast, without considering the everyday life of the Tupi associated with the expansive processes.
In research on Guarani subsistence practices (Noelli 1993b), in which I applied a broader integration of archaeological, linguistic, historical, ethnographic, ethnobiological and ecological data, I was able to conclude that the Tupi were highly sedentary. A consequence of the territorial expansion must have been demographic growth and the breaking-up of villages. Expansion must have been resisted by the peoples whose lands were claimed, in turn implying interethnic relationships, bellicose and friendly.
In parallel, the management of crops and plant-gathering directly influenced the rhythm of expansion. The Tupi transported their plants, introducing them to all the regions they settled; they also took up new vegetables. These processes required investment in research time and in preparing the environment, in transforming the primary forest into known and productive areas (Bale 1994). The phenological cycle of plants is another factor in the rhythm of expansions.
As a village could not occupy new lands without their prior preparation, it could not move into far-away territory. The expansion must have taken place not by leaps, but through the slow and continuous annexation of lands immediately adjacent to the occupied territories, as ethnobiological studies of tropical and subtropical peoples have been demonstrating.
The key issue that allows us to understand the variables conditioning the expansions is related to territoriality, with its social units marked by consanguineous relations and alliances, tekoha in Guarani (Noelli 1993b; Melia 1986). The corresponding Tupinamba term is tecoaba (VLB: 127); research on other Tupi groups is still open.
Tekoha is the territory that corresponds to a village, with its hunting and fishing grounds, its crops, its natural resources for gathering and raw materials, delimited by geographical elements and, under normal conditions, predominantly tapped by the group occupying these lands.
Archaeology and linguistics provide some evidence that these peoples remained in the same place, from which they slowly broke up. Several Guarani lands show a continuous occupation for over 1500 years, and Tupian lands for over 1000 years, in a permanence which may indicate a slower rhythm of movement. If Aryon Rodrigues' estimates are correct, several Tupian peoples have lived for at least 5000 years in the Guapore basin and adjacent regions.
Acknowledgements. Translation by Amilcar Mello D'Avila; drawings by Carlos Cesar Reis de Oliveira.
1 The term Tupi has been used wrongly to designate just the Tupinamba language. In many archaeological publications, all 40 non-Guarani peoples are grouped as if they were a single people called 'Tupi', overlooking their differentiations (see list of languages in Montserrat 1994: 98). The expression Tupi-Guarani, which defines one of the seven linguistic families of the Tupian stock, has also been wrongly used to designate a language.
2 Nimuendaju's map (1981) shows the historic location of the Tupi.
3 The Tupian Stock had not been linguistically defined in 1927-8; Metraux called it 'Tupi-guarani'.
4 Programa Nacional de Pesquisas Arqueologicas (National Programme of Archaeological Research), 1965-1970. Continued in the Legal Brazilian Amazonia since 1977 as PRONAPABA, Programa Nacional de Pesquisas Arqueologicas da Bacia Amazonica (cf. general analysis in Brochado 1984; Alves 1991; Noelli 1993b).
5 Now outdated (Moran 1990; Roosevelt 1991a; 1991b; 1992).
6 D'Orbigny called almost all Tupian peoples 'Guarani'.
7 'Kaingwa' is not a language, but an expression - 'those from the woods' - used to refer to Guarani speakers not integrated into the Jesuit Reducciones, or into colonial societies (Melia et al. 1987: 362).
8 Collections of Tupian ethnographic pottery, such as the one studied by Lima (1987), have not yet been systematically compared with archaeological collections.
ABBEVILLE, C. 1975. Historia da missao dos padres capuchinhos na ilha do Maranhao e terras circunvizinhas. Sao Paulo: Itatiaia/EDUSP.
ALVES, C. 1991. A ceramica pre-historica no Brasil: Avaliacao e Proposta, Clio (serie arqueologica) (Recife) 1(7): 9-88.
ANTHONY, D.W. 1990. Migration in archaeology: the baby and the bathwater, American Anthropologist 92: 895-914.
BALDUS, H. 1954. Bibliografia critica da etnologia brasileira. Sao Paulo: Comissao do IV Centenario da Cidade de Sao Paulo.
BALE, W. 1994. Footprints of the forest: Ka'apor ethnobotany - the historical ecology of plant utilization by an Amazonian people. New York (NY): Columbia University Press.
BERTONI, M.S. 1916. Influencia de la lengua Guarani en Sud-America y Antillas, Anales Cientificos Paraguayos (serie II) (Asuncion) 1: 1-120.
1922. La Civilizacion Guarani, parte I. Puerto Bertoni: Ex Sylvis.
BRANDAO, C.R. 1990. Os Guarani: indios do sul, religiao, resistencia e adaptacao, Estudos Avancados (Sao Paulo) 4(10): 53-90.
BROCHADO, J.P. 1973. Migraciones que difundieron la tradicion alfarera Tupiguarani, Relaciones (Buenos Aires) 7: 7-39.
1984. An ecological model of the spread of pottery and agriculture into eastern South America. Unpublished Ph.D dissertation, University of Illinois at Urbana-Champaign.
1989. A expansao dos Tupi e da ceramica da tradicao policromica amazonica, Dedalo (Sao Paulo) 27: 65-82.
1991. Um modelo de difusao da ceramica e da agricultura no leste da America do Sul, in Anais do I Simposio de Pre-historia do Nordeste Brasileiro, Clio [serie arqueologica) (Recife) 4: 85-8.
BROCHADO, J.P. et al. 1969. Arqueologia Brasileira em 1968. Belem: Museu Paraense Emilio Goeldi.
BROCHADO, J.P. & D. LATHRAP. 1980. Amazonia. ??details?(datil.).
BROCHADO, J.P. & G. MONTICELLI. 1994. Regras praticas na reconstrucao grafica da ceramica Guarani por comparacao com vasilhas inteiras, Estudos Ibero-Americanos (Porto Alegre) 20(2): 107-18.
BROCHADO, J.P., G. MONTICELLI & E. NEUMANN. 1990. Analogia etnografica na reconstrucao grafica das vasilhas Guarani arqueologicas, Veritas (Porto Alegre) 35(140): 727-43.
BROCHADO, J.P. & F.S. NOELLI. N.d. Relacoes entre as ceramicas Marajoara e Tupinamba. Unpublished manuscript.
CAMARA, J.M., Jr. 1979a. Introducao as linguas indigenas brasileiras. Rio de Janeiro: Ao Livro Tecnico.
1979b. Principios de linguistica geral. 6th edition. Rio de Janeiro: Padrao.
CARDIM, F. 1939. Tratado da terra e gente do Brasil. Sao Paulo: Cia Editora Nacional.
CHILDE, A. 1940. Etude philologique sur les noms du 'chien' de l'antiquite jusqu'a nos jours, Arquivos do Museu Nacional.
COSTA, A. 1934. Introducao a arqueologia brasileira. Sao Paulo: Cia Editora Nacional.
CUNHA, M.C. (ed.). 1992a. Historia dos indios no Brasil. Sao Paulo: FAPESP/SMC/Cia das Letras.
1992b. Introducao a uma historia indigena, in Cunha (ed.): 9-24.
DIAS, O. 1994-5. Consideracoes a respeito dos modelos de difusao da ceramica Tupiguarani no Brasil: texto apresentado na IX Reuniao Cientifica da Sociedade de Arqueologia Brasileira, 1993, Revista de Arqueologia (Sao Paulo) 8(2): 113-32.
D'ORBIGNY, A. 1944. El hombre americano considerado en sus aspectos fisiologicos y morales. Buenos Aires: Editorial Futuro.
DYEN, I. 1956. Language distribution and migration theory, Language 32: 611-26.
EDELWEISS, F. 1947. Tupis e Guaranis, estudos de etnonimia e linguistica. Salvador: Museu do Estado da Bahia.
EHRENREICH, P. 1891. Die Einteilung und Verbreitung der Volkerstamme Brasiliens nach dem gegenwartigen Stande unsrer Kenntnisse, Petermanns Mitteilungen 37: 81-91, 114-24.
EHRET, C. 1976. Linguistic evidence and its correlations with archaeology, World Archaeology 8(1): 5-18.
EVANS, C. 1967. Introducao: PRONAPA 1, Publicacoes Avulsas do Museu Paraense Emilio Goeldi (Belem) 6: 7-12.
FAUSTO, C. 1992. Fragmentos de historia e cultura Tupinamba. Da etnologia como instrumento critico de conhecimento etno-historico, in Cunha (ed.): 381-96.
FERNANDES, F. 1963. Organizacao social dos Tupinamba. 2nd edition. Sao Paulo: Difel.
FREITAS, A.A. de. 1914. Distribuicao geographica das tribus indigenas na epoca do descobrimento, Revista do Instituto Historico e Geographico de Sao Paulo (Sao Paulo) 19: 103-28.
GALLOIS, D.T. 1986. Migracao, guerra e comercio: os Waiapi na Guiana. Sao Paulo: FFLCH-USP.
GARCIA, R. 1922. Ethnographia indigena: diccionario historico, geographico, e ethnographico do Brasil 1: Introducao geral: 249-77. Rio de Janeiro: Imprensa Official.
GREENBERG, J. 1987. Language in the Americas. Stanford (CA): Stanford University Press.
GUERIOS, R.F.M. 1935. Novos rumos da Tupinologia, Revista do Circulo de Estudos Bandeirantes (Curitiba) 1(2).
HOWARD, G.D. 1947. Prehistoric ceramic styles of lowland South America, their distribution and history. New Haven (CT): Yale University. Publications in Anthropology 37.
1948. Northeast Argentina, in G.D. Howard & G.R. Willey, Lowland Argentine archaeology: 9-24. New Haven (CT): Yale University. Publications in Archaeology 39.
JENSEN, C.J.J. 1989. O desenvolvimento historico da lingua Wayampi. Campinas: Editora da UNICAMP.
KRAUSE, F. 1925. Beitrage zur Ethnographie des Araguaya-Xingu-Gebietes, Actes du XXIe Congres International des Americanistes (Goteborg): 67-79.
LARAIA, R.B. 1986. Tupi: Indios do Brasil atual. Sao Paulo: FFLCH-USP.
1988. O movimento constante do povoamento indigena no Brasil, Humanidades (Brasilia) 5: 104-9.
LA SALVIA, F. & J.P. BROCHADO 1986. Ceramica Guarani. Porto Alegre: Pozenato Arte & Cultura.
LATHRAP, D. 1970. The upper Amazon. London: Thames & Hudson.
LEMLE, M. 1971. Internal classification of the Tupi-guarani linguistic family, in D. Bendor-Samuel (ed.), Tupi Studies 1: 107-29. Norman (OK): Summer Institute of Linguistics.
LIMA, T.A. 1987. Ceramica indigena brasileira, in B. Ribeiro (ed.), Suma etnologica brasileira 2: 173-229. Petropolis: Vozes.
LINNE, S. 1925. The technique of South American ceramics. Goteborg: Goteborgs kungl. Vetenskaps-och Vitterhets-Samhalles Handlingar.
LOTHROP, S.K. 1932. Indians of the Parana Delta, Argentina, Annals of the New York Academy of Science 32: 77-232.
LOUKOTKA, C. 1929. Le seta, un nouveau dialecte tupi, Journal de la Societe des Americanistes (NS) 21: 373-98.
1935. Linguas indigenas do Brasil, Revista do Arquivo Municipal (Sao Paulo) 54: 147-74.
1950. Les langues de la famillie Tupi-guarani. Sao Paulo: Faculdade de Filosofia, Ciencias e Letras da Universidade de Sao Paulo. Boletim 16 de Etnografia e Linguas Tupiguarani.
MAGALHAES, E.D. 1993. O Tupi no litoral, Revista de Arqueologia (Sao Paulo) 7: 51-67.
MARTIUS, K.F.Ph. 1845. Como se deve escrever a historia do Brazil, Revista Trimensal de Historia e Geographia ou Jornal do Instituto Historico e Geographico Brazileiro (Rio de Janeiro) 6: 389-411.
1869. Beitrage zur Ethnographie und Sprachenkunde Sudamerika's zumal Brasiliens 1. Leipzig: Friederich Fischer.
MASON, J.A. 1950. The languages of South American Indians, in J. Steward (ed.), Handbook of South American Indians 6: 157-317. Washington (DC): Smithsonian Institution.
MEGGERS, B.J. 1951. A pre-columbian colonization of the Amazon, Archaeology 4(2): 110-14.
1954. Environmental limitation on the development of culture, American Anthropologist 56: 801-24.
1957. Environment and culture in the Amazon Basin: an appraisal of the theory of environmental determinism. Washington (DC): Pan American Union.
1963. Cultural development in Latin America: an interpretative overview, in Meggers & Evans (ed.): 131-45.
1972. Prehistoric America. Chicago (IL): Aldine.
1975. Application of the biological model of diversification to cultural distributions in tropical lowland South America, Biotropica 7: 141-61.
1976. Fluctuacion vegetal y adaptacion cultural prehistorica en Amazonia: algunas correlaciones tentativas, Relaciones (NS) (Buenos Aires) 10: 11-26.
1982. Archaeological and ethnographic evidence compatible with the model of forest fragmentation, in Prance (ed.): 483-96.
1985. Advances in Brazilian archaeology, 1935-1985, American Antiquity 50(2): 364-73.
MEGGERS, B.J. & C. EVANS. 1957. Archaeological investigations at the mouth of the Amazon. Washington (DC): Smithsonian Institution.
(Ed.). 1963. Aboriginal cultural development in Latin America: an interpretative review. Washington (DC): Smithsonian Institution.
1973. A reconstituicao da pre-historia amazonica: algumas consideracoes teoricas, O Museu Goeldi no Ano do Sesquicentenario (Belem) Publicacoes Avulsas 20: 51-69.
1978. Lowland South America and the Antilles, in J.D. Jennings (ed.), Ancient native Americans: 543-91. San Francisco (CA): W.H. Freeman.
MEGGERS, B.J., O. DIAS, E.TH. MILLER & C. PEROTA. 1988. Implications of archaeological distributions in Amazonia, Anais da Academia Brasileira de Ciencias: 275-94.
MELIA, B. 1986. El 'modo de ser' Guarani en la primera documentacion jesuitica (1594-1639), in B. Melia, El Guarani conquistado y reducido: 93-116. Asuncion: CEAUC.
MELIA, B., M.V.A. SAUL & V. MURARO. 1987. O Guarani: uma bibliografia etnologica. Santo Angelo: Fundacao Nacional pro-Memoria/FUNDAMES.
MENENDEZ, M. 1981-2. Uma contribuicao para etno-historia da area Tapajos-Madeira, Revista do Museu Paulista (NS) (Sao Paulo) 28: 289-388.
METRAUX, A. 1927. Migrations historiques des Tupi-guarani, Journal de la Societe de Americanistes (NS) 19: 1-45.
1928. La civilisation materielle des tribus Tupi-guarani. Paris: Librairie Orientaliste.
1948a. The Guarani, in Steward (ed.): 69-94.
1948b. The Tupinamba, in Steward (ed.): 95-133.
MIGLIAZZA, E. 1982. Linguistic prehistory and the refuge model in Amazonia, in Prance (ed.): 497-519.
MONTSERRAT, R.M.F. 1994. Linguas indigenas no Brasil contemporaneo, in L.D.B. Grupioni (ed.), Indios no Brasil: 93-104. Brasilia: Ministerio da Educacao e Desporto.
MORAN, E.F. 1990. A ecologia humana das populacoes da Amazonia. Petropolis: Vozes.
MYERS, T. 1990. Sarayacu: ethnohistorical and archaeological investigations of a nineteenth-century Franciscan mission in the Peruvian Montana. Lincoln (NE): University of Nebraska Press.
NETTO, L. 1885. Investigacoes sobre archeologia brazileira, Archivos do Museu Nacional do Rio de Janeiro (Rio de Janeiro) 6: 257-554.
NEVES, W. (ed.). 1991. Origens, adaptacoes e diversidade biologica do homem nativo da Amazonia. Belem: SCT/CNPq/MPEG.
NIMUENDAJU, C.U. 1981. Mapa etno-historico de Curt Nimuendaju. Rio de Janeiro: IBGE/Fundacao Nacional Pro-Memoria.
NOELLI, F.S. 1993a. Por uma revisao da 'busca da terra sem mal' dos Tupi. Boletim da ABA (Florianopolis) 20 (Dezembro): 18.
1993b. Sem Tekoha nao ha Teko (em busca de um modelo etnoarqueologico da subsistencia e da aldeia Guarani aplicado a uma area de dominio no delta do Jacui-RS). Porto Alegre, Dissertacao (Mestrado), IFCH-PUCRS.
1994. Por uma revisao das hipoteses sobre o centro de origem e rotas de expansao pre-historicas dos Tupi, Estudos Ibero-Americanos (Porto Alegre) 20(1): 107-35.
N.d. A fossilizacao de uma vista academica: o desenvolvimento e a manutencao da producao cientifica de Betty J. Meggers (1948-1993). MS.
PEIXOTO, J.L. 1995. A ocupacao do Tupiguarani na borda oeste do Pantanal Sul-matogrossense: macico do Urucu. Porto Alegre, Dissertacao (Mestrado), IFCH-PUCRS.
PORRO, A. 1992. As cronicas do rio amazonas: notas etnohistoricas sobre as antigas populacoes indigenas da amazonia. Petropolis: Vozes.
PRANCE, G.T. (ed.). 1982. Biological diversification in the tropics. New York (NY): Columbia University Press.
PRONAPA [Programa Nacional de Pesquisas Arqueologicas]. 1970. Brazilian archeology in 1968: an interim report on the National Program of Archeology Research - PRONAPA, American Antiquity 35(1): 1-23.
RENFREW, C. 1987. Archaeology and language: the puzzle of Indo-European origins. London: Jonathan Cape.
RIVET, P. 1924. Langues Americaines III: langues de l'Amerique du sud et des Antilles, in A. Meillet & M. Cohen (ed.), Les langues du monde: 639-717. Paris: Societe de Linguistique de Paris. Collection Linguistique 16.
RODRIGUES, A.D. 1945. Fonetica historica Tupi-guarani: diferencas foneticas entre o Tupi e o Guarani, Arquivos do Museu Paranaense (Curitiba) 4: 333-54.
1950. A nomenclatura na familia Tupi-guarani, Boletin de Filologia (Montevideo) 6(43-44-45): 98-104.
1958. Classification of Tupi-Guarani, International Journal of American Linguistics 24: 231-4.
1963. Os estudos de linguistica indigena no Brasil, Revista de Antropologia (Sao Paulo) 11(1-2): 9-22.
1964. A classificacao do tronco linguistico Tupi, Revista de Antropologia (Sao Paulo) 12: 99-104.
1984-5. Relacoes internas na Familia linguistica Tupi-guarani, Revista de Antropologia (Sao Paulo) 27-28: 33-53.
1985. Evidences for Tupi-Carib relationship, in H. Klein & L. Stark (ed.), South American Indian languages: retrospect and prospect: 371-404. Austin (TX): University of Texas Press.
1986. Linguas brasileiras. Sao Paulo: Loyola.
ROOSEVELT, A.C. 1991a. Determinismo ecologico na interpretacao do desenvolvimento social indigena da Amazonia, in Neves (ed.): 103-41.
1991b. Moundbuilders of the Amazon. New York (NY): Academic Press.
1992. Arqueologia Amazonica, in Cunha (ed.) 1992: 53-86.
ROUSE, I. 1986. Migrations in prehistory. New Haven (CT): Yale University Press.
SALZANO, F.M. 1992. O velho e o novo: antropologia fisica e historia indigena, in Cunha (ed.) 1992a: 27-36.
SALZANO, F.M. & S.M. CALLEGARI-JACQUES. 1991. O indio da Amazonia: uma abordagem microevolucionaria, in Neves (ed.): 39-53.
SANTOS, C.A. 1991. Rotas de migracao Tupiguarani - analise das hipoteses. Recife, Dissertacao (Mestrado), CFCH-UFPE.
1992. Mobilidade espaco-temporal da Tradicao Tupiguarani: consideracoes linguisticas e arqueologicas, Clio (serie arqueologica) (Recife) 1(8): 89-130.
SCATAMACCHIA, M.C.M. 1981. Tentativa de caracterizacao da tradicao Tupiguarani. Sao Paulo, Dissertacao (Mestrado), FFLCH-USP.
1991. Tradicao policromica no leste da America do Sul evidenciada pela ocupacao Guarani e Tupinamba: fontes arqueologicas e etno-historicas. Sao Paulo, Tese (Doutorado), FFLCH-USP.
SCHMIDT, W. 1913. Kulturkreise und Kulturschichten in Sudamerika. Zeitschrift fur Ethnologie 45: 1014-1130.
SCHMITZ, P.I. 1985. O Guarani no Rio Grande do Sul, Boletim do Marsul (Taquara) 2: 5-42.
1991. Migrantes da Amazonia: a tradicao Tupiguarani. Arqueologia do RGS, Brasil - Documentos (Sao Leopoldo) 5: 31-66.
SILVA, F.A. & B.J. MEGGERS. 1963. Cultural development in Brazil, in Meggers & Evans (ed.): 119-29.
SOARES DE SOUSA, G. 1987. Tratado descritivo do Brasil em 1587. Sao Paulo: Cia Editora Nacional.
STELLA, J.B. 1928. As linguas indigenas da America, Revista do Instituto Historico e Geographico de Sao Paulo (Sao Paulo) 26: 5-172.
STEWARD, J. (ed.). 1948. Handbook of South American Indians 3. Washington (DC): Smithsonian Institution.
SUSNIK, B. 1975. Dispersion tupi-guarani prehistorica: ensayo analitico. Asuncion: Museo Etrnografico 'Andres Barbero'.
SWADESH, M. 1971. Glottochronology, in M. Fried (ed.), Readings in anthropology: 384-403. 2nd edition. New York (NY): Thomas Crowell.
'Terminologia'. 1969. Terminologia arqueologica brasileira para a ceramica, parte II. CEPA. Curitiba: UFPR. Manuais de Arqueologia 1.
1976. Terminologia arqueologica brasileira para a ceramica (2nd edition), Cadernos de Arqueologia (Curitiba) 1(1): 119-48. Centro de Ensino e Pesquisas Arqueologicas.
TORRES, L.M. 1911. Los primitivos habitantes del delta del Parana. Buenos Aires: Imprenta de Coni Hermanos.
1934. Relaciones arqueologicas de los pueblos del Amazonas, Actas y Trabajos Cientificos del XXV Congreso Internacional de Americanistas, Buenos Aires 2: 191-3.
URBAN, G. 1992. A historia da cultura brasileira segundo as linguas nativas, in Cunha (ed.) 1992a: 87-102.
VLB. 1952. Vocabulario na lingua brasilica. 2nd edition. Revista por Carlos Drumond. Sao Paulo, Boletim 137, Etnografia e Tupi-guarani 23-FFCL-USP.
VON DEN STEINEN, K. 1886. Durch Zentral-Brasilien: Expedition zur Erforschung des Schingu im Jahre 1884. Leipzig: F.A. Brockhaus.
WILLEY, G. 1949. Ceramics, in J. Steward, Handbook of South American Indians 5: 139-204. Washington (DC): Smithsonian Institution.
WILLEY, G.R. & P. PHILLIPS. 1958. Method and theory in American archaeology. Chicago (IL): University of Chicago Press.
While the EEC did not result in political cooperation as some had hoped it would, it was certainly the most successful of the three European communities formed during the early days of the Cold War. As a result, the six member countries merged the other two European communities—the European Coal and Steel Community (ECSC) and the European Atomic Energy Community (EURATOM)—into a single European Community in 1967. The emphasis on inter-governmentalism became the model for further economic integration, such as the December 1991 formation of the European Union.
The foundation of the EEC was laid in the late 1940s and early 1950s, a time of profound Cold War tension between the United States and the Soviet Union. Wanting Europe to become responsible at least partially for its own defense, the United States proposed the creation of a West German army. This led the six countries of the ECSC (France, the FRG, Italy, Belgium, the Netherlands, and Luxembourg) to consider forming a community for defending Western Europe from advances of the Soviet Union. France was particularly alarmed about the prospects of German rearmament, and it vetoed the European Defense Community (EDC). One motivation for the subsequent EEC was the desire to integrate the West German economy into that of Western Europe, lengthening the odds of Germany going to war again on its own.
Following defeat of the EDC, ECSC members began discussing a common market during 1956–1957. EEC negotiations occurred in the midst of the Suez crisis. At the same time that British Prime Minister Anthony Eden telephoned French Premier Guy Mollet in Paris to notify him that the British had agreed to a cease-fire, Mollet and FRG Chancellor Konrad Adenauer were meeting to discuss the formation of a common market. Four months later, in March 1957, France, the FRG, the Netherlands, Luxembourg, Belgium, and Italy signed the Treaty of Rome, creating a new economic bloc in Europe.
The 1957 Treaty of Rome, the founding document of the EEC, created four new institutions designed to govern relations among the FRG, France, Italy, Luxembourg, the Netherlands, and Belgium. The four institutions were the Commission, the Council of Ministers, the Assembly, and the European Court of Justice. Although the Council of Ministers contained national representatives and was designed to act as the main coordinating body among the six EEC states, it was the Commission that quickly emerged as the most dynamic branch of the EEC structure. It could initiate new policy and also had the responsibility of ensuring that agreed-upon treaties were enforced. The nine commissioners were not representatives of their states and indeed took an oath of loyalty to the EEC. Under the leadership of its first president, Walter Hallstein, the Commission became an active force in European politics.
In January 1959, the EEC took the first step toward implementing a common tariff by reducing intracommunity tariffs by 10 percent and increasing quotas by 20 percent. However, the first true test of the Common Market involved negotiations over a European free trade area. The British launched this idea in an effort to lure the FRG and the Netherlands away from the EEC, which London opposed. In early 1959 the British invited the six non-EEC states of Austria, Denmark, Norway, Portugal, Sweden, and Switzerland to begin negotiations to establish a rival trade bloc, the European Free Trade Association.
The United States strongly supported the EEC. Washington hoped that it would anchor the FRG in Western Europe, strengthen Western Europe's ability to withstand communist subversion and Soviet pressure, and bring the EEC to stand with the United States in a strong transatlantic community.
In the 1957 Treaty of Rome, the EEC countries agreed to develop common approaches to such areas as commerce, transportation, fair competition in trade, monetary policy, and the coordination of macroeconomic policy. Although the Rome Treaty did not mention a common agricultural policy, this was the most successful area of cooperation among the EEC states. French President Charles de Gaulle was the strongest proponent of a Common Agricultural Policy (CAP), because France produced more food than it consumed. De Gaulle could not afford to offend the powerful French agricultural lobby by reducing subsidies to farmers. France sought to export its agricultural surplus; however, its subsidized products were not competitive internationally. France therefore needed either export markets with guaranteed high prices or generous export subsidies to bridge the gap between higher French prices and lower international prices. France could get both through the CAP: an EEC-wide market with guaranteed high prices and subsidies for exports outside the EEC. Thus, de Gaulle pursued the formation of a CAP even though the Treaty of Rome did not provide for such a policy.
The CAP ultimately set France on a collision course with the FRG and the United States. Not self-sufficient in agricultural production, the FRG therefore sought to import significant amounts of agricultural products at the lowest possible price. France wanted to sell its agricultural products to the FRG but was stymied by cheaper imports from other countries including the United States, which did not wish to be excluded from EEC markets.
The West German government finally acquiesced to a common agricultural policy even though it did not make economic sense for them to do so. The West Germans wanted further economic integration because of their policy of Westpolitik, linking their policies to the alliance with the United States. They also subordinated their economic interests to the larger geopolitical interests of further European economic integration.
Cold War politics again impinged on EEC development when Britain applied for membership in August 1961. Britain sought easy access to West European markets and could get it only by entering the EEC. Britain's application happened to coincide with U.S. President John F. Kennedy's Grand Design for transatlantic relations. This plan sought to mollify Europeans' resentment of America's preponderant power while strengthening the Western alliance's political cohesion. Thus, the United States wanted a strong EEC to emerge as part of a stronger Western Europe, which in turn would strengthen the Atlantic Alliance.
De Gaulle, however, had a radically different understanding of the European union and the transatlantic partnership. He envisioned a Europe based on intergovernmentalism rather than supranationalism, a Europe of the states rather than a federal Europe, and a Europe genuinely equal with the United States in NATO rather than militarily subservient to Washington. The British government agreed with de Gaulle's antipathy toward supranationalism, but it shared Washington's vision of the transatlantic relationship. Yet the United States viewed Britain's absence from the EEC as politically awkward. The Americans were thus pleased when the British government signaled in early 1961 its intention to apply for EEC membership.
In December 1962, Kennedy and British Prime Minister Harold Macmillan struck a deal that would provide U.S. missiles for Britain's supposedly independent nuclear force. De Gaulle saw this as further evidence of British subservience to the United States. At the time the missile program was announced, negotiations over British admission to the EEC were at a critical stage. The British had made many concessions but were unwilling to accept the principle of supranationalism or a CAP. In January 1963, de Gaulle abruptly announced at a press conference that France would veto the British application.
The political consequences of de Gaulle's action were profound. A week after the press conference, de Gaulle and Adenauer signed a treaty on Franco-German cooperation. The United States saw this as a rejection by de Gaulle of its Grand Design and of the Atlantic Alliance. De Gaulle's vision of France and Europe placed him on a collision course with Washington. Despite the difficulties between Paris and Washington, the customs union remained intact, and European integration remained on course.
The CAP provoked another crisis that was even more significant to further European integration. The so-called Empty-Chair Crisis began over EEC Commission proposals for a new financial arrangement for the CAP for the period after July 1965, when the existing system of national contributions would expire. In 1970, following completion of the third stage of the transition to the customs union, the EEC was supposed to acquire its "own resources," consisting of duties from agricultural and industrial imports, from which the CAP would be permanently funded. The Commission proposed bringing this budgetary authority forward to 1965. As this would result in the transfer of power from national parliaments to the Commission, de Gaulle opposed the Commission's proposal. Because the negotiations for the added budget authority for the Commission went past the deadline of 30 June 1965, de Gaulle's foreign minister, Maurice Couve de Murville, abruptly ended the meeting in the early hours of 1 July.
France then withdrew its representation from the Council of Ministers but pointedly continued to participate in routine Community business. In a September 1965 press conference, de Gaulle declared his refusal to accept policies that were to come into force in January 1966. He had two objections: on principle, he refused to countenance qualified-majority voting, which smacked of supranationalism; in practice, he feared the impact of qualified-majority voting on French agricultural and trade interests (under qualified-majority voting, a coalition of liberal member states could alter the CAP and thwart French efforts to protect agriculture in the General Agreement on Tariffs and Trade, or GATT). De Gaulle threatened to continue the boycott until member states agreed on a new financial regulation for the CAP, the Commission curbed its "political ambitions," and provisions for qualified-majority voting were dropped from the Treaty of Rome. The EEC Council of Ministers agreed to a member state's right to veto legislative proposals, which became known as the Luxembourg Compromise. With that, France agreed to take its seat again in the Council of Ministers.
Resolution of this crisis cleared the way for negotiations of a new financial arrangement for the CAP. As part of the deal, France agreed to a West German request that all remaining intra-EEC tariffs on industrial goods be abolished by July 1968, when the common external tariff would take effect. Thus, the customs union would come into being eighteen months ahead of schedule. In 1967, the institutions of the other two European Communities were folded into the EEC.
After 1967, these institutions were known as the European Community (EC). The combination of supranationalism and intergovernmentalism embodied in these institutions became the basis of the EC. So successful was the EC that Denmark, Ireland, and the United Kingdom decided to join it. This first enlargement, from six to nine members, took place in 1973. At the same time, the EC took on new tasks and introduced new social, regional, and environmental policies. In the early 1970s, EC leaders realized that they had to bring their economies into line with one another and that, in the end, what was needed was monetary union. In 1979, the member states of the EC introduced the European Monetary System to help stabilize exchange rates and to encourage member states to implement strict monetary policies.
Further enlargement of the EC occurred throughout the 1980s. In 1981 Greece joined, followed by Spain and Portugal in 1986. This enlargement placed further pressure for structural reform on the EC. Meanwhile, the political shape of Europe was changing with the fall of the Berlin Wall in 1989, the reunification of Germany in 1990, and the coming of democracy to the countries of Central and Eastern Europe. The countries of the EC signed a new treaty at Maastricht in December 1991. This treaty came into force on 1 November 1993. It added areas of intergovernmental cooperation to the existing EC system, creating the European Union (EU).
The EU expanded in 1995 to include three more countries: Austria, Finland, and Sweden. In 2004, the EU welcomed ten additional countries: Cyprus, the Czech Republic, Estonia, Hungary, Latvia, Lithuania, Malta, Poland, Slovakia, and Slovenia. This enlargement ended the traditional split separating the free world from the communist world. It also brought pressure on the EU to consider the application of Turkey, the first candidate lying largely outside Europe. This raised questions about how large the EU could become as well as where to draw its boundaries.
Current members of the EU are Austria, Belgium, Cyprus (Greek part), the Czech Republic, Denmark, Estonia, Finland, France, Germany, Greece, Hungary, Ireland, Italy, Latvia, Lithuania, Luxembourg, Malta, the Netherlands, Poland, Portugal, Slovakia, Slovenia, Spain, Sweden, and the United Kingdom.
http://www.historyandtheheadlines.abc-clio.com/ContentPages/ContentPage.aspx?entryId=1162118&currentSection=1130228&productid=4 | 13
102 | Overview | What consequences do countries face by being in debt? What obstacles do they face trying to get out? In this lesson, students investigate the mathematics of being in debt by exploring and analyzing debt-crisis data. Through collecting and crunching economic information, calculating debt-repayment, and exploring the concept of financial risk, students apply quantitative skills that will help them better understand the crisis of being in debt, abroad or at home.
Materials | Computers with Internet access, scientific or graphing calculators, spreadsheet software.
Warm-Up | Working in pairs or trios, students explore the infographic “It’s All Connected: An Overview of the Euro Crisis” to become familiar with the basic vocabulary, geography and mathematics of the European financial crisis.
As they work, they should jot down key vocabulary terms like debtor, creditor, imbalance, default, exposure, borrowing costs and interest rate. In their small groups, they should discuss and define these terms, using, as needed, resources like the Times Topics page on the European Debt Crisis or The Times’s Glossary of Financial and Business Terms.
Have students tally the total debt for each country featured in the infographic (scrolling over each node in the graph reveals a list of that country's debts) and sum these debts to determine how much each country owes; this figure will be used later.
After they have explored the graph, have students write down their ideas about why countries like Greece, Italy, Ireland and Portugal are considered “more worrisome,” according to the infographic, and why they owe so much money to other countries. In a brief whole-class discussion, have students share their ideas, and discuss and summarize the relevant vocabulary.
If students need further background or explanation about the debt crisis in Europe, they can read through the Magazine’s “Europe’s Financial Crisis, in Plain English” and look at the infographic “The Eurozone.”
Explain that a conventional solution to a debt crisis is to loan money to a country loaded with debt so that its creditors can be paid. Make a list of the potential benefits and hazards of such action, addressing questions like “Where does the money come from?”, “What are the potential consequences for the lenders?” and “What are the potential consequences for the borrowers?”
Related | The article “Central Banks Take Joint Action to Ease Debt Crisis” explains what the Federal Reserve and other banks have done to try to head off problems that could arise from the European debt crisis:
The Federal Reserve moved Wednesday with other major central banks to buttress the financial system by increasing the availability of dollars outside the United States, reflecting growing concern about the fallout of the European debt crisis.
The banks announced that they would reduce by roughly half the cost of an existing program under which banks in foreign countries can borrow dollars from their own central banks, which in turn get those dollars from the Fed. The banks also said that loans will be available until February 2013, extending a previous endpoint of August 2012.
“The purpose of these actions is to ease strains in financial markets and thereby mitigate the effects of such strains on the supply of credit to households and businesses and so help foster economic activity,” the banks said in a statement. The participants in addition to the Fed were the Bank of England, the European Central Bank, the Bank of Japan, the Bank of Canada and the Swiss National Bank.
Read the entire article with your class, using the questions below.
Questions | For discussion and reading comprehension:
- What actions are the banks taking to shore up the financial system?
- Why did the banks decide to take action now?
- What effect is the program expected to have, and how did some analysts criticize it?
- How does the article characterize the debt crisis?
- What questions do you still have about the debt crisis and its components?
From The Learning Network
- Nowhere to Go but Up? Analyzing Economic Measures in a Downturn
- It’s All Greek to Me: Understanding the Debt Crisis in Europe
- Domestic Downturns and Global Woes
- European Debt Crisis Tracker
- Times Topics: European Debt Crisis
- Interactive: Tracking Europe’s Debt Crisis
- Interactive: Debt Rising in Europe
Around the Web
Activity | Explain to students that they will now explore the basic mathematics of loans, interest rates and risk through a simplified model of loan repayment and risk calculation.
Assign each small group one country experiencing problems with debt, including Greece, France, Italy, Ireland, Portugal and Spain. Their task is to tally up that country’s total debt using the infographic “It’s All Connected: An Overview of the Euro Crisis” and explore the quantitative consequences of the country taking out a loan to repay its debt.
Provide students with the following specifications:
- In our simplified model of loan repayment, we assume that the loan will be repaid in one lump sum at the end of the specified term.
- The variables to consider are the amount of the loan (P), the length of the loan in years (t), and the yearly interest rate (r).
- We will use the formula A = P*e^(rt) to compute the amount of the lump-sum repayment of the loan. This is the formula for continuously compounded interest: A is the amount the country must repay; P is the principal, or the initial amount of the loan; r is the yearly interest rate; t is the number of years; and e is Euler’s number, which is approximately 2.718.
Once students have calculated the amount the country needs to borrow (P), they should then use the formula to explore different loan scenarios based on the values of r and t. Have students compute the value of the lump-sum repayment, A, for 5-year, 10-year, 15-year and 20-year loans (the t-values) at interest rates ranging from 2.0% to 6.0% in 0.5% increments (the r-values). Students can review how the formula works, and use an online interest calculator.
Have students compute the difference between the amount borrowed (P), and the amount repaid (A), and display these values in table form, with the different t-values heading the columns and the r-values heading the rows. In our simplified model, these differences represent the lender’s profit on the loan, or the price of the loan for the borrower.
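For teachers who want a quick way to check the students’ tables, here is a minimal Python sketch of the repayment grid. The $200 billion principal is a placeholder rather than a figure from the infographic; substitute each group’s tallied debt.

```python
import math

def lump_sum_repayment(principal, rate, years):
    """Continuously compounded repayment: A = P * e^(r*t)."""
    return principal * math.exp(rate * years)

P = 200e9  # hypothetical principal; substitute the group's tallied debt

terms = [5, 10, 15, 20]                       # t-values in years
rates = [r / 1000 for r in range(20, 65, 5)]  # 2.0% to 6.0% in 0.5% steps

# Lender's profit (A - P), with t-values as columns and r-values as rows
print("rate", *(f"{t} yr".rjust(10) for t in terms))
for r in rates:
    profits = [lump_sum_repayment(P, r, t) - P for t in terms]
    print(f"{r:.1%}", *(f"{x / 1e9:9.1f}B" for x in profits))
```

Printing profits in billions keeps the table readable; students can compare their hand-built tables against this output.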
Now have students explore the role of risk in lending by taking into consideration the likelihood their selected country will be able to repay the loan.
Here is an example to share with students:
Let us assume hypothetically that there is a 5% chance that our selected country will not be able to repay the loan. What this means mathematically is that if the country took out this loan 100 times, we would expect the loan to be repaid 95 times; the other 5 times, the country would pay back nothing, or “default” on the loan. The lender therefore assumes some financial risk in making the loan, that is, the risk that it won’t be paid back.
One way lenders handle this risk is by demanding that the borrower pay back more than they originally borrowed. Suppose a country takes out a loan for $200 million, and they are 95% likely to pay it back. Using the above reasoning, the lender expects to be re-paid, on average, only 95% of the $200 million. Thus, on a $200 million loan, the lender expects to be repaid (0.95)*$200 million = $190 million on average. (This is the basic idea of expected value.)
In order to avoid that $10 million loss, the lender will ask that the borrower pay back about $210.5 million for borrowing $200 million. The $210.5 million figure is a consequence of the following equation: 95% of $210.5 million is approximately $200 million. So if the borrower is 95% likely to repay $210.5 million for the loan, then on average the bank will get back (0.95) * $210.5 million ≈ $200 million. That is, it will break even.
Have students apply this elementary risk analysis to their selected country. First, have them choose a set of percentages that represent the country’s likelihood to repay the loan (say, 95%, 90%, 85% and 80%). Then, for each percentage, have them compute the new amount the country will have to pay back: for example, if the country needs to borrow P dollars and is 90% likely to repay the loan, then they must pay back P / (0.90) dollars. Use this new amount as P in the formula A = P*e^(rt) to compute the new lump-sum payment, A, for each percentage and interest rate.
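The same break-even reasoning can be scripted. The sketch below grosses up a hypothetical $200 million principal by each assumed repayment probability before compounding; the 4% rate and 10-year term are illustrative choices, not part of the lesson’s specifications.

```python
import math

def risk_adjusted_repayment(principal, repay_prob, rate, years):
    """Gross up the principal so the lender breaks even in expectation,
    then compound continuously: A = (P / p) * e^(r*t)."""
    return (principal / repay_prob) * math.exp(rate * years)

P = 200e6  # the $200 million loan from the worked example
for p in (0.95, 0.90, 0.85, 0.80):
    A = risk_adjusted_repayment(P, p, rate=0.04, years=10)  # illustrative r, t
    print(f"repayment probability {p:.0%}: lump sum = ${A / 1e6:,.1f} million")
```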
Once the data is computed and compiled, have students put together a small portfolio on their country, highlighting its current debt situation and the costs of various scenarios for the country to get out of debt. Have the students evaluate the various scenarios and suggest a course of action that the country itself, and its neighbors, should take.
Going Further | In more sophisticated models, loans are repaid in monthly, quarterly or yearly installments. Students can investigate the quantitative consequences of these models by using a more complicated monthly payment formula or loan calculator.
Financial agencies like Moody’s and Standard & Poor’s rate countries (and companies) on their ability to repay loans. A lower rating means a higher risk of default, and thus a higher price for taking out loans.
Students can take a look at the ratings that Moody’s and Standard & Poor’s have issued for countries around the world, and use Times resources like the country and territory pages to research what economic, political and historical reasons might play a role in the country’s rating. A model is the article “Ratings Firms Misread Signs of Greek Woes.”
The same fundamental ideas govern matters of personal finance, as well. Students can explore this concept by reading the article “College Graduates’ Debt Burden Grew, Yet Again, in 2010” and figuring out just how much a large student loan (or home mortgage, or car loan) will cost.
3. Uses basic and advanced procedures while performing the processes of computation.
6. Understands and applies basic and advanced concepts of functions and algebra.
9. Understands the general nature and uses of mathematics.
13. Analyzes and interprets data using common statistical procedures, charts, and graphs.
32. Understands the social, cultural, political, legal, and economic factors and issues that shape and impact the international business environment.
10. Understands basic concepts about international economics.
23. Understands the impact of significant political and nonpolitical developments on the United States and other nations.
11. Understands the patterns and networks of economic interdependence on Earth’s surface.
6. Understands the nature and uses of different forms of technology. | http://learning.blogs.nytimes.com/2011/11/30/crunching-the-numbers-exploring-the-math-of-the-debt-crisis/ | 13 |
15 | Macroeconomics/Money and Inflation
What is inflation?
Inflation can be defined as the increase in the overall level of prices. Whilst the price of individual goods or services may vary due to changes in supply and demand, production costs or technological progress, inflation refers to the increase in the price level as a whole or for a selection of goods and services (commonly referred to in economics as a basket of goods). The result of inflation is that the nominal amount of goods and services that a unit of currency can purchase (its purchasing power) declines over time.
Of course, there is, in theory, nothing which make inflation an inherent feature of our economies. In historical times there have been prolonged periods of deflation (where the overall level of prices falls) as well as hyperinflation and disinflation.
Example 1: The average price of a specific "basket" of goods and services today is 100. If one year later the same "basket" costs 300, then the currency of your country is worth only one-third as much as a year ago. As a result of inflation, all the prices of the same goods and services have increased.
Inflation is stated as a percentage. Assume, for example, that inflation is steady at 2% per year. Then an item that costs 100 now would cost 102 after one year, and in subsequent years: 104.04, 106.12, 108.24, 110.41, 112.62, 114.87, 117.17, 119.51, 121.90, and so on.
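A few lines of Python reproduce the arithmetic of Example 1, under the same assumption that each year's 2% applies to the previous year's price:

```python
price = 100.0
for year in range(1, 11):
    price *= 1.02  # steady 2% annual inflation
    print(f"after year {year}: {price:.2f}")
```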
Example 2: Labour unions always want more money to "make ends meet", and so do most other people. The price of goods and services must cover all costs, including all the salaries and wages involved. If the currency is worth less as a result of inflation, then all costs go up, and therefore all prices go up. But because prices go up, everybody wants a raise in wages or salary. Therefore the costs go up again, the prices go up some more, and so on. That is the unfortunate result of the inflation "spiral". | http://en.m.wikibooks.org/wiki/Macroeconomics/Money_and_Inflation | 13
33 | From Ohio History Central
The Panic of 1819 and the accompanying Banking Crisis of 1819 were economic crises in the United States of America principally caused by the end of years of warfare between France and Great Britain.
These two nations had fought each other intermittently since the 1680s and finally settled their differences in 1815. While the two nations warred, the United States had prospered, because both needed American industrial and agricultural products to sustain themselves during the conflict. Once the war ended, American products were no longer in such great demand. Both the French and the British downsized their respective militaries, and many former soldiers returned home to their peacetime occupations, cutting further into the demand for American goods overseas.
Because United States goods, especially agricultural products, had been in high demand in Europe during the various British-French conflicts, Americans purchased Western land at an extravagant rate. In 1815, Americans purchased roughly one million acres of land from the federal government; by 1819, the figure had skyrocketed to 3.5 million acres. Many Americans could not afford to purchase the land outright, so the federal government allowed them to buy the land on credit. As the economy ground to a halt in 1819, many Americans did not have the money to pay off their loans. The Bank of the United States, as well as state and private banks, began recalling loans and demanding immediate payment. The banks' actions resulted in the Banking Crisis of 1819 and helped lead to the Panic of 1819. The federal government tried to alleviate some of the suffering with the Land Act of 1820 and the Relief Act of 1821, but many farmers, Ohioans included, lost everything.
As a result of the Bank of the United States' actions, money became scarce, making it even more difficult for people to pay their debts. Several states, including Maryland and Ohio, implemented taxes on the Bank of the United States. These states hoped that, by taxing the banks, money would then enter the grasp of state governments. The state governments could then make loans to their citizens, thus relieving the money shortage. In 1819, the case of McCulloch v. Maryland reached the United States Supreme Court. Maryland had created a tax on the Bank of the United States' branch in Baltimore, Maryland. Although the federal government had the power to tax state and private banks, the federal government contended that states could not tax the Bank of the United States. The Supreme Court agreed with the federal government's position, contending that the federal government and its institutions were superior to the state governments. Chief Justice John Marshall believed that "The power to tax is the power to destroy." In other words, if the states could tax the federal government, the states had the power to destroy the federal government.
Ohio implemented its own tax against the Bank of the United States in 1819. In 1819, there were two branches of the Bank of the United States in Ohio -- one at Cincinnati and the other at Chillicothe. The tax law authorized the State of Ohio to seize fifty thousand dollars from each branch. On September 17, 1819, the Ohio Auditor, Ralph Osborn, authorized the seizure of 100,000 dollars from the Chillicothe branch. The tax agents actually seized 120,000 dollars from the bank. Osborn promptly returned the extra twenty thousand dollars.
The Bank of the United States sued Osborn for the return of the seized 100,000 dollars. The federal government contended that Osborn had violated a court order prohibiting him from taxing the Bank of the United States. Osborn claimed that he was not properly served with the court order. The federal circuit court ruled in favor of the Bank of the United States, and federal marshals immediately seized 98,000 dollars from the Ohio treasury. Osborn had paid his tax agents two thousand dollars for collecting the tax, and this money still remained in dispute. In 1824, the case reached the United States Supreme Court. In Osborn v. Bank of the United States, the Supreme Court ruled in favor of the Bank of the United States. Ohio returned the two thousand dollars still in dispute.
The Panic of 1819 and the Banking Crisis left many Ohioans destitute. Thousands of people lost their land due to their inability to pay off their mortgages. United States factory owners also had a difficult time competing with earlier-established factories in Europe. Many American people could not afford the factories' goods due to the lack of money in circulation. The United States did not fully recover from the Banking Crisis and the Panic of 1819 until the mid 1820s. These economic problems contributed immensely to the rise of Andrew Jackson. Many Americans viewed Jackson as one of them. He argued against the Bank of the United States, a message many Americans and Ohioans wanted to hear.
http://ohiohistorycentral.org/w/Panic_of_1819?rec=535&nm=Panic-of-1819 | 13
27 | "Students should learn about alternatives to the market system, such as traditional and command economies . . . (they) should study the strengths and weaknesses of each society and its values regarding the objectives of an economic system."
Understanding the core values and assumptions of other economies, as well as our own, contributes to the goal of developing cultural literacy. It also stimulates the imagination and fosters experimentation, helping students envision how an economy could be organized.
Economic systems reflect the values, assumptions and goals of a particular culture. Subsistence economies, which prevail in the more remote and less industrialized areas of the world, place much value on ecology and living in harmony within the natural limits of their environment. Capitalist and Socialist economies both share the goal of generating material wealth but differ in their approach. Capitalist economies emphasize individual freedom while Socialist economies emphasize social equality. The Buddhist economic system, as described by E.F. Schumacher and lived by some Eastern countries, is centered on the goal of human fulfillment and the development of character.
This lesson introduces each of these economies and then asks students to develop an economy based upon their own shared values and priorities.
Brief supplementary readings are provided on subsistence, capitalist, socialist, and Buddhist economies. Students are asked to work in cooperative groups to study and teach each other about these different systems. They are then asked to work individually to imagine what a typical day, and then a special occasion, might be like for a person living in an economy different from their own. Finally, they are asked to compare this with the way they spend their own time.
The Island Game gives students the opportunity to create their own economy based upon what they value and what they view the goal of their economy should be. All of the small groups are asked to develop the basic outline of their economy and then each group is given a problem scenario to solve. All groups report to the whole class how they have chosen to deal with their problem. Students can then be tested on their understanding of basic economic concepts using "The Evaluation of Your Island Economy."
BACKGROUND FOR THE TEACHER
Since the end of the Cold War, there has been a subtle and often unchallenged assumption that in the battle between capitalism and socialism, capitalism has won. The question most publicly debated is how to integrate formerly socialist economies (e.g., Eastern Europe) into a capitalist world economy. This way of framing recent history negates the positive contributions of socialist values and institutions. It also fails to question the ultimately materialistic goals of both capitalism and socialism.
Subsistence economies, far from being "primitive" and outdated, have much to teach us about living in balance with our environment. Subsistence farmers all over the world rotate their crops, lay fields fallow periodically, and intercrop nitrogen rich and nitrogen poor plants so that their soils will remain in balance and productive for many generations to come. This is in contrast to the one crop (monoculture) farming, practiced by agribusiness, where huge amounts of chemical fertilizers and pesticides are required for maintenance, leading eventually to the decay and disappearance of the fertile topsoils. The awareness that human beings are inter-dependent with, and not independent of, the rest of nature is an important contribution of subsistence societies to the modern world.
Adam Smith, the"founder" of the capitalist political economy, has much to contribute to the discussion of contemporary global issues. Smith believed that humans were by nature interested primarily in their own gain and hypothesized that the common good could be attained if everyone sought what was best for her/him individually.
However, he opposed the idea of monopoly and would probably not approve of the modern day corporation because people are liable to become more corrupt when they are in charge of more than their own money and property. Adam Smith believed strongly in the freedom of individuals to act in their own interest, but he would not have extended those same rights to the corporation as our society now does.
Karl Marx, the "founder" of communism felt that the capitalist system only exaggerated the injustice of earlier feudal society. Those who profited most from the production of goods were those who owned the means of production. In order to achieve more equality among peoples and avoid creating one class of people that worked to create goods and another class of people that profited from the sale of these goods, he suggested that there be no private property: that all property be owned equally by the members of society. Karl Marx would look at the situation in Ethiopia where the country is exporting grain while its people are starving and say that it is a result of the fact that the farmers that grow the grain do not own the property and so cannot decide what will be done with it.
E.F. Schumacher wrote a book in 1973 called "Small is Beautiful: Economics as if People Mattered." In his chapter "Buddhist Economics" he describes an economic system that is concerned primarily with an individual's right and ability to live a full and meaningful life. He makes a distinction between mechanization that enhances a person's skill and power and that which degrades a person to be a slave of the machine and perform dull and repetitious movements. He says the goal of a Buddhist economy is to maximize well-being with a minimum of consumption. This economy would choose to use its renewable resources so as not to drain its capital, the non-renewable resources. Simplicity and non-violence are the main goals of the economy. The Buddhist economy is very much like the simple living advocated by Thoreau.
The goal of this lesson is not to convince students that one system is superior to the rest, but to give them a wider range of choice in putting together the elements of their own ideal economy.
QUESTIONS TO EXPLORE:
What are the assumptions about human nature of each of the economic systems?
What are the core values of each?
Since the world economy is now heavily based on the market system, why do we need to understand any economy other than our own? | http://www1.umn.edu/humanrts/edumat/sustecon/lessons/lesson2.html | 13 |
15 | Huey Long was elected Governor of Louisiana in 1928 by the largest margin in the state’s history. In the face of entrenched opposition from the old guard, he launched an unprecedented program to build the state’s infrastructure and provide education and economic opportunity to the masses. After a failed attempt by his opponents to remove him from office, Huey consolidated his power in the state and became known as the “Kingfish.”
Politics in 1920s Louisiana was a dirty business dominated by influence peddling and cronyism. Traditionally, the governor marshaled support by giving out state jobs and lucrative contracts to supporters. Unsalaried, part-time legislators received jobs and cash for doing the bidding of the corporations.
Huey employed many of his predecessors’ tactics to get his programs passed; however, he never received the corporate and media support that the “Old Regular” politicians enjoyed.
Upon his election, Huey transformed the state bureaucracy, installing supporters in every level of government and often placing a premium on competence over cronyism. He cultivated loyalty by giving people a chance to work in his administration, and it soon became common practice for average citizens to approach him for a job, college scholarship, or any other type of assistance.
Huey immediately pushed a number of bills through the legislature to fulfill his campaign promises, including a free textbook program for schoolchildren, night courses for adult literacy, and piping natural gas to New Orleans. He also launched a massive building program of roads, bridges, hospitals, and educational institutions.
Huey's bills met stiff opposition from many legislators and the state’s newspapers, which were financed by the state’s business interests, but Huey used wily and persuasive methods (see "Long's Political Methods") to win passage of his bills. Huey was in a hurry to get things done and passed scores of laws that enabled him to enact his programs. A legal genius, Huey used the law to his advantage without breaking it. Opponents accused Long's administration of graft and overspending, when in fact he ran a fiscally tight ship. Louisiana had the third-lowest cost of government in the nation while providing unprecedented services to its people.
Courtesy of Louisiana State Archives
As Governor, Huey became an active promoter of Louisiana State University. He expanded the campus, tripled enrollment, and built LSU into one of the best schools in the South and the eleventh largest state university in the country. Huey lowered tuition and instituted scholarship programs that enabled poor students to attend. He also established the LSU medical school to meet the state's desperate need for new doctors. He frequently attended LSU football games, giving locker room pep talks to players and advice to coaches, and he even composed the LSU fight song, “Touchdown for LSU,” which is still played before every football game.
The public soon began to see the tangible results of a massive building program to modernize Louisiana. As the nation plunged into the Great Depression after the stock market crash of 1929, thousands of Louisianians were at work building the state’s new infrastructure. Louisiana employed 22,000 men just to build the roads — ten percent of the nation's highway workers. With greater access to transportation, education and healthcare, the quality of life in Louisiana was on the upswing while the rest of the nation declined.
Huey became known as “the Kingfish” in Louisiana, after a character in the radio show “Amos ‘n’ Andy” who headed a fraternal order called the “Mystic Knights of the Sea.” After winning his Senate seat, Huey explained his nickname by saying, “I'm a small fish here in Washington, but I'm ‘the Kingfish’ to the folks down in Louisiana.”
To finance these improvements, Huey restructured the tax system, shifting the burden from the poor to large businesses and the state’s wealthiest citizens (see "How Did Long Pay for His Programs?" below). Huey taxed oil operators to finance his free textbook program, provoking the wrath of Standard Oil, which launched an unsuccessful attempt to remove him from office.
When opponents blocked Huey’s bills in the 1930 legislative session, he responded by running for the U.S. Senate as a referendum on his programs. After his commanding victory, Huey pursued his agenda with renewed strength and formed an uneasy alliance with the “Old Regulars” and their chief, New Orleans Mayor T. Semmes Walmsley (nicknamed “Turkey Head” Walmsley by Huey). The alliance guaranteed support for Long’s programs and candidates in exchange for major structural improvements in New Orleans.
According to historian T. Harry Williams, "Louisiana was known as a state that levied remarkably few taxes ... not enough to support the kind of program Huey envisioned. The most lucrative one, the property tax, bore more heavily on the taxpayer of average or below-average means."
Long's ambitious road-building program was funded by voter-approved bond measures backed by a gasoline tax. His education programs were funded by increasing the severance tax on natural resources extracted from the state, assessed by quantity, which increased state revenue, particularly from the oil industry. The funds for hospitals and other institutions came from taxing carbon black at one-half cent per pound.
Conversely, Huey slashed personal property taxes and fees, shifting the burden of government financing from the public to industry. Louisiana's total government operating costs (state and local) were $41.97 per capita - third-lowest among the 24 states that kept such records. During Long's governorship, taxes rose 2.2% compared with a national average of 4.7%.
* Some of Long's programs were completed by his successor, Gov. O.K. Allen.
All I care is what the boys at the fork of the creek think of me.”— Huey Long
Most state employees who received a job from Long were expected to contribute to his campaign fund, which was kept in a locked “deduct box” at his Roosevelt Hotel headquarters in New Orleans.
Without a base of wealthy political contributors, Huey reasoned that this was an appropriate source of funds for his political activities. He refused to take the usual bribes offered by business in exchange for their support, and he was frequently in need of cash to print circulars and travel the state to advocate for his programs and combat negative press.
According to historian T. Harry Williams, Long collected between $50,000 and $75,000 each election cycle from state employees, contrary to exaggerated reports that he collected a million dollars per year.
Few employees complained about the deducts, because jobs were scarce. They knew they would lose their jobs if Long lost his.
Huey did not personally enrich himself with these funds and had surprisingly little money to his name when he was killed. The deduct box was never found and is believed to have been stolen by one of his associates.
Huey Long shocked the political establishment by throwing the aristocracy out of power and building a mightier political machine than the one he toppled. Conservatives called Long a ruthless, dictatorial and corrupt demagogue, and they relentlessly opposed all of his reforms.
To fulfill his mandate, Long mastered the patronage system his opponents had created, and he out-politicked them at every turn. He fired and hired state employees at will, packed local governing boards with supporters, browbeat legislators to vote with him (or bribed them with jobs), passed scores of laws in rapid succession, reduced the powers of city governments that opposed him, and publicly ridiculed the old guard for their reactionary outrage.
With each victory, he relished humiliating the "pie eating politicians" and reveled in his role as the people's champion. "Everything I did, I've had to do with one hand because I've had to fight with the other hand," he said.
Read more quotes on Huey Long's political methods. | http://www.hueylong.com/life-times/governor.php | 13 |
23 | The framing effect, one of the cognitive biases, describes how presenting the same option in different formats can alter people's decision making and choice behavior. Specifically, individuals tend to select inconsistent choices, depending on whether the question is framed to concentrate on losses or gains (Plous, 1993).
A set of experiments on framing performed by psychologists Amos Tversky and Daniel Kahneman (1981) indicated that different phrasing affected participants' responses to a question about a disease prevention strategy. The first problem given to participants offered two alternative solutions for 600 people affected by a hypothetical deadly disease:
- option A saves 200 people's lives
- option B has a one-third chance of saving all 600 people and a two-thirds chance of saving no one
These decisions have the same expected value of 200 lives saved, but option B is risky. 72% of participants chose option A, whereas only 28% of participants chose option B.
The second problem, given to another group of participants, offered the same scenario with the same statistics, but described differently:
- if option C is taken, then 400 people die
- if option D is taken, then there is a one-third chance that no one will die and a two-thirds chance that all 600 will die
However, in this group, 78% of participants chose option D (equivalent to option B), whereas only 22% of participants chose option C (equivalent to option A).
The discrepancy in choice between these parallel options is in essence the framing effect; the two groups favored different options because the options were expressed employing different language. In the first problem, a positive frame emphasizes lives gained; in the second, a negative frame emphasizes lives lost. The alterations in the language underlie the differences in the preferences.
Framing impacts people because individuals perceive losses and gains differently, as illustrated in prospect theory (Tversky & Kahneman, 1981). The value function, founded in prospect theory, illustrates an important underlying factor to the framing effect: a loss is more devastating than the equivalent gain is gratifying (Tversky & Kahneman, 1981). Thus, people tend to avoid risk when a positive frame is presented but seek risks when a negative frame is presented (Tversky & Kahneman, 1981). Additionally, the value function takes on a sigmoid shape, which indicates that gains for smaller values are psychologically larger than equivalent increases for larger quantities (Tversky & Kahneman, 1981). Another important factor contributing to framing is certainty effect and pseudocertainty effect in which a sure gain is favored to a probabilistic gain (Clark, 2009), but a probabilistic loss is preferred to a definite loss (Tversky & Kahneman, 1981). For example, in Tversky and Kahneman's (1981) experiment, in the first problem, treatment A, which saved a sure 200 people, was favored due to the certainty effect.
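To make the shape of the value function concrete, here is a small illustrative sketch. The exponent 0.88 and the loss-aversion coefficient 2.25 are the commonly cited estimates from Tversky and Kahneman's later (1992) work, assumed here for illustration rather than taken from the studies above.

```python
def value(x, alpha=0.88, lam=2.25):
    """Prospect-theory value function: concave for gains,
    convex and steeper (loss-averse) for losses."""
    if x >= 0:
        return x ** alpha
    return -lam * (-x) ** alpha

# A loss looms larger than an equal gain:
print(round(value(100), 1), round(value(-100), 1))  # ~57.5 vs ~-129.5
```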
Frame analysis has been a significant part of scholarly work on topics like social movements and political opinion formation in both sociology and political science. Political options can be framed in a way that leads voters to prefer a certain alternative. For instance, people prefer an economic agenda when it is presented in terms of high employment rates, but they oppose it when the complementary unemployment rates are accentuated (Druckman, 2001b). Additionally, Rugg (as cited in Plous, 1993) exhibited a framing effect in a poll in which the same option was expressed differently. Rugg discovered that 62% of people disagreed with allowing public condemnation of democracy, but only 46% of people agreed to forbidding public condemnation. The framing effect accounts for the 16% disparity in these effectively congruent decisions (as cited in Plous, 1993). Therefore, framing could have negative social and political implications. Druckman (2001b) also conveys that these effects could discredit public opinion, rendering polls dubious sources of information.
Certain types of payment options may also be able to employ the framing effect to encourage people to pay at an earlier date. For example, PhD students demonstrated susceptibility to framing when reminded to pay a mandatory registration fee (Gätcher, Orzen, Renner, & Stamer, in press). Specifically, Gätcher et al. (in press) reported that 93% of PhD students registered early when presented a loss frame, described as a penalty fee, as opposed to 67% of students registering early when presented a positive frame in the form of a discount.
It has been argued that pretrial detention may increase a defendant's willingness to accept a plea bargain, since imprisonment, rather than freedom, will be his baseline, and pleading guilty will be viewed as an event that will cause his earlier release rather than as an event that will put him in prison.
One of the dangers of framing effects is that, in reality, people are often only provided options within the context of one of the two frames (Druckman, 2001a). Furthermore, framing effects may persist even when monetary incentives are provided (Tversky & Kahneman, 1981). Thus, individuals' decisions may be malleable through manipulation with the framing effect, and the consequences of framing effects may be inescapable. However, Druckman (2001b) conveys that the framing effects and their societal implications may be emphasized more than they should be. This notion is reflected, as he demonstrated that the effects of framing can be reduced, or even eliminated, if ample, credible information is provided to people (Druckman, 2001b).
- Druckman, J. N. (2001a). Evaluating framing effects. Journal of Economic Psychology, 22, 96–101.
- Druckman, J. N. (2001b). Using credible advice to overcome framing effects. Journal of Law, Economics, and Organization, 17, 62–82.
- Clark (2009). Framing effects exposed. Pearson Education.
- Gätcher, S., Orzen, H., Renner, E., & Stamer, C. (in press). Are experimental economists prone to framing effects? A natural field experiment. Journal of Economic Behavior & Organization.
- Plous, S. (1993). The psychology of judgment and decision making. McGraw-Hill.
- Tversky, A., & Kahneman, D. (1981). The framing of decisions and the psychology of choice. Science, 211(4481), 453–458.
- Bibas, S. (2004). Plea bargaining outside the shadow of trial. Harvard Law Review, 117(8), 2463–2547.
http://psychology.wikia.com/wiki/Framing_effects?oldid=137206 | 13
162 | According to generally accepted theory, petroleum is derived from ancient biomass. It is a fossil fuel derived from ancient fossilized organic materials. The theory was initially based on the isolation of molecules from petroleum that closely resemble known biomolecules.
Structure of vanadium porphyrin compound extracted from petroleum by Alfred Treibs, father of organic geochemistry. Treibs noted the close structural similarity of this molecule and chlorophyll.
More specifically, crude oil and natural gas are products of heating of ancient organic materials (i.e. kerogen) over geological time. Formation of petroleum occurs from hydrocarbon pyrolysis, in a variety of mostly endothermic reactions at high temperature and/or pressure. Today's oil formed from the preserved remains of prehistoric zooplankton and algae, which had settled to a sea or lake bottom in large quantities under anoxic conditions (the remains of prehistoric terrestrial plants, on the other hand, tended to form coal). Over geological time the organic matter mixed with mud, and was buried under heavy layers of sediment resulting in high levels of heat and pressure (diagenesis). This process caused the organic matter to change, first into a waxy material known as kerogen, which is found in various oil shales around the world, and then with more heat into liquid and gaseous hydrocarbons via a process known as catagenesis.
Geologists often refer to the temperature range in which oil forms as an "oil window": below the minimum temperature, oil remains trapped in the form of kerogen; above the maximum temperature, the oil is converted to natural gas through the process of thermal cracking. Sometimes, oil formed at extreme depths may migrate and become trapped at much shallower depths than where it was formed. The Athabasca Oil Sands are one example of this.
A small number of geologists adhere to the abiogenic petroleum origin hypothesis and maintain that hydrocarbons of purely inorganic origin exist within Earth's interior. Chemists Marcellin Berthelot and Dmitri Mendeleev, as well as astronomer Thomas Gold, championed the theory in the Western world by supporting the work done by Nikolai Kudryavtsev in the 1950s. It is currently supported primarily by Kenney and Krayushkin.
The abiogenic origin hypothesis has not yet been ruled out. Its advocates consider that it is "still an open question". Extensive research into the chemical structure of kerogen has identified algae as the primary source of oil. The abiogenic origin hypothesis fails to explain the presence of these markers in kerogen and oil, as well as failing to explain how inorganic origin could be achieved at temperatures and pressures sufficient to convert kerogen to graphite. It has not been successfully used in uncovering oil deposits by geologists, as the hypothesis lacks any mechanism for determining where the process may occur. More recently scientists at the Carnegie Institution for Science have found that ethane and heavier hydrocarbons can be synthesized under conditions of the upper mantle.
Crude oil reservoirs
Three conditions must be present for oil reservoirs to form: a source rock rich in hydrocarbon material buried deep enough for subterranean heat to cook it into oil; a porous and permeable reservoir rock for it to accumulate in; and a cap rock (seal) or other mechanism that prevents it from escaping to the surface. Within these reservoirs, fluids will typically organize themselves like a three-layer cake with a layer of water below the oil layer and a layer of gas above it, although the different layers vary in size between reservoirs. Because most hydrocarbons are lighter than rock or water, they often migrate upward through adjacent rock layers until either reaching the surface or becoming trapped within porous rocks (known as reservoirs) by impermeable rocks above. However, the process is influenced by underground water flows, causing oil to migrate hundreds of kilometres horizontally or even short distances downward before becoming trapped in a reservoir. When hydrocarbons are concentrated in a trap, an oil field forms, from which the liquid can be extracted by drilling and pumping.
The reactions that produce oil and natural gas are often modeled as first order breakdown reactions, where hydrocarbons are broken down to oil and natural gas by a set of parallel reactions, and oil eventually breaks down to natural gas by another set of reactions. The latter set is regularly used in petrochemical plants and oil refineries.
Wells are drilled into oil reservoirs to extract the crude oil. "Natural lift" production methods that rely on the natural reservoir pressure to force the oil to the surface are usually sufficient for a while after reservoirs are first tapped. In some reservoirs, such as in the Middle East, the natural pressure is sufficient over a long time.
Unconventional oil reservoirs
Oil-eating bacteria biodegrade oil that has escaped to the surface. Oil sands are reservoirs of partially biodegraded oil still in the process of escaping and being biodegraded, but they contain so much migrating oil that, although most of it has escaped, vast amounts are still present, more than can be found in conventional oil reservoirs. The lighter fractions of the crude oil are destroyed first, resulting in reservoirs containing an extremely heavy form of crude oil, called crude bitumen in Canada, or extra-heavy crude oil in Venezuela.
On the other hand, oil shales are source rocks that have not been exposed to heat or pressure long enough to convert their trapped hydrocarbons into crude oil. Technically speaking, oil shales are not really shales and do not really contain oil, but are usually relatively hard rocks called marls containing a waxy substance called kerogen. The kerogen trapped in the rock can be converted into crude oil using heat and pressure to simulate natural processes. The method has been known for centuries and was patented in 1694 under British Crown Patent No. 330 covering, "A way to extract and make great quantityes of pitch, tarr, and oyle out of a sort of stone." Although oil shales are found in many countries, the United States has the largest deposits.
The petroleum industry generally classifies crude oil by the geographic location it is produced in (e.g. West Texas Intermediate, Brent, or Oman), its API gravity (an oil industry measure of density), and its sulfur content.
The American Petroleum Institute gravity, or API gravity, is a measure of how heavy or light a petroleum liquid is compared to water. If its API gravity is greater than 10, it is lighter than water and floats; if less than 10, it is heavier and sinks. API gravity is thus an inverse measure of the relative density of a petroleum liquid with respect to water, and it is used to compare the relative densities of petroleum liquids. For example, if one petroleum liquid floats on another and is therefore less dense, it has a greater API gravity. Although mathematically API gravity has no units (see the formula below), it is nevertheless referred to as being in "degrees". API gravity is graduated in degrees on a hydrometer instrument and was designed so that most values would fall between 10 and 70 API gravity degrees.
The formula used to obtain the API gravity of petroleum liquids is thus:

API gravity = (141.5 / specific gravity at 60 °F) − 131.5
Conversely, the specific gravity of petroleum liquids can be derived from the API gravity value as:

specific gravity at 60 °F = 141.5 / (API gravity + 131.5)
Thus, a heavy oil with a specific gravity of 1.0 (i.e., with the same density as pure water at 60°F) would have an API gravity of:

API gravity = (141.5 / 1.0) − 131.5 = 10.0 degrees
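The two relations are straightforward to script; this sketch simply encodes the formulas above, and the example inputs are arbitrary:

```python
def api_gravity(sg):
    """API gravity from specific gravity at 60 degrees F."""
    return 141.5 / sg - 131.5

def sg_from_api(api):
    """Specific gravity from API gravity (the inverse relation)."""
    return 141.5 / (api + 131.5)

print(api_gravity(1.0))           # 10.0 -- as dense as water
print(round(sg_from_api(40), 3))  # 0.825 -- a light crude
```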
The geographic location is important because it affects transportation costs to the refinery. Light crude oil is more desirable than heavy oil since it produces a higher yield of gasoline, while sweet oil commands a higher price than sour oil because it has fewer environmental problems and requires less refining to meet sulfur standards imposed on fuels in consuming countries. Each crude oil has unique molecular characteristics which are understood by the use of crude oil assay analysis in petroleum laboratories.
Barrels from an area in which the crude oil's molecular characteristics have been determined and the oil has been classified are used as pricing references throughout the world. Some of the common reference crudes are:
- West Texas Intermediate (WTI), a very high-quality, sweet, light oil delivered at Cushing, Oklahoma
- Brent Blend, comprising crudes from fields in the North Sea
- Dubai-Oman, used as a benchmark for Middle East sour crude
- The OPEC Reference Basket, a weighted average of oil blends from OPEC countries
There are declining amounts of these benchmark oils being produced each year, so other oils are more commonly what is actually delivered. While the reference price may be for West Texas Intermediate delivered at Cushing, the actual oil being traded may be a discounted Canadian heavy oil delivered at Hardisty, Alberta, and for a Brent Blend delivered at the Shetlands, it may be a Russian Export Blend delivered at the port of Primorsk.
The petroleum industry is involved in the global processes of exploration, extraction, refining, transporting (often with oil tankers and pipelines), and marketing petroleum products. The largest volume products of the industry are fuel oil and gasoline (petrol). Petroleum is also the raw material for many chemical products, including pharmaceuticals, solvents, fertilizers, pesticides, and plastics. The industry is usually divided into three major components: upstream, midstream and downstream. Midstream operations are usually included in the downstream category.
Petroleum is vital to many industries, and is of importance to the maintenance of industrialized civilization itself, and thus is a critical concern to many nations. Oil accounts for a large percentage of the world's energy consumption, ranging from a low of 32% for Europe and Asia up to a high of 53% for the Middle East.
The pour point of a liquid is the lowest temperature at which it will pour or flow under prescribed conditions. It is a rough indication of the lowest temperature at which oil is readily pumpable.
The pour point can also be defined as the minimum temperature of a liquid, particularly a lubricant, below which the liquid ceases to flow.
Measuring the pour point of petroleum products
The specimen is cooled inside a cooling bath to allow the formation of paraffin wax crystals. At about 9 °C above the expected pour point, and for every subsequent 3 °C, the test jar is removed and tilted to check for surface movement. When the specimen does not flow when tilted, the jar is held horizontally for 5 seconds. If it still does not flow, 3 °C is added to the corresponding temperature and the result is the pour point temperature.
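The stepping logic of the test translates directly into a short sketch. The flows_at callback is a hypothetical stand-in for the physical tilt test, and the example specimen is invented:

```python
def pour_point(expected, flows_at):
    """Step down in 3 degree increments from ~9 degrees above the
    expected pour point until the specimen no longer flows, then
    report that temperature plus 3 degrees, as in the procedure above.

    flows_at(t) stands in for the physical tilt test: True if the
    specimen still moves at temperature t. (A real test would also
    stop at the cooling bath's minimum temperature.)
    """
    temp = expected + 9
    while flows_at(temp):
        temp -= 3
    return temp + 3

# Hypothetical specimen that stops flowing at -6 degrees C:
print(pour_point(expected=-6, flows_at=lambda t: t > -6))  # -3
```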
It is also useful to note that failure to flow at the pour point may also be due to the effect of viscosity or the previous thermal history of the specimen. Therefore, the pour point may give a misleading view of the handling properties of the oil. Additional fluidity or pumpability tests may also be undertaken. An approximate range of pour point can be observed from the specimen's upper and lower pour point.
Two pour points can be derived, giving an approximate temperature window that depends on the sample's thermal history. Within this temperature range, the sample may appear liquid or solid. This peculiarity arises because wax crystals form more readily in a sample that has been heated within the past 24 hours, which contributes to the lower pour point.
The upper pour point is measured by pouring the test sample directly into a test jar. The sample is then cooled and then inspected for pour point as per the usual pour point method.
The lower pour point is measured by first pouring the sample into a stainless steel pressure vessel. The vessel is then screwed tight and heated to above 100 °C in an oil bath. After a specified time, the vessel is removed and cooled for a short while. The sample is then poured into a test jar and immediately closed with a cork carrying the thermometer. The sample is then cooled and inspected for pour point as per the usual pour point method.
Group 2 Nigerian crude: pour point 0 °C. Group 2 Indonesian crude: pour point 30 °C.

These two crude oil samples show how different pour points demand different responses. The Nigerian crude will lose approximately 35% of its volume to evaporation, whereas the Indonesian crude will lose nothing. The Nigerian crude could be chemically dispersed, whereas the Indonesian crude would need to be recovered completely.
Viscosity is the resistance to flow: the higher the viscosity, the slower the liquid flows and, for crude oils, the lower the quality.
Resistance of a fluid to a change in shape, or movement of neighbouring portions relative to one another. Viscosity denotes opposition to flow. It may also be thought of as internal friction between the molecules. Viscosity is a major factor in determining the forces that must be overcome when fluids are used in lubrication or transported in pipelines. It also determines the liquid flow in spraying, injection molding, and surface coating. The viscosity of liquids decreases rapidly with an increase in temperature, while that of gases increases with an increase in temperature.
The SI unit for viscosity is the newton-second per square metre (N·s/m²), equivalent to the pascal-second (Pa·s).
In general, in any flow, layers move at different velocities and the fluid's viscosity arises from the shear stress between the layers that ultimately opposes any applied force.
Isaac Newton postulated that, for straight, parallel and uniform flow, the shear stress, τ, between layers is proportional to the velocity gradient, ∂u/∂y, in the direction perpendicular to the layers:

τ = μ ∂u/∂y

Here, the constant μ is known as the coefficient of viscosity, the viscosity, the dynamic viscosity, or the Newtonian viscosity.
The relationship between the shear stress and the velocity gradient can also be obtained by considering two plates closely spaced apart at a distance y, and separated by a homogeneous substance. Assuming that the plates are very large, with a large area A, such that edge effects may be ignored, and that the lower plate is fixed, let a force F be applied to the upper plate. If this force causes the substance between the plates to undergo shear flow (as opposed to just shearing elastically until the shear stress in the substance balances the applied force), the substance is called a fluid. The applied force is proportional to the area and velocity of the plate and inversely proportional to the distance between the plates. Combining these three relations results in the equation F = μ(Au/y), where μ is the proportionality factor called the dynamic viscosity (also called absolute viscosity, or simply viscosity). The equation can be expressed in terms of shear stress: τ = F/A = μ(u/y). The rate of shear deformation is u/y and can also be written as the shear rate, du/dy. Hence, through this method, the relation between the shear stress and the velocity gradient can be obtained.
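As a minimal numerical sketch of the two-plate relation F = μAu/y (the values below are illustrative; the viscosity figure is an approximate literature value for water near 20°C):

```python
# Force needed to slide the upper plate in the two-plate model, F = mu*A*u/y.
mu = 1.002e-3   # dynamic viscosity, Pa.s (water near 20 C, approximate)
A = 0.5         # plate area, m^2
u = 0.01        # upper plate speed, m/s
y = 0.001       # plate separation, m

F = mu * A * u / y   # shear force, N
tau = F / A          # shear stress, Pa; equals mu * (u / y)
print(f"F = {F:.3e} N, tau = {tau:.3e} Pa")
```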
Specific gravity is the ratio of the density of a substance to that of a standard substance. For solids and liquids, the standard substance is usually water at 39.2°F (4.0°C), which has a density of 1.00 kg/liter. Gases are usually compared to dry air, which has a density of 1.29 g/liter at 32°F (0°C) and 1 atmosphere pressure. Because it is a ratio of two quantities that have the same dimensions (mass per unit volume), specific gravity has no dimension. For example, the specific gravity of liquid mercury is 13.6, because its actual density is 13.6 kg/liter, 13.6 times that of water.
Relative density, or specific gravity, is the ratio of the density (mass of a unit volume) of a substance to the density of a given reference material. Specific gravity usually means relative density with respect to water. The term "relative density" is often preferred in modern scientific usage.
If a substance's relative density is less than one then it is less dense than the reference; if greater than one then it is denser than the reference. If the relative density is exactly one then the densities are equal; that is, equal volumes of the two substances have the same mass. Simplified, as water is most often used as the reference, if a liquid has a density less than 1, then it will float in water. Hence methylated spirits, with a density less than 0.8, floats on the top of water. On the other hand, an ice cube with a density of about 0.91, will sink to the bottom if placed into methylated spirits.
Temperature and pressure must be specified for both the sample and the reference. The pressure is nearly always 1 atm (101.325 kPa); where it is not, it is more usual to specify the density directly. Temperatures for both sample and reference vary from industry to industry. In British brewing practice the specific gravity as specified above is multiplied by 1000.
Relative density (RD) or specific gravity (SG) is a dimensionless quantity, as it is the ratio of either densities or weights:

RD = ρ_substance / ρ_reference

where ρ_substance is the density of the substance being measured and ρ_reference is the density of the reference. (By convention ρ, the Greek letter rho, denotes density.)
The reference material can be indicated using subscripts: RD_substance/reference, which means "the relative density of the substance with respect to the reference". If the reference is not explicitly stated then it is normally assumed to be water at 4 °C (or, more precisely, 3.98 °C, which is the temperature at which water reaches its maximum density). In SI units, the density of water is (approximately) 1000 kg/m3 or 1 g/cm3, which makes relative density calculations particularly convenient: the density of the object only needs to be divided by 1000 or 1, depending on the units.
The relative density of gases is often measured with respect to dry air at a temperature of 20 °C and a pressure of 101.325 kPa absolute, which has a density of 1.205 kg/m3. Relative density with respect to air can be obtained by

RD = ρ_gas / ρ_air ≈ M_gas / M_air

where M is the molar mass and the approximately-equal sign is used because equality holds only if 1 mol of the gas and 1 mol of air occupy the same volume at a given temperature and pressure, i.e. they are both ideal gases. Ideal behaviour is usually only seen at very low pressure. For example, one mole of an ideal gas occupies 22.414 L at 0 °C and 1 atmosphere, whereas carbon dioxide has a molar volume of 22.259 L under those same conditions.
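A small sketch of the molar-mass approximation for gas relative density (the molar masses are standard values; the ideal-gas caveat above applies):

```python
M_AIR = 28.97  # g/mol, approximate mean molar mass of dry air

def gas_relative_density(molar_mass):
    """RD of a gas with respect to dry air, using RD ~= M_gas / M_air."""
    return molar_mass / M_AIR

print(gas_relative_density(16.04))  # methane, ~0.55: lighter than air
print(gas_relative_density(44.01))  # carbon dioxide, ~1.52: denser than air
```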
The density of substances varies with temperature and pressure, so it is necessary to specify the temperatures and pressures at which the densities or weights were determined. Measurements are nearly always made at nominally 1 atmosphere (101.325 kPa; the small variations caused by changing weather patterns are neglected), but as specific gravity usually refers to highly incompressible aqueous solutions or other incompressible substances (such as petroleum products), variations in density caused by pressure are usually neglected, at least where apparent specific gravity is being measured. For true (in vacuo) specific gravity calculations air pressure must be considered (see below). Temperatures are specified by the notation (Ts/Tr), with Ts representing the temperature at which the sample's density was determined and Tr the temperature at which the reference (water) density is specified. For example, SG (20°C/4°C) would be understood to mean that the density of the sample was determined at 20 °C and that of the water at 4 °C. Taking into account different sample and reference temperatures, we note that while SG(H2O) = 1.000000 (20°C/20°C), it is also the case that SG(H2O) = 0.998203/0.999840 = 0.998363 (20°C/4°C). Here temperature is being specified using the current ITS-90 scale, and the densities used here and in the rest of this article are based on that scale. On the previous IPTS-68 scale the densities at 20 °C and 4 °C are, respectively, 0.9982071 and 0.9999720, resulting in an SG (20°C/4°C) value for water of 0.9982343.
The temperatures of the two materials may be explicitly stated in the density symbols, for example RD(20°C/4°C) or SG(20°C/4°C), where the first temperature indicates the temperature at which the density of the material is measured and the second indicates the temperature of the reference substance to which it is compared.
Surface tension is the property of a liquid surface that causes it to act like a stretched elastic membrane. Its strength depends on the forces of attraction among the particles of the liquid itself and with the particles of the gas, solid, or liquid with which it comes in contact. Surface tension allows certain insects to stand on the surface of water and can support a razor blade placed horizontally on the liquid's surface, even though the blade may be denser than the liquid and unable to float. Surface tension results in spherical drops of liquid, as the liquid tends to minimize its surface area.
Surface tension is a property of the surface of a liquid. It is what causes the surface portion of liquid to be attracted to another surface, such as that of another portion of liquid (as in connecting bits of water or as in a drop of mercury that forms a cohesive ball).
Surface tension is caused by cohesion (the attraction of molecules to like molecules). Since the molecules on the surface of the liquid are not surrounded by like molecules on all sides, they are more attracted to their neighbors on the surface.
Applying Newtonian physics to the forces that arise due to surface tension accurately predicts many liquid behaviors that are so commonplace that most people take them for granted. Applying thermodynamics to those same forces further predicts other more subtle liquid behaviors.
Surface tension has the dimension of force per unit length, or of energy per unit area. The two are equivalent — but when referring to energy per unit of area, people use the term surface energy — which is a more general term in the sense that it applies also to solids and not just liquids.
In materials science, surface tension is used for either surface stress or surface free energy.
Surface tension is caused by the attraction between the liquid's molecules by various intermolecular forces. In the bulk of the liquid, each molecule is pulled equally in every direction by neighbouring liquid molecules, resulting in a net force of zero. At the surface of the liquid, the molecules are pulled inwards by other molecules deeper inside the liquid and are not attracted as intensely by the molecules in the neighbouring medium (be it vacuum, air or another liquid). Therefore, all of the molecules at the surface are subject to an inward force of molecular attraction which is balanced only by the liquid's resistance to compression, meaning there is no net inward force. However, there is a driving force to diminish the surface area. Therefore, the surface area of the liquid shrinks until it has the lowest surface area possible. That explains the spherical shapes of water droplets.
Another way to view it is that a molecule in contact with a neighbour is in a lower state of energy than if it weren't in contact with a neighbour. The interior molecules all have as many neighbours as they can possibly have. But the boundary molecules have fewer neighbours than interior molecules and are therefore in a higher state of energy. For the liquid to minimize its energy state, it must minimize its number of boundary molecules and must therefore minimize its surface area.
As a result of surface area minimization, a surface will assume the smoothest shape it can (mathematical proof that "smooth" shapes minimize surface area relies on use of the Euler–Lagrange equation). Since any curvature in the surface shape results in greater area, a higher energy will also result. Consequently the surface will push back against any curvature in much the same way as a ball pushed uphill will push back to minimize its gravitational potential energy.
Diagram shows, in cross-section, a needle floating on the surface of water. Its weight, Fw, depresses the surface, and is balanced by the surface tension forces on either side, Fs, which are each parallel to the water's surface at the points where it contacts the needle. Notice that the horizontal components of the two Fs arrows point in opposite directions, so they cancel each other, but the vertical components point in the same direction and therefore add up to balance Fw.
Surface tension, represented by the symbol γ, is defined as the force along a line of unit length, where the force is parallel to the surface but perpendicular to the line. One way to picture this is to imagine a flat soap film bounded on one side by a taut thread of length L. The thread will be pulled toward the interior of the film by a force equal to 2γL (the factor of 2 is because the soap film has two sides, hence two surfaces). Surface tension is therefore measured in units of force per unit length. Its SI unit is the newton per metre, but the cgs unit of dyne per centimetre is also used; one dyn/cm corresponds to 0.001 N/m.
An equivalent definition, one that is useful in thermodynamics, is work done per unit area. As such, in order to increase the surface area of a mass of liquid by an amount δA, a quantity of work γδA is needed. This work is stored as potential energy. Consequently surface tension can also be measured in the SI system as joules per square metre and in the cgs system as ergs per cm2. Since mechanical systems try to find a state of minimum potential energy, a free droplet of liquid naturally assumes a spherical shape, which has the minimum surface area for a given volume.
The equivalence of the measurement of energy per unit area to force per unit length can be proven by dimensional analysis: N/m = N·m/m² = J/m².
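A short numerical sketch of the two equivalent definitions (the surface tension value for water is an approximate room-temperature literature figure):

```python
gamma = 0.0728   # N/m, surface tension of water near 20 C (approximate)
L = 0.05         # thread length bounding the soap film, m
dA = 1e-4        # increase in surface area, m^2

F = 2 * gamma * L    # N; factor 2 because the film has two surfaces
W = gamma * dA       # J; work stored as surface potential energy

print(f"F = {F*1e3:.2f} mN, W = {W*1e6:.2f} uJ")
print(f"{gamma} N/m = {gamma / 1e-3:.1f} dyn/cm")   # 1 dyn/cm = 0.001 N/m
```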
Pond skaters use surface tension to walk on the surface of a pond—hydrophobic setae on the tarsi keep the insect afloat while an apical hydrophilic claw penetrates the surface, allowing it to "grip" the water. The surface of the water behaves like an elastic film: the insect's feet cause indentations in the water's surface, increasing its surface area. This represents an increase in potential energy through the surface tension of the water equal to the loss of potential energy of the insect's lowered center of mass.
Flash point is the lowest temperature at which a volatile liquid can vaporize to form an ignitable mixture in air. Measuring a liquid's flash point requires an ignition source; this is not to be confused with the autoignition temperature, which requires no ignition source. At the flash point, the vapour may cease to burn when the source of ignition is removed. A slightly higher temperature, the fire point, is defined as the temperature at which the vapour continues to burn after being ignited. Neither of these parameters is related to the temperatures of the ignition source or of the burning liquid, which are much higher. The flash point is often used as one descriptive characteristic of liquid fuel, but it is also used to describe liquids that are not used intentionally as fuels. The term applies to both flammable and combustible liquids: there are various international standards for defining each, but most agree that liquids with a flash point less than 43°C are flammable, and those above this temperature are combustible.

Solubility is the degree to which a substance dissolves in a solvent to make a solution (usually expressed as grams of solute per litre of solvent). Solubility of one fluid (liquid or gas) in another may be complete (totally miscible; e.g., methanol and water) or partial (oil and water dissolve only slightly). In general, "like dissolves like" (e.g., aromatic hydrocarbons dissolve in each other but not in water). Some separation methods (absorption, extraction) rely on differences in solubility, expressed as the distribution coefficient (ratio of a material's solubilities in two solvents). Generally, solubilities of solids in liquids increase with temperature and those of gases decrease with temperature and increase with pressure. A solution in which no more solute can be dissolved at a given temperature and pressure is said to be saturated.
Solubility is the property of a solid, liquid, or gaseous chemical substance, called the solute, to dissolve in a liquid solvent to form a homogeneous solution. The solubility of a substance strongly depends on the solvent used, as well as on temperature and pressure; like temperature, pressure affects the solubility of both gaseous and liquid solutes, so solubility is always quoted at a fixed temperature and pressure. The extent of the solubility of a substance in a specific solvent is measured as the saturation concentration, where adding more solute does not increase the concentration of the solution.
The solvent is generally a liquid, which can be a pure substance or a mixture. One also speaks of solid solution, but rarely of solution in a gas.
The extent of solubility ranges widely, from infinitely soluble (fully miscible), such as ethanol in water, to poorly soluble, such as silver chloride in water. The term insoluble is often applied to poorly or very poorly soluble compounds.
Under certain conditions the equilibrium solubility can be exceeded to give a so-called supersaturated solution, which is metastable.
Solubility occurs under dynamic equilibrium, which means that solubility results from the simultaneous and opposing processes of dissolution and phase separation (e.g. precipitation of solids). The solubility equilibrium occurs when the two processes proceed at a constant rate.
The term solubility is also used in some fields where the solute is altered by solvolysis. For example, many metals and their oxides are said to be "soluble in hydrochloric acid", although in fact the aqueous acid irreversibly degrades the solid to give soluble products. It is also true that most ionic solids are dissolved by polar solvents, but such processes are reversible. In those cases where the solute is not recovered upon evaporation of the solvent, the process is referred to as solvolysis. The thermodynamic concept of solubility does not apply straightforwardly to solvolysis.
When a solute dissolves, it may form several species in the solution. For example, an aqueous suspension of ferrous hydroxide, Fe(OH)2, will contain the series [Fe(H2O)6−x(OH)x](2−x)+ as well as other oligomeric species. Furthermore, the solubility of ferrous hydroxide and the composition of its soluble components depend on pH. In general, solubility in the solvent phase can be given only for a specific solute which is thermodynamically stable, and the value of the solubility will include all the species in the solution (in the example above, all the iron-containing complexes).
Solubility is defined for specific phases. For example, the solubility of aragonite and calcite in water are expected to differ, even though they are both polymorphs of calcium carbonate and have the same chemical formula.
The solubility of one substance in another is determined by the balance of intermolecular forces between the solvent and solute, and the entropy change that accompanies the solvation. Factors such as temperature and pressure will alter this balance, thus changing the solubility.
Solubility may also strongly depend on the presence of other species dissolved in the solvent, for example, complex-forming anions (ligands) in liquids. Solubility will also depend on the excess or deficiency of a common ion in the solution, a phenomenon known as the common-ion effect. To a lesser extent, solubility will depend on the ionic strength of solutions. The last two effects can be quantified using the equation for solubility equilibrium.
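As a hedged numerical illustration of the common-ion effect for a 1:1 salt (the solubility product for silver chloride is an approximate literature value):

```python
KSP_AGCL = 1.8e-10  # approximate literature value at 25 C

def solubility_pure_water():
    """Molar solubility s of a 1:1 salt: s*s = Ksp."""
    return KSP_AGCL ** 0.5

def solubility_with_common_ion(chloride_molarity):
    """With excess common ion c >> s, s*(s + c) ~= s*c = Ksp."""
    return KSP_AGCL / chloride_molarity

print(f"{solubility_pure_water():.2e} mol/L in pure water")          # ~1.3e-5
print(f"{solubility_with_common_ion(0.1):.2e} mol/L in 0.1 M Cl-")   # ~1.8e-9
```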
For a solid that dissolves in a redox reaction, solubility is expected to depend on the potential (within the range of potentials under which the solid remains the thermodynamically stable phase). For example, solubility of gold in high-temperature water is observed to be almost an order of magnitude higher when the redox potential is controlled using a highly-oxidizing Fe3O4-Fe2O3 redox buffer than with a moderately-oxidizing Ni-NiO buffer.
Solubility (metastable) also depends on the physical size of the crystal or droplet of solute (or, strictly speaking, on the specific or molar surface area of the solute). For quantification, see the equation in the article on solubility equilibrium. For highly defective crystals, solubility may increase with the increasing degree of disorder. Both of these effects occur because of the dependence of solubility constant on the Gibbs energy of the crystal.
Petroleum (from Greek πετρέλαιον, lit. "rock oil") or crude oil is a naturally occurring, flammable liquid consisting of a complex mixture of hydrocarbons of various molecular weights, and other organic compounds, found in geologic formations beneath the earth's surface.
The term "petroleum" was first used in the treatise De Natura Fossilium, published in 1546 by the German mineralogist Georg Bauer, also known as Georgius Agricola.
In its strictest sense, petroleum includes only crude oil, but in common usage it includes both crude oil and natural gas. Both crude oil and natural gas are predominantly a mixture of hydrocarbons. Under surface pressure and temperature conditions, the lighter hydrocarbons methane, ethane, propane and butane occur as gases, while the heavier ones from pentane and up are in the form of liquids or solids. However, in the underground oil reservoir the proportion which is gas or liquid varies depending on the subsurface conditions, and on the phase diagram of the petroleum mixture.
An oil well produces predominantly crude oil, with some natural gas dissolved in it. Because the pressure is lower at the surface than underground, some of the gas will come out of solution and be recovered (or burned) as associated gas or solution gas. A gas well produces predominantly natural gas. However, because the underground temperature and pressure are higher than at the surface, the gas may contain heavier hydrocarbons such as pentane, hexane, and heptane in the gaseous state. Under surface conditions these will condense out of the gas and form natural gas condensate, often shortened to condensate. Condensate resembles gasoline in appearance and is similar in composition to some volatile light crude oils.
The proportion of hydrocarbons in the petroleum mixture is highly variable between different oil fields and ranges from as much as 97% by weight in the lighter oils to as little as 50% in the heavier oils and bitumens.
The hydrocarbons in crude oil are mostly alkanes, cycloalkanes and various aromatic hydrocarbons, while the other organic compounds contain nitrogen, oxygen and sulfur, and trace amounts of metals such as iron, nickel, copper and vanadium. The exact molecular composition varies widely from formation to formation, but the proportions of the chemical elements vary over fairly narrow limits, as follows.
Composition by weight:
Carbon: 83 to 87%
Hydrogen: 10 to 14%
Nitrogen: 0.1 to 2%
Oxygen: 0.1 to 1.5%
Sulfur: 0.5 to 6%
Metals: less than 1000 ppm
Four different types of hydrocarbon molecules appear in crude oil. The relative percentage of each varies from oil to oil, determining the properties of each oil.
Composition by weight:
Paraffins: 15 to 60%
Naphthenes: 30 to 60%
Aromatics: 3 to 30%
Asphaltics: remainder
Paraffin is the common name for the alkane hydrocarbons with the general formula CnH2n+2. Paraffin wax refers to the solids with 20 ≤ n ≤ 40.
The simplest paraffin molecule is that of methane, CH4, a gas at room temperature. Heavier members of the series, such as octane, C8H18, and mineral oil, appear as liquids at room temperature. The solid forms of paraffin, called paraffin wax, comprise the heaviest molecules, from C20H42 to C40H82. Paraffin wax was identified by Carl Reichenbach in 1830.
Paraffin, or paraffin hydrocarbon, is also the technical name for an alkane in general, but in most cases it refers specifically to a linear, or normal, alkane; branched alkanes, or isoalkanes, are also called isoparaffins. It is distinct from the fuel known in Britain and Ireland as paraffin (kerosene).
The name is derived from the Latin parum (= barely) + affinis with the meaning here of "lacking affinity", or "lacking reactivity". This is because alkanes, being non-polar and lacking in functional groups, are very unreactive.
Cycloalkanes, also called naphthenes (especially when derived from petroleum sources), are types of alkanes which have one or more rings of carbon atoms in the chemical structure of their molecules. Alkanes are types of organic hydrocarbon compounds which have only single chemical bonds in their chemical structure. Cycloalkanes consist of only carbon (C) and hydrogen (H) atoms and are saturated because there are no multiple C-C bonds to hydrogenate (add more hydrogen to). A general chemical formula for cycloalkanes would be CnH2(n+1-g), where n = number of C atoms and g = number of rings in the molecule. Cycloalkanes with a single ring are named analogously to their normal alkane counterpart of the same carbon count: cyclopropane, cyclobutane, cyclopentane, cyclohexane, etc. The larger cycloalkanes, with greater than 20 carbon atoms, are typically called cycloparaffins.
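The general formula quoted above is easy to check programmatically; the small function below is illustrative only:

```python
def cycloalkane_h_count(n_carbons, n_rings):
    """Hydrogen count from the general formula CnH2(n+1-g)."""
    return 2 * (n_carbons + 1 - n_rings)

print(cycloalkane_h_count(6, 1))   # cyclohexane -> 12 (C6H12)
print(cycloalkane_h_count(6, 0))   # open-chain hexane -> 14 (C6H14)
print(cycloalkane_h_count(10, 2))  # decalin -> 18 (C10H18)
```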
Cycloalkanes are classified into small, common, medium, and large cycloalkanes, where cyclopropane and cyclobutane are the small ones, cyclopentane, cyclohexane, cycloheptane are the common ones, cyclooctane through cyclotridecane are the medium ones, and the rest are the larger ones.
Aromatic compounds
In organic chemistry, the structures of some rings of atoms are unexpectedly stable. Aromaticity is a chemical property in which a conjugated ring of unsaturated bonds, lone pairs, or empty orbitals exhibit a stabilization stronger than would be expected by the stabilization of conjugation alone. It can also be considered a manifestation of cyclic delocalization and of resonance.
This is usually considered to be because electrons are free to cycle around circular arrangements of atoms, which are alternately single- and double-bonded to one another. These bonds may be seen as a hybrid of a single bond and a double bond, each bond in the ring identical to every other. This commonly-seen model of aromatic rings, namely the idea that benzene was formed from a six-membered carbon ring with alternating single and double bonds (cyclohexatriene), was developed by Kekulé. The model for benzene consists of two resonance forms, which corresponds to the double and single bonds' switching positions. Benzene is a more stable molecule than would be expected without accounting for charge delocalization.
As is standard for resonance diagrams, a double-headed arrow is used to indicate that the two structures are not distinct entities, but merely hypothetical possibilities. Neither is an accurate representation of the actual compound, which is best represented by a hybrid (average) of these structures, which can be seen at right. A C=C bond is shorter than a C−C bond, but benzene is perfectly hexagonal—all six carbon-carbon bonds have the same length, intermediate between that of a single and that of a double bond.
A better representation is that of the circular π bond (Armstrong's inner cycle), in which the electron density is evenly distributed through a π-bond above and below the ring. This model more correctly represents the location of electron density within the aromatic ring.
The single bonds are formed with electrons in line between the carbon nuclei—these are called σ-bonds. Double bonds consist of a σ-bond and a π-bond. The π-bonds are formed from overlap of atomic p-orbitals above and below the plane of the ring. The following diagram shows the positions of these p-orbitals:
Since they are out of the plane of the atoms, these orbitals can interact with each other freely, and become delocalised. This means that instead of being tied to one atom of carbon, each electron is shared by all six in the ring. Thus, there are not enough electrons to form double bonds on all the carbon atoms, but the "extra" electrons strengthen all of the bonds on the ring equally. The resulting molecular orbital has π symmetry.
Aromatic compounds are important in industry. Key aromatic hydrocarbons of commercial interest are benzene, toluene, ortho-xylene and para-xylene. About 35 million tonnes are produced worldwide every year. They are extracted from complex mixtures obtained by the refining of oil or by distillation of coal tar, and are used to produce a range of important chemicals and polymers, including styrene, phenol, aniline, polyester and nylon.
Other aromatic compounds play key roles in the biochemistry of all living things. The three aromatic amino acids phenylalanine, tryptophan, and tyrosine each serve as one of the 20 basic building blocks of proteins. Further, all five nucleotide bases (adenine, thymine, cytosine, guanine, and uracil) that make up the sequence of the genetic code in DNA and RNA are aromatic purines or pyrimidines. As well as that, the molecule haem contains an aromatic system with 22 π electrons. Chlorophyll also has a similar aromatic system.
The overwhelming majority of aromatic compounds are compounds of carbon, but they need not be hydrocarbons.
In heterocyclic aromatics (heteroaromatics), one or more of the atoms in the aromatic ring is of an element other than carbon. This can lessen the ring's aromaticity, and thus (as in the case of furan) increase its reactivity. Other examples include pyridine, pyrazine, imidazole, pyrazole, oxazole, thiophene, and their benzannulated analogs (benzimidazole, for example).
Polycyclic aromatic hydrocarbons are molecules containing two or more simple aromatic rings fused together by sharing two neighboring carbon atoms. Examples are naphthalene, anthracene and phenanthrene.
Many chemical compounds are aromatic rings with other things attached. Examples include trinitrotoluene (TNT), acetylsalicylic acid (aspirin), paracetamol, and the nucleotides of DNA.
Aromaticity is found in ions as well: the cyclopropenyl cation (2e system), the cyclopentadienyl anion (6e system), the tropylium ion (6e) and the cyclooctatetraene dianion (10e). Aromatic properties have been attributed to non-benzenoid compounds such as tropone. Aromatic properties are tested to the limit in a class of compounds called cyclophanes.
A special case of aromaticity is found in homoaromaticity where conjugation is interrupted by a single sp³ hybridized carbon atom.
When carbon in benzene is replaced by other elements in borabenzene, silabenzene, germanabenzene, stannabenzene, phosphorine or pyrylium salts the aromaticity is still retained. Aromaticity also occurs in compounds that are not carbon-based at all. Inorganic 6 membered ring compounds analogous to benzene have been synthesized. Silicazine (Si6H6) and borazine (B3N3H6) are structurally analogous to benzene, with the carbon atoms replaced by another element or elements. In borazine, the boron and nitrogen atoms alternate around the ring.
Metal aromaticity is believed to exist in certain metal clusters of aluminium. Möbius aromaticity occurs when a cyclic system of molecular orbitals, formed from pπ atomic orbitals and populated in a closed shell by 4n (n is an integer) electrons, is given a single half-twist to correspond to a Möbius strip. Because the twist can be left-handed or right-handed, the resulting Möbius aromatics are dissymmetric or chiral. To date there is no conclusive proof that a Möbius aromatic molecule has been synthesized.
Aromatics with two half-twists corresponding to the paradromic topologies, first suggested by Johann Listing, have been proposed by Rzepa in 2005. In carbo-benzene the ring bonds are extended with alkyne and allene groups.
Asphaltenes are molecular substances found in crude oil, along with resins, aromatic hydrocarbons, and alkanes (i.e., saturated hydrocarbons). The word "asphaltene" was coined by Boussingault in 1837 when he noticed that the distillation residue of some bitumens had asphalt-like properties. Asphaltenes in the form of distillation products from oil refineries are used as "tar-mats" on roads.
Asphaltenes consist primarily of carbon, hydrogen, nitrogen, oxygen, and sulfur, as well as trace amounts of vanadium and nickel. The C:H ratio is approximately 1:1.2, depending on the asphaltene source. Asphaltenes are defined operationally as the n-heptane (C7H16)-insoluble, toluene (C6H5CH3)-soluble component of a carbonaceous material such as crude oil, bitumen or coal. Asphaltenes have been shown to have a distribution of molecular masses in the range of 400 u to 1500 u with a maximum around 750 u.
The molecular structure of asphaltenes is difficult to ascertain, due to their complex nature, but it has been studied by all available techniques, including X-ray, elemental, and pyrolysis GC-FID-GC-MS analysis. It is undisputed, however, that the asphaltenes are composed mainly of polyaromatic carbon, i.e. polycondensed aromatic benzene units with oxygen, nitrogen, and sulfur (NSO-compounds), combined with minor amounts of a series of heavy metals, particularly vanadium and nickel, which occur in porphyrin structures. Furthermore, asphaltene rotational diffusion measurements show that small PAH chromophores (blue fluorescing) are in small asphaltene molecules while big PAH chromophores (red fluorescing) are in big molecules. This implies that there is only one fused polycyclic aromatic hydrocarbon (PAH) ring system per molecule. Very recent fragmentation studies by FT-ICR-MS strongly support this "island" molecular architecture, refuting the "archipelago" molecular architecture.
Most of the world's oils are non-conventional.
Crude oil varies greatly in appearance depending on its composition. It is usually black or dark brown (although it may be yellowish or even greenish). In the reservoir it is usually found in association with natural gas, which being lighter forms a gas cap over the petroleum, and saline water which, being heavier than most forms of crude oil, generally sinks beneath it. Crude oil may also be found in semi-solid form mixed with sand and water, as in oil sands deposits. Known deposits of bitumen and extra-heavy oil amount to roughly twice the volume of the world's reserves of conventional oil.
Petroleum is used mostly, by volume, for producing fuel oil and gasoline (petrol), both important "primary energy" sources. 84% by volume of the hydrocarbons present in petroleum is converted into energy-rich fuels (petroleum-based fuels), including gasoline, diesel, jet, heating, and other fuel oils, and liquefied petroleum gas. The lighter grades of crude oil produce the best yields of these products, but as the world's reserves of light and medium oil are depleted, oil refineries are increasingly having to process heavy oil and bitumen, and use more complex and expensive methods to produce the products required. Because heavier crude oils have too much carbon and not enough hydrogen, these processes generally involve removing carbon from or adding hydrogen to the molecules, and using fluid catalytic cracking to convert the longer, more complex molecules in the oil to the shorter, simpler ones in the fuels.
Due to its high energy density, easy transportability and relative abundance, oil has become the world's most important source of energy since the mid-1950s. Petroleum is also the raw material for many chemical products, including pharmaceuticals, solvents, fertilizers, pesticides, and plastics; the 16% not used for energy production is converted into these other materials. Petroleum is found in porous rock formations in the upper strata of some areas of the Earth's crust. There is also petroleum in oil sands (tar sands). Known reserves of petroleum are typically estimated at around 190 km3 (1.2 trillion (short scale) barrels) without oil sands, or 595 km3 (3.74 trillion barrels) with oil sands. Consumption is currently around 84 million barrels (13.4×10^6 m3) per day, or 4.9 km3 per year.
(Figure: octane, a hydrocarbon found in petroleum. Lines are single bonds, black spheres are carbon, white spheres are hydrogen.)
Petroleum is a mixture of a very large number of different hydrocarbons; the most commonly found molecules are alkanes (linear or branched), cycloalkanes, aromatic hydrocarbons, or more complicated chemicals like asphaltenes. Each petroleum variety has a unique mix of molecules, which define its physical and chemical properties, like color and viscosity.
The alkanes, also known as paraffins, are saturated hydrocarbons with straight or branched chains which contain only carbon and hydrogen and have the general formula CnH2n+2. They generally have from 5 to 40 carbon atoms per molecule, although trace amounts of shorter or longer molecules may be present in the mixture.
The alkanes from pentane (C5H12) to octane (C8H18) are refined into gasoline (petrol), the ones from nonane (C9H20) to hexadecane (C16H34) into diesel fuel and kerosene (the primary component of many types of jet fuel), and the ones from hexadecane upwards into fuel oil and lubricating oil. At the heavier end of the range, paraffin wax is an alkane with approximately 25 carbon atoms, while asphalt has 35 and up, although these are usually cracked by modern refineries into more valuable products. The shortest molecules, those with four or fewer carbon atoms, are in a gaseous state at room temperature; they are the petroleum gases. Depending on demand and the cost of recovery, these gases are either flared off, sold as liquefied petroleum gas under pressure, or used to power the refinery's own burners. During the winter, butane (C4H10) is blended into the gasoline pool at high rates, because its high vapour pressure assists with cold starts. Liquefied under a pressure slightly above atmospheric, it is best known for powering cigarette lighters, but it is also a main fuel source for many developing countries. Propane can be liquefied under modest pressure, and is consumed for just about every application relying on petroleum for energy, from cooking to heating to transportation.
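The carbon-number ranges quoted above map roughly onto refinery cuts, as the indicative sketch below shows (real distillation cuts overlap, so the boundaries here are illustrative only):

```python
def alkane_fraction(n_carbons):
    """Indicative refinery cut for a given alkane carbon number."""
    if n_carbons <= 4:
        return "petroleum gas (flared, sold as LPG, or burned on site)"
    if n_carbons <= 8:
        return "gasoline (petrol)"
    if n_carbons <= 16:
        return "diesel fuel and kerosene (jet fuel)"
    return "fuel oil and lubricating oil (wax ~C25, asphalt C35+)"

for n in (3, 6, 12, 30):
    print(n, "->", alkane_fraction(n))
```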
The cycloalkanes, also known as naphthenes, are saturated hydrocarbons which have one or more carbon rings to which hydrogen atoms are attached according to the formula CnH2n. Cycloalkanes have similar properties to alkanes but have higher boiling points.
The aromatic hydrocarbons are unsaturated hydrocarbons which have one or more planar six-carbon rings called benzene rings, to which hydrogen atoms are attached with the formula CnHn. They tend to burn with a sooty flame, and many have a sweet aroma. Some are carcinogenic.
These different molecules are separated by fractional distillation at an oil refinery to produce gasoline, jet fuel, kerosene, and other hydrocarbons. For example, 2,2,4-trimethylpentane (isooctane), widely used in gasoline, has a chemical formula of C8H18 and reacts with oxygen exothermically:

2 C8H18(l) + 25 O2(g) → 16 CO2(g) + 18 H2O(g) + heat
The amount of various molecules in an oil sample can be determined in the laboratory. The molecules are typically extracted in a solvent, then separated in a gas chromatograph, and finally determined with a suitable detector, such as a flame ionization detector or a mass spectrometer.
Incomplete combustion of petroleum or gasoline results in production of toxic byproducts. Too little oxygen results in carbon monoxide. Due to the high temperatures and high pressures involved, exhaust gases from gasoline combustion in car engines usually include nitrogen oxides which are responsible for creation of photochemical smog.
Empirical equations for the thermal properties of petroleum products
Heat of combustion:
At a constant volume the heat of combustion of a petroleum product can be approximated as follows:
Qv = 12,400 − 2,100d²
where Qv is measured in cal/gram and d is the specific gravity at 60°F.
Thermal conductivity:
The thermal conductivity of petroleum-based liquids can be modeled as follows (the correlation is given here in its commonly published form):

K = 0.813/d × [1 − 0.0003 × (t − 32)]

where K is measured in BTU·hr⁻¹·ft⁻²·(°F/in)⁻¹, t is measured in °F and d is the specific gravity at 60°F.
Specific heat:
The specific heat of petroleum oils can be modeled as follows (the Cragoe correlation, in its commonly published form):

c = (0.388 + 0.00045t) / √d

where c is measured in BTU/lbm-°F, t is the temperature in Fahrenheit and d is the specific gravity at 60°F.
In units of kcal/(kg·°C), an equivalent formula, with the temperature t in Celsius and d the specific gravity at 15°C, is obtained by unit conversion.
Latent heat of vaporization
The latent heat of vaporization can be modeled under atmospheric conditions as follows (again in the commonly published form):

L = (110.9 − 0.09t) / d

where L is measured in BTU/lbm, t is measured in °F and d is the specific gravity at 60°F.
In units of kcal/kg, an equivalent formula, with the temperature t in Celsius and d the specific gravity at 15°C, is obtained by unit conversion.
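The correlations above are simple enough to collect into a short script; this is an illustrative sketch only, and the coefficient values, as quoted above, should be checked against the governing standard before any engineering use:

```python
import math

# d: specific gravity at 60 F; t: temperature in degrees Fahrenheit.

def heat_of_combustion(d):
    """Qv in cal/gram, at constant volume (formula quoted above)."""
    return 12400 - 2100 * d ** 2

def thermal_conductivity(d, t):
    """K in BTU/(hr.ft^2.(F/in)), commonly published correlation."""
    return 0.813 / d * (1 - 0.0003 * (t - 32))

def specific_heat(d, t):
    """c in BTU/(lbm.F), Cragoe-style correlation."""
    return (0.388 + 0.00045 * t) / math.sqrt(d)

def latent_heat_vaporization(d, t):
    """L in BTU/lbm at atmospheric pressure, commonly published form."""
    return (110.9 - 0.09 * t) / d

d, t = 0.85, 60.0  # illustrative mid-range crude at 60 F
print(heat_of_combustion(d))           # ~10883 cal/g
print(thermal_conductivity(d, t))      # ~0.95
print(specific_heat(d, t))             # ~0.45 BTU/(lbm.F)
print(latent_heat_vaporization(d, t))  # ~124 BTU/lbm
```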
These are the most important processes from a response point of view:
Oil that is lighter than water spreads and moves, forming slicks on the surface of streams, rivers and ponds at rates influenced by gravity, surface tension, viscosity, pour point, winds and currents.
The temperature is another crucial variable to control spreading due to the dependency that viscosity has on temperature. One should note that crude oils vary widely in composition and their behaviour on the ocean also varies. Even viscous crude oils can spread quickly in thin layers. The action of the currents and wind spreads and breaks the slicks into mobile portions of oil that will have the largest amounts (thicker) near their leading edges.
Both wind and current affect the movement of the portions in the water. The effect of the currents is 100% in rivers, while that of the wind is around 3% of the wind speed. The effect of the wind is little felt in rivers, contrary to what happens in a pond where the wind is the predominant element in oil displacement.
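The drift rule implied above (the full current plus roughly 3% of the wind speed) can be sketched as follows; the velocity values in the example are purely illustrative:

```python
def slick_drift(current_ms, wind_ms, wind_factor=0.03):
    """Slick drift = full water current + ~3% of the wind velocity."""
    cx, cy = current_ms
    wx, wy = wind_ms
    return (cx + wind_factor * wx, cy + wind_factor * wy)

# Illustrative values: 0.5 m/s downstream current, 10 m/s cross-wind.
vx, vy = slick_drift((0.5, 0.0), (0.0, 10.0))
print(f"drift = ({vx:.2f}, {vy:.2f}) m/s")   # (0.50, 0.30)
```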
Evaporation is substantially important because of the high percentage of volatile components in most crude oils, and the consequent loss of these oil volatiles in rivers and ponds. Such evaporation occurs quickly and is physically related to the process of dissolution, which is promoted by spreading at high water temperatures and by fast-moving rivers (which generate water spray and bubbles that pop and eject the oil into the atmosphere). Studies have demonstrated that up to 50% of crude oil can be lost to evaporation, usually within 24 to 48 hours. This compares to only 10% of heavy or waste fuel oil, 75% of diesel and eventually 100% of kerosene or gasoline.
The formation of water-in-oil emulsions (in contrast to oil dispersions in water, which are dealt with below) leads to many difficulties. The tendency to form emulsions, which are persistent and thick and stick together as masses often called "chocolate mousses", depends on the type of oil involved, but it is caused by turbulent conditions in rivers. Under adequate conditions, emulsions containing up to 80% water can form quickly. Their formation adds to the difficulty of cleaning, both on the river margins and in inaccessible areas, increasing the volume and the viscosity of the material to be removed and, because of that, the difficulty in treating and disposing of the oil.
It has been postulated by some that the processes of decomposition can convert emulsions into tar slicks and particles that can persist in specific environments for long periods, travel long distances, and be released into river mouths. Most tar balls are dispersed and decompose during the river's journey to the sea, but those that reach deltas and estuaries can degrade more slowly. Studies on the biological effects of tar on life in temperate areas have shown that such effects probably do not represent an ecological threat, although the aesthetic and economic consequences of tar that runs aground on river margins can be serious, apart from hampering water collection.
Depending on the type of crude oil involved, the spontaneous formation of small droplets of oil in the water can occur quickly, due to the action of winds. The temperature of the river and other factors contribute to this process. Natural dispersion can be useful to mitigate the effects of spilled oil, dissipating the oil and thus reducing its toxicity for aquatic life.
The gradual and spontaneous disappearance of crude oil that has been spilled is aided by dispersion processes. The small particles (globules) of dispersed oil are more easily biodegraded by micro-organisms than a continuous slick, because of their larger total area of contact. These droplets lose their volatile and more toxic soluble components more quickly than continuous and larger oil slicks, and are quickly dissipated by the action of currents.
The dissolution of dispersed oil often prevents the oil from travelling with the surface slicks, thus reducing the likelihood of it reaching farther areas. Dispersion also reduces bird contamination hazards.
The finest example of this process was the MV Braer, Shetland Islands, UK, 1993, as mentioned in the Major Spills section.
Although the solubility of most hydrocarbons in water is substantially low, some crude oil components, notably low-boiling-point light aromatic hydrocarbons, are soluble enough to quickly penetrate the water following an oil spill. The dissolution rate depends on factors such as water turbulence and temperature.
These more soluble fractions are also the more volatile oil components and, for this reason, they tend to evaporate instead of transferring to the water phase. This has been confirmed by analytical measurements of the concentrations of dissolved hydrocarbons that remain below or near the location of the spill. Dissolution is not as important as other processes, such as evaporation, in determining the weathering of spilled oil.
Sedimentation is the process by which oil particles reach the bottom of rivers and ponds. For this to happen, the oil particles, which are less dense than the water, must be modified through evaporation of lighter components and, more importantly, through the incorporation of particulate material present in the water column, which causes them to become denser than the water. Due to the high levels of particulate material in rivers with eroding banks, sedimentation should occur in these environments.
This process becomes more important in areas near estuaries and mangroves, where suspended sediment can be found. Some types of oil have a density higher than 1 (Group V) and unavoidably, in the event of an oil spill, will sink to the bottom.
Oil, when subjected to sunlight on water and on land surfaces, undergoes chemical changes, usually photochemical oxidation. These changes degrade some oil components, making them more soluble in water and subject to dissipation through dissolution and dilution. The rates of photochemical oxidation are highest at the water's surface or in exposed or physically stranded oil.
Water and sediments in the whole world contain micro-organisms (bacteria, yeast and fungi) that use and degrade oil components. A very high number of microorganism species that can degrade oil have been identified in rivers and on land.
Biodegradation is the most important process in determining the final destination of oil in the environment, in spite of not immediately reducing the volume of oil or its impact on the environment after the spill. Biodegradation is promoted by the dispersion of oil slicks into small particles over a large surface area; this applies when the dispersion occurs naturally. It is interesting to note that biodegradation in turn increases the rate of natural oil dispersion.

For biodegradation to happen at reasonable rates, nutrients such as nitrogen, phosphorus and potassium (NPK) must be present. Thus, biodegradation happens more quickly in eutrophic waters (which contain much more of these nutrients).
Most crude oil components can be degraded with the aid of micro-organisms, and lower-molecular-weight light components are degraded more quickly than the heavier ones. Higher temperatures speed up biodegradation, but it still occurs at significant rates even in arctic regions.
“They came up very suddenly out of the seafloor: There were seven of them. The largest we called Il Duomo, and it is about the size of two football fields side by side and as tall as a six-story building,” said David Valentine, an earth scientist at the University of California, Santa Barbara.
“Nobody knew what the domes were made of,” said Chris Reddy, a marine chemist at Woods Hole Oceanographic Institution.
One of Valentine’s colleagues, Ed Keller, had spotted them, and in 2006 he suggested some possibilities. Large deposits of carbonate rock? Mud volcanoes created by mammoth burps of subsea natural gas? Or, most intriguing to Valentine and Reddy, perhaps they were remnants of oil that had erupted from the seafloor, hardened, and piled high to form something never seen before—volcanoes made naturally out of the same material that people use to pave roads: asphalt.
The WHOI undersea vehicle Sentry collected sonar data to create this map of the undersea asphalt mound called Il Duomo, the largest of seven similar domes in the Santa Barbara Channel. It covers twice the area of a football field and rises 30 meters, or six stories, above the seafloor. The scale at right is in meters below the sea surface.
(ABE/Sentry Group, Woods Hole Oceanographic Institution)
In 2007, off Santa Barbara aboard the research vessel Atlantis, Valentine and Reddy seized an opportunity to use the submersible Alvin to investigate the mysterious mounds. They reported what they found April 25, 2010, in the journal Nature Geoscience. We interviewed the two scientists on a bi-coastal conference call.

(Photo: In 2007, UCSB scientist Dave Valentine (right) and WHOI scientist Chris Reddy investigated the largest mound in the submersible Alvin. Using Alvin's manipulator they brought back a large sample of rock from the undersea dome called Il Duomo. They could heft it easily because it was made of asphalt, the solidified residue of oil.)
The area around Santa Barbara is very geologically active, because of the movement of the San Andreas and other faults. Extensive faulting or rupturing in the Earth allows oil and gas from subterranean reservoirs to seep up to the seafloor and ultimately into the ocean and to the atmosphere. But some oil solidifies to create asphalt volcanoes.
(Jack Cook, Woods Hole Oceanographic Institution)
One morning in 1817, in what is now southern Manitoba, representatives of the British Crown prepared to meet with members of the Cree and Chippewa Nations. The intention was treaty making; the object was land. The British authorities told the Cree and Chippewa they needed land for settlement. The Indians asked, how much land? The phrase used in the reply was, "As far as one can see daylight under the belly of a horse". This poetic description was an attempt to bridge the cultural differences between the two parties to the treaty. But the final result was only vagueness. Treaties such as the Selkirk Treaty were a regular feature of the British colonization of what is now Canada.
In the 1600s and 1700s, the soldiers and sailors of France and England attempted to destroy each other in a long series of wars for international supremacy. Their battlegrounds included the territories of New France and New England, where settlers, with the help of the Indians, were by then trapping, trading, clearing farms and building towns. When the wars reached the New World, settlers on both sides of the conflict quickly affirmed their friendship with the Indians in an attempt to secure fighting allies or at least guarantee Indian neutrality. The British formalized these guarantees by writing them down in Agreements of Peace and Friendship. From the earliest days of European exploration to the late 1700s, eight of these agreements were drawn up and signed by both parties, as European legal tradition dictated.
The last of these wars between France and England raged for 7 years and changed the face of North America forever. The spectacular fortress of Louisbourg fell in 1758. Quebec, the heart of New France, fell the year after. Any French hope for control of the New World was dashed. At the end of the war, King George III of England issued an important directive on Indian rights. Now called simply the Royal Proclamation of 1763, this document played such a central role in the definition of Indian rights that it is sometimes called the Indian Magna Carta. It confirmed that a vast area in the interior of North America was Indian country and would be preserved as hunting grounds for the Indians. The Eastern boundary was formed by the Appalachian mountains, but the Western boundary was left undefined. King George ordered that no one could use these lands without the public permission of the Indians themselves. And only the Crown or its authorized representatives, he said, could actually acquire the land, if indeed the Indians were willing to part with it. And so, in a single brief document, a British monarch had laid out the basic formula for treaty negotiation in Canada. From this point on, the British Crown would be the central agent in the transfer of Indian lands to colonial settlers. And land was something that settlers would be looking for plenty of.
During the American Revolutionary war, at least 40,000 American settlers remained loyal to the British Crown. When Britain finally surrendered the American colonies to local forces in 1781, these loyalists, suddenly traitors in their own land, had to flee. They came North. And they came looking for land, but under the provisions of the Royal Proclamation, no land could be given to anyone before Indian title was cleared. So the government machinery for treaty making quickly got underway.
The Royal Proclamation said that the Indians had rights in the land; that individuals could not come up and make deals with the Indians for the land; that the land could only be alienated to the Crown at a general meeting held for that purpose. And what followed was a series of land surrenders, or land surrender treaties if you like, which were the means whereby Indians gave up their land for settlement purposes or whatever else the Crown wanted to do with the land.
There were a total of 31 Indian treaties signed before Confederation in an attempt to secure rights to what was then called Upper Canada. The colonials recorded their understanding of treaty provisions in writing. The Indians recorded their understanding in stories: memories of promises made in a sacred time. Some tribes, particularly those in the East, embedded their vision of the treaty in wampum, precious beads, which themselves took on a sacred character. No matter how they were remembered, most of these treaties were drawn up and signed in a hurry, sometimes in only days, and there were often tremendous weaknesses in them. A famous example of sloppy staff work was the gunshot clause of a treaty prepared in September 1787. The Crown was attempting in one manoeuvre to acquire all the lands North of Lake Ontario between present day Kingston and Toronto. The Indians were prepared to share their lands, but they wanted to know exactly which lands were involved. The area in question was described to them in this way: "from the lakeshore as far inland as a gunshot could be heard on a clear day." But whose gun, whose ears, and what season, winter or summer? The terms were too vague and the Indians declined to deal further. The gunshot treaty fell apart. But most treaties didn't.
In the 1840's, surveyors and excavators found rich deposits of iron, nickel and copper on the shores of Lake Superior. Access to these minerals meant wealth, and local mining companies appealed to the Crown for support. So the government sent in William B. Robinson and instructed him to extinguish all the Indian titles to the land.
Why were treaties struck in the first place? Why were treaties made in the first place? They were made for the simple reason that the Crown, the sovereign, recognized that they did not, did not have title to the land. That the title remained with the Indian nations of the...of Canada. That's who had the title. Now, in order for the Crown or the sovereign to settle the land, they had to have access to this title, they had to gain title to the land. And the only way they could do that was through treaties.
In 1850, Robinson quickly concluded 2 treaties with the Ojibway, one near Lake Huron, the other near Lake Superior. The Robinson Treaties were the forerunners of all treaties for the next century. They contained provisions for annual payments to Indian bands, freedom to hunt and fish on unused lands, and lands reserved for the special use of the Indians only.
Confederation: a nation is born, a young nation among many older nations, and a new nation with a voracious appetite for real estate. The Americans were stretching West as well and quite possibly would look North in their search for fertile farm land. The Canadian Prairies were a great temptation. And so the race began. Under the authority of the new federal government of Canada, treaties were signed clearing title to the lands between Ontario and British Columbia. All the treaties after Confederation were numbered: One, at Stone Fort; Two, at Manitoba Post; Three, at the northwest angle of Lake of the Woods; Four, at Qu'Appelle in present day Saskatchewan; Five, at Lake Winnipeg; Six, in the Cree territory stretching across Saskatchewan and Alberta; and Seven, with the Blackfoot in Alberta. With these seven treaties, the gap between British Columbia and the rest of Canada was closed. All possibility of American encroachment from the South had been cut off. The signing of the numbered treaties had been a great administrative victory for the young Canada. Unlike the Americans, who had butchered their Native people in a series of bloody and costly Indian wars, the Canadians had accomplished their settlement of the West diplomatically, peacefully, fairly...or was it fair?
Back in the 1830s and '40s, buffalo had roamed the prairies unhindered, 50 or 60 million by one count. For the Indians of the plains, these huge beasts had provided food to eat, clothing to wear, shelter from the weather, and medicine for their ills. For thousands of years, the buffalo was a way of life. But when the whites came, the buffalo became a business, and the end was inevitable. And the end came fast. By 1880, for all practical purposes, the buffalo was extinct.
The Indians knew that there were far more of these people further east and that they were coming west. They knew that the cards were ultimately stacked against them in terms of sheer power. So in that sense, you could say that the treaty was forced on that group of people. But the idea of treaties was something that Indians themselves wanted. Why, you might ask, would they want treaties if they understood them as giving up their land? Well, first of all, they didn't understand them as giving up their land, and secondly, they saw the treaty as a mechanism to protect them for the future.
Starvation and disease quickly took the people of the plains from proud self-sufficiency to a grim dependency on the will and whim of the white man, and signing a treaty was the quickest way to get help.
The treaties were basically signed because they were forced to sign; they had to sign. A lot of the time, they didn't want to sign the treaty, they didn't want to agree to the treaty. But because of starvation, because of disease, the people were dying, our Indian people were suffering, and they felt that by signing the treaties, by getting these annuities in place, they could buy food, they could buy blankets from the Hudson's Bay Company. They could get the things they needed to look after the people, to look after their band.
It was assumed that with the buffalo gone, Indians would settle down and begin farming. So a common feature of treaties was the setting aside of lands known as reserves for the exclusive use of the Indian people. The size of each reserve was proportional to band population. But whether the population should be counted at the time of the signing or at a later date when the Indians might choose to begin farming was a subject of great debate.
Indian people couldn't leave the reserves until the 1950s. So in effect, they were captives in their own land. They were prisoners in their own land: they had to get a permit to leave their own reserve boundaries, and they had to get that from the Indian agent. But the initial intent of the reserves was that they would be a place where the people would be protected, where their way of life could go on.
Now, seventy years after the last treaty was signed, a lingering issue is whether the size of a reserve should have been pegged to a single census, as it was, or changed to meet the growing needs of a band as its population increased. After land, a major concern within treaties was education. It was well known to the Indians that the only replacement for the buffalo would be schooling. They asked for schools and teachers. They asked also for medicine. The horrors of epidemic tuberculosis and smallpox are almost unimaginable today. With no natural immunities, Native bands were easy prey for disease. Sometimes smallpox would carry away half a band in a single season. These diseases had all been brought to the Indians by the whites.
At the time Treaty 6 was signed, the famous medicine chest clause was inserted at Indian insistence: the Indian agent was to keep a medicine chest at his house for the use of the people. Today, Indian thinking is that this means medical care in general terms; the medicine chest may have been all that was available at that time and place, but today there is a broader range of medical care, and the clause is a symbol for that.
The interpretation of that clause is very different for federal government employees or bureaucrats than for the Indian leadership, because to us, and to our elders and leaders who negotiated and signed that treaty, it refers to health care and health benefits for our people. Our traditional way of healing is still present and alive, but we recognized that we would need that assistance.
With the disappearance of traditional sources of food and shelter, many treaties offered annuities: annual payments of 3, 4, or 5 dollars a head, which might buy blankets, tools and provisions to help the Indian people survive the winters. A continual demand of Native nations was that the sale of liquor on or around reserves be utterly prohibited. The advent of whisky had been almost as destructive as the loss of the buffalo, and unscrupulous white traders took full advantage of Natives. Even at treaty signings, whisky traders hovered nearby to quickly separate the Indian from his treaty payment.
Once medicine and emergency assistance were promised, it wasn't long before agreement was reached. Treaties 1 to 7 were concluded within six short years, but clearly the bargaining positions had hardly been equal.
For 22 years, no new treaty activity occurred, for no new land was needed. But in the 1890s business boomed once again. And now the business was gold, the stuff of dreams: pluck it from the earth and retire a millionaire. Prospectors stumbled on gold in the Klondike in the 1890s, and the rush was on. The Indian title had to be dealt with, of course, and access to the Yukon fields could only be gained through treaty. Treaty 8, clearing title along the Yukon access route between Edmonton and the gold fields, was negotiated with the Cree, Beaver and Chipewyan. These groups were gravely concerned that once they signed a treaty, they would be expected to act like white men. They didn't want to pay the white man's taxes nor fight against the white man's enemies. The treaty commissioner, David Laird, assured them this would never be the case: "We assured them that the treaty would not lead to any forced interference with their mode of life, that it did not open the way to any imposition of tax, and that there was no fear of military service." This verbal promise, duly reported by the chief commissioner to Ottawa, does not appear in the treaty itself, but the Indians who signed were confident they need no longer fear either taxation or conscription.
In order to get the Indian people to agree to sign the treaty, to sign their "x" on the dotted line, so to speak, they were given the assurance "you would not be subject to any imposition of any tax nor enforced military service"; that's in the treaty commissioner's report to Ottawa. Now it's written down there, and it's talked about verbally through all the numbered treaties; the elders will make reference to it. That was one of the promises: no tax, no enforced military service. We will not have to do that, we will not have to pay that, we will not have to endure that. But then along comes the GST, a federal tax on everybody including treaty Indian people, which goes against our understanding of the treaty, its spirit and intent, and our understanding of the treaty has been breached again.
The year was 1920. Wealth that Yukon gold diggers had only dreamed of came spurting from the ground in the Northwest Territories. The discovery was oil. As always, rights to the land had to be acquired before the resource could be tapped. Treaty 11, in 1921, cleared title to the oil-rich territories from the bands more recently known as the Dene Nation. Or at least that's what the government thought. Half a century later, the Dene were able to successfully argue before a Canadian court that Treaty 11 was essentially an accord of friendship and peace and not of land surrender.
Given the different expectations of the signatories in treaty negotiations, the different approaches, cultures, needs, and even different mechanisms for recording what was agreed, it is not surprising that the terms of Canada's Indian treaties have been the subject of continuing debate.
Now, while the written treaty talks about yielding, ceding, giving up, surrendering the land, this does not seem to be the understanding that Indians at the time had. For example, in northwestern Ontario, the Indians there told the commissioners: tell us where you want your roads to run, tell us what pieces of land you want, and we'll make those arrangements. They were not talking about yielding or surrendering huge tracts of territory and then being given back reserves. They were telling the commissioners the reverse: tell us what lands you require and we'll arrange to give that to you.
Our understanding of what was surrendered was this: the topsoil, because we recognized as Indian people that the Europeans wanted to farm it. That's part of the negotiations, that's part of the agreements; it's not in the treaty, but it was talked about and it was verbally agreed to. So when we talk about the sharing of the land, when we talk about what was surrendered, that's all that was surrendered: the right to come and use the topsoil, to farm and settle it.
The relationship between the First Nations and Canada as a new nation has been defined in part through some seventy agreements and treaties concluded up to 1923. Every single one of these treaties is still in effect; not one has expired. Because they are living documents, they will probably always be the subject of debate and interpretation. But no matter what the controversy, they will continue to serve as a fundamental statement about the way we relate as cohabitants, sharing in a resource of land and riches that the Native people of Canada have always regarded as precious and worthy of respect.
The important thing that Canadians need to know about Indian treaties is that they form an obligation of honour on the part of all of us: to attempt to understand what it is that Indian people understand about these treaties, what it is they expect of us, and what it is that we should be doing to try to fulfil those obligations that were made for us many years ago.
We as Aboriginal First Nations do have a special relationship with the Crown; we do indeed have treaty rights, and they are here as long as the sun shines, the rivers flow and the grass grows. They will not be terminated; there is no end to them; they are here forever. And people have to realize and understand that, and what those rights are. The more awareness and education about them that can be spread, the more beneficial it is for both Indian and non-Indian people, so that we can peacefully coexist in this country and share its resources together.
Our treaties deserve constant study and review, for like any bond between people, they are only helpful as they reflect the ongoing realities which people face. Situations change; our response to those situations must also change. Perhaps by studying the treaties of the past, we can better understand why problems arose and work together towards more effective, compassionate, and realistic agreements in the future.
The history of the Carpenters Union is a story of a group of workers coming together against tremendous odds to create one of the most powerful trade unions America has seen. This story is about resilience, resistance, action, and struggle. Ultimately it is about demanding a better future and fighting for it against all odds.
The history of the Carpenters Union does not exist in a vacuum, but is part of a general history of working-class struggle. Carpenters have been one of the most skillful and critical components of every society. More importantly, however, the carpenter has led the fight for fairness, equality and justice for all workers. It is up to each and every one of us to discover our shared past and spread it in our workplaces and at our dinner tables. For all that divides us as individuals, this history unites us as brothers and sisters.
It is said that the idea of throwing tea into the ocean at Boston Harbor, the event known as the 1773 Boston Tea Party, was first conceived at a meeting of ship carpenters and caulkers intent on protesting British oppression. This would not be the first, or last, contribution that carpenters would make for freedom and democracy.
The Revolutionary War was fought and won by labor, both un-free and free. There is little doubt that among these fighters were skilled carpenters willing to risk their lives to be freed from oppression.
The contribution of women during the Revolutionary War is frequently passed over in our textbooks. The fact is that when men were away at war, the colonies did not stop building infrastructure, and women served in critical roles as carpenters, shipbuilders and blacksmiths, among others.
The Revolutionary War ended in 1783 and independence was won for the American colonies. Among the different freedoms that would now govern America were the freedom of speech and the freedom to assemble. These two freedoms built the labor movement years later. Democracy was established as the political ideology of the land. It promised representation for every American in their government and in their lives. It is important to note that although the framers of the new constitution promised the people a democracy, only white men with property were given full democratic rights. In fact most workers, regardless of gender or race, were sadly denied the right to participate in the new democratic government for many years to come.
Although America was now freed from British dominance, slavery grew in the new nation and workers suffered through more decades of exploitation and abuse. Workers did not have any representation with their employers as the new factory system abused thousands of Americans in the workplace. The spirit of the revolution and the desire for freedom would not leave the working class of America. Unions soon became the voice and the representation of wage earners and craft workers.
For the most part, the Carpenters Company of Philadelphia benefited master carpenters (employers) more than it did journeymen carpenters and apprentices. The fact that master carpenters kept the book of prices secret leads us to believe that they were fearful their journeymen would revolt if they knew how much profit the masters enjoyed. Certainly the Carpenters Company was good for the trade, as it set uniform prices and created some stability for the carpentry industry, but the majority of carpenters were journeymen, and it is difficult to say they were protected from their master carpenters.
It was not until 1791 that we see a union of journeymen carpenters in America. The Union Society of Carpenters was created in Philadelphia to combat the master carpenters' control of the trade. The journeymen demanded a uniform 12-hour workday, from 6 AM to 6 PM, and an end to piecework in the wintertime. When the masters refused to grant the journeymen their demands, the Union Society of Carpenters went on strike. In order to apply more pressure on the master carpenters, the Union went directly to owners and offered to work for 25% below the masters' prices while promising quality work. Although the strike failed, this group of journeymen carpenters took an invaluable step toward building unions made up of workers in the carpentry trade.
Unionism, in its current form, entered American society and the economy in the early 1800s, primarily in port cities like Boston, Philadelphia and New York City. The economy was young, brutal and very unstable, so workers joined together to try to curb the competition that was driving wages and working conditions ever downward. Workers in the North weren't the only ones organizing. There were free workers in the South, both black and white, and many of them were skilled craftsmen. The southern economy, however, was largely based on un-free labor, especially slave labor, and organizing into unions, at least traditional ones, was nearly impossible.
A northern carpenter's life in the 1800s was less than respectable, even if his or her trade was invaluable. The workweek was seven days long and the workday was sunup to sundown. A carpenters' union in Philadelphia even struck for a 10-hour day.
Early carpenter unions usually failed because of frequent depressions and employer tactics. When the economy was strong and work was plentiful, carpenters quickly formed unions to try and protect themselves from exploitation and brutal working conditions. When the economy fell into a depression, however, unions disbanded because workers were desperate and competition between them placed everybody at the employer’s mercy. Employers would destroy early unions even in good economic times by importing un-free and non-union workers to drive down area wages and thus create an economic panic.
Although local carpenters were forming unions as early as 1791 and some locals were forming state federations as early as 1863, carpenter unions were not able to survive economic panics and employer tactics often enough to build real power. The lack of a national organization and strong central leadership meant that employers were always able to import workers and use competition to break early unions. Quite frankly, there were not enough union carpenters to control the industry for workers, and without an increase in union membership, carpenters would always struggle to protect even minimal gains.
Peter J. McGuire (PJ) was born in New York City in 1852 to poor Catholic Irish parents. His father, a porter and a staunch Catholic, signed up to fight the Confederacy in 1862. PJ was forced to quit school at eleven years old to help the family survive.
Although PJ was immersed in work and the general struggle of trying to keep the family fed, he made an effort to enroll at Cooper Union in New York City and take classes. It was there that he met a young Samuel Gompers. Their relationship blossomed over the following years, and together these two idealists would give hope to millions of workers.
In 1873 McGuire was only 21 years old, but he was already a fiery agitator for workers' issues and an equal society. That year New York City found itself in yet another depression and many New Yorkers quickly fell into poverty. McGuire felt the government should protect the unemployed from starvation. He led a committee of like-minded students to ask the City for a permit to demonstrate for the cause. When the City ultimately refused to grant the permit, McGuire led a sit-in at police headquarters. Legend has it that McGuire's father told the police captain his son was a socialist and should be arrested, an event that traumatized a young McGuire, but the sit-in continued. The day of the rally, January 13th, close to 10,000 people assembled at Tompkins Square in New York. The police came too, and they broke up the assembly with violence and mass arrests. Sam Gompers wrote of the brutality the police brought to the Tompkins Square protest. McGuire's education, both read and lived, made him a young and radical socialist. He was active in socialist circles throughout New York City and in 1874 he helped form the Socialist Labor Party. He traveled America starting branches of his new political party, and although it grew, it never gained national power. McGuire also confronted the issue of the constant economic depressions in the American economy.
McGuire traveled the country working as a cabinetmaker and agitating workers into action. In 1877 he moved to Saint Louis, Missouri. It was here in Saint Louis that McGuire found the obsession that would lead carpenters to countless marches and fights—the eight-hour day. In 1879 he helped organize Saint Louis workers in an eight-hour day parade. In that same year McGuire spoke in front of 20,000 workers at yet another rally for a shorter workday. He became active in a newly formed carpenter local and led them on a successful strike for an increase in wages. His quick rise as a labor leader in Missouri led him to the highest labor seat in the state when he was elected secretary of the Saint Louis Trades Assembly. His ability to organize workers into unions and lead them to win their demands convinced McGuire he could build a national Carpenters union.
In April of 1881, Peter J. McGuire called for a national carpenters union in an editorial that appeared in the first issue of The Carpenter magazine.
The scene was Chicago, an emerging union city in its own right, where thirty-six delegates from eleven cities, representing some 2,042 carpenters, came together to form the United Brotherhood of Carpenters. Ten resolutions followed the first convention, and they give us insight into the issues most important to working carpenters in the United States of the day.
McGuire and the 36 delegates who followed his call made sure that democracy was a cornerstone of their new union. Offices were set up for the UBC and elections were held immediately. There could be no Carpenters union without a democracy that supported it.
The Chicago convention came to a close, but not before these delegates succeeded in creating the United Brotherhood of Carpenters, now the official trade union of working carpenters wherever they may be. It was truly a remarkable convention for all working people in America.
PJ McGuire became a national labor leader because he was an expert agitator, but agitation without action does nothing. McGuire was a firm believer in protest and taking action to win demands. Nothing exemplifies this better than his call for Labor Day, one that has been honored since 1882 and will live forever as the idea of PJ McGuire. In 1882 McGuire wrote in The Carpenter,
"It is now suggested that the first Tuesday in September shall be the labor holiday of New York and be celebrated every year by a parade and picnic. It is also proposed that this day should be likewise observed throughout the country, that labor by its own will should establish its own universal holiday...the ruling classes have their own decoration days and Thanksgiving; why should not labor declare its own holiday?"
The Labor Day march in early September has taken place since 1882. Workers marched in their streets for better wages, better working conditions and, most of all, respect from their employers. In 1894 the U.S. Congress voted to make Labor Day, a day for workers, an official holiday.
The period after the Civil War (1878-1890) is frequently called the Gilded Age in American history. It can be defined as a time period of great wealth for owners and great poverty for workers. In 1890, for example, the wealthiest 1% of Americans earned more than the bottom 50%. By any standard this was a time of incredible inequality.
Monopolists and industry owners were busy accumulating wealth and destroying workers' unions. Many felt their wealth was what God wanted. Andrew Carnegie, a steel tycoon who bought police to kill striking workers in Homestead, Pennsylvania, wrote "Individualism, Private Property, the Law of Accumulation of Wealth, and the Law of Competition were the highest result of human experience." John D. Rockefeller, an energy tycoon whose company, Colorado Fuel and Iron, brought in the National Guard to kill 66 striking miners in 1913, told Sunday School students, "The growth of a large business is merely a survival of the fittest…This is not an evil tendency in business…merely the working-out of a law of nature and a law of God." The attitude of monopoly employers could best be summarized by a quote from the wealthy Frederick Townsend Martin. He said:
"It matters not one iota what political party is in power or what President holds the reins of office. We are not politicians or public thinkers; we are the rich; we own America; we got it; God knows how, but we intend to keep it if we can by throwing all the tremendous weight of our support, our influence, our money, our political connections, our purchased senators, our hungry congressmen, our public-speaking demagogues into the scale against any legislature, any political platform, any presidential campaign that threatens the integrity of our estate...The class I represent cares nothing for politics."
The UBC, although struggling through even more depressions and a wicked corporate environment, was growing. The Carpenters started with a little more than 2,000 members in 1881, and by 1890 that number had jumped to more than 50,000. Wages for all members had increased, and although it is true that great inequality between the rich and the working class was spreading, carpenters were fighting to get their fair share.
In December of 1886 the American Federation of Labor was formed under the leadership of PJ McGuire and his Cooper Union classmate, Samuel Gompers, a cigar maker. Workers were forming unions in every trade and every industry throughout the 1800s. History is rich with stories of women striking in factories in Massachusetts, miners hanged in Pennsylvania for forming unions, and workers in the South forming unions and political parties of their own. Much like the carpenters, these unions needed a national body to bring them together and make them stronger. For the skilled craft workers, the AFL was the answer.
Without the support of the Carpenters union and the leadership of PJ McGuire, it is doubtful that the AFL would have gained the prominence it did in America. PJ McGuire was offered a chairmanship in the federation, and when he declined because he said he was too busy, they essentially forced him to take it. The Carpenters in the 1880’s were the strongest labor union in America and without their participation and leadership, the young AFL probably would have floundered quickly. PJ McGuire, the UBC’s leader and its greatest organizer for many years, also served as the Federation’s Vice-President for the rest of his career as a trade unionist.
It is important to note that other labor organizations were forming at this time. The Knights of Labor, for example, was similar to the AFL, but advocated organizing industrial workers as well as craft workers. For many factory workers and workers labeled as unskilled, the Knights of Labor was the only hope for unionization and a brighter future.
PJ McGuire's obsession with the eight-hour day, at a time when it was not unusual for wage earners to work up to twelve hours a day, grew stronger throughout the early years of the Brotherhood. The labor movement as a whole, which at the time was dominated by the AFL and the Knights of Labor, adopted this idea as a major goal for all working people. Needless to say, employers across the country were adamantly opposed to giving their workers a shorter day and fought it with everything they had.
The labor movement organized one of the greatest simultaneous strikes the nation had seen to date on May 1st, 1886. It is estimated that over 350,000 Americans struck their employers on May Day. One of the larger strikes took place in Chicago where over 65,000 workers took to the streets.
A few thousand workers and protesters met at Haymarket Square in Chicago on May 4th, three days after the citywide strike for an eight-hour day. At around 10:00 PM the crowd was already dispersing when a bomb went off that killed a police officer and wounded thirty-six others, seven of whom later died from their wounds. To this day no one knows who set off the bomb.
The bombing of Haymarket Square quickly turned into the “crime of the century” and although evidence was lacking, Chicago authorities were looking at labor as the culprit. For the next three weeks after the bombing, union meetings and labor leaders were constantly harassed and all were considered suspects. Finally, on May 27th, Chicago indicted eight labor leaders and organizers of the eight-hour strike earlier in the month.
The jury was full of Chicago industry leaders and it is little wonder that all eight defendants were found guilty of conspiracy to commit murder. Seven of the eight were condemned to death.
One of them, August Spies, knowing he would face the gallows, addressed the judge:
"If you think by hanging us you can stamp out the labor movement...the movement from which the down-trodden millions, the millions who toil in want and misery, expect salvation -if this is your opinion, then hang us! Here you will tread upon spark, but there and there, behind you and in front of you, and everywhere, flames blaze up. It is a subterranean fire. You cannot put it out..."
Supporters protested their conviction and condemnation from as far away as Eastern Europe, but nothing could stop the corporate government from killing these leaders. They paid the ultimate price for leading a struggle for justice. On November 11th, 1887, four of these leaders faced the gallows and were hanged. It is estimated that 25,000 people marched at their funeral.
By 1890 very few workers enjoyed an eight-hour day as employers, with government on their side, rolled back any early victories. The Haymarket Square example was meant to scare labor and its leaders from further agitation, but no one retreated from the fight.
In 1890 Sam Gompers, President of the AFL, asked the Carpenters to once again lead the struggle for an eight-hour day. The Carpenters, under McGuire's leadership, accepted the challenge. On March 1st Carpenter locals everywhere went on strike. By May there were approximately 141 strikes, involving 208 locals and 53,000 members. McGuire addressed his membership later that year and reported that the eight-hour day had been won in thirty-six cities. In 234 cities the hours of labor were reduced to nine. This tremendous victory changed the lives of thousands of carpenters and set a precedent for all workers. Although there would be some employer fight-back campaigns, the normal hours for many working carpenters and many wage earners remained at eight after those strikes.
The penalty for leading the working-class movement was severe during the Gilded Age, and often included imprisonment or even death. Industry power holders were not willing to let workers decide their fates and did unconscionable things to stop the labor movement. Still, early leaders including McGuire were not afraid. They fought against tremendous odds, and although some paid with their lives, they persisted. Ultimately workers won some critical struggles that sent a message to everyone in the nation: labor was on the march, and it could win.
The Carpenters union started its campaign for justice in 1881 with a little more than 2,000 members. At the turn of the century, 1900, that number had risen to a little over 68,000 members. Just a decade later, 1910, the UBC had a membership of over 200,000 carpenters and growing. The UBC’s growth and power can be attributed to both its aggressive organizing approach and its dominance in the labor movement.
For the Carpenters union, and the labor movement in general, the 20th century introduced one crisis after another for workers. The UBC would have to endure two world wars, numerous others as well, a Great Depression that shook the very foundation of the economy, judicial attacks on unions, full blown government attacks on labor, Congressional legislation meant to handcuff unions, military intervention during strikes, employer associations dedicated to crushing unions and countless other factors that could have weakened the UBC in a changing world. In order to survive the UBC had to change with society at times yet, at others, fight off change itself. It was not easy, but the UBC ended the twentieth century with over half a million members and leading wages, benefits and working conditions for carpenters in America.
The United Brotherhood of Carpenters has always had a special relationship with the AFL and other labor unions. As one of the largest and most powerful trade unions in America, the UBC has led the AFL into many struggles, both victorious and not. Many UBC leaders, in fact, have sat on AFL executive boards or held chairmanships in the Federation for long periods of time. Still, the Carpenters' dedication to its members and its jurisdictions has led to some conflicts throughout history.
The UBC was never afraid to voice its opposition if the direction of the labor movement, or some unions in it, would not benefit carpenters. When friction occurred within the labor movement the Carpenters would always make many attempts to come to an agreement, but if none could be made, the UBC would not hesitate to take off in another direction.
There were numerous times in the 20th century when the Carpenters, or some of its locals, decided to leave the AFL or the Building Trades. Frequently the split was heated and passionate on both sides. On many occasions the dispute was over Carpenter jurisdiction, something the UBC was always determined to protect at all costs. The Carpenters even started their own Building Trades Councils, composed of like-minded unions, that were sometimes in competition with the original councils in the area. Each time, however, the Carpenters rejoined the AFL or Building Trades after much discussion and thought.
The emergence of the CIO (Congress of Industrial Organizations) came at a time when workers and their unions were recovering from the Great Depression. Some in the labor movement thought unions should organize industrial workers into new unions, whereas others, including the Carpenters, believed that special charters should represent these workers. The disagreement came to a boiling point in 1935 when several unions split from the AFL and created the CIO, a new competing organization.
Throughout the next twenty years the CIO organized millions of workers, as did the AFL, and the labor movement was growing. Although these two organizations did not see eye to eye on some issues, they were both dedicated to the same goal: a strong and large labor movement. The split ended in 1955 when the AFL and the CIO merged to become the AFL-CIO. Although the Carpenters did not approve of the CIO at first, it did not block the merger but rather welcomed a new unity.
At the turn of the 21st century the Carpenters union disaffiliated from the AFL-CIO. The Carpenters had long disagreed with the Federation's allocation of resources, among other important issues. In 2005, the Carpenters became a founding member of the Change to Win Federation, which encompasses some of the largest trade unions in America.
Disputes within the labor movement have always existed and will forever be a part of the democracy that workers demanded in their unions. The vast majority of disagreements have come between leaders with good hearts and a passion for unionism. The Carpenters' history is one of leadership, and the union has never shied away from conflict, even in its own house. Although the UBC may have faced serious disagreements with the AFL-CIO and even disaffiliated, it remained dedicated to solidarity and willing to join in important fights. When workers at Yale University went on strike in 2003, Douglas McCarron, the General President of the UBC, protested alongside thousands of workers from different unions.
Women have played a critical role in the labor movement in every respect. The early factories of the North frequently employed a majority female workforce. These workers often labored more than twelve hours a day, six days a week. Their working conditions were brutal in factories and abuse, of every kind, was not hard to find in the workplace. Women, however, were not afraid to fight back. The Lowell, Massachusetts strikes were entirely organized by women and their struggles are some of the most inspirational in labor’s history. Some of the greatest labor organizers were women. Just two of the many were Elizabeth Gurley Flynn and Mother Jones.
Construction, of course, has always been a male-dominated industry. It should be said, however, that women have done the arduous work of many construction trades, especially when America was at war. During wartime the American labor market endured tremendous shifts. The majority of male wage earners were sent overseas, but production at home was essential to winning, so women entered the labor force in great numbers. Women did everything from factory work in war-producing industries to essential war construction at home. During WWI and WWII thousands of women became union workers and kept unions strong while the male membership was away.
Women sustained the labor movement in another, no less important, respect. Union recognition and the meeting of union demands almost always required significant sacrifice from entire communities. Withholding labor in a job action meant unemployment for a period of time, and it took the support of the community to win. When husbands, sons and brothers struck employers in any industry, women would rally behind the cause and organize their community. They would march alongside picketers, they would organize food drives to sustain the workforce, and they would serve as the backbone of many victories. Indeed, the delegate body acknowledged as much at the 1910 UBC convention.
A "Ladies Auxiliary" had already existed at a Carpenters local in Indianapolis, and this convention passed a resolution to support and guide Ladies Auxiliaries in the UBC. It should be noted that the UBC also resolved to strongly support the women's suffrage movement, a burning topic of the time.
The Carpenters union admitted women into its locals for the first time in 1918 as part of a merger with an existing Boxmakers and Sawyers local. It was decided that women would in fact be members of the UBC, but would not pay per capita tax or receive all the benefits of membership. It would not be until the mid-1950s that women would become "official" members of the UBC and enjoy all the benefits.
By 1955, the UBC had resolved to allow women into the union, and all rituals were changed to include the term "sister." In 1955 there were officially over 8,000 women in good standing.
At the turn of the 21st century a group of women organized their own committee within the UBC, named "Sisters in the Brotherhood." The organization was formed to better integrate women carpenters into their union and to help organize women in the non-union sector. In 2002 the UBC represented over 16,000 women in construction throughout the country, and in some apprentice programs women comprised almost 10% of the apprentice class. At the 2005 UBC convention held in Las Vegas, the Women's Committee gave a report to the entire delegation and received a standing ovation.
Women, much like minority workers, fought to join the UBC because unions are the most effective organizations for combating workplace discrimination. Women clearly struggled to gain full acceptance into many construction unions, including the UBC, but their efforts and resilience have won them an important place in deciding the future of the UBC.
In 1973 the United Brotherhood of Carpenters represented almost 850,000 members in North America. Wages had risen close to 25% for all members between 1969 and 1971 alone. Fringe benefits, including health care and pensions, were rising with wages, and union carpenters were heading into a bright future, both individually and collectively.
It wasn't long until an old enemy, and a powerful one, re-emerged for all workers and their unions. The American economy headed into a phase of sharp inflation and stagnation in the 1970s. The construction economy slowed, leaving many members unemployed for long stretches and even forcing thousands of members out of the industry altogether. Membership in the Carpenters union started to decline by the thousands as the economy faltered.
The downward economy was accompanied by a newly intensified employer assault on unionism in America. More and more companies were turning violently anti-union in the 1970s and joining together to take on workers. The newest employer strategies included openly violating labor law by breaking union drives with mass firings of workers active in organizing campaigns. Penalties for such violations were so minimal, and so weakly enforced by government agencies, that many non-union companies treated the fines as a business cost. Union companies started to create "alter egos," in what is commonly called double-breasting: traditionally union companies would establish non-union "sister" companies that threatened market share. Corporate open-shop drives gained strength and started to lobby Congress for more and more anti-union legislation. Their lobbying efforts were rewarded when President Nixon took the almost unprecedented step of suspending Davis-Bacon protections on February 3rd, 1971. Although his decision was later reversed, it was a sign that construction unions and the labor movement were heading into troubled waters.
The election of Ronald Reagan in 1980 brought in a new economic and social conservatism that would start to cripple America’s labor movement. The emergence of “Reaganomics” was just another name for an excessively pro-corporate and anti-union economic policy. Indeed the gap between the rich and the poor during Reagan’s presidency widened dramatically. Anti-union groups had a new friend in the White House and would use that relationship to try and destroy labor.
The entire construction industry encountered the 1980s NLRB when the Board ruled in the John Deklewa and Sons case. The Deklewa decision greatly damaged the power of pre-hire agreements, something construction unions had relied on heavily for decades, when the Board essentially "freed pre-hire signatories from 8(d) requirements to bargain successor agreements, thus allowing for greater latitude for employers to walk away from 8(f) labor contracts." What this effectively meant for construction trade unions was that they could no longer rely on top-down organizing techniques alone; they had to be able to organize workers through grassroots campaigns.
The Carpenters union was better organized to deal with such an anti-union decision because it had been teaching members how to organize and build worker support for decades. "In 1973, a call goes out to rank-and-file members to help the union grow as the Brotherhood inaugurates volunteer organizing committees in locals around the U.S. and Canada. 'You and your fellow members are the Brotherhood,' writes Carpenter in announcing the program. 'Organizing is a task which must be accepted as a matter of urgency and necessity by each and every member.'" Today the UBC remains committed to building Volunteer Organizing Committees made up of members and organizers that aggressively organize workers and teach them how to fight their employers and win.
Employer tactics and anti-union governments are not the only reasons that union membership dropped tremendously from 1980 to the mid-1990s. Unions walked away from their roots of aggressively organizing workers, which led to a far more conservative labor movement. Even in collective bargaining, unions were mired in givebacks to highly profitable employers throughout the period. The 1980s and 1990s saw fewer strikes than most decades before them. There is little question that a major reason union power declined in the 1980s and 1990s was that union workers were not as active and militant as their predecessors.
Like every union, the Carpenters struggled through the 1980s and 1990s. Although wages and benefits for union members did steadily increase, union membership declined.
In 1995 the United Brotherhood of Carpenters elected new leadership dedicated to bringing the union back to its roots of aggressive organizing and winning at the bargaining table. Almost immediately, the UBC hired new organizers from its ranks to hit jobsites and expand market share. The message was clear: the Carpenters union would once again be an aggressive organizing union dedicated to making sure that every worker in its trades is protected by the UBC. By 2006, the 125th anniversary of the United Brotherhood of Carpenters, the International Union was devoting over half of its budget to organizing new members and fighting non-union employers that exploit carpenters.
Hundreds of organizers have organized new contractors and brought in more members throughout the United Brotherhood of Carpenters. In cities like New York, for example, the Carpenters union has signed historic contracts that increase the wages and benefits of its members by twenty-five percent. New policies have been passed by Carpenter District Councils and their locals throughout America requiring union members to dedicate at least one day per year to union action. These programs have had a tremendous effect on all organizing efforts and legislative goals, and they are proof that the Carpenters union is dedicated to creating a "fighting" union for the twenty-first century. The emphasis on organizing and strong membership action has forced hundreds of new agreements on projects that would otherwise have fallen to the non-union sector.
In 2005 the Carpenters union joined six other unions in a new labor federation called Change to Win. This Federation is composed of some of the largest and most powerful unions in the service and manufacturing sectors. The Federation promises to spend a majority of its resources on organizing new members and working in solidarity to bring employers from every sector to the negotiating table.
The Carpenters union is indeed going back to its roots of demanding recognition in the workplace by hitting employers in the streets. The anti-union environment in America, made up of anti-union employer associations, anti-union government and a generally anti-union economy, has devastated the American working class, regardless of union status. The Carpenters and the labor movement emerged from the last two decades of the 20th century battered, but not beaten. The beginning of the 21st century marks a new chapter in labor history, and the Carpenters are determined to write it in favor of union victories throughout America for all workers.
Anyone interested in the history of labor, and that of the Carpenters union, has more resources today than at any other time. There is no question that the history of workers uniting against almost insurmountable odds is not only stimulating, but critical for any worker who believes winning is impossible. Everyday workers before us redefined impossibility, and their stories are well documented. Among the many great books available, three immediately come to mind.
Labor's Untold Story, by Richard O. Boyer and Herbert M. Morais.
From the Folks Who Brought You the Weekend, by Priscilla Murolo and A. B. Chitty, with illustrations by Joe Sacco.
The United Brotherhood of Carpenters, by Walter Galenson.
The land on which Israel was located contained only a fraction of the Palestine Mandate originally dedicated to the Jews as their homeland, incorporating the Balfour Declaration.1 The League of Nations and the British had designated the land called "Palestine" for the "Jewish National Home" -- east and west of the Jordan River from the Mediterranean to Arabia and Iraq, and north and south from Egypt to Lebanon and Syria.2 Historian Arnold Toynbee observed in 1918 that the "desolate" land "which lies east of the Jordan stream,"3 was
capable of supporting a large population if irrigated and cultivated scientifically. ... The Zionists have as much right to this no-man's land as the Arabs, or more.
Thus, the territory known variously as "Palestine," as "South Syria," as "Eastern and Western Palestine," or as part of "Turkey" had been designated by international mandate as a "Jewish National Home," concerning which the United States declared,
That there be established a separate state of Palestine.... placed under Great Britain as a mandatory of the League of Nations ... that the Jews be invited to return to Palestine and settle there.... and being further assured that it will be the policy of the League of Nations to recognize Palestine as a Jewish state as soon as it is a Jewish state in fact. . . . England, as mandatory, can be relied on to give the Jews the privileged position they should have without sacrificing the [religious and property] rights of non-Jews.4
The Arabs of that day achieved independent Arab statehood in various lands around Palestine but not within Palestine itself. Sovereignty was granted after World War I to the Arabs in Syria and Iraq; in addition, Saudi Arabia consisted of approximately 865,000 square miles of territory that was designated as "purely Arab."5
Considering all the "territories" that had been given to the Arabs, Lord Balfour "hoped" that the "small notch" of Palestine east and west of the Jordan River, which was "being given" to the Jewish people, would not be "grudged" to them by Arab leaders.6
But, in a strategic move, the British Government apparently felt "the need to assuage the Emir's [Abdullah's] feelings."7 As one of the royal sons of the Hejaz (Saudi Arabia), Abdullah was a recipient of British gratitude; the Arabians of the Hejaz had been, among all the Arab world, of singular assistance to England against the Turks.8
The insertion of Abdullah and his emirate into mandated Palestine, in the area east of the Jordan River that was part of the land allocated to the "Jewish National Home," might be partially traced to a suggestion received by Colonial Secretary Winston Churchill from T. E. Lawrence. In a letter of January 1921, Lawrence informed Churchill that Emir Feisal (Abdullah's brother, and Lawrence "of Arabia's" choice to lead the Arab revolt)9 had "agreed to abandon all claim of his father to [Western] Palestine," if Feisal got in return Iraq and Eastern Palestine as Arab territories. [See Feisal-Weizmann agreement]
Further explanation was found in a "secret dispatch from Chief British Representative at Amman" later in 1921. He cautioned that the local "Transjordanian Cabinet" had been replaced by a "Board of Secretaries,"
responsible for all internal affairs, referring to his highness Abdullah for a decision in the event of any disagreement....
All the "Board" members, according to the Eastern Palestine envoy, were
Syrian exiles, who with perhaps one exception, are more interested in designs on the French in Syria than in developing Trans-Jordania.... In his Highness' opinion, the allies had not dealt fairly with the Arab nation and Great Britain had not treated him as he deserved. He was one of the most chiefly instrumental in bringing about the Arab revolution and when Feisal, during the war, was inclined to accept the overtures of the Turks he had opposed that policy.... When he came to Trans-Jordania "with the consent of the British", he had agreed to act in accordance with Mr. Churchill's wishes and with British policy, as he did not wish to be the cause of any friction between the British and their allies, the French.
Winston Churchill proposed his plan for Transjordan to Prime Minister Lloyd George in March 1921:
We do not expect or particularly desire, indeed, Abdullah himself to undertake the Governorship. He will, as the Cabinet rightly apprehend, almost certainly think it too small.... The actual solution which we have always had in mind and for which I shall work is that which you described as follows: while preserving Arab character of area and administration to treat it as an Arab province or adjunct of Palestine.11
It was a British Jew, Palestine High Commissioner Sir Herbert Samuel, who supported and even extended Winston Churchill's formulations. Samuel sent a telegram to Churchill in July 1921; while discouraging Churchill from submitting to Abdullah's predicted eventual "demand" for "attachment of Trans-Jordania to the Hejaz," as being "contrary to Article V of the Mandate and open to much objection in relation to future development," High Commissioner Samuel suggested the following:
I concur in proposal that Abdullah should visit London and had written to you suggesting it.... At the end of six months, the following settlement might be arranged: (1) the Arab governor mutually agreed upon by his majesty's government and Abdullah or King Hussein. (2) British officer(s) to have real control. (3) Reserve force commanded by British officer(s), Air Force and armored cars as at present. (4) A small British garrison to be stationed in District temporarily. (5) A declaration in accordance with new article to be inserted in mandate that Jewish National Home provisions do not apply east of Jordan. This would not prevent such Jewish immigration as political and economic conditions allowed but without special encouragement by Government.12
Feisal got his wishes and became King of Iraq;13 his brother Abdullah was installed in the British mandatory area as ruler of the "temporary" emirate on the land of eastern Palestine, which became known as the "Kingdom of Transjordan."
Palestine High Commissioner Harold MacMichael later offered some evidence of the original "temporary" nature of British intentions in a "private, personal and most secret" cipher; MacMichael reported in 1941 that Abdullah now harbored greater ambitions, because of
the part he [Abdullah] played in the last war, his position in the Arab world as a senior member of a royal house, [and] the purely temporary arrangements whereby in 1921 having narrowly missed being made King (a) of Iraq and (b) of Syria in turn, he was left to look after Trans-Jordan....14
Britain nevertheless quietly gouged out roughly three-fourths of the Palestine territory mandated for the Jewish homeland15 into an Arab emirate, Transjordan,16 while the Mandate ostensibly remained in force but in violation of its terms.17 Historians and official government documents concerned with the area continued to call it "Eastern Palestine," despite the new appellation. That seventy-five percent of the Palestine mandate was described by England's envoy to Eastern Palestine18 as "a reserve of land for use in the resettlement of Arabs [from Western Palestine], once the National Home for the Jews in Palestine"* resulted in the "Jewish independent state."
The League of Nations Mandate for Palestine remained unchanged even though Britain had unilaterally altered its map and its purpose.19 The Mandate included Transjordan until 1946, when that land was declared an independent state.20 Transjordan had finally become the de jure Arab state in Palestine just two years before Israel gained its Jewish statehood in the remaining one-quarter of Palestine; Transjordan comprised nearly 38,000 square miles; Israel, less than 8,000 square miles.
[* As the next chapters will illustrate, instead, Arabs poured from Eastern Palestine as well as from Arab areas within Western Palestine into the Jewish-settled areas in Western Palestine. The course of action which followed from that unrecognized population movement brought ramifications which are as critical to the question of political "justice" as they are unknown or disregarded today.]
Thus, about seventy-five percent of Palestine's "native soil," east of the Jordan River, called Jordan, is literally an independent Palestinian-Arab state located on the majority of the land of Palestine; it contains a majority of Palestinian Arabs in its army as well as its population. In April 1948,21 just before the formal hostilities were launched against Israel's statehood, Abdullah of Transjordan22 declared: "Palestine and Transjordan are one, for Palestine is the coastline and Transjordan the hinterland of the same country." Abdullah's policy was defended against "Arab challengers" by Prime Minister Hazza al-Majali:
We are the army of Palestine.... the overwhelming majority of the Palestine Arabs ... are living in Jordan.23
Although Abdullah's acknowledgment of Palestinian identity was not in keeping with the policy of his grandson, the present King Hussein, Jordan is nonetheless undeniably Palestine, protecting a predominantly Arab Palestinian population with an army containing a majority of Arab Palestinians, and often governed by them as well. Jordan remains an independent Arab Palestinian state where a Palestinian Arab "law of return" applies: its nationality code states categorically that all Palestinians are entitled to citizenship by right unless they are Jews.24 In most demographic studies, and wherever peoples are designated, including contemporary Arab studies, the term applied to citizens of Jordan is "Palestinian/Jordanian." In 1966 PLO spokesman Ahmed Shukeiry declared that25
The Kingdom of Palestine must become the Palestinian Republic....
Yasser Arafat has stated that Jordan is Palestine. Other Arab leaders, even King Hussein and Prince Hassan of Jordan, from time to time have affirmed that "Palestine is Jordan and Jordan is Palestine." Moreover, in 1970-1971, later called the "Black September" period, when King Hussein waged war against Yasser Arafat's Arab PLO forces, who had been operating freely in Jordan until then, it was considered not an invasion of foreign terrorists but a civil war. It was "a final crackdown" against those of "his people"26 whom he accused of trying to establish a separate Palestinian state, under Arab Palestinian rule instead of his own, "criminals and conspirators who use the commando movement to disguise their treasonable plots," to "destroy the unity of the Jordanian and Palestinian people."27
Indeed, the "native soil" of Arab and Jewish "Palestines" each gained independence within the same two-year period, Transjordan in 1946 and Israel in 1948. Yet today, in references to the "Palestine" conflict, even the most serious expositions of the problem refer to Palestine as though it consisted only of Israel -- as in the statement, "In 1948 Palestine became Israel."28 The term "Israel" is commonly used as if it were the sum total of "Palestine."
However, within what Lord Balfour had referred to as that "small notch" sometimes called Palestine, the "Jewish National Home" had been split into two separate, unequal Palestines: Eastern Palestine - the Arab emirate of Transjordan - and Western Palestine, which comprised less than one-fourth of the League of Nations Mandate. The portion of the "notch" of land on which the Jews settled and in which most Jews actually lived - from the 1870s and 1880s through the 1940s - was in fact only a segment of the area of Western Palestine.
The East Bank and the West Bank, same situation
If Israel must give up a portion, or all, of WEST BANK land, which was part of the British Mandated "Palestine" or Jewish National Home, it is only logical that Jordan must give up a proportionately large amount of EAST BANK land, which was also part of the British Mandated "Palestine" or Jewish National Home. Each country, Israel and Jordan, should contribute land according to the number of Palestinians residing in their country. (Most Palestinians in Jordan live on the EAST BANK.)
Palestinians are by law guaranteed the RIGHT OF RETURN to Jordan, where they are entitled to citizenship, "unless they are Jews."
Jordan is very much afraid that it will be declared THE PALESTINIAN STATE; Jordan has NEVER allowed publication of the percentage of Palestinians in its population. Jordan is also afraid that someone might suggest taking a portion of its territory for a Palestinian state. MORE THAN TWICE the number of Palestinians live on the EAST BANK of the Jordan River, in Jordanian territory, than live on the WEST BANK.
1. The Old Testament indicates that historic Palestine included land on both sides of the Jordan River, east bank as well as west bank, including the territory now known as Jordan. The portion of historic Palestine east of the Jordan River equaled or exceeded in area the portion west of the river. In biblical times the tribe of Manasseh occupied more territory to the east of the Jordan River than to the west, the entire tribe of Reuben dwelled east of the Jordan, and the land called Gad was east of the Jordan. Mount Gilead and Ramoth Gilead were east of the Jordan, as were other biblical places and people. (See map, page 12, Literary and Historical Atlas of Asia, prepared by J. G. Bartholomew for the Everyman Library.) Even in the time of the New Testament (as shown by the map in Appendix 1), the land included territory on the east side of the Jordan River as well as the west. The New Testament city of Philadelphia was well east of the Jordan River, as was the city of Golan, which was part of Palestine according to the Old Testament as well as the New. For an additional example, see Rand McNally Atlas of World History, ed. R.R. Palmer, Chicago, 1957, p. 25.
2. For a map of eastern Palestine, see C. R. Conder, The Survey of Eastern Palestine, Committee of the Palestine Exploration Fund, London, 1889; also see J. Stoyanovsky, The Mandate for Palestine (London, New York, Toronto, 1928), pp. 66, 204-210. Arthur Balfour's memorandum of August 11, 1919, stated: "Palestine should extend into the lands lying east of the Jordan." Balfour, who led the British delegation to the Paris Peace Conference (in 1919), "determined the frontiers" of Palestine in a memorandum to Prime Minister Lloyd George, June 26, 1919: "In determining the Palestinian frontiers, the main thing to keep in mind is to make a Zionist policy possible by giving the fullest scope to economic development in Palestine. Thus, the Northern frontier should give to Palestine a full command of the water power which geographically belongs to Palestine and not to Syria; while the Eastern frontier should be so drawn as to give the widest scope to agricultural development on the left bank of the Jordan, consistent with leaving the Hedjaz Railway completely in Arab possession."
3. December 2, 1918 - Toynbee minute: Foreign Office Papers, 371/3398 - Arnold Toynbee agreed with the Mandate: "It might be equitable [to include in Palestine] that part ... which lies east of the Jordan stream ... at present desolate, but capable of supporting a large population if irrigated and cultivated scientifically ... The Zionists have as much right to this no-man's land as the Arabs, or more," cited in Martin Gilbert, Exile and Return, p. 115. See also David Lloyd George, The Truth About the Peace Treaties (vol. 1), pp. 1144-1145.
5. In Arabia itself, largely equivalent to present Saudi Arabia, Jews had been present and had developed towns such as Medina and Khaibar, where they thrived from Roman days and before, until the conquest by Muhammad and subsequent directions from Omar. Then the Jews were slaughtered or their land expropriated, and Jews were forced to flee for their lives if they did not convert to Islam. Many of those Jews in the seventh century fled as refugees back to "Palestine," where Jewish inhabitants could even then be found in most towns referred to today as purely Arab areas.
9. Gilbert, Exile, p. 132; see T.E. Lawrence, Revolt in the Desert, about Abdullah, particularly pp. 1-7. Feisal's role is woven throughout Lawrence's account. Also see King Abdullah of Jordan, My Memoirs Completed (Washington, D.C., 1954).
12. July 4, 1921, telegram to Secretary of State for the Colonies, CO733/35186; response to "Very Confidential" memo "from the Civil Secretary after his recent tour in Trans-Jordania," Churchill to Samuel, July 2, 1921, CO733/36252.
14. MacMichael hoped in 1941 to offer Abdullah a "consolation prize" of "Trans Jordan" when the country gained independence from the Mandate, and after Abdullah "has realized that his hopes ... for Syria ... are vain. We simply cannot have recrimination of these pledges to the Arabs until we are absolutely clear how and when they are to be converted into practice. The smaller the time gap between any promise and its implementation, the better. . . ." MacMichael to the Secretary of State for the Colonies, PRO CO733/27137.
15. According to the 1937 Palestine Royal Commission Report, "Trans-Jordan was cut away from that field [in which the Jewish National Home was understood to be established at the time of the Balfour Declaration ... the whole of historic Palestine]." The reason given was the later claim of the Arabs that a letter from Sir Henry McMahon on October 24, 1915, called the McMahon pledge, had included Palestine in the territory that Britain promised to the Arabs. A formal Arab protest, called "The Holyland. The Muslim-Christian Case Against Zionist Aggression," was not declared until November 1921, six years after the date of the McMahon letter and four years after the Balfour Declaration. The fact that McMahon had excluded Palestine from his promise - as the Emir Feisal had excluded it from his request at the Paris Peace Conference in 1919, ignoring the McMahon letter - was conspicuously absent. The British government's failure to publish the complete correspondence gave credence to what otherwise would have been a quickly squelched, rather obvious ploy, until 1939, when a committee of British and Arab delegates scrutinized the correspondence; the British then determined that, in the words of one delegate, the Lord High Chancellor, Lord Maugham, "The correspondence as a whole, and particularly ... Sir Henry McMahon's letter of the 24th October, 1915, not only did exclude Palestine but should have been understood to do so. . . ." Similar testimony came from many eminent British government officials - most notably, from Sir Henry McMahon himself. In The Times of London, July 23, 1937, McMahon wrote, "I feel it my duty to state, and I do so definitely and emphatically, that it was not intended by me in giving this pledge to King Hussein to include Palestine in the area in which Arab independence was promised. I also had every reason to believe at the time that the fact that Palestine was not included in my pledge was well understood by King Hussein." The British case supporting McMahon was strengthened even further by the fact that Feisal waited until January 29, 1921 - nearly six years later - to bring up the subject, and then he was quoted by Winston Churchill as being "prepared to accept" the exclusion of Palestine. The logical deduction to be made from the plethora of evidence seems clear: Palestine was indeed excluded - and in any case, the Balfour Declaration was incorporated by the Council of the League of Nations and was thus binding upon its trustee, England as Mandatory power, while no British letter of pledge could have been binding even if one had been given. Nevertheless, Arabs and their supporters have continued to attempt to cast doubt, as though the written documents didn't exist. Significantly, however, the 1937 Palestine Royal Commission Report, which was issued the same year that McMahon published his Times rejoinder, made the recommendation that "Transjordan should be opened to Jewish immigration." It never was. Palestine Royal Commission Report, pp. 22-38; for texts of several British witnesses and the full McMahon text: Esco, Palestine, vol. 1, p. 181; Great Britain, Correspondence, Cmd. #5957; Churchill White Paper, June 3, 1922, Statement of British policy in Palestine, Cmd. #1700, p. 20; Lloyd George, The Truth About the Peace Treaties, vol. II, pp. 1042, 1140-1155; D.H. Miller, Diary, vol. XIV, pp. 227-234 and 414, vol. II, pp. 188-189, vol. XVII, p. 456; H.F. Frischwasser-Ra'anan, The Frontiers of a Nation (London: Batchworth Press, 1955), pp. 104-107.
Frischwasser-Ra'anan writes of the statement by British Foreign Office expert on the Near East, Lord Robert Cecil: "Our wish is that the Arab country shall be for the Arabs, Armenia for the Armenians and Judea for the Jews," pp. 104-105; Antonius, Arab Awakening, pp. 390-392; The Letters of T.E. Lawrence, David Garnett, ed. (Doubleday, Doran, 1939), pp. 281-282; for international legal interpretation, see J. Stoyanovsky, The Mandate for Palestine (London, New York, Toronto: Longmans, Green & Co., 1928), pp. 66, 205-223; Parliamentary Debates, Commons, vol. 113, cols. 115-116, May 23, 1939, for the views of the Archbishop of Canterbury; for examples of discussion of the McMahon-Hussein matter that omit available evidence described or referred to above, and suggest support of the Arab protestations, see William B. Quandt, Fuad Jabber, Ann Mosely Lesch, The Politics of Palestinian Nationalism (Berkeley, Los Angeles, London: University of California Press, 1973), pp. 8-11; John S. Badeau, East and West of Suez (New York: The Foreign Policy Association, 1943), p. 45.
16. In the Anglo-American Committee's "Historical Summary of Principal Political Events in Palestine Since the British Occupation in 1917," a chronological summary beginning in 1917, no mention at all is made of the gift of Transjordan to the Arabs by the British - neither in the 1922 summary, nor in 1928, when an "organic Law" was enforced, nor in 1929, when the ratification of the "Agreement" took place. See Summary in Survey of Palestine, vol. 1, pp. 15-25. Yet that act, which severed roughly seventy-five percent of the Mandate of Palestine, is ignored as a "principal political event" - the de facto creation of an Arab state on seventy-five percent of what had been deemed the "Jewish National Home," and which had been specifically set aside by the British and Arabs alike as an area "not purely Arab," as compared to Iraq and Syria. In the chapter preceding the "Summary," the Arabs' acquisition of an Arab-Palestinian state - a Palestinian state surely no less than Israel became - is presented as a fait accompli. "Prior to the 12th August, 1927, the High Commissioners for Palestine included within their jurisdiction the entire Mandatory area without separate mention of Transjordan. Since that date, however, the High Commissioners have received separate commissions for Palestine and Trans-Jordan respectively." See Survey of Palestine, p. 14. (Emphasis added.) In the Summary, however, exhaustive attention is drawn to the Balfour Declaration and its ramifications upon the Arab community in Palestine; on the rioting: "The hostility shown towards the Jews [which was] ... shared by Arabs of all classes; Moslem and Christian Arabs, whose relations had hitherto been uneasy, were for once united. Intense excitement was aroused by the wild anti-Jewish rumors which were spread during the course of the riots." See Haycraft Inquiry, October 1921, in Survey of Palestine, pp. 18, 19.
17. The only proposal Britain as Mandatory power submitted to the League of Nations "during the lifetime of the League ..." was a 1922 memorandum citing Article 25 of the Mandate; Article 25 allowed the Mandatory power "with the consent of the Council of the League of Nations, to postpone or withhold application of such provisions of the mandate as he may consider suitable to those conditions, provided that no action ... is inconsistent with ... Articles 15, 16 and 18." The article referred to "the territories lying between the Jordan and the Eastern boundary of Palestine ...," the eastern boundary being the Hejaz (Saudi Arabia). See Dr. Paul S. Riebenfeld, "Israel, Jordan and Palestine" (unpublished manuscript), pp. 10-18ff, an exhaustive study of documentation concerning Transjordan and the Mandate. In fact it appears that, to humor Emir Abdullah, the British gave the appearance of a severance, with the real consequences of a severance from Palestine upon the Jewish National Home, and the de facto creation of the Palestinian Arab state, while the British never attempted to legalize their actions, only to record them; "the only legal action ever taken by the British Government" was taken under Article 25: the Resolution of September 16, 1922. League of Nations Official Journal, November 1922, pp. 1390-1391; Riebenfeld, ibid., p. 18. For an absorbing account of "what exactly happened on September 16, 1922," see Dr. Riebenfeld's "Integrity of Palestine," Midstream, August/September 1975, p. 12ff; also see Ernest Frankenstein, Justice for My People.
18. Alec Kirkbride, A Crackle of Thorns (1956), pp. 19-20. Kirkbride goes on to say, however, that "There was no intention" in 1920 "of forming the territory east of the river Jordan into an independent Arab state." Also see Palestine Royal Commission Report, suggesting that Transjordan - Eastern Palestine - "if fully developed could hold a much larger population than it does at present," p. 308.
19. When Britain entered into an agreement to transfer the exercise of administration on February 20, 1928, the League of Nations Permanent Mandates Commission challenged the agreement as a "conflict with the Mandate for Palestine." Quincy Wright, Mandates Under the League of Nations (Chicago: University of Chicago Press, 1930), p. 458. The statement of the Commission (in part) was: "Since the Commission is charged with the duty of seeing that the mandate is fully and literally carried out, it considers it necessary to point out in particular, Article 2 of the Agreement, which reads as follows: 'The powers of legislation and administration entrusted to His Britannic Majesty as mandatory for Palestine shall be exercised in that part of the area under Mandate known as Transjordan by His Highness the Amir ...' does not seem compatible with the stipulation of the Mandate of which Article 1 provides that: 'The mandatory shall have full powers of legislation and of administration, save as they may be limited by the terms of this mandate.'" League of Nations, Official Journal, Oct. 1928, p. 1574; also see pp. 1451-1453; also in Riebenfeld, Israel, Jordan and Palestine, pp. 24-25. At that point Britain's Council member "explained that Great Britain still regarded itself as responsible for the ... mandate in Transjordan and the Council was satisfied." Quincy Wright, Mandates Under the League of Nations, p. 458; as another example, in 1937 the Permanent Mandates Commission, at the 32nd Session, insisted that no obstacle should "prevent that Jewish National Home being established." Minutes of the 32nd Session, p. 90.
12, 1948, Arab League Resolution: No partition would be acceptable, and Palestine must be liberated from the Zionists; on April 16, 1948, Abdullah abolished the Jordan Senate and appointed 20 new Senators: 7 Senators were Palestinian Arabs; on April 24, 1948, Jordan's House of Delegates and House of Notables, in joint session of Parliament, adopted a resolution: ". . . basing itself on the right of self-determination and on the existing de facto position between Jordan and Palestine and their national, natural and geographic unity and their common interests and living space. . . ." The parliament supported the "unity between the two sides of the Jordan
26. Mohamed Heikal, Road to Ramadan (New York: Ballantine Books, 1975), p. 96. See Heikal's account of a meeting between Arab heads of state, including King Faisal, Ghadaffi, and President Nasser; according to Heikal, King Hussein's war ended September 27, 1970, with the signed agreement between Hussein and Yasser Arafat, and the "withdrawal of all ... forces from every city in the country" (p. 99). According to another source, the ceasefire took place September 25, but fighting continued well into 1971. Political Terrorism, edited by Lester Sobel (New York: Facts on File, Inc., 1975), cited in Hashemite Kingdom of Jordan and the West Bank, edited by Anne Sinai and Allen Pollack (New York: American Academic Association for Peace in the Middle East, 1977), p. 60.
Source: "From Time Immemorial" by Joan | http://www.eretzyisroel.org/~jkatz/jordan.html | 13
28 | Humpback whales (Megaptera novaeangliae) occur in all oceans of the world, generally inhabiting waters over continental shelves, along continental edges and around some oceanic islands. They winter in warm waters in a few specific locations and mate and give birth on wintering grounds, where little feeding is thought to take place. For the summer season, they migrate to high-latitude areas where they tend to stay relatively close to shore (although some groups inhabit deeper water) and spend the majority of their time feeding.
Humpback whale populations were greatly depleted by commercial whaling. Prior to whaling, humpback whale numbers are thought to have exceeded 125,000. American whalers alone killed between 14,164 and 18,212 humpback whales between 1805 and 1909. Humpback whales first received protection in the North Atlantic in 1955, when the International Whaling Commission placed a prohibition on non-subsistence whaling by member nations. Protection was extended to the North Pacific and southern hemisphere populations following the 1965 hunting season. Although hunting has largely been stopped (some exceptions exist that allow the take of a limited number of whales), and populations appear to be increasing, human impacts such as vessel collisions and entanglements are factors that may be slowing the recovery of the humpback whale population.
The total level of human-caused mortality and serious injury is unknown, but data indicate that it is significant. Humpback whales are also vulnerable to marine pollution. The increasing levels of anthropogenic noise in the world’s oceans, such as that produced by certain types of sonar, may also be problematic for whales, particularly for baleen whales that communicate using low-frequency sound.
GULF OF MAINE, WESTERN NORTH ATLANTIC STOCK
There are likely six stocks within the western North Atlantic humpback population. A feeding aggregation in the Gulf of Maine (considered a single stock) is the only one in U.S. waters. Within New England waters, humpbacks are present in spring, summer and autumn. They spend much of their time feeding, and their distribution in this region has been largely correlated to prey species and abundance. In winter, humpbacks from the different western North Atlantic feeding areas mate and calve primarily in the West Indies, where spatial and genetic mixing among subpopulations occurs. From late December to early April most of the population is found at Silver and Navidad Banks at the end of the Bahamian archipelago, and along the coast of the Dominican Republic. They are also found at much lower densities throughout the remainder of the Antillean arc, from Puerto Rico to the coast of Venezuela. The only U.S.-controlled portions of the breeding range include waters along the northwest coast of Puerto Rico and the U.S. Virgin Islands. Not all of the stock migrates to the West Indies every winter, however, and significant numbers of animals are found in mid- and high-latitude regions during the winter months. There have recently been a number of wintertime humpback sightings in coastal waters of the southeastern U.S.
North Atlantic humpback numbers are thought to be slowly increasing. An average increase of 1 percent (SE=0.005) was estimated for the period 1979 to 1993. The best estimate of the number of North Atlantic humpbacks in 1992 and 1993 was 11,570 (CV=0.069). Data suggest that the Gulf of Maine humpback whale stock is also steadily increasing in size at a rate consistent with the larger population. The Gulf of Maine minimum population was estimated to be 501 in 1992 and 647 in 1999. Both of these estimates are likely low due to sampling technique. The best estimate of the actual number of animals is thought to be 902.
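The "minimum population" figures quoted here are, in standard stock assessment practice, the 20th percentile of a log-normally distributed abundance estimate, computed from the best estimate and its CV. A minimal Python sketch of that formula follows; the CV of 0.41 in the example is an assumed value chosen only to illustrate how a best estimate of 902 can correspond to a minimum estimate near 647, and is not a figure reported in this text.

```python
import math

def n_min(n_best: float, cv: float, z: float = 0.842) -> float:
    """Minimum population estimate as used in NMFS stock assessments:
    the 20th percentile of a log-normal abundance estimate,
    N_min = N / exp(z * sqrt(ln(1 + CV^2))), with z = 0.842.
    """
    return n_best / math.exp(z * math.sqrt(math.log(1.0 + cv ** 2)))

# Illustration only: 902 is the best estimate quoted above; the CV of
# 0.41 is an assumed value, not one reported in this text.
print(round(n_min(902, 0.41)))  # -> 647
```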
Historic summering range for the North Pacific humpback whales encompassed coastal and inland waters around the Pacific rim, from Point Conception, Calif., north to the Gulf of Alaska and the Bering Sea, and west along the Aleutian Islands, Kamchatka Peninsula and into the Sea of Okhotsk. Rough estimates of the pre-whaling population speculate that there were around 15,000 humpbacks in the North Pacific. In 1966, the entire North Pacific humpback population was thought to number only around 1,200 animals. This estimate increased to between 6,000 and 8,000 by 1992. Although these estimates are uncertain and are based on different methods, the 6 percent to 7 percent growth rate implied is consistent with the observed growth rate of the better-studied eastern North Pacific subpopulation.
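The "6 percent to 7 percent growth rate implied" can be checked from the paragraph's own numbers: under exponential growth, the implied annual rate is r = ln(N2/N1) / (t2 - t1). A short sketch of that arithmetic, using the 1966 estimate of about 1,200 and the 1992 range of 6,000 to 8,000 (26 years apart):

```python
import math

# Implied annual growth rate under exponential growth:
# N(t2) = N(t1) * exp(r * (t2 - t1))  =>  r = ln(N2 / N1) / (t2 - t1)
def implied_rate(n1: float, n2: float, years: float) -> float:
    return math.log(n2 / n1) / years

# From ~1,200 whales in 1966 to 6,000-8,000 by 1992 (26 years):
print(f"{implied_rate(1200, 6000, 26):.1%}")  # -> 6.2%
print(f"{implied_rate(1200, 8000, 26):.1%}")  # -> 7.3%
```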
Although the International Whaling Commission only considered North Pacific humpbacks to be one stock, there is now good evidence for multiple stocks in the North Pacific. There are at least three relatively separate populations that migrate between their respective summer/fall feeding areas and winter/spring calving and mating areas: the eastern North Pacific stock, the central North Pacific stock and the western North Pacific stock. These divisions are a simplification, however, and are not perfect. In general, interchange occurs (at low levels) between breeding areas, although fidelity is extremely high among the feeding areas.
Whales in the eastern North Pacific stock spend much of their lives within U.S. waters. They winter in coastal Central America and Mexico, migrate along the U.S. West Coast, and summer in British Columbia. Mark-recapture population estimates increased steadily from 1988/90 to 1997/98 at about 8 percent per year. Surveys of humpback whale abundance in feeding areas in California, Oregon and Washington conducted from 1991 to 2002 show a steady upward trend, with the exception of a large decline in 2000. The 2002 to 2003 population estimate (1,391, CV=0.22) was higher than any previous estimate and may indicate that the lower numbers in 1999 to 2001 exaggerated any real decline that might have occurred. It could also indicate that a real decline was followed by an influx of new whales from another area. This latter view was substantiated by a greater fraction of new whales seen for the first time in 2003.
The central North Pacific stock, in general, winters around the Hawaiian Islands (some go to Mexico) and migrates to northern British Columbia/Southeast Alaska and Prince William Sound west to Kodiak. Three feeding areas for the central North Pacific stock have been studied using photo-identification techniques; these include southeastern Alaska, Prince William Sound and Kodiak Island. There has been some exchange of individual whales between these locations, although the aggregation in southeastern Alaska seems to remain relatively isolated from other groups. The current total estimated abundance for this stock is 4,005 individuals. The abundance of the Prince William Sound feeding aggregation is thought to be fewer than 200 whales. In the Kodiak region, 127 individual whales were identified between 1991 and 1994, and abundance was estimated to be 651 (95 percent CI: 356-1,523). The number of animals in the Southeast Alaska aggregation is thought to have increased. The 2000 estimate of 961 is substantially higher than estimates from the 1980s, which put numbers in the high 300s. In a 2004 report, an annual population rate of increase was calculated to be 10 percent. Another study, based on aerial surveys conducted across the main Hawaiian Islands, and designed specifically to estimate trends in the central North Pacific stock, found an annual increase of 7 percent from 1993 to 2000. In 2006, the Southeast Alaska population reached 1,115.
The western North Pacific stock is the least studied of the North Pacific populations. This aggregation winters off Japan and probably migrates to waters west of the Kodiak Archipelago (the Bering Sea and Aleutian Islands) in summer/fall. Recent surveys in the central-eastern and southeastern Bering Sea in 1999 and 2000 resulted in humpback whale sightings suggesting that the Bering Sea is an important feeding area. New information indicates that humpback whales from the western and central North Pacific stocks mix on summer feeding grounds in the central Gulf of Alaska and perhaps the Bering Sea. A major research effort (the SPLASH project) was initiated in 2004 in order to better delineate stock structure of humpback whales in the North Pacific. There are no reliable estimates for the abundance of humpback whales in the western Pacific stock because surveys of the known feeding areas are incomplete, and not all feeding areas are known.
The 2010 estimate of abundance of humpback whales in the entire North Pacific Basin, based on a Chapman-Petersen estimate, is 21,808 (CV=0.04).
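A Chapman-Petersen estimate is a bias-corrected mark-recapture calculation: whales photo-identified in a first sampling period count as "marked," and the fraction re-sighted in a second period scales the sample up to total abundance. The sketch below shows the estimator itself; the sample sizes are hypothetical, since the actual SPLASH inputs behind the 21,808 figure are not given here.

```python
def chapman_estimate(marked: int, caught: int, recaptured: int) -> float:
    """Chapman's bias-corrected form of the Lincoln-Petersen estimator:

        N_hat = (M + 1) * (C + 1) / (R + 1) - 1

    where M animals are identified in the first sample, C in the
    second, and R appear in both.
    """
    return (marked + 1) * (caught + 1) / (recaptured + 1) - 1

# Hypothetical sample sizes, for illustration only:
print(round(chapman_estimate(4000, 4200, 770)))  # -> 21800
```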
National Marine Fisheries Service. 1991. Recovery Plan for the Humpback Whale (Megaptera novaeangliae). Prepared by the Humpback Whale Recovery Team for the National Marine Fisheries Service, Silver Spring, Maryland. 105 pp.
NatureServe. 2005. NatureServe’s Central Databases. Arlington, VA. U.S.A.
NOAA Fisheries. 2005. Stock Assessment Report. Humpback Whale (Megaptera novaeangliae): Gulf of Maine Stock, revised Dec. 2004. National Oceanic and Atmospheric Administration, Washington, D.C.
NOAA Fisheries. 2005. Stock Assessment Report. Humpback Whale (Megaptera novaeangliae): Eastern North Pacific Stock, revised May 15, 2005. National Oceanic and Atmospheric Administration, Washington, D.C.
Calambokidis, J., T. Chandler, L. Schlender, G.H. Steiger, and A. Douglas. 1995-2000. Final reports to Monterey Bay, Channel Islands, and Olympic Coast National Marine Sanctuaries, Southwest Fisheries Science Center, and University of California at Santa Cruz. Cascadia Research, 218 1/2 W Fourth Ave., Olympia, WA 9850. Accessed at http://www.cascadiaresearch.org/abstracts/abstract.htm.
NOAA Fisheries. 2005. Stock Assessment Report. Humpback Whale (Megaptera novaeangliae): Central North Pacific Stock, revised Feb 12, 2005. National Oceanic and Atmospheric Administration, Washington, D.C.
NOAA Fisheries, Office of Protected Resources. Cetaceans: Whales, Dolphins, and Porpoises. Humpback Whale. Website <http://www.nmfs.noaa.gov/pr/species/mammals/cetaceans/humpback_whale.doc> accessed January, 2005.
Mobley J. Jr., S. Spitz, and R. Grotefendt. 2001. Abundance of Humpback Whales in Hawaiian Waters: Results of 1993-2000 aerial surveys. Hawaiian Islands Humpback Whale National Marine Sanctuary Office of National Marine Sanctuaries. National Oceanic and Atmospheric Administration U.S. Dept of Commerce. Available at <http://hawaiihumpbackwhale.noaa.gov/research/HIHWNMS_Research_Mobley.pdf>.
NOAA Fisheries. 2005. Stock Assessment Report. Humpback Whale (Megaptera novaeangliae): Western North Pacific Stock, revised Feb 5, 2005. National Oceanic and Atmospheric Administration, Washington, D.C.
NOAA Fisheries. 2010. Draft Stock Assessment Report. Humpback Whale (Megaptera novaeangliae): Central North Pacific Stock, revised Jan 28, 2010. National Oceanic and Atmospheric Administration, Washington, D.C. | http://www.esasuccess.info/hawaii.shtml | 13 |
15 | - Learn about Metaphors on . Find info and videos including: How to Mix Metaphors, What is an Extended Metaphor?, How to Enhance a Resume With Metaphors and much more. — “Metaphors - ”,
- Metaphor is the concept of understanding one thing in terms of another. A metaphor is a figure of speech that constructs an analogy between two things or ideas; the analogy is conveyed by the use of a metaphorical word in place of some other word. — “Metaphor - Wikipedia, the free encyclopedia”,
- However, the explicit use of the word 'like' or 'as' which you see in a simile, is not used in a metaphor which is rather a comparison of two unlike things using the verb "to be". Hence, a metaphor sounds more forceful and suggestive, but is still very common in speech. — “Metaphors - list of metaphors at SaidWhat”,
- - Soon to be the world's largest collection of Metaphors! Metaphors are not just a literary form – they are a basic way that we think, communicate and relate to each other as humans. — “MetaphorSky - Soon to be the world's largest collection of”,
- Metaphors are a way to describe something. Authors use them to make their writing more interesting or entertaining. Unlike similes that use the words "as" or "like" to make a comparison, metaphors state that something is something else. Read the statements that contain metaphors in italics. — “Metaphors”,
- Metaphors page shows the importance of mental pictures in creating new ideas. — “Metaphor's Algebra”, ecometry.biz
- Metaphors for life. Definition: A figure of speech in which an implied comparison is made between two unlike things that actually have something in common. A metaphor expresses the unfamiliar (the tenor) in terms of the familiar (the vehicle). — “metaphor - definition and examples of metaphors - figures of”,
- What is a Metaphor? A metaphor is a figure of speech in which a word or phrase that denotes a certain object or idea is applied to another word. — “Metaphors - Study Skills”, how-to-
- The 2010 Victorian election began on Melbourne Cup day surrounded by horse racing metaphors, and perhaps it should end that way too. — “Surge to Vic Coalition could be blessing for Labor - ABC News”, .au
- "A newly invented metaphor assists thought by evoking a visual image," said Orwell, "while on the other hand a metaphor which is technically dead' (eg, iron resolution) has in effect reverted to being an ordinary word and can generally be used without loss of vividness. — “Research tools: information in depth | The Economist”,
- Poets use metaphor to explore the ideas, forces and powers that lay behind our rational thought and our Metaphors are embedded in metaphors, the 9 lines contain metaphors that describe pregnancy, but that whole system of metaphors is. — “Sylvia Plath,"Metaphors", Metaphor, and the number 9”,
- Describes common metaphors in education and how teachers' metaphors influence teacher quality and educational reform efforts. — “Teacher Metaphors”,
- The greatest thing by far is to be a master of metaphor – Aristotle People often associate metaphor with poetry, literature and art, but we all use metaphor in our day-to-day conversation, often without realizing it. — “Become a Master of Metaphor and Multiply Your Blogging”,
- A community about metaphors. Tag and discover new products. Share your images and discuss your questions with metaphors experts. — “: metaphors”,
- A metaphor is a figure of speech that compares two unlike things. their first cousins, similes, metaphors do not use the words like or as to make the. — “Common Usage Dilemmas: Mixed Metaphors: A Dollar Late and a”,
- Common organizational metaphors such as being a machine or an organism, playing a game, fighting a war, and climbing a mountain -- where the landscape is presumed by the metaphor to be fixed are contrasted to a world of continuous change, such as the biotech and internet industries. — “Metaphors”,
- Metaphors make life fun--and they mislead, by blurring the reality and the ***ogy. I work with people who want to think more clearly by controlling how language affects thinking. — “Metaphors and Their Abuses; business jargon | Chaco Canyon”,
- And they're off. Another surging flood is heading our way, a flood of mixed metaphors, embarrassing similes, bathetic descriptions, throbbing members, secret petals, nipples that resemble musical instruments, tiny cries of rapture, flames licking. — “Rude awakening for the risqué writers - Features, Books - The”,
- "All theories of organisation and management are based on implicit images or metaphors that persuade us to see, understand, and. — “Metaphors of Organisation part 1”,
- Top questions and answers about Metaphors. Find 14254 questions and answers about Metaphors at Read more. — “Metaphors - ”,
- The drive toward the formation of metaphors is the fundamental human drive, which one cannot for a single instant dispense with in thought, for one would thereby dispense with man himself. — “Metaphors - Wikiquote”,
- metaphor n. A figure of speech in which a word or phrase that ordinarily designates one thing is used to designate another, thus making an implicit. — “metaphor: Definition from ”,
related videos for metaphors
- Seal - Dreaming In Metaphors + lyrics Seal - Dreaming In Metaphors Lyrics: Love serenade, Soothe me with the morning sun. Help me find someone, Peaceful and non-judgemental. Holdin' me back, And make me feel whole with life. And stay the same. Life. Without the pain. Why must we dream in metaphors? Try to hold on to something...
- The 3 Self Improvement Metaphors Of Success www.patricchan.name - In this personal development video, I've used 3 simple metaphors to share with you how you can take action, stop procrastination and expand your knowledge.
- Similes and Metaphors in Pop Music Movie.wmv This video includes clips and lyrics from popular songs. All clips have similes and/or metaphors included.
- Metaphors - V Double O (The Fastest Rapper in the UK) LYRICS Verse 1 Lock load and explode a track that you crack on the roads From the dastardly *** of rap laying a smack on the globe Not packing a chrome but I'm ruckus like attacking a rogue I'd even jack a bus and then just take the passengers home I'm loose minded, neck where the noose finds it But never suicidal I'd rather get crucifed It's kinda possible that I meant crucified So you decide, whichever one that other dude Jesus survived If money's hard to find I kinda lose my mind I'd probably let you knock me over just to boost your ride You'd see me hop up off the tarmac with a look deranged Looking at me strange while I drive off in your custom Range Any rapper tryna rap about my rapping I devour See me driving round the town a thousand miles per hour Tryna put a dick in groupie chicks like every hour Tried to vent my brain but then my power drill ran out of power Flicking a rhyme like I'm sick in the mind With vocabulary bigger than big things that you find?? I'm like an engine in a fender bender broke on the road I'm making all this noise and grinding for this paper and dough Get it? I'm just full of these metaphors, I get applause Many want a blade in my back like a stegosaur Rappers wanna jump on my track like a hurdelor! But I'm just never calling em back like when wid a whore Creating tracks from the bounce to the keys I'm the mouse with the cheese or conversely the cat with the cream And if you see don't forget that you should pardon me please Disasters ...
- 1990-91 Gulf War briefing General Norman Schwarzkopf part 1 Schwarzkopf was born Herbert Norman Schwarzkopf Jr,in Trenton,New Jersey to Herbert Norman Schwarzkopf,then the Superintendent of the New Jersey State Police.The family resided in Lawrenceville,New Jersey.In January1952 his birth certificate was amended to make his name"H. Norman Schwarzkopf".His connection with the Persian Gulf region began very early on.In 1946,when he was 12,he and the rest of his family joined their father,stationed in Tehran,Iran,where his father would go on to be instrumental in Operation Ajax.He attended the Community High School in Tehran,later the International School of Geneva,and attended and graduated from Valley Forge Military Academy.He is also a member of Mensa.After attending Valley Forge Military Academy,Schwarzkopf,an army brat,attended the United States Military Academy,where he graduated 43rd in his class in 1956with a Bachelor of Science Degree.He also attended the University of Southern California,where he received a Master's degree in mechanical engineering in 1964.His special field of study was guided missile engineering, a program that USC developed with the Army, which incorporated equally both aeronautical and mechanical engineering.Upon graduating from West Point he was commissioned a Second Lieutenant.He received advanced infantry and airborne training at Fort Benning,Georgia after graduating from West Point.He was a platoon leader and served as executive officer of the 2nd Airborne Battle Group, 187th Airborne Infantry ...
- Mega64 Presents: EGM Metaphors The mad scientists of gaming mourn the passing of EGM.
- Sparks - Metaphor (live) Another Kick-Ass perfomance
- Idea Framing, Metaphors, and Your Brain - George Lakoff Complete video at: fora.tv UC-Berkeley Linguistics Professor George Lakoff discusses how idea framing and metaphors contribute to shaping the way we think. ----- UC Berkeley Professor George Lakoff discusses concepts from his new book, The Political Mind: Why You Can't Understand 21st-Century American Politics with an 18th-Century Brain. George P. Lakoff is a professor of linguistics (in particular, cognitive linguistics) at the University of California, Berkeley, where he has taught since 1972. Although some of his research involves questions traditionally pursued by linguists, such as the conditions under which a certain linguistic construction is grammatically viable, he is most famous for his ideas about the centrality of metaphor to human thinking, political behavior and society. He is particularly famous for his concept of the "embodied mind" which he has written about in relation to mathematics. In recent years he has applied his work to the realm of politics, and founded a progressive think tank, the Rockridge Institute. Joe Epstein is the former President of The Commonwealth Club's Board of Governors.
- Metaphors a little video on metaphors
- Sparks-Metaphor The Song "Metaphor" from Sparks and their Album "Hello Young Lovers" Songtext: A metaphor is a glorious thing, A diamond ring, The first day of summer A metaphor is a breath of fresh air, A turn-on, An aphrodisiac Chicks dig, dig, dig, dig, dig metaphors, Chicks dig, dig, dig,...
- Chevelle - Shameful Metaphors I DO NOT OWN THIS SONG CHEVELLE DOES. If you like rock and want more of it, i have all songs from various bands on my channel!!! Please check out my channel, subscribe, rate, and comment =)
- Top 5 Movie Metaphors: The Rotten Tomatoes Show South African alien Apartheid is front and center in this weekend's District 9 so Brett decided to dig into the Top 5 Movie Metaphors of all time. No similes' allowed! Watch More Rotten Tomatoes, now part of infoMania, Thursdays 11/10c. For more from the Rotten Tomatoes Show: For more about movies from Current
- Understand Your Customers' Minds An interview with Gerald Zaltman, Professor Emeritus, Harvard Business School. Learn how to create more effective marketing campaigns based on your customers' unconscious thoughts and feelings.
- Reptilian, Nordic, and Nordic-Reptilian as Racial Metaphor Those who are concerned with such matters sometimes speak of extraterrestrial influence in terms of Reptilians, Nordic Aliens, and Nordic-Reptilian hybrids. Whether or not one accepts the literal reality of such a phenomenon, the psychic gravity which it exerts upon human mental interest is significant, and therefore of importance, if not literally, then symbolically. Let us explore these three influences as racial and hemispheric metaphors. Generally speaking, the Reptilians are representative of Middle Eastern influence, both psychically and physically, as well as the influence of the peoples of the Southern hemisphere of the globe. These are the lands and peoples associated with Enki/Ea in the Sumerian stories, who was given domain of the South as a result of the feud with his half-brother Enlil who was given domain of the North. Note that even the Europeans who are said to bear Reptilian influence do so because of alleged links with Nephlimic bloodlines--such as the Merovingian--which relate back to Middle Eastern influence. The perception of Reptilians is that much of their influence is exerted at a subconscious level, even as Middle Easterners have demonstrated spiritual/psychological dominance over humanity via their homegrown religions of Judaism, Christianity, and Islam. In short, they are the masters of right brain influence. Therefore, we can equate Reptilian influence with Enki, the Middle East, the Southern Hemisphere of the globe, and the right hemisphere of ...
- WaterlooSunset - Andy Mackay and the Metaphors.wmv Andy Mackay and the Metaphors, doing Justice to an old classic, so cool, music for the Lounge Lizard in all of us.
- Dreamtime - Dreamspace (Nightmares, Astral Trapping and the Metaphoric Astral Code) Why do people get nightmares? Is it true that certain individuals who are advanced in astral projection can deliberately lay traps for astral travellers? If you get trapped inside one of these traps, will you then not be able to return to your physical body? The White Winged Collective Consciousness of Nine, through Magenta Pixie, respond to these questions. Written and performed by Magenta Pixie, Images courtesy of stock.xchng, Music by Kevin Macleod. Video arranged and edited by Daniel Saunders aka Catzmagick Productions.
- How To Rap With Punchlines, Similes & Metaphors Spyda "The Wise Musician" A Punchline Metaphor Dynamo Spitting that Fire!!!!!!!!!!!!!!!!!!
- Metaphor-Free Radio Your favorite songs skip the BS and get right to the point. See more at Free CHTV video podcast on iTunes: CH Facebook Fan Page: Watch this on CHTV and view credits at
- Chevelle - Shameful Metaphors new song off of their new album "Sci Fi Crimes" I dont own the rights to this music, or the copywright for it. I ONLY wish to use it for entertainment purpose
- Living Metaphors John Grinder tells a story of a living metaphor he created on a trip to Russia.
- Shameful Metaphors by Chevelle Shameful Metaphors by chevelle from Sci-Fi Crimes
- Metaphors - English Lesson 24 www.learn-to-speak-english- This lesson has 15 metaphors to aid ESL students to learn English metaphors used in conversation. Music by Garth Brooks singing Desperado.SPECIAL OFFER * My Award-Winning "Speak English Here And Now" ESL video course is now only $9.95. Learn important English Conversation Rules & the Right Things To Say in male and female dialogs. Hundreds of speaking tips. For a FREE LESSON, click the URL above. Teacher Frank TRANSCRIPT ENGLISH METAPHORS LESSON 1 Symbolism - using a symbol of one thing to mean another thing. A symbol (n) is a thing that represents something else. In literature, a material object may stand for something abstract, an exact metaphor. 2 Desperado, why don't you come to your senses? You've been out ridin' fences for so long now. "desperado" in Spanish means a reckless criminal, but here it means a "desperate" man (in despair, hopeless). "riding fences" (n) means protecting feelings like a cowboy fixing fences to protect his land. Note: The adjectives forms of the noun metaphor is "metaphoric" and "metaphorical." The adverb form is "metaphorically." 3 Oh, you're a hard one. I know that you got your reasons. These things that are pleasin' you can hurt you somehow. "hard one" (adj) is a tough person who doesn't show feelings of affection. 4 Don't you draw the queen of diamonds, boy. She'll beat you if she's able. You know the queen of hearts is always your best bet. "queen of diamonds" (n) is a card symbol for wealth and power ...
- Similes and Metaphors In Common Music. wow i can not belive this got over 1000 veiws Thanks Mrs Standley And the 5th grade class of 09-10
- English Shorts Example- Simile, Metaphor, Personification This is a quick sample of what I want your project to look like. Pay attention to what I did NOT do well like camera focus, volume, font size, etc. so that your own video is much better.
- George Lakoff on how he started his work on conceptual metaphor George Lakoff on how he started his work on conceptual metaphor, particularly the 'LOVE IS A JOURNEY' metaphor.
- Victor Ostrovsky decoding the metaphor Decoding Victor Ostrovsky's metaphors of espionage paintings.
- Saigon:"Word Play & Metaphors Don't Make You Nice"Part3 of 3 Rapper Saigon speaks on rumors of signing to Roc Nation, "word play & metaphors dont make you nice, you need to touch people", his new album will change Hip Hop and his non profit community work with In Arms Reach. Part 3 of 3.Interview & Filming by Bruno.
- Similes and Metaphors in Songs This was a project I did in my 9th grade English class. Its purpose was to teach students the definition of similes and metaphors. Showing examples in songs that we hear everyday on the radio is a great way to explain similes. FYI: I may have a different view on the interpretations of the similes and metaphors. :D Songs: Paperplanes- MIA (intro) Pokerface - Lady Gaga Heartless - Kanye West Circus - Britney Spears Sugar - Flo-rida Treat me like a Rose? - Azn dreamers Just the Girl - The Click 5 ______ - Switchfoot Lonely - Akon Life your Life - TI
- Chevelle - Shameful Metaphors (Live at The Metro, Chicago) Music video by Chevelle performing Shameful Metaphors. (C) 2011 Sony Music Entertainment
- Chevelle - Shameful Metaphors Music video by Chevelle performing Shameful Metaphors. (C) 2010 Sony Music Entertainment
- Similes and Metaphors
- Lyrics to Shameful Metaphors-Chevelle here are lyircs to shameful metaphors let me know if i made any mistakes and i'll think if fixing them lol:):):) please comment && rate:D:D
- Real-Time Data Metaphors, understanding CHANGE without reading numbers: BASHI... Google Tech Talks June 26, 2008 ABSTRACT Peripheral Information Awareness through Evolving Mood Maps Representing multivariable changes of complex data sets with beautiful developing landscapes The idea behind the Panorama solution is to express the overall 'mood' of evolving, complex data (such as the development of the stock market) in rendered 3D animations that can be perceived and interpreted with little cognitive effort. The software application maps variables of a data set (eg bonds, shares, overall trading intensity or fluctuation of the stock market) to graphic parameters in a 3D simulation, such as ocean waves, sun strength, wind speed, cloud particles etc. Developments of the stock market, for example, become perceivable by cloud transformations, wave precipitations, and changes in sunlight. The result is a beautiful, developing scene in which observers (eg traders) can monitor several streams of background information without effort in their peripheral vision. Whenever this background information signals particular relevance in a given context, it moves to the observer's foreground attention. In this way information can become functional art instead of just a burden. Speaker: Roberto Vitalini
- Gram Rabbit - Candy Flip - Miracles & Metaphors Gram Rabbit - Candy Flip - Miracles & Metaphors - Shot in Joshua Tree. Music by Gram Rabbit from Miracles & Metaphors. Directed by Giuseppe Asaro. Produced & Edited by Antonio Mendoza. Space-coat, side effects, triple-dip, candy flip Are you extra-technical *** Party in the desert, party in the desert, everybody wants to party in the desert Were gonna boogie down in this desert town Rapping to the beat of the desert sound now Were gonna kick it up, were gonna lick it up Were gonna figure it out before we forget it Party in the desert, party in the desert, everybody wants to party in the desert The party favors past, the pipe is smoking grass The lines are blurry but the musics bumping Stuck in a native trance, we call buffalo stance Well keep on dancing til we feel the earth quake Space-coat, side effects, triple-dip, candy flip Its getting windy out, coyotes scream and shout They want a piece of the celebration Bollocks have just begun, wont until the sun Shows its face on a rugged playground
- Prof. George Lakoff - Reason is 98% Subconscious Metaphor in Frames & CULTural Narratives May I borrow a moment of your time? American Behavioral Scientist-Feb. 2010 Says SCADs are State Crimes Against Democracy including but not limited to Plamegate, WMD, 911, Incubator babies, S&L, Iran/Contra, Watergate, JFK, Korean War, Pearl Harbor & The USS Maine. 2+2=5 Terror Management Theory people.uncw.edu System Justification Theory www.psych.nyu.edu Cognitive Dissonance Theory Motivated reasoning www.psych.utoronto.ca PTSD Nation Thought is 98% unconscious. Collectively, these offer insight into reasons why it feels good to case build for fantasy beLIEfs of immortality Government collects taxes runs the police surveillance state fill prisons & bombs people That is its job It builds highways so its military can travel freely about the empire It funds hospitals to record the births & deaths of all subjects & to care for its ailing bureaucracy that is dying from the Global eugenics pogroms & processed poisons they pass off as food It runs state schools where it is mandatory to be brainwashed through indoctrination into the ideology of beLIEf in unquestioning obedience to authority All organized religions are symbolic. They are human creations. They all offer unrealistic answers to the three big questions, where do we come from, what are we doing here & where are we going. They all offer their beLIEvers immortality & they keep us fighting among ourselves over who will be the winning liars ...
- Metaphor - In Flames [high quality] You stole my pure intention You are the sickness in between Let me in, I'll bury the pain You taught me to be sad as you You almost made me take it all Let me in, I'll bury the pain You bend me and you shake me You beg me then you break me Let me in, I'll bury the pain You made me feel like a sinner You fear you'll die alone Let me in, I'll bury the pain The sickness that you are The plague that made me starve You think you can show me how I've come this far The sickness that you are The plague that made me starve You think you can show me how I've come this far I feel it's taking over Everything falls dark Break me open The desperate cry The sickness that you are The plague that made me starve You think you can show me how I've come this far The sickness that you are The plague that made me starve You think you can show me how I've come this far
- Similes and Metaphors in Pop Culture! YouTube has blocked the FAR superior sequel to this video: Similes and Metaphors in Pop Culture 2 Head over to and watch it (it's a thousand times better than the original) Mr. Wasko made an educational movie film for the childrens, so they can grow up and be smart. And so that they can tell their parents they watched Omarion videos in class. Also, check out another one of Mr. Wasko's educational movie films entitled, "GENRES, son." "It's really good!" - Mr. Wasko "It makes your brain smarter!" - Mr. Wasko "It's on a computer!" - Samuel L. Jackson GENRES, son:
- God, Morality and Gratuitous Football Metaphors. Response to Epydemic2020 on Christian morality. Am I SERIOUSLY incapable of making a video under twenty minutes, now? Is this what it's come to?
- CheVelle "Shameful Metaphors" (Sci-Fi Crimes) 2009 CheVelle "Shameful Metaphors" (Sci-Fi Crimes) 2009, Pete Loeffler,Sam Loeffler,Dean Bernardini,Joe Leffler, Vena Sera,Closure Send the pain Below, the Red HardRock, Rock 2009, Fan Vids, Sleep apnea roswell's spell, This Circus.
- Hypnosis For Sleep 6 (Sleep Temples Metaphor), Eddini For this wonderful product go to Thanks for purchasing this very awesome MP3 for Sleep part six.This is a FULL 43 minute Hypnosis Session which at the end I can promise 99% of you will be asleep. Here a advanced form of Hypnosis is used which is called NLP (Neuro Linguistic Programming) or what is called Conversational Hypnosis.That means a person can be alert even with eyes open, no induction needed but you have Hypnotic Suggestions hidden through out. Here I do however do a Hypnotic Induction though short, and I have you close your eyes after having you pause this so you can get ready for bed as in this I do NOT count you out as this is to be listened to only when you are ready to sleep. Here I use metaphors. See a lot of times the conscious thinking mind don't grasp metaphors but the subconscious mind can and does grasp metaphors. See on youtube my video for premature ejaculation where I talk about the grand canyon but listening to that closely you see how Hypnotically you are being helped for premature ejaculation. Here I use after putting you into Hypnosis a Metaphor about the ancient sleep temples of Egypt. With the sleep temples even consciously to a extent you'll see how in the story you are being put into deep sleep. Here you'll Meet Jonathan a traveler to ancient civilizations and how a tour guide, guides him to the many sleep temples. You'll hear how he can read hieroglyphics on the walls and hear the many stories on sleep. One being about a son ...
- Living Metaphor - Danie Beaulieu NLP video demonstration This is a sample from a 2hr, 15min video. Get it here: j.mp Danie Beaulieu offers a stunningly evocative array of ways to use simple objects as visual and behavioral metaphors for key aspects of a client's problem, and then use these to work toward resolution. You can and read about her work here: Her use of her own nonverbal behavior to elicit states in clients and participants is an added bonus, and particularly useful for women who want to increase their range of expressiveness with clients. Recorded at the 2010 Advanced Mastery Training in Boulder Colorado. Get the full video on DVD or instant download for $59 here: j.mp
- Chevelle- Shameful Metaphors This is a video I created to "Shameful Metaphors" using clips of different Chevelle music and live videos. This is not an official video, just one I did for fun :)
twitter about metaphors
Blogs & Forum
blogs and forums about metaphors
“Website of Sepher | Rob Vens. Philosophical articles on IT architecture and language. Free and open source tools. Metaphors can blind our understanding. 03 March 2009. Posted in Blog - Blog. There are no translations available. In my country, The Netherlands, a discussion between creationism and”
— Sepher - Metaphors can blind our understanding,
“Customer experience optimization secret: keeping your brand promises. Metaphors are also very useful for expanding your organization's customer centricity. It's time [ ] Customer Experience Optimization " Blog Archive " Fall in Love with Your Customers for Best Customer”
— Customer Experience Optimization " Blog Archive " Customer, clearaction.biz
“Shana Moulton, Whispering Pines 9, video, 2009 In an essay titled "Cyborg Anthropology," Gary Lee Downey, Joseph Dumit and Sarah Williams offer a sort”
— Future Metaphors: An Introduction | Art21 Blog, blog.art21.org
“Contrast The blog The feed. Broken metaphors. Tweetie used to be just a very popular iPhone desktop, wizard/assistant, depth and other metaphors to help us decide the interface of”
— Contrast | The Blog | Broken metaphors, contrast.ie
“Have you considered the impact of the metaphors you use? Home " Blog " Mind Your Metaphors. Mind Your Metaphors. Culture warrior. Or would you rather be a culture ambassador? The former sounds confrontational and the latter relational. One suggests an attack and the other an improvement”
— Mind Your Metaphors | Sanborn and Associates,
“A blog by James Geary. Metaphors via Edward Bulwer-Lytton. Posted on Mixed metaphors and outlandish metaphors may be stylistic faux pas, but they are”
— Metaphors via Edward Bulwer-Lytton > All Aphorisms, All the Time,
“The Metaphor Project: Wow! Even More! This is the third set and I'm so enjoying reading these metaphors. I always knew that bloggers were teachers at”
— More Metaphors to Explore | Liz Strauss at Successful Blog, successful-
“Metaphors as alternative to reducing the absolute complexity. Metaphors, by definition, analysis of metaphors must rest on empirical task analysis of what users actually do”
— jeffrey heer >> blog >> paper: interface metaphors, | http://wordsdomination.com/metaphors.html | 13 |
17 | Shipbuilding is the construction of ships and floating vessels. It normally takes place in a specialized facility known as a shipyard. Shipbuilders, also called shipwrights, follow a specialized occupation that traces its roots to before recorded history.
The dismantling of ships is called ship breaking.
Archaeological evidence indicates that humans arrived on Borneo at least 120,000 years ago, probably by sea from the Asian mainland during an ice age, when sea level was lower and distances between islands were shorter (see History of Borneo and Papua New Guinea). The ancestors of Australian Aborigines and New Guineans also went across the Lombok Strait to Sahul by boat over 50,000 years ago.
4th Millennium BC
Evidence from Ancient Egypt shows that the early Egyptians knew how to assemble planks of wood into a ship hull as early as 3000 BC. The Archaeological Institute of America reports that some of the oldest ships yet unearthed are known as the Abydos boats: a group of 14 ships discovered at Abydos that were constructed of wooden planks "sewn" together. Discovered by Egyptologist David O'Connor of New York University, woven straps were found to have been used to lash the planks together, and reeds or grass stuffed between the planks helped to seal the seams. Because the ships were all buried together and near a mortuary belonging to Pharaoh Khasekhemwy, they were originally all thought to have belonged to him, but one of the 14 ships dates to 3000 BC, and the associated pottery jars buried with the vessels also suggest earlier dating. The ship dating to 3000 BC was about 25 m (75 ft) long and is now thought to perhaps have belonged to an earlier pharaoh. According to Professor O'Connor, the 5,000-year-old ship may even have belonged to Pharaoh Aha.
3rd Millennium BC
Early Egyptians also knew how to assemble planks of wood with treenails to fasten them together, using pitch for caulking the seams. The "Khufu ship", a 43.6-meter vessel sealed into a pit in the Giza pyramid complex at the foot of the Great Pyramid of Giza in the Fourth Dynasty around 2500 BC, is a full-size surviving example which may have fulfilled the symbolic function of a solar barque. Early Egyptians also knew how to fasten the planks of this ship together with mortise and tenon joints.
The oldest known tidal dock in the world was built around 2500 BC during the Harappan civilisation at Lothal, near the present-day Mangrol harbour on the Gujarat coast of India. Other ports were probably at Balakot and Dwarka. However, it is probable that many small-scale ports, rather than massive ones, were used for Harappan maritime trade. Ships from the harbours at these ancient port cities established trade with Mesopotamia. Shipbuilding and boatmaking may have been prosperous industries in ancient India. Native labourers may have manufactured the flotilla of boats used by Alexander the Great to navigate across the Hydaspes and even the Indus, under Nearchos. The Indians also exported teak for shipbuilding to ancient Persia. Other references to Indian timber used for shipbuilding are noted in the works of Ibn Jubayr.
2nd millennium BC
The ships of Ancient Egypt's Eighteenth Dynasty were typically about 25 meters (80 ft) in length, and had a single mast, sometimes consisting of two poles lashed together at the top making an "A" shape. They mounted a single square sail on a yard, with an additional spar along the bottom of the sail. These ships could also be oar propelled.
The ships of Phoenicia seem to have been of a similar design. The Greeks and probably others introduced the use of multiple banks of oars for additional speed, and the ships were of a light construction for speed and so they could be carried ashore.
1st millennium BC
The naval history of China stems back to the Spring and Autumn Period (722 BC–481 BC) of the ancient Chinese Zhou Dynasty. The Chinese built large rectangular barges known as "castle ships", which were essentially floating fortresses complete with multiple decks and guarded ramparts.
Early 1st millennium AD
The ancient Chinese also built ramming vessels in the Greco-Roman tradition of the trireme, although oar-steered ships lost favor in China very early on, since it was in 1st-century China that the stern-mounted rudder was first developed. This coincided with the introduction of the Han Dynasty junk ship design in the same century.
Medieval Europe, Song China, Abbasid Caliphate, Pacific Islanders
Viking longships developed from an alternate tradition of clinker-built hulls fastened with leather thongs. Sometime around the 12th century, northern European ships began to be built with a straight sternpost, enabling the mounting of a rudder, which was much more durable than a steering oar held over the side. Development in the Middle Ages favored "round ships", with a broad beam and heavily curved at both ends. Another important ship type was the galley which was constructed with both sails and oars.
An insight into shipbuilding in the North Sea/Baltic areas of the early medieval period was found at Sutton Hoo, England, where a ship was buried with a chieftain. The ship was 26 metres (85 ft) long and 4.3 metres (14 ft) wide. Upward from the keel, the hull was made by overlapping nine planks on either side, with rivets fastening the oaken planks together. It could hold upwards of thirty men.
The first extant treatise on shipbuilding was written ca. 1436 by Michael of Rhodes, a man who began his career as an oarsman on a Venetian galley in 1401 and worked his way up into officer positions. He wrote and illustrated a book that contains a treatise on ship building, a treatise on mathematics, much material on astrology, and other materials. His treatise on shipbuilding treats three kinds of galleys and two kinds of round ships.
Outside Medieval Europe, great advances were being made in shipbuilding. The shipbuilding industry in Imperial China reached its height during the Sung Dynasty, Yuan Dynasty, and early Ming Dynasty, building commercial vessels that by the end of this period were to reach a size and sophistication far exceeding that of contemporary Europe. The mainstay of China's merchant and naval fleets was the junk, which had existed for centuries, but it was at this time that the large ships based on this design were built. During the Sung period (960–1279 AD), the establishment of China's first official standing navy in 1132 AD and the enormous increase in maritime trade abroad (from Heian Japan to Fatimid Egypt) allowed the shipbuilding industry in provinces like Fujian to thrive as never before. The largest seaports in the world were in China and included Guangzhou, Quanzhou, and Xiamen.
In the Islamic world, shipbuilding thrived at Basra and Alexandria, the dhow, felucca, baghlah and the sambuk, became symbols of successful maritime trade around the Indian Ocean; from the ports of East Africa to Southeast Asia and the ports of Sindh and Hind (India) during the Abbasid period.
At this time, islands spread over vast distances across the Pacific Ocean were being colonised by the Melanesians and Polynesians, who built giant canoes and progressed to great catamarans.
Early Modern
With the development of the carrack, the west moved into a new era of ship construction by building the first regular ocean going vessels. In a relatively short time, these ships grew to an unprecedented size, complexity and cost.
Shipyards became large industrial complexes, and the ships built were financed by consortia of investors. These considerations led to the documentation of design and construction practices in what had previously been a secretive trade run by master shipwrights, and ultimately led to the field of naval architecture, where professional designers and draughtsmen played an increasingly important role. Even so, construction techniques changed only very gradually. The ships of the Napoleonic Wars were still built more or less to the same basic plan as those of the Spanish Armada of two centuries earlier, but there had been numerous subtle improvements in ship design and construction throughout this period: for instance, the introduction of tumblehome; adjustments to the shapes of sails and hulls; the introduction of the wheel; the introduction of hardened copper fastenings below the waterline; and the introduction of copper sheathing as a deterrent to shipworm and fouling.
Industrial Revolution
The industrial revolution made possible the use of new materials and designs that radically altered shipbuilding. Other than its widespread use in fastenings, iron was gradually adopted in ship construction, initially in discrete areas of a wooden hull needing greater strength (e.g. as deck knees, hanging knees, knee riders and the like). Then, in the form of plates riveted together and made watertight, it was used to form the hull itself. Initially copying wooden construction traditions with a frame over which the hull was fastened, Isambard Kingdom Brunel's Great Britain of 1843 was the first radical new design, being built entirely of wrought iron. Despite her success, and the great savings in cost and space provided by the iron hull compared to a copper-sheathed counterpart, there remained problems with fouling due to the adherence of weeds and barnacles. As a result, composite construction remained the dominant approach where fast ships were required, with wooden timbers laid over an iron frame (the Cutty Sark is a famous example). Later, Great Britain's iron hull was sheathed in wood to enable it to carry a copper-based sheathing. Brunel's Great Eastern represented the next great development in shipbuilding. Built in association with John Scott Russell, it used longitudinal stringers for strength, inner and outer hulls, and bulkheads to form multiple watertight compartments. Steel also supplanted wrought iron when it became readily available in the latter half of the 19th century, providing great savings in cost and weight compared with iron. Wood continued to be favored for the decks, and is still the rule as deck covering for modern cruise ships. Scotts Shipbuilding & Engineering Co. Ltd of Greenock, Scotland, is a notable example of a shipbuilding firm that lasted nearly 300 years.
Worldwide shipbuilding industry
After the Second World War, shipbuilding (which encompasses the shipyards, the marine equipment manufacturers, and many related service and knowledge providers) grew as an important and strategic industry in a number of countries around the world. This importance stems from:
- The large number of skilled workers required directly by the shipyard, along with supporting industries such as steel mills, railroads and engine manufacturers; and
- A nation's need to manufacture and repair its own navy and the vessels that support its primary industries.
Historically, the industry has suffered from the absence of global rules and from a tendency towards (state-supported) over-investment, because shipyards offer a wide range of technologies, employ a significant number of workers, and generate income from a global shipbuilding market.
Shipbuilding is therefore an attractive industry for developing nations. Japan used shipbuilding in the 1950s and 1960s to rebuild its industrial structure; South Korea started to make shipbuilding a strategic industry in the 1970s, and China is now in the process of repeating these models with large state-supported investments in this industry. Conversely, Croatia is privatising its shipbuilding industry.
As a result, the world shipbuilding market suffers from overcapacity, depressed prices (although the industry experienced a price increase in the period 2003–2005 due to strong demand for new ships in excess of actual cost increases), low profit margins, trade distortions and widespread subsidisation. All efforts to address these problems in the OECD have so far failed, with the 1994 international shipbuilding agreement never entering into force and the 2003–2005 round of negotiations being paused in September 2005 after no agreement was possible. After numerous efforts to restart the negotiations, these were formally terminated in December 2010. The OECD's Council Working Party on Shipbuilding (WP6) will continue its efforts to identify and progressively reduce factors that distort the shipbuilding market.
Where state subsidies have been removed and domestic industrial policies do not provide support, shipbuilding in high-cost nations has usually gone into steady, if not rapid, decline. The British shipbuilding industry is a prime example: from a position in the early 1970s where British yards could still build the largest types of sophisticated merchant ships, British shipbuilders today have been reduced to a handful specialising in defence contracts and repair work. In the USA, the Jones Act (which places restrictions on the ships that can be used for moving domestic cargoes) has meant that merchant shipbuilding has continued, but such protection has allowed shipbuilding inefficiencies to persist unpenalised. The consequence is contract prices far higher than those of any other nation building oceangoing ships.
Present day shipbuilding
Today, South Korea is the world's largest shipbuilding country with a global market share of 51.2% in 2011. South Korea leads in the production of large vessels such as cruise liners, super tankers, LNG carriers, drill ships, and large container ships. In the 3rd quarter of 2011, South Korea won all 18 orders for LNG carriers, 3 out of 5 drill ships and 5 out of 7 large container ships.
Japan was the dominant shipbuilding country from the 1960s through to the end of the 1990s but gradually lost its competitive advantage to the emerging industry in South Korea, which had the advantages of much cheaper wages, strong government backing and a cheaper currency. South Korean production overtook Japan's in 2003, and Japanese market share has since fallen sharply. The market share of European shipbuilders began to decline in the 1960s as they lost work to Japan, just as Japanese builders have more recently lost work to South Korea; Europe's production is now a tenth of South Korea's and is primarily military, although cruise liners are still built in Italy, Finland and France. The output of the United States underwent a similar change.
South Korea's shipyards are highly efficient, with the world's largest shipyard in Ulsan operated by Hyundai Heavy Industries slipping a newly-built, $80 million vessel into the water every four working days. South Korea's "big three" shipbuilders, Hyundai Heavy Industries, Samsung Heavy Industries, and Daewoo Shipbuilding & Marine Engineering, dominate global shipbuilding, with STX Shipbuilding, Hyundai Samho Heavy Industries, Hanjin Heavy Industries, and Sungdong Shipbuilding & Marine Engineering also ranking among the top ten shipbuilders in the world. In 2007, STX Shipbuilding further strengthened South Korea's leading position in the industry by acquiring Aker Yards, the largest shipbuilding group in Europe. (The former Aker Yards was renamed STX Europe in 2008). In the first half of 2011, South Korean shipbuilders won new orders to build 25 LNG carriers, out of the total 29 orders placed worldwide during the period.
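As a rough back-of-envelope check of the launch rate quoted above, the sketch below converts "one vessel every four working days" into an implied annual output for a single yard. The figure of roughly 250 working days per year is an assumption, not something stated in the text.

# Implied annual output of one yard, under the assumptions noted above (Python).
WORKING_DAYS_PER_YEAR = 250        # assumption; not stated in the text
DAYS_PER_LAUNCH = 4                # from the text
VESSEL_PRICE_USD = 80_000_000      # from the text

launches_per_year = WORKING_DAYS_PER_YEAR // DAYS_PER_LAUNCH
annual_output_usd = launches_per_year * VESSEL_PRICE_USD
print(launches_per_year)           # 62 vessels per year
print(f"${annual_output_usd:,}")   # $4,960,000,000 -- roughly $5 billion a year

Under these assumptions, a single yard delivers on the order of sixty vessels, or about $5 billion of shipping, per year.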
China is an emerging shipbuilder that briefly overtook South Korea during the 2008–2010 global financial crisis as it won new orders for medium and small-sized container ships on the strength of cheap prices, although its current production is limited mainly to basic vessels.
[Table: World shipbuilding market share by country (2011)]
Modern shipbuilding manufacturing techniques
Modern shipbuilding makes considerable use of prefabricated sections. Entire multi-deck segments of the hull or superstructure will be built elsewhere in the yard, transported to the building dock or slipway, then lifted into place. This is known as "block construction". The most modern shipyards pre-install equipment, pipes, electrical cables, and any other components within the blocks, to minimize the effort needed to assemble or install components deep within the hull once it is welded together.
Ship design work, also called naval architecture, may be conducted using a ship model basin. Modern ships, since roughly 1940, have been built almost exclusively of welded steel. Early welded steel ships used steels with inadequate fracture toughness, which resulted in some ships suffering catastrophic brittle-fracture structural cracks (see the problems of the Liberty ships). Since roughly 1950, specialized steels such as ABS steels, with good properties for ship construction, have been used. Although it is commonly accepted that modern steel has eliminated brittle fracture in ships, some controversy still exists. Brittle fracture of modern vessels continues to occur from time to time, because grade A and grade B steel of unknown toughness or fracture appearance transition temperature (FATT) in ships' side shells can be less than adequate for all ambient conditions.
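To make the FATT criterion above concrete, here is a minimal sketch that flags a plate whose transition temperature lies above the coldest expected service temperature; below the FATT, steel tends to fail in a brittle rather than a ductile mode. The numbers are hypothetical illustrations, not values taken from any classification rule.

# Minimal sketch of the FATT screening idea (Python); thresholds are hypothetical.
def brittle_fracture_risk(fatt_c: float, min_service_temp_c: float) -> bool:
    """Flag a plate whose coldest service temperature falls below its FATT,
    i.e. into the temperature range where brittle failure dominates."""
    return min_service_temp_c < fatt_c

# Hypothetical example: a plate with a FATT of +5 C operating in -10 C
# North Atlantic conditions would be flagged as a brittle-fracture risk.
print(brittle_fracture_risk(fatt_c=5.0, min_service_temp_c=-10.0))  # True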
Ship repair industry
All ships need maintenance and repairs. Some of this work must be carried out under the supervision of the classification society. Much maintenance is carried out at sea or in port by the ship's crew; however, many repair and maintenance jobs can only be carried out while the ship is out of commercial operation, in a ship repair yard. Prior to undergoing repairs, a tanker must dock at a deballasting station to complete its tank cleaning operations and pump ashore its slops (dirty cleaning water and hydrocarbon residues).
See also
- Boat building
- List of shipbuilders and shipyards
- List of Russian shipbuilders
- Marine propulsion
- Naval architecture
- Shipbuilding (song)
- Ward, Cheryl. "World's Oldest Planked Boats", in Archaeology (Volume 54, Number 3, May/June 2001). Archaeological Institute of America.
- Schuster, Angela M.H. "This Old Boat", 11 December 2000. Archaeological Institute of America.
- Possehl, Gregory. "Meluhha." In J. Reade (ed.), The Indian Ocean in Antiquity. London: Kegan Paul Intl., 1996, 133–208.
- (e.g. Lal 1997: 182–188)
- Tripathi, page 145
- Hourani & Carswell, page 90
- Robert E. Krebs, Carolyn A. Krebs (2003). Groundbreaking Scientific Experiments, Inventions, and Discoveries of the Ancient World. Greenwood Press. ISBN 0-313-31342-3.
- "Michael of Rhodes: A medieval mariner and his manuscript". Meseo Galileo. Dibner Institute for the History of Science and Technology. 2005. Retrieved 12 December 2010.
- Pamela O. Long, David McGee, and Allan M. Stahl, eds. The Book of Michael of Rhodes: A Fifteenth-Century Maritime Manuscript, 3 vols. (Cambridge, MA: MIT Press, 2009).
- Colin Tipping, "Technical Change & the Ship Draughtsman". The Mariner's Mirror 84, no. 4 (1998): 458 ff.
- McCarthy, M., 2005 Ship’s Fastenings: from sewn boat to steamship. Texas A&M Press. College Station. ISBN 1-58544-451-0
- Johnston Fraser Robb, "Scotts of Greenock, 1820-1920: A Family Enterprise", 1993. British Library EThOS 513119.
- Korean: Korea Marine, # 1 four years recaptured, English
- James Brooke (2005-01096). "Korea reigns in shipbuilding, for now". The New York Times. Retrieved 30 December 2009.
- "기획특집/ 1등 조선.해양 한국에 도전하는 해외 국가별 조선산업 현황: 1)일본, 중국, 인도, 베트남, 브라질, 폴란드, 터키, 독일 조선산업의 현황과 전망/(월간 해양과조선 2008년 11월호)". Shipbuilding.or.kr. Retrieved 2010-11-17.
- "7 Korean Shipbuilders Rank in Top 10". Marinetalk.com. 2006-01-03. Retrieved 2010-11-17.
- http://www.hellenicshippingnews.com/index.php?option=com_content&view=article&id=34664:s-korea-overtakes-china-as-worlds-top-shipbuilder-in-h1-&catid=7:shipbuilding-news&Itemid=71[dead link]
- Drouin, P: "Brittle Fracture in ships - a lingering problem", page 229. Ships and Offshore Structures, Woodhead Publishing, 2006.
- "Marine Investigation Report - Hull Fracture Bulk Carrier Lake Carling". Transportation Safety Board of Canada. 19 March 2002. Retrieved 8 October 2009.
- Shipbuilding Picture Dictionary
- Trading Places—interactive history of Liverpool docks
- U.S. Shipbuilding—extensive information about the U.S. shipbuilding industry, including over 500 pages of U.S. shipyard construction records
- Shipyards United States—from GlobalSecurity.org
- Shipbuilding in Canada
- North Vancouver's Wartime Shipbuilding
- Shipbuilding News
- Bataviawerf - the Historic Dutch East Indiaman Ship Yard—Shipyard of the historic ships Batavia and Zeven Provincien in the Netherlands; since 1985, great ships have been reconstructed here using old construction methods.
- Photos of the reconstruction of the Dutch East Indiaman Batavia—Photo website about the reconstruction of the Batavia, a 17th-century East Indiaman, at the Bataviawerf shipyard in the Netherlands. The site is constantly expanding with more historic images, as the shipyard celebrated its 25th year in 2010.
- Marine News China Information/News on Chinese Shipbuilding and Shipyards | http://en.wikipedia.org/wiki/Shipbuilding | 13 |
31 | 40 Acres and a Mule:
Slavery as a legal institution lasted for about 250 years,
until the Emancipation Proclamation of 1863 and the abolition
of slavery in 1865; for another 100 years, African Americans
were subjected to Jim Crow laws and were not legally equal
until 1965. Initially, reparations were to be paid by giving
freed slaves 40 acres of land and a mule, but the bill was
vetoed by President Andrew Johnson after having passed in
Congress. However, the issue was far from being put to rest.
One hundred years later, in 1969, the Black Manifesto was
published, demanding monetary compensation of $3 billion from
predominantly white places of worship (Catholic, Protestant and
Jewish), apportioned according to amounts calculated by the
National Black Economic Development Conference. This demand
stemmed from the Civil Rights movement, a fundamentally moral
position taken up by religious leaders. Its more radical
counterpart, the Black militant and Black Power movement, felt
that the Civil Rights movement did little to improve the
economic situation despite the legal gains of the Civil Rights
Act of 1964 and the Voting Rights Act of 1965. Initially, there
were religious groups and churches fighting
for social programs to eradicate poverty and working against
forms of discrimination: “By fall of 1968 nearly $50 million
had been pledged and some millions expended.” However, these
actions resulted in emergency, short-term help rather than
systemic change. And with the election of a more conservative
president, Richard Nixon, the tide in favor of poverty programs
and economic development of the black community changed, and it
was no longer a national priority.
As a result, the Manifesto, written by SNCC leader James
Forman, brought attention back to the forgotten or tabled
issues at hand. However,
the form of attack was not directed at the government on
behalf of the black churches, but rather took the shape of a
public intrusion on predominantly white places of worship in
which the Manifesto was read aloud. Needless to
say, the response was immediate and the reparation issue, in this more
modern context, became heated and controversial.
Coming up with a cost for what were considered lost wages
implicated guilt at the national level and suggested that
monetary compensation could begin to make up for historical
oppression:
For centuries we have been forced to live as
colonized people inside the United States, victimized by the most vicious,
racist system in the world. We have helped to build the most
industrial country in the world…We are also not unaware that
the exploitation of
colored peoples around the world is aided and abetted by white Christian
churches and synagogues…(this) is only a beginning of reparations due us
as a people who have been exploited and degraded, brutalized,
killed and persecuted.
In general, the churches that were asked to raise
money for the reparation cause rejected this proposal.
Some absolutely denied any right to the suggested money, whereas
some believed that money should not be given to the black community
directly, but through some federal or state social program.
Something was accomplished, however, as the religious
community became aware of the grievances the black community
held against the church.
The most contemporary manifestation of making reparations has
come about in a lawsuit against the government headed by
Alexander Pires, and in books such as Randall Robinson’s The
Debt: What America Owes to Blacks (2000), Richard F. America’s
The Wealth of Races (1990), and Paying the Social Debt: What
White America Owes Black America (1993). On an international
scale, both the United Nations and Nigeria have formalized a
position that the US should respond to this issue, at least
with an apology and at most by righting the wrong through
economic compensation. A growing interest has been fostered at
Boston University in the recent “Great Debate: Should the U.S.
Pay Reparations for Slavery” (November 2001) and, earlier, in
the short-lived run of David Horowitz’s student newspaper ad,
“Ten Reasons Why Reparations for Blacks is a Bad Idea for
Blacks and Racist Too.” The amount sought in the most
contemporary form of reparations is close to $8 billion (if
each descendant of a slave received $150,000), an estimate that
takes into account what 40 acres and a mule would be worth and
wages lost over the 250-year period.
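As a rough illustration of how such aggregate figures are assembled, the sketch below multiplies a per-person award by an assumed number of eligible recipients and, separately, compounds a historical land value forward. The $150,000 figure is from the essay; every other input is a hypothetical placeholder, since the essay does not state the population or land values used.

# Back-of-envelope reparations arithmetic (Python); inputs are placeholders.
PER_PERSON_AWARD = 150_000       # dollars; figure quoted in the essay
ELIGIBLE_RECIPIENTS = 50_000     # assumed; chosen only to land near $8 billion

total_award = PER_PERSON_AWARD * ELIGIBLE_RECIPIENTS
print(f"${total_award:,}")       # $7,500,000,000

# Alternative method: compound the value of 40 acres from 1865 forward.
ACRE_VALUE_1865 = 10.0           # dollars per acre in 1865; assumed
ANNUAL_GROWTH = 0.05             # assumed long-run growth rate
YEARS = 140                      # 1865 to roughly the time of the lawsuit

land_value_today = 40 * ACRE_VALUE_1865 * (1 + ANNUAL_GROWTH) ** YEARS
print(f"${land_value_today:,.0f} per family")   # about $370,000 per family

Note that the total is extremely sensitive to the assumed number of recipients; the sketch shows only the shape of the calculation, not a defensible estimate.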
The Manifesto and the legal case are built on precedents:
reparations made to Native Americans, Holocaust victims,
Japanese internment camp victims, and the victims of the
Tuskegee syphilis experiments (the only case in which
reparations were given to descendants outside the direct
family).
Briefly, the issues of contention are how Americans can now be
responsible given that slavery ended well over a century ago,
and given that there is no direct connection between the people
of today and slaves of multiple generations ago. Another
issue at stake is whether monetary compensation can make up for slavery,
and whether an apology and/or developing social programs would be more
appropriate to the present black situation.
Third, making reparations can be seen as a handout, further
stigmatizing and perpetuating the victim mentality of the black community.
Fourth, there is an economic challenge that is implicit in asking
for the money in that it could fundamentally change the economic structure
giving African Americans the upper hand in this society.
And finally, one might ask, could reparations bring about unintended
consequences, such as exacerbating racial tensions and creating or
sustaining racial division in the country?
The most critical issue for this paper demands a moral and, in
some cases, an explicitly religious response. From a
theological perspective, the concrete issue of making
reparations for slavery will be analyzed using three main
themes: evil/sin, guilt, and redemption.
The rationale of the Manifesto in asking for monetary
compensation can be summed up in one sentence: “Reparations is
a scheme for the rearrangement of wealth to offset past
iniquities or correct an imbalance in society.” In this first
section of the theological analysis, the
iniquity created by slavery will be analyzed in two ways, the structural
possibility for slavery and the perpetuation of its sinful effect in
today’s society using process theology.
This first section will set up the possibility for approving
reparations, although it will do so critically, and in the end with some
reservations, and will leave it open to further sections in this paper to
finalize this approval or entirely reject this possibility.
Theodicy: The Structural Possibility for Slavery
The institution of slavery was unequivocally evil,
manipulating Judeo-Christian ideas to justify the practice.
The Black Manifesto cites
29 grievances against religious organizations, specifically against the
dogma and practices of the church that made it possible to keep slaves in
bondage. The theological
ideas of a sovereign God and the eschatological hope were used to justify
and maintain the cruel treatment of slaves.
Process theology rejects both of these propositions, and
offers in their place an explanation for how slavery came into
existence and a justification for liberation from its
historical and present oppression.
God and the World
According to Norman Pittenger, evil is “that which
holds back, diminishes, or distorts the creative advances of the cosmos
toward the shared increase of good.” (74) Evil is both deprivation and
privation and stands in stark opposition to potential goodness either
through discord or triviality (to choose against the possibility of
goodness.) The status of the
world exists in clash and in harmony between two principles, creative and
destructive principles. The world, thus, is perpetually being made and
perpetually being diminished. The perishing principle is a
result of the existential finitude of the world: the universe
is in constant and eternal process, and the things of this
world will always be becoming.
Given the structure of the world there are capacities
for intrinsic good and evil, instrumental good and evil, and the power for
self-determination. In a
world where good and evil are intrinsic and instrumental, the case can be
made for the cruelest of structures: “The evils of pain, suffering,
injustice, catastrophe, etc. are possible in a world structured to evoke
novelty, integration, adventure, and all of the other components of
worthwhile experiences.” Slavery
can be classified in this way, and justifies a beginning analysis on the
veracity of claims made on behalf of reparations.
However, before this is done, one should examine how evil is
brought into being given the structural possibility for slavery.
God, Codetermination of Power, and Evil
The theodicy question involves not only the existence
of evil, but also an existent God who is good, and all-powerful.
The possibility for slavery is not a determined reality, but rather
brought about through a series of events, choices, and occasions in time.
Who is responsible for slavery? All actors are implicated, even
God. Although God in his
infinite way is working toward creating increased order and goodness, the
world can work against this. God
can only suggest his initial aim, and in an unlimited way he
can persuade the world to unify itself in the most optimal,
intense way for altruistic satisfaction. God cannot ultimately be in control simply
because the structure of the world allows God passive power and he cannot
prevent evil from occurring. God
is held responsible to heighten our reception to his persuasion, and to
act in novelty and creativity in the world, but he is both limited in
power and is affected or changed by what happens in the world: “Process
theodicy projects a deity who is deeply involved in and profoundly
affected by the experience of finite creatures.”
This has to do with the principle of codetermination of power
in process thought.
The powers that cause all events to be are produced
and shared between God and the finite world.
In other words, “God is responsible for evil, but not indictable
for it” because “finite actualities can fail to conform to the divine
aims for it.” Humans are
meant to enjoy and to contribute to the world, so they are given freedom
in direct relation to the level of intensity and instrumentality to bring
about the best possible satisfaction.
However, the more freedom that is given to humans the greater the
possibility that freedom will be used as increased “intense and
instrumental” means to go against God’s initial aim creating more evil
and suffering. God also shares in the pain of the world and is affected by
the demonic forms of impoverishment, injustice, and violence.
In this way, God becomes partially implicated by evil since he is
correlated to all that occurs in the world.
God, as can be concluded by this analysis, is not
omnipotent, although the case can still be made for his goodness and for
his love, since his initial aim is to suggest and increasingly
make possible the good in the world. God has and will always have a concern
for the world. He shows this
concern by acting and disclosing himself in the world and as Pittenger
states, God “can make even the wrath of man, as well as whatever other
evil there is in the world, ‘turn to his praise.’”
Obviously, in the case of slavery, the initial aim of God was
rejected in the most intense of ways.
Clearly, slavery needs to be seen in light of God’s goodness,
human action, and the process of culminating evil in the world.
The next section will deal with how slavery has affected the world
today in terms of human sin and the oppressive force of evil persisting
through the centuries.
Human Sin and Oppression
In order to make the case for reparations, one should
establish a direct connection of slavery to the contemporary situation and
thus, establish a case for direct and collective responsibility for
slavery. Stackhouse, a Christian social ethicist, claims,
“One of the decisive things we ought to have done is overcome
the generalized structures that cast dimensions
of poverty and racism in the society, an inheritance from slave days now
built into the very fabric of the culture."
Before one can make the claim that as a society we owe a debt
to the black community, one should articulate clearly what was
lost, suffered, and deprived in the event of slavery and its
perpetual evil effects.
The essence and purpose of humankind is “the
reality of the decisions of creatures, at every level from the quantum of
energy up to the free choice made by man.”
To be human is to choose, to decide, to create, and to be empowered
to be fulfilled in the world. It
is the choice for self-actualization and self-fulfillment; it is the
“spontaneous, creative self-determination in every event.”
Thus, to take away these basic rights is an act of sin against a
person. In fact, original sin comes from the “situation or state of
deprivation or alienation in which men find themselves.”
Process theology also asserts that human beings do not start with a
level playing field in that original sin affects some individuals more
than others. This is different from, for example, Reformed
theology, where all humans are “totally depraved.” Acts
that terminate the right to determine one’s future and limit
one’s freedom to be fulfilled are the kind of oppression that
occurred in slavery and, as a result, are still present today.
In fact, clearly the reparation demand is nothing
short of claiming the right, in an economic and social way, to fulfill
their human purpose. Forman
states, that “essentially, the fight for reparation is one of
self-determination and the transfer of power.”
In this way, sin has an indelible effect through the passing of
time. Once the right to freedom to be and to choose had been
stripped from the black community, it remained so and
perpetually sustained oppression far past the point of the
Emancipation Proclamation (1863). The thwarted creative
potential has deprived, and continues to deprive, the black
community of accomplishing and contributing to society, to its
own community, and to personal selves.
And far worse is the prevention of the black community from
being united and engaged with the initial aim of the infinite
God. The following
sections will deal with oppression on the economic and religious level.
Sin And Economic Oppression
Those involved in the initial demands for reparations
held a view that saw slavery as a systemic issue. David
Griffin offers a valuable description of what kind of structure
slavery was; it was
“the corporate structure of alienation and oppression which has been
built up through centuries of human sin.”
The injustice incurred in slavery requires an acknowledgement of
societal responsibility for conditioning black people to feel inferior.
However, at the same time, process theology aligns itself with
liberation theology to say that the black community is “not necessarily
a total victim of (societal) values… individuals can exert an influence
back on it and thereby transform it.” Furthermore, Suchocki contends
that “cumulative acts of human beings (are) the sources of the
demonic.” In process theology, all acts and occasions are
interdependent. This is how Forman sees it when he asserts,
Operating upon all of us are a whole set of control
factors, many of which we are not aware. These control factors however,
have been drummed in our heads for centuries, and we accept them as
realities, hence the major reason we are not all totally dedicated to …
The societal factors that Forman points to are systemic
in nature and, more specifically, require an economic response.
To be oppressed is to be fundamentally economically
oppressed in that slaves had and the present black community has a “lack
of adequate material prerequisites for a good life and of the opportunity
to determine their own destinies and to make significant contributions to
history.” For the
proponents of reparations for slavery, it requires a
collective change in the system and an overturning of who
remains in control of the system. Early reparation proponents
wanted to see an economic shift of power from white hands to
black hands, either through a peaceful exchange or, if that did
not work, through more violent forms of revolution and
guerrilla warfare (Forman, 115). The
attack on systemic evil has not been just toward society proper, but also
towards the church in its responsibility for perpetuating
black oppression.
The Church’s Responsibility to Systemic and Historical Sin
The direct responsibility for slavery is not just on
the conscience of society, but on the church as well. As a sign of repentance, the church was asked to pay
reparations long before the government.
And as a responsive community, given the process structure, the
church can continue to perpetuate racial divides or ameliorate the
situation by restoring freedom, power and creative control in and through
society. Even in silence the church stands condemned, as
Forman writes clearly: “Basically the Black Manifesto is an
historical reminder to the white religious establishment...and highlights
the contradictions between words and deeds…(which) has been to form an
unholy alliance with a worldwide system of oppression.”
There ought to be a religious assessment of the church’s
responsibility for the systemic perpetuation of evil and then a
plausible solution to the plight of the present black
community.
Critiques and Conclusion of Section I
There are a few points of critique that should be
made in light of the prior discussion to establish the relationship
between accepting or rejecting reparations on a theological basis in the
following two sections. The discussion here poses several
questions along two lines of inquiry: how the reparation cause
fits poorly with process theology, and how process theology
fails to serve reparation aims.
The first critique is on the God and theodicy issue.
Given God’s persuasive nature, why do so many remain un-persuaded?
This empirical question again tests the goodness of God as well as
his adequacy. Second, process theology relies so heavily on
the aesthetic qualities of possibility that the moral question
posed here may seem less important. The world must be given in
this way to offer the possibility for God’s involvement in the
world, but does it do so in a way that makes God more concerned
with his initial aim than with what is really happening in the
world?
Third, in the possibility for change, who and what is given
the authority to make things happen, and, in the same sense,
what is the security in buying into process theology? Next, do the means
justify the ends, and will the means accomplish the process goal of
fulfillment and creative potentiality?
Fifth, since in process theology emphasis is given to the
individual not the institution, can an institution effectively repent
given this emphasis? Sixth, revolutionary sentiments may or may not be in
line with process theology since violence would be a form of discord.
Lastly, one should consider unintended consequences:
Could reparations lead to even greater racist sentiments, creating more
divisions in society, and incur unhelpful anger on the side of the white
community? Furthermore, in this same line of thought, who is
to say that reparations is what the average black man and woman
desires? Could it be just the agenda of black
leaders only? And if this is
true, can the goal of reparations really be brought about if the black
community is not willing to take advantage of their newly achieved
freedom? These seven points of contention speak to the
imperfect fit between the analysis and its subject.
Despite these multiple critiques, however, quite apparently there is a connection between reparations for slavery and process theological considerations of theodicy and oppression. The question now is to ask if that connection is sufficient enough to side with reparations for slavery. The following two sections will proffer an answer using the theological concepts of guilt and redemption while taking into consideration the discussion and points of critiques developed in Section I.
Guilt is one of the greatest issues at play in the
debate over reparations for slavery and is a strong force on both sides of
the argument. Those in favor of reparations proclaim that the United
States, and essentially the descendants of slave owners, should feel
guilty for the years of kidnapping, bondage, and oppression they forced
upon the slaves. To make amends for these acts, the proponents of
reparations believe reparations of some monetary sort should be paid to
African-Americans today. Those who oppose reparations recognize the guilt
in the same way that their opponents do but believe, among other things,
that reparations is an attempt to absolve the guilt. Reparations might do
more harm than good in terms of helping African-Americans and improving
race relations, because it would likely put an end to building the bridges
burned by slavery.
The case for reparations put forth by Alexander Pires at the
recent Great Debate on the campus of Boston University is
largely built upon the obligation America has to the
African-American community. Pires, who recently won a lawsuit
against the United States for roughly $1 billion owed to black
farmers in the South, is collaborating with other noted
attorneys such as Johnnie Cochran to file a formal lawsuit
against the
government for reparations for slavery. This relies on several issues,
including precedents such as reparations dealt to victims of
Japanese-American internment camps and the Holocaust.
Also at play are issues regarding the obligation
Pires believes the government has to black Americans for building the
American economic system into what he calls the most powerful economic
structure in the history of the world. Since the slave-driven antebellum
cotton industry in the South was the most successful industry in the world
at the time, reparations proponents believe something is owed to those who
built that industry and the powerful economy that followed.
Reparations are also called for by the empirical data
that shows a strong link between slavery and the current socio-economic
status of African-Americans. By virtue of a poor post-war effort to
assimilate the former slaves into society, far too many blacks live in bad
neighborhoods, work jobs that do not pay a living wage, are undereducated,
or are incarcerated. These
statistics point to a strong link to slavery and call for reparations to
help get these people on something closer to equal footing
with others in society.
Christopher Hitchens also argued in favor of
reparations at the Great Debate but from a realist’s perspective.
Hitchens, a noted writer and editor, argued that reparations is not an
ideal circumstance but the best recourse available today to help to
resolve the present-day problems that linger from slavery. Reparations
does not solve all the problems, according to Hitchens, but he says one
should not make “the best the enemy of the good.” By this,
Hitchens means that one should not dismiss reparations because
it is not the best possible solution to the dilemma at hand. The best ways to solve the
problem are not attainable because we do not live in an ideal world, so
one should not expect ideal solutions. The imperfection of reparations is
not a suitable reason to discount it, or in other words, “don’t make
the best the enemy of the good.”
The arguments against reparations are plenty and one
does not have to look far to find someone who disagrees with paying them.
About one year ago, David Horowitz bought advertising space in many
college newspapers including the Daily
Free Press at Boston University for his article “Ten Reasons Why
Reparations for Slavery is a Bad Idea and Racist Too.” The ad caused
hysteria and disruption in nearly every locale where the article was seen,
including Boston University where the ad was pulled after one appearance.
Many papers banned the ad, causing an uproar regarding the rights of free
speech, while many students protested against Horowitz’s advertisement
and ideology. In his article, Horowitz describes ten ways in which
reparations is either ineffective, unnecessary, racist, or foolish. Many
of Horowitz’s arguments are important points in the debate over
reparations and are at the heart of the dilemma, while other arguments
seem venomous, heartless, and even inaccurate. To better understand the
argument against reparations, it is important to take a closer look at
Horowitz’s article but also to keep in mind that Horowitz surely does
not speak for all those opposed to reparations for slavery.
Horowitz’s first argument against reparations is
that there is not one group solely responsible for slavery in America. He
claims that Africans and Arabs should be indicted alongside
white slave owners, notes that some 3,000 blacks owned slaves,
and questions whether their descendants should be paid
reparations. This argument is both
logical and helpful, because it brings in to question who is owed
reparations and the complications in making such a determination.
Next, Horowitz argues that black Americans have
prospered economically by living in the United States and are better off
economically than they would have been in their forefathers’ native
lands. This claim is off base, because the fact that the black community
has in some ways been able to compete in society does not offset the other
statistics that suggest something different.
Thirdly, Horowitz argues that it is unfair to ask
descendants of non-slaveholders to pay reparations because their ancestors
were not the oppressors and, in some cases, gave their lives to free the
slaves. This is certainly a strong point against reparations, because one
is asking the descendants of those who freed the slaves to pay reparations
for the oppression. Furthermore, Horowitz points out that many
Americans are descendants of immigrants who weren’t even in the United
States at the time of slavery and should not be asked to pay reparations.
In this light, reparations for slavery might be on the right track but is
asking some people to pay for a crime that their ancestors
didn’t even commit.
Horowitz’s fifth point recognizes that those in
favor of reparations are making judgments based on race rather than on
injury. Many blacks, Horowitz claims, are not descendents of slaves and
some are even descendants of slaves, so it would be irresponsible to pay
reparations to these people. Moreover, Horowitz points out that this case
would set a precedent in that never before have reparations been paid to
anyone other than the victims or their direct descendants, such as in the
cases regarding the Japanese-American internment camps and the Holocaust.
While that is an interesting part of the reparations story, it doesn’t
affect whether reparations should be paid; it simply means that this case
would set a precedent. Perhaps this case could even set a precedent for
crimes the United States committed against the Native Americans when the
country was being formed.
Next, Horowitz writes that it is unfair to give
reparations because descendants of slaves do not suffer economically from
slavery. In this portion of his argument, Horowitz argues that blacks have
had an opportunity to be successful economically since slavery and many
have achieved economic success. Those who have not, Horowitz writes, are
victims of their own failures rather than the failure of the American
system and are not due reparations. Horowitz, however, is unfairly holding
the majority up to the standard of the minority. While it is true that
many blacks have been successful in society, too many
statistics point to the fact that descent from slavery has had
an adverse effect on the standing of blacks in society today.
Horowitz’s seventh argument states that reparations
is another attempt to turn blacks into victims rather than to hold them
responsible for their state in today’s society. Reparations, then, is a
way for the government to help people who can’t help themselves. Once
again, Horowitz is holding the black majority to the standard of the
successful black minority and overlooking too many other factors. While
Horowitz has a point that reparations might make blacks into victims, he
fails to notice that the entire point of reparations is that blacks are
victims and are due compensation not only for their work as slaves but
also for the poor way in which the American government helped them
assimilate into society.
Next, Horowitz claims that reparations have already been paid
through the civil rights legislation of the 1960s and welfare
benefits.
Horowitz does not recognize, however, that the giving of civil rights to
the descendants of slaves is completely different than paying reparations.
Recognizing the blacks’ civil rights helped bring the African-American
community into the fold but did not right the wrongs of centuries of
slavery in the past. Horowitz also fails to realize that welfare benefits
do not go only to blacks but to all who qualify for them and are not
adequate restitution for the slaves’ oppression nor does it account for
the wages the slaves lost by working without pay.
Finally, Horowitz closes his argument with two
shortsighted and heartless points about the state of African-Americans in
today’s society. First, Horowitz claims that African-Americans owe a
debt of gratitude for being brought to America and for the whites who
spearheaded the abolitionist movement to free blacks from slavery.
Secondly, Horowitz writes that reparations places African-Americans
against the nation that gave them freedom and that they should be more
appreciative of being part of such a prosperous nation. In these two
points, Horowitz sets himself up as the supreme judge of what
is good and evil, asserting that blacks are better off in
America than in their homelands. Horowitz
does not consider that economic power might not be an appropriate measure
of whether one should be happy in his or her country. Also, Horowitz
believes that blacks should be grateful to live in the United
States rather than upset that they were stripped of the free
will to choose where to live their lives.
One point Horowitz misses in this debate is the effect
reparations would likely have on race relations today. Since the
lines are drawn fairly clearly as far as who is in favor of reparations
and who is opposed -- and often in heated fashion with such a
controversial issue -- it is likely that reparations would perpetuate
racial division in American society. Whites who did not want to pay
reparations, for instance, would likely resent blacks for taking money
that they did not deserve. Blacks also might be implicated in this process
because it might bring to the surface new feelings of resentment in the
black community toward whites for slavery. Moreover, many whites would
likely feel no further need to help blacks to get a foot up in society if
reparations were paid. Reparations, then, is not a starting point for
reconciling this issue but a distinct end in which whites feel there is
no further need to help blacks.
Guilt plays a major role in the issue of paying
reparations for slavery. Advocates of reparations play on the guilt of the
descendants of slave owners and the American government by asking them to
own up to their responsibility. Opponents of reparations, such as David
Horowitz, do not feel guilty for the state of blacks in today’s society
and place the blame on blacks’ own failure to realize
opportunities for success.
Karl Rahner addresses the issue of guilt and sin in
his systematic theology The Content
of Faith, which is particularly relevant to the issue of reparations
for slavery. Rahner believes that sin is not only a part of the past but
recognizes that the present and the future are built upon that past.
“ . . . sin is not a contingent act which I
performed in the past and whose effect is no longer with me,” writes
Rahner. “It is certainly not like breaking a window which falls into a
thousand pieces, but afterward I remained personally unaffected by it. Sin
determines the human being in a definite way: he has not only sinned, but
he himself is a sinner. He is a sinner not only by a formal, juridical
imputation of a former act, but also in an existential way, so that in
looking back on our past actions we always find ourselves to be sinners.”
This understanding of sin, and of guilt regarding past sin,
should make one cautious about paying reparations for slavery. If
reparations did indeed become an end to the white community’s
willingness to help the black community, it seems that
reparations would become a way for a people to wipe the slate
clean of their past
actions. The government might then believe that it no longer has an
obligation to help blacks succeed in American society, because they have
paid them reparations; no longer does the government have to take
responsibility for its past sin, since reparations have already been paid
and wiped the slate clean. This is one of the greatest reasons that
reparations could be a very unhelpful choice for American society.
According to Rahner, true guilt is only understood
through God’s revelation and grace. Rahner would likely say then that
the guilt that Alexander Pires is trying to get the United States
government to admit to can only come through God’s grace.
“ . . . it remains true that the real knowledge of
guilt, that is, the sorrowful admission of sin, is the product of God’s
revelation and grace. Grace is already at work in us when we admit guilt
as our own reality, or at least admit the possibility of guilt in our own
lives . . . On the other hand, a purely natural knowledge of guilt -- one
that is completely independent of grace (if this is philosophically
possible) -- would be suppressed if God’s grace and the light of
revelation were not there to help us.”
Rahner offers another helpful understanding of guilt
in which the person refuses to admit to his or her guilt and instead
represses it. Repression, of course, only exacerbates the problem.
“By this basically false type of arguing that we
use in trying to excuse ourselves before God, our conscience, our life,
and the world, we manifest not our innocence, but only the way in which
the unenlightened person, as yet untouched by the grace of God, considers
his own guilt, that is, he will not admit it. He prefers to repress it.”
If one considers this in terms of the debate over reparations for slavery, the government only exacerbates the lingering present-day effects of slavery by not responding to them. By refusing to pay reparations, the government and opponents of reparations such as David Horowitz only make the problem worse, declining to admit their guilt and face their responsibility.
The debate over reparations for slavery is a difficult and controversial one with many theological implications. Advocates of reparations point to a strong link between slavery and the current socio-economic state of African-Americans and to the need for the government to own up to its responsibility regarding slavery. The opponents should not, however, all be classified into one group, since the camp that David Horowitz represents opposes reparations on often venomous, frivolous grounds. While recognizing the good that reparations could do, one must also acknowledge the problems that such an occurrence would be sure to instigate. Given the increased racial tension, the resurfacing of guilt regarding slavery, and the chance that reparations could put an end to other types of support given to the African-American community, reparations for slavery are not worth the trouble they would cause. Even without regard to the stress they would put on the American economy, reparations are simply not worth the trouble. Social programs for the whole American public that target certain aspects of society known to be of concern to African-Americans would be a step in the right direction.
The question of reparations for slavery demands the resolution of a host of philosophical and theological issues. What is the nature of sin? What is an individual? What is the meaning of history, and what impact does it have on the present and the future? What are the limits of an individual’s responsibility in relation to their culture? What is the relationship between justice and freedom, redemption and forgiveness?
Though it is obviously impossible to resolve these issues through this discussion, some definitions must be attempted. The most basic question arises from an apparent absurdity in the proposition of reparations. Why should anyone today benefit from the suffering of their ancestors, and why should anyone be compelled to compensate for past wrongs? The fact is that if the reparations are intended as a redress for American slavery, neither American slavery nor any of its perpetrators or victims exists today. Thus the question is raised as to the nature of an individual and that person’s relationship to history. Are human beings fundamentally independent units of value, meaning, and purpose, relating only incidentally to each other; is community an abstraction from individual goals and needs; is existence an act of individual reason or will rather than a gift; and is individual life an entity which is primarily responsible only to itself? If this classically liberal definition of the individual is accepted, then the argument for reparations is moot. No individuals exist who are responsible for slavery, and there is no possible object of the reparations.
The question arises then: is this a valid definition of an individual? From the perspective of Christian orthodoxy, several critiques can be made. The Bible has bequeathed to humanity a vision of human beings both created and free, both receiving the conditions of existence and in turn transforming them. From the classical liberal perspective, the role of God for the individual is limited to the creation of the conditions of existence by fiat; individuals can struggle against these conditions (hence the Protestant struggle with authority), attempt to reject them (the heroic-existentialist tradition), or passively accept them through a gesture of obedience and surrender. The kind of freedom envisioned by the Bible as existing by virtue of God’s creatorship, a freedom which emerges from God’s inner being and remains rooted in it, is impossible from the classical liberal perspective.
What are the responsibilities of an individual from the orthodox perspective? The context of creation vastly widens the scope of human possibility and responsibility. God as the creator, endowing humans with the freedom of creaturely relationality, suggests the possibility of a meaning for human life beyond the leveling of universal laws of nature. This meaning is the meaning of relationship; God is the thing (or the quality of thingness) that every other thing has in common. This awareness of a grand intention binds the universe together and reveals itself to human beings as the gift of history.
As specifically created beings, humans receive the conditions not just of universal existence but of a particular place and time. Each person exists not just in general but in particular, in a precise moment in history. This means that each individual is constantly receiving the present as an effect of the past. This insight is what has traditionally been called by Christians the communion of saints. Every person receives the past into his or her experience of the present; for Christians, this past is blessed, hallowed, and filled with grace by the completed lives of the ancestors who in turn received it from their own historical past. Each past moment has been forgiven and redeemed by the God who is revealed in history; therefore, each past moment is a bearer of grace and meaning for the present. In biblical narrative, the continuity between generations is organic. The cycle of Abraham contains within itself all of the patterns of Israelite history: ethnic conflicts, stupendous acts of faith, dialogues with divinity, struggles with election, and inter-family wars are all prefigured in the life of the one ancestor.
The letter to the Hebrews eloquently witnesses to this merging of the historical and the personal: “We might even say that Levi, who collects the tenth, paid the tenth through Abraham, because when Melchizedek met Abraham, Levi was still in the body of his ancestor” (Hebrews 7:9-10, NIV). Further, Hebrews interprets the past not only as embodied in the present, but also as a wellspring of comfort and encouragement:
“And what more shall I say? I do not have time to tell about Gideon, Barak, Samson, Jephthah, David, Samuel and the prophets, who through faith conquered kingdoms, administered justice, and gained what was promised; who shut the mouths of lions, quenched the fury of the flames, and escaped the edge of the sword; whose weakness was turned to strength; and who became powerful in battle and routed foreign armies; these were all commended for their faith, yet none of them received what had been promised. God had planned something better for us so that only together with us would they be made perfect” (Hebrews 11:32-40, NIV).
The something better alluded to here is, for the author of Hebrews, the present moment; the gift of the ancestors’ redemption of the past on behalf of the present culminates in the Incarnation, when past, present, and future are united and eternally redeemed in the person of Christ.
How does this affect the question of reparations for slavery? Because we humans are in our depths created, historical beings, to the extent that the conditions of our existence are determined by the past, we bear a deep responsibility for the past as it is revealed in each aspect of present existence. Completed actions which refused the grace of God, contributed to injustice, and denied the relational nature of human life continue to impact the present in profoundly destructive ways.
The institution of slavery, a monstrous action only completed at the cost of tremendous suffering, has exerted an enormous impact on the present. Randall Robinson has memorably described this suffering and its continuing effects.
Robinson’s argument for reparations rests on the notion that the evil passed on to the present by slavery is so enormous that no length of time will ever cause it to dissipate; instead, its effects will continue to be received by future generations, growing worse rather than better with time. An equally powerful good, Robinson argues, must be generated in order to counter the evil.
What might be the nature of this good action, bequeathed to the future by means of the present? Redemptive action can take two forms: symbolic and practical. To perform a symbolic act of redemption is to restore by means of reinterpretation, to demonstrate the hidden relationships between actions, to acknowledge the falsehood of past interpretations, and to ask for forgiveness, whether on behalf of our own actions or of those completed actions for which we remain responsible.
Symbolic actions are an active transformation of present reality. Symbolic redemption can be expressed artistically, liturgically, or politically; it can be both public and private; and it can involve individuals or institutions. Robinson argues that symbolic redemption is the first step that individuals ought to take in response to slavery.
One argument against reparations is that any such reparations would necessarily be a one-time event, by which presumably the complicit present could wash its hands of its historical past and forever absolve itself of blame. However, the kind of symbolic redemption advocated by Robinson is not a payoff but a transformation, with effects necessarily flowing forward into the future. The transformation would be first personal, as individuals repent of their prejudices, commit their resources towards the cause of justice, and work actively towards the re-establishment of truly relational identities, and second institutional, as governments, businesses, and churches all strive to repair past injustices and ongoing institutional biases. All of this could happen as part of a deliberate and public acknowledgement by institutions of their role in both the past and present effects of slavery, taking the form of a request for forgiveness and a pledge of restitution.
With this confession in place, it would not be out of place for governments to call businesses to account for profits gained at the expense of slaves, to commit financial resources towards redeeming those individuals and communities who continue to be affected by slavery, and to seek to dismantle all institutions which continue to perpetuate the effects of slavery.
The concept of a war on poverty is not new, but the understanding of racial poverty as both arising from within a historical context and potentially redeemed by that context provides an interpretive resource which is often lacking from programs of institutional reform. What is the responsibility of an individual affected by racial poverty? Do their circumstances absolve them of the responsibility and the dignity that comes with being truly free? Or are they wholly responsible for their conditions and for every negative consequence which results from them? Individuals affected by racial poverty are not limited by their historical circumstances, but they are conditioned by them; the conditions of their existence arise out of those circumstances and thus the consequences of their actions can never be understood apart from them. Like every other created being, they are both free and bound, determined by history and yet finding freedom in the midst of that determination. Thus, the response to the problem of racial poverty must account for both of these realities, engaging the individual as a free being and yet always discerning the continuing effects of the past as a present reality.
To take such a course of action is to actively participate in the sacrament of history as God’s self-revelation. As free beings continually caught up between the redemption of the past and the hope for the future, we are God’s vehicles for transformation, both placed within history and bearing history into the future. This is a task for which God has amply equipped us, filling us with grace through the redemptive love of Christ, the unifier of all things past, present, and future. From this perspective, making reparations for slavery is not a case of overcoming a special evil but rather part of an ongoing responsibility both to the past - the ancestors from whom we come - and to the future, the generations who will rely on us for grace and for the hope of glory.
Pires, Alexander. The Great Debate. Tsai Performance Center, Boston. November 7, 2001.
Hitchens, Christopher. The Great Debate. Tsai Performance Center, Boston. November 7, 2001.
Rahner, Karl. The Content of Faith. New York: Crossroad, 1999. | http://people.bu.edu/wwildman/WeirdWildWeb/courses/theo1/projects/2001_coophenkphillips/index.htm | 13
20 | Women's Legal Rights in Ancient Egypt
by Janet H. Johnson
From our earliest preserved records in the Old Kingdom on, the formal legal status of Egyptian women (whether unmarried, married, divorced or widowed) was nearly identical with that of Egyptian men. Differences in social status between individuals are evident in almost all products of this ancient culture: its art, its texts, its archaeological record. In the textual record, men were distinguished by the type of job they held, from which they derived status, "clout," and income. But most women did not hold jobs outside the home and consequently were usually referred to by more generic titles such as "mistress of the house" or "citizeness." Women were also frequently identified by giving the name and titles of their husband or father, from whom, presumably, they derived their social status. Thus the New Kingdom literary text entitled "The Instructions of (a man named) Any" states, "A woman is asked about her husband, a man is asked about his rank."
Funerary statuettes of a husband and wife from the tomb of Nykauinpu from Giza (Dynasty 5, ca. 2477 B.C.).
But in the legal arena both women and men could act on their own and were responsible for their own actions. This is in sharp contrast with some other ancient societies, e.g., ancient Greece, where women did not have their own legal identity, were not allowed to own (real) property and, in order to participate in the legal system, always had to work through a male, usually their closest male relative (father, brother, husband, son) who was called their "lord." Egyptian women were able to acquire, to own, and to dispose of property (both real and personal) in their own name. They could enter into contracts in their own name; they could initiate civil court cases and could, likewise, be sued; they could serve as witnesses in court cases; they could serve on juries; and they could witness legal documents. That women very rarely did serve on juries or as witnesses to legal documents is a result of social factors, not legal ones.
The great disparity between the social and legal status of women can be observed in both documentary and literary materials. For instance, in the literary text entitled "The Instructions of the (Vizier) Ptahhotep," preserved in Middle Kingdom and later copies, a man's wife is seen basically as a dependent, of whom it behooves him to take good, and loving, care:
When you prosper and found your house and love your wife with ardor, fill her belly, clothe her back; ointment soothes her body. Gladden her heart as long as you live; she is a fertile field for her lord.
But next comes a jarring statement:
Do not contend with her in court. Keep her from power, restrain her--her eye is her storm when she gazes. Thus will you make her stay in your house.
This reference to contending with one's wife in court clearly indicates that women had legal rights and were willing to fight for them. This distinction between the legal status of women in ancient Egypt and their public or social status is of major importance in understanding how the Egyptian system actually worked.
Egyptian civil law
The Egyptian word which most corresponds to our word "law" (of which a possible definition is: a system of rights, i.e., individual claims, which are enforced by the "state" if they conform to certain conditions) is hp, which can also connote custom, order, justice, or right, according to its usage. In ancient Egypt all law was given from above; there was no "legislature" which would draft "legislation." In a New Kingdom court case, a man cites the "law of Pharaoh" as precedent and in another, when citing the law a man says, "The King said, . . . " Thus, "law" is the king's word (wd-nswt).
Contracts were written copies of oral agreements in which Party A spoke to Party B in the presence of witnesses and a (professional) scribe who copied down (and put into "legalese") the words of Party A. Although only Party A spoke, Party B had the right to accept or refuse the contract, thus making these agreements bilateral and binding on both parties. Copies of contracts concerning real property were filed in the local records office, under the ultimate jurisdiction of the vizier. These public records made it possible for the state to know who was responsible for paying taxes on the land; the documents were also available for consultation in any subsequent lawsuit.
Civil lawsuits involved an oral petition to the court by a private individual. The best-known example of a local court is the one at Deir el-Medina, the New Kingdom village on the west bank of the Nile at modern Luxor, ancient Thebes, inhabited by the workmen who carved and decorated the royal tombs in the Valley of the Kings. This court was composed of local people, usually the relatively important local citizens including the scribes and crew chiefs, but also some simple workmen and, even more rarely, women. Egyptian judges based their decisions on traditions and precedent and kept copies of their decisions.
The earliest contracts of which we have record are imyt pr documents, literally "that which is in the house." These contracts frequently have been identified as "wills," but a better translation is "(land) transfer document." They were used to transfer property to someone other than the person(s) who would inherit the property if the owner died intestate (i.e., without a will). These documents were sealed and filed or recorded in a central government office.
There is a fair amount of Old Kingdom evidence for women in the economy or "public sphere," including women shown as merchants in market scenes and women acting as priestesses, especially for the goddess Hathor. Much of the New Kingdom evidence for the economic role of women comes from documents reflecting their dealings with both men and women. That the government was also perfectly willing to deal with women is indicated by Papyrus Wilbour, a long text recording "taxes" due on farmland; each piece of land is identified by owner and (if different) by the person working the land. Of the 2,110 parcels of land for which the name of the owner is preserved, women are listed as owners of 228, just over 10 percent; the land frequently is described as being worked by their children. However these women originally acquired this land, what is significant is that they hold title to the land and bear responsibility for assessments due.
Property
It should be noted that the Egyptians not only had a concept of private property, they also developed a concept of "joint property," property acquired by a married couple during their marriage. The husband had use of the joint property, meaning he could dispose of joint property without his wife's permission. But if a husband sold or otherwise disposed of a piece of joint property (or of any of his wife's property which she brought with her to the marriage), he was legally liable to provide his wife with something of equal value. That it is the husband who has use of joint property reflects the social fact that men normally participated in the public sphere, whereas women did not.
The legal independence and identity of Egyptian women is reflected not only in the fact that they could deal with property on the same terms that men did and that they could make the appropriate contracts in their own names, but also in the fact that they themselves were held accountable for economic transactions and contracts into which they had entered.
In one case, a woman named Iry-nefret was charged with illegally using silver and a tomb belonging to a woman named Bak-Mut to help pay for the purchase of a servant-girl. Iry-nefret was brought to court and told in her own words how she acquired the girl, listing all the items which she gave the merchant as price for the girl and identifying the individuals from whom she bought some of the items used in this purchase. She had to swear an oath before the judges in the names of the god Amon and the Ruler. The judges then had the complainant produce witnesses (three men and three women) who would attest that she had used stolen property to purchase the girl. The end of the papyrus recording the court case is lost, but it is clear that the woman Iry-nefret acted on her own in purchasing the servant-girl and was held solely liable for her actions, while the testimony of both women and men was held by the judges to be equally admissible.
Marriage and family law
Marriage in ancient Egypt was a totally private affair in which the state took no interest and of which the state kept no record. There is no evidence for any legal or religious ceremony establishing the marriage, although there was probably a party. The preserved portion of the first Late Period story of Setne Khaemwast tells how Ahure and Na-nefer-ka-Ptah fell in love and wanted to marry. Their parents agreed, so Ahure was taken to Na-nefer-ka-Ptah's house, people (especially the father of the bride) gave presents, there was a big party, the two slept together, and then they lived together and had a child. But basically marriage was an agreement by two people, and their families, that they would live together (hms irm), establish a household (grg pr), and have a family. The same vocabulary was used for both women and men. Although most marriages may have been arranged at the desire of the husband and parents of the bride, there is also a repeated literary image of a girl persuading her father to let her marry the man whom she wishes, rather than the father's choice.
Modern scholars have analyzed the role of women in many societies, ancient to modern, as that of a commodity, sold by the father and bought by the husband. Some Egyptian evidence could suggest that this was or had been true in Egypt, as well. For instance, a man might give a gift to his prospective father-in-law, which could be interpreted as "buying" the man's daughter as wife. But the gift which a man might give to his future father-in-law has also been analyzed as serving to break the bonds of the woman with her biological family, so that the new couple could establish their own family as the center of their life and loyalty.
Although women were legally the equals of men, and could deal with property on equal terms with men, the social and public role of women was vastly different from that of men. Although there are examples where the wife of a couple is stronger or more important than the husband (by family, fortune, or personality), most Egyptians tended to marry a person from their own social class; thus, a woman frequently would marry a man in the same or similar profession as her father and brother(s). This resulted not from formal laws or restrictions but simply, presumably, from the fact that this was the group of people with whom one had the most contact and with whom one was most comfortable.
Annuity contracts
Although women sometimes helped their husbands with their jobs (whether the equivalent of the modern "mom and pop store" or the wife filling in for her husband when the husband was "on the road") and although women had ways of acquiring some wealth through their own initiative (especially through textile production), they needed some assurance that the father of their children would provide for their (hers and their children's) material future. Thus there developed what have been called "marriage contracts," although such documents are purely economic and embody no social expectations at all.
These documents were not designed to legitimize the marriage--they were not a prerequisite for marriage nor did they have to be contracted at the time of the union since some refer to children who are already born to the couple. They were not intended to establish the social/personal rights and responsibilities of either party toward the other, as did both the Greek and Aramaic Jewish marriage contracts preserved from first millennium Egypt.
Such concepts certainly existed; they are presented in wisdom literature from the Old Kingdom on, and in a New Kingdom letter a man spells out what he considered the obligations of a man to his wife: fidelity, (loving) attention, the responsibility to provide well for her and their children, to take care of her medically, to take pride in her, and not to treat her as a master treats a servant.
The so-called "marriage contracts" concern themselves only with economic matters--the annual responsibility of the husband to feed and clothe the wife (and their children) and the right of their children to inherit his wealth--and are better called annuity contracts. As such, they were extremely advantageous to the wife and one may assume that the woman and her family exerted as much pressure as they could to ensure that the husband made such a contract. Because Egyptian women were full participants in the legal system, not chattel and not dependent on a man to handle their legal concerns for them, such contracts were made by the husband directly with the wife, not her father or any other man on her behalf. This is in sharp contrast with other ancient "marriage documents," whether these documents were purely economic or also embedded social concerns.
In an annuity contract found in the Ptolemaic "Family Archive from Siut" (a town in Middle Egypt), the man addresses the woman. He lists the value of all the expensive property that she brought with her to the marriage, he notes that he will give her an amount of money as a "bridal gift," and he declares that, if they divorce (and whether the divorce was instigated by him or by her), he must give her money equivalent to the full value of everything which he had mentioned; if he doesn't give her all the money, then he must (continue to) feed and clothe her (the amounts of grain, oil, and money for clothing which he must provide every month are spelled out) until he does give her the full amount in silver. If he defaults on his payments, she remains legally entitled to any and all arrears. By implication, if they divorce, then once he has paid her the full amount of silver included in the contract, she returns the contract to him and all obligations are canceled.
Note that although the wife "owned" the property, the husband had use of it. Thus, in case of divorce, the husband had to repay the value, not return the specific items. It has been suggested that the "bridal gift" (in this case 20 pieces of silver), and similarly the earlier fine imposed on a husband who divorces his wife, was intended as a deterrent to the man's divorcing his wife. In either case, the man would have had to actually hand the money over to the woman only at the time of divorce. The contract is confirmed by the husband's father: since the husband would not actually come into ownership of the property to be inherited from his father until his father's death, the father must confirm that he approves of his son's marriage and will not use this marriage as an excuse to disown his son (thereby leaving the son's new wife high and dry).
Divorce
Divorce and remarriage were common in Egypt at all periods, and contention between siblings and half-siblings frequent. To stress the close nature of siblings, both literary and documentary sources frequently specify that they share both mother and father. To resolve potential disputes before they might arise, the somewhat practical or pragmatic expedient was chosen of making it incumbent on the father to secure the permission of his older children, who stood to lose part of their inheritance. Since men, even full-grown men, remained economically dependent on their parents, and especially their fathers, until the parents died, it would also be in the best interests of the son to agree to his father's remarriage (and not risk rupture and complete disinheritance). Thus, everybody's wants or needs were satisfied by getting everyone to agree to what at least some people wanted. This pattern fits with the observation that agreement and resolution of conflict, rather than "abstract justice," often seem to have been the aim of Egyptian court decisions.
Divorce and remarriage seem to have been relatively easy and relatively common. There is little convincing evidence for polygamy, except by the king, but extensive evidence for "serial monogamy." Either party could divorce a spouse on any grounds or, basically, without grounds, without any interest or record on the part of the state. The vocabulary for divorce, like that for marriage, reflected the fact that marriage was, basically, living together; a man "left, abandoned" a woman; a woman "went (away from)" or "left, abandoned" a man.
Although neither party had to provide legal (or social, moral or ethical) grounds for divorce, the economic responsibilities spelled out in the annuity contracts made this a serious step. Thus, normally a married woman was supported by her husband for as long as they remained married and his property was entailed for their children. Since even remarriage after the death of a first wife could lead to wrangling over property and inheritance rights, a bitter divorce and remarriage could lead to major legal contests.
If a man divorced his wife, he had to return her dowry (if she had brought one) and pay her a fine; if she divorced him, there was no fine. A spouse divorced for fault (including adultery) forfeited his or her share of the couple's joint property. After divorce, both were free to remarry. But it seems clear that, until the husband has returned his wife's dowry and paid her the fine, or until she has accepted it, the husband remained liable for supporting her, even if they were no longer living together. Some (ex-)husbands, then as now, tried to avoid supporting their (ex-)wives, and we have several references to a woman's biological family stepping in to support or assist her when her husband can't or won't.
The ancient Egyptian concept of adultery consisted of a married person having sex with someone other than that person's spouse. It was just as "wrong" for a man to commit adultery as for a woman. The Egyptian system was family centered, and the terminology for marriage and divorce was the same for both sexes; adultery was defined in family terms and condemned for both men and women, and sex by unmarried individuals seems not to have been a major concern.
This brief overview on women's rights, which has necessarily omitted many questions and much detail, only touches upon the complexities of this ancient culture, where women's remarkable legal equality and ability to own and dispose of property must be seen in the light of the social world in which they lived--a world dominated, at least in the range of records which have been preserved for us, by men and men's concerns.
ABOUT THE AUTHOR | Janet H. Johnson
The Oriental Institute
Janet H. Johnson, professor of Egyptology in the Oriental Institute and department of Near Eastern languages and civilizations at the University of Chicago, is also a member of the university committees on the ancient Mediterranean world, Jewish studies, and gender studies. Her main interests include Egyptian language and Egypt in the "Late Period" (1st millennium B.C.). Publications include the 3rd edition (online) of her teaching grammar of Demotic, Thus Wrote 'Onchsheshonqy, as well as numerous articles and books. She is the director of the Chicago Demotic Dictionary Project and director of the Egyptian Reading Book Project.
COPYRIGHT | Copyright 2002 the University of Chicago.
| http://fathom.lib.uchicago.edu/1/777777190170/ | 13
14 | The First Amendment to the United States Constitution is a part of the Bill of Rights. Textually, it prevents the U.S. Congress from infringing on six rights. These guarantees were that the Congress would not: establish a religion; prohibit the free exercise of religion; abridge the freedom of speech; abridge the freedom of the press; abridge the right of the people peaceably to assemble; or abridge the right to petition the government for a redress of grievances.
The First Amendment, along with the rest of the Bill of Rights, was proposed by Congress in 1789, to be ratified by the requisite number of states in 1791. As with the remaining Amendments of the Bill of Rights, the First Amendment was passed in order to answer protestations that the newly created Constitution did not include sufficient guarantees of civil liberties.
The text of the First Amendment only disallows Congress from abridging these rights. Over time, however, the courts have held that its protections extend to the executive and judicial branches as well. The Court has also held that the Fourteenth Amendment incorporates the First Amendment against the actions of the states.
- Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances.
The Supreme Court has, over the years, established rules and tests for the constitutionality of legislation falling under the First Amendment. Lower courts tend to follow these rules in interpreting the text of the Constitution. Such classic phrases as "clear and present danger," "without redeeming social value," and "wall of separation between church and state" are not found in the text of the Constitution, but have been important in court decisions, and have entered the legal and popular culture.
Because these tests do not appear in the text of the Constitution itself, several commentators, including Thomas Ladanyi and Barry Krusch, believe that such interpretations have functioned to re-write the amendment, creating what they refer to as the virtual first amendment. On the other hand, some commentators argue that if the Supreme Court establishes a rule or test when interpreting a constitutional provision, it does not actually alter the underlying document; rather, they suggest, the court merely creates a standard by which cases are to be judged. According to the commentators who favor the concept of a virtual text, the alternative view does not adequately account for the disparity between the text of the tests (i.e. the Miller obscenity test and dozens of others) and the text of the Constitution itself.
Establishment of religion
Main article: Establishment Clause of the First Amendment
The Establishment Clause of the First Amendment plainly prohibits the establishment of a national religion by Congress or the preference of one religion over another. Prior to the enactment of the Fourteenth Amendment, the Supreme Court generally took the position that the substantive protections of the Bill of Rights did not apply to actions by state governments. Subsequently, under the "incorporation doctrine," certain selected provisions were applied to states. It was not, however, until the middle and later years of the twentieth century that the Supreme Court began to interpret the establishment and free exercise clauses in such a manner as to reduce substantially the promotion of religion by state governments. (For example, in Board of Education of Kiryas Joel Village School District v. Grumet, Justice David Souter concluded that "government should not prefer one religion to another, or religion to irreligion.")
Free exercise of religion
Main article: Free Exercise Clause of the First Amendment
The free exercise clause has often been interpreted to include two freedoms: the freedom to believe and the freedom to act. The former liberty is absolute, while the latter often faces state restriction. The Jehovah's Witnesses, a religious group, were often the target of such restriction. Several cases involving the Witnesses permitted the Court to expound the free exercise clause. The Warren Court adopted a liberal view of the clause, the "compelling interest" doctrine (whereby a state must show a compelling interest in restricting religion-related activities), but later decisions have reduced the scope of this interpretation.
Freedom of speech
Remarkably, the Supreme Court did not consider a single case in which it was asked to strike down a federal law on the basis of the free speech clause until the twentieth century. The Alien and Sedition Acts of 1798 were never ruled upon by the Supreme Court, and even the leading critics of the law, Thomas Jefferson and James Madison, argued for the laws' unconstitutionality on the basis of the Tenth Amendment, not the First Amendment.
After World War I, several cases involving laws limiting speech came before the Supreme Court. The Espionage Act of 1917 imposed a maximum sentence of twenty years for anyone who caused or attempted to cause "insubordination, disloyalty, mutiny, or refusal of duty in the military or naval forces of the United States." Under the Act, over two thousand prosecutions were commenced. For instance, one filmmaker was sentenced to ten years imprisonment because his portrayal of British soldiers in a movie about the American Revolution impugned the good faith of an American ally, the United Kingdom. The Sedition Act of 1918 went even farther, criminalizing "disloyal," "scurrilous" or "abusive" language against the government.
The Supreme Court was for the first time requested to strike down a law violating the free speech clause in 1919. The case involved Charles Schenck, who had during the war published leaflets challenging the conscription system then in effect. The Supreme Court unanimously upheld Schenck's conviction for violating the Espionage Act when it decided Schenck v. United States. Justice Oliver Wendell Holmes, Jr., writing for the Court, suggested that "the question in every case is whether the words used are in such circumstances and are of such a nature as to create a clear and present danger that they will bring about the substantive evils that Congress has a right to prevent."
The "clear and present danger" test of Schenck was extended in Debs v. United States , again by Justice Oliver Wendell Holmes. The case involved a speech made by Eugene V. Debs, a political activist. Debs had not spoken any words that posed a "clear and present danger" to the conscription system, but a speech in which he denounced militarism was nonetheless found to be sufficient grounds for his conviction. Justice Holmes suggested that the speech had a "natural tendency" to occlude the draft.
Thus, the Supreme Court effectively shaped the First Amendment in such a manner as to permit a multitude of restrictions on speech. Further restrictions on speech were accepted by the Supreme Court when it decided Gitlow v. New York in 1925. Writing for the majority, Justice Edward Sanford suggested that states could punish words that "by their very nature, involve danger to the public peace and to the security of the state." Lawmakers were given the freedom to decide which speech would constitute a danger.
Freedom of speech was influenced by anti-Communism during the Cold War. In 1940, Congress replaced the Sedition Act of 1918, which had expired in 1921. The Smith Act passed in that year made punishable the advocacy of "the propriety of overthrowing or destroying any government in the United States by force and violence." The law was mainly used as a weapon against Communist leaders. The constitutionality of the Act was questioned in the case Dennis v. United States. The Court upheld the law in 1951 by a six-two vote (one Justice, Tom Clark, did not participate because he had previously ordered the prosecutions when he was Attorney General). Chief Justice Fred M. Vinson relied on Oliver Wendell Holmes' "clear and present danger" test when he wrote for the majority. Vinson suggested that the doctrine did not require the government to "wait until the putsch is about to be executed, the plans have been laid and the signal is awaited," thereby broadly defining the words "clear and present danger." Thus, even though there was no immediate danger posed by the Communist Party's ideas, their speech was restricted by the Court.
Dennis v. United States has never been explicitly overruled by the Court, but future decisions have in practice reversed the case. In 1957, the Court changed its interpretation of the Smith Act in deciding Yates v. United States. The Supreme Court ruled that the Act was aimed at "the advocacy of action, not ideas." Thus, the advocacy of abstract doctrine remains protected under the First Amendment. Only speech explicitly inciting the forcible overthrow of the government remains punishable under the Smith Act.
The Supreme Court under Chief Justice Earl Warren expanded free speech protections in the 1960s, though there were exceptions. In 1968, for example, the Court upheld a law prohibiting the mutilation of draft cards in United States v. O'Brien. The Court ruled that protesters could not burn draft cards because doing so would interfere with the "smooth and efficient functioning" of the draft system.
In 1969, the Supreme Court ruled that free speech rights extended to students in school while deciding Tinker v. Des Moines. The case involved several students who were punished for wearing black arm-bands to protest the Vietnam War. The Supreme Court ruled that the school could not restrict symbolic speech that did not cause undue interruptions of school activities. Justice Abe Fortas wrote, "state-operated schools may not be enclaves of totalitarianism. School officials do not possess absolute authority over their students. Students ... are possessed of fundamental rights which the State must respect, just as they themselves must respect their obligations to the State." The decision was arguably overruled, or at least undermined, by Bethel School District v. Fraser (1986), in which the Court held a student could be punished for his speech before a public assembly.
Also in 1969, the Court decided the landmark Brandenburg v. Ohio, which overruled Whitney v. California, a 1927 case in which a woman was imprisoned for aiding the Communist Party. Brandenburg effectively swept away Dennis as well, casting the right to speak freely of violent action and revolution in broad terms: "[Our] decisions have fashioned the principle that the constitutional guarantees of free speech and free press do not permit a State to forbid or proscribe advocacy of the use of force or of law violation except where such advocacy is directed to inciting or producing imminent lawless action and is likely to incite or produce such action." Some claim that Brandenburg essentially sets forth a reworded "clear and present danger" test, but the accuracy of such statements is hard to judge. The Court has never heard or decided a case involving seditious speech since Brandenburg was handed down.
The divisive issue of flag burning as a form of protest came before the Supreme Court in 1989, as it decided Texas v. Johnson. The Supreme Court reversed the conviction of Gregory Johnson for burning the flag by a vote of five to four. Justice William J. Brennan, Jr. asserted that "if there is a bedrock principle underlying the First Amendment, it is that government may not prohibit the expression of an idea simply because society finds the idea offensive or disagreeable." Many in Congress vilified the decision of the Court. The House unanimously passed a resolution denouncing the Court; the Senate did the same with only three dissents. Congress passed a federal law barring flag burning, but the Supreme Court struck it down as well in United States v. Eichman (1990). Many attempts have been made to amend the Constitution to allow Congress to prohibit the desecration of the flag. Since 1995, the Amendment has consistently mustered sufficient votes to pass in the House of Representatives, but not in the Senate. Most recently, in 2000, the Senate voted 63–37 in favor of the amendment, which fell four votes short of the requisite two-thirds majority.
The federal government and the states have long been permitted to restrict obscene or pornographic speech. The exact definition of obscenity and pornography, however, has changed over time. Justice Potter Stewart famously stated that although he could not define pornography, he "kn[ew] it when [he] s[aw] it."
When it decided Rosen v. United States in 1896, the Supreme Court adopted the same obscenity standard as had been articulated in a famous British case, Regina v. Hicklin. The Hicklin standard defined material as obscene if it tended "to deprave or corrupt those whose minds are open to such immoral influences, and into whose hands a publication of this sort may fall." Thus, the standards of the most sensitive members of the community were the standards for obscenity. In 1957, the Court ruled in Roth v. United States that the Hicklin test was inappropriate. Instead, the Roth test for obscenity was "whether to the average person, applying contemporary community standards, the dominant theme of the material, taken as a whole, appeals to the prurient interest."
The Roth test was expanded when the Court decided Miller v. California in 1973. Under the Miller test, a work is obscene if it would be found appealing to the prurient interest by an average person applying contemporary community standards, depicts sexual conduct in a patently offensive way and has no serious literary, artistic, political or scientific value. Note that "community" standards—not national standards—are applied as to whether the material appeals to the prurient interest; thus, material may be deemed obscene in one locality but not in another. Note, however, that national standards are applied as to whether the material is of value. Child pornography is not subject to the Miller test, as the Supreme Court decided in 1982. The Court felt that the government's interest in protecting children from abuse was paramount.
Mere possession of obscene material in the home may not be prohibited by law. In writing for the Court in the case of Stanley v. Georgia, Justice Thurgood Marshall wrote, "if the First Amendment means anything, it means that a State has no business telling a man sitting in his own house what books he may read or what films he may watch." It is not, however, unconstitutional for the government to prevent the mailing or sale of obscene items, though they may be viewed only in private.
Libel, slander, and private action
The American prohibition on defamatory speech or publications—slander and libel—traces its origins to English law. The nature of defamation law was vitally changed by the Supreme Court in 1964, while deciding New York Times Co. v. Sullivan. The New York Times had published an advertisement indicating that officials in Montgomery, Alabama had acted violently in suppressing the protests of African-Americans during the Civil Rights Movement. The Montgomery Police Commissioner, L. B. Sullivan, sued the Times for libel on the grounds that the advertisement damaged his reputation. The Supreme Court unanimously overruled the $500,000 judgment against the Times. Justice William J. Brennan suggested that public officials may sue for libel only if the publisher published the statements in question with "actual malice," a difficult standard to meet.
The actual malice standard applies to both public officials and public figures, including celebrities. Though the details vary from state to state, private individuals normally need only to prove negligence on the part of the defendant.
As the Supreme Court ruled in Gertz v. Robert Welch, Inc. (1974), opinions cannot be considered defamatory. It is thus permissible to suggest, for instance, that a lawyer is a bad one, but not permissible to declare that the lawyer is ignorant of the law: the former constitutes a statement of values, but the latter is a statement alleging a fact.
More recently, in Milkovich v. Lorain Journal Co., 497 U.S. 1 (1990), the Supreme Court backed off from the protection from "opinion" announced in Gertz. The court in Milkovich specifically held that there is no wholesale exemption to defamation law for statements labeled "opinion," but instead that a statement must be provably false (falsifiable) before it can be the subject of a libel suit.
In 1988, Hustler Magazine v. Falwell extended the "actual malice" standard to intentional infliction of emotional distress in a ruling which protected a parodic caricature. In the ruling, "actual malice" was described as "knowledge that the statement was false or with reckless disregard as to whether or not it was true."
Ordinarily, the First Amendment only applies to prohibit direct government censorship. The protection from libel suits recognizes that the power of the state is needed to enforce a libel judgment between private persons. The Supreme Court's scrutiny of defamation suits is thus sometimes considered part of a broader trend in U.S. jurisprudence away from the strict state action requirement, and into the application of First Amendment principles when private actors invoke state power.
Likewise, the Noerr-Pennington doctrine is a rule of law that often prohibits the application of antitrust law to statements made by competitors before public bodies: a monopolist may freely go before the city council and urge the denial of its competitor's building permit without being subject to Sherman Act liability. This principle is being applied to litigation outside the antitrust context, including state tort suits for intentional interference with business relations and "SLAPP Suits."
Similarly, some states have adopted, under their protections for free speech, the Pruneyard doctrine, which prohibits private property owners whose property is equivalent to a traditional public forum (often shopping malls and grocery stores) from enforcing their private property rights to exclude political speakers and petition-gatherers. This doctrine has been rejected as a matter of federal constitutional law, but is meeting growing acceptance as a matter of state law.
The Federal Election Campaign Act of 1971 and related laws restricted the monetary contributions that may be made to political campaigns and expenditure by candidates. The Supreme Court considered the constitutionality of the Act in Buckley v. Valeo, decided in 1976. The Court affirmed some parts of the Act and rejected others. The Court concluded that limits on campaign contributions "serve[d] the basic governmental interest in safeguarding the integrity of the electoral process without directly impinging upon the rights of individual citizens and candidates to engage in political debate and discussion." At the same time, the Court overturned the expenditure limits, which it found imposed "substantial restraints on the quantity of political speech."
Further rules on campaign finance were scrutinized by the Court when it determined McConnell v. Federal Election Commission in 2003. The case centered on the Bipartisan Campaign Reform Act of 2002, a law that introduced several new restrictions on campaign financing. The Supreme Court upheld provisions which barred the raising of soft money by national parties and the use of soft money by private organizations to finance certain election-related advertisements. At the same time, the Court struck down the "choice of expenditure" rule, which required that parties either make coordinated expenditures for all their candidates or permit candidates to spend independently, but not both, stating that the "provision place[d] an unconstitutional burden on the parties' right to make unlimited independent expenditures." The Supreme Court also ruled that the provision preventing minors from making political contributions was unconstitutional, relying on the precedent of the Tinker case. For additional details, see campaign finance reform.
Free speech zones came into existence soon after the September 11, 2001 terrorist attacks as part of George W. Bush's security campaign. Free speech zones are set up by the Secret Service, which scouts locations where the president is to pass through or speak. Officials target those who carry anti-Bush signs (and sometimes pro-Bush signs) and escort them to the free speech zones prior to and during the event. Reporters are often barred by local officials from displaying protesters on camera or speaking to them within the zone. Protesters who refuse to go to the free speech zone are often arrested and charged with trespassing, disorderly conduct, and resisting arrest. In 2003, a seldom-used federal law was invoked under which "entering a restricted area around the President of the United States" is a crime.
A small minority has questioned whether involuntary commitment laws violate the right of freedom of speech of committed individuals when the diagnosis of mental illness leading, in whole or in part, to the commitment was made to some degree on the basis of the individual's speech or writings.
The First Amendment implications of involuntary psychiatric drugging have also been questioned. Though the District Court in Mills v. Rogers 457 U.S. 291 (1982) found that "whatever powers the Constitution has granted our government, involuntary mind control is not one of them," this finding was not of precedential value, and the Supreme Court ruling was essentially inconclusive.
Freedom of the press
Freedom of the press, like freedom of speech, is subject to restrictions on bases such as defamation law. Restrictions, however, have been struck down if they are aimed at the political message or content of newspapers.
Taxation of the press
The Government retains the right to tax newspapers, just as it may tax other commercial products. Generally, however, taxes that focus exclusively on newspapers have been found unconstitutional. In Grosjean v. American Press Co. (1936) the Court invalidated a state tax on newspaper advertising revenues. Similarly, some taxes that give preferential treatment to the press have been struck down. In 1987, for instance, the Court invalidated an Arkansas law exempting "religious, professional, trade and sports journals" from taxation since the law amounted to the regulation of newspaper content.
In 1991, deciding Leathers v. Medlock, the Supreme Court found that states may treat different components of the media differently, for instance by taxing cable television but not newspapers. The Court found that "differential taxation of speakers, even members of the press, does not implicate the First Amendment unless the tax is directed at, or presents the danger of suppressing, particular ideas."
The courts have rarely treated content-based regulation of the press with any sympathy. In Miami Herald Publishing Co. v. Tornillo (1974), the Court unanimously struck down a state law requiring newspapers that criticized political candidates to publish the candidates' responses. The state claimed that the law had been passed to ensure press responsibility. Finding that only press freedom, and not press responsibility, is mandated by the First Amendment, the Supreme Court ruled that the government may not force newspapers to publish what they do not desire to publish.
Content-based regulation of television and radio, however, has been sustained by the Supreme Court in various cases. Since there are a limited number of frequencies for non-cable television and radio stations, the government licenses them to various companies. The Supreme Court has ruled, however, that this problem of scarcity does not permit the raising of a First Amendment issue. The government may restrain broadcasters, but only on a content-neutral basis.
Petition and assembly
The right to petition the government has been interpreted as extending to petitions of all three branches: the Congress, the executive and the judiciary. The Supreme Court has interpreted "redress of grievances" broadly; thus, it is possible for one to request the government to exercise its powers in furtherance of the general public good. However, a few times Congress has directly limited the right to petition. During the 1790s, Congress passed the Alien and Sedition Acts, punishing opponents of the Federalist Party; the Supreme Court never ruled on the matter. In 1835 the House of Representatives adopted the "Gag Rule," barring abolitionist petitions calling for the end of slavery. The Supreme Court did not hear a case related to the rule, which was in any event abolished in 1840. During World War I, individuals petitioning for the repeal of sedition and espionage laws (see above) were punished; again, the Supreme Court did not rule on the matter.
The right of assembly was originally closely tied to the right to petition. One significant case involving the two rights was United States v. Cruikshank (1876). There, the Supreme Court held that citizens may "assemble for the purpose of petitioning Congress for a redress of grievances." Essentially, it was held that the right to assemble was secondary, while the right to petition was primary. Later cases, however, have expanded the meaning of the right to assembly. Hague v. CIO (1939), for instance, refers to the right to assemble for the "communication of views on national questions" and for "disseminating information."
Most provisions of the United States Bill of Rights are based on the English Bill of Rights (1689) and on other aspects of English law. The English Bill of Rights, however, does not include many of the protections found in the First Amendment. For example, while the First Amendment guarantees freedom of speech to the general populace, the English Bill of Rights only protected "freedom of speech and debates or proceedings in Parliament." The Declaration of the Rights of Man and of the Citizen, a French revolutionary document passed only weeks before Congress proposed the Bill of Rights, contains certain guarantees that are similar to the First Amendment's. For instance, it suggests that "every citizen may, accordingly, speak, write, and print with freedom."
Freedom of speech in the United States is more extensive than in nearly any other nation in the world. While the First Amendment does not explicitly set restrictions on freedom of speech, other declarations of rights sometimes do so. The European Convention on Human Rights, for example, permits restrictions "in the interests of national security, territorial integrity or public safety, for the prevention of disorder or crime, for the protection of health or morals, for the protection of the reputation or the rights of others, for preventing the disclosure of information received in confidence, or for maintaining the authority and impartiality of the judiciary," and in practice these loopholes have been interpreted quite broadly by the courts of Europe.
The First Amendment was one of the first guarantees of religious freedom: neither the English Bill of Rights nor the French Declaration of Rights contains an equivalent guarantee. Owing to the constraints imposed by the First Amendment, the United States is neither a theocracy like Iran nor an officially atheist state like the People's Republic of China.
Sleep apnea (or sleep apnoea in British English) is a sleep disorder characterized by abnormal pauses in breathing or instances of abnormally low breathing during sleep. Each pause in breathing, called an apnea, can last from ten seconds to several minutes, and may occur 5 to 30 times or more per hour. Similarly, each abnormally low breathing event is called a hypopnea. Sleep apnea is often diagnosed with an overnight sleep test called a polysomnogram, or "sleep study".
There are three forms of sleep apnea: central (CSA), obstructive (OSA), and complex or mixed sleep apnea (i.e., a combination of central and obstructive) constituting 0.4%, 84% and 15% of cases respectively. In CSA, breathing is interrupted by a lack of respiratory effort; in OSA, breathing is interrupted by a physical block to airflow despite respiratory effort, and snoring is common.
Regardless of type, an individual with sleep apnea is rarely aware of having difficulty breathing, even upon awakening. Sleep apnea is recognized as a problem by others witnessing the individual during episodes or is suspected because of its effects on the body (sequelae). Symptoms may be present for years (or even decades) without identification, during which time the sufferer may become conditioned to the daytime sleepiness and fatigue associated with significant levels of sleep disturbance.
Sleep apnea affects not only adults but some children as well. As stated by El-Ad, "patients complain about excessive daytime sleepiness (EDS) and impaired alertness". In other words, common effects of sleep apnea include daytime fatigue, a slower reaction time, and vision problems. Patients are examined using "standard test batteries" to further identify the parts of the brain affected by sleep apnea, and testing has shown that damage to different regions produces different effects. Impairment of the brain's "executive functioning" affects the way the patient plans and initiates tasks. Damage to the attention system causes difficulty in paying attention, working effectively, and processing information in a waking state. The systems supporting memory and learning are also affected. Because of this disruption in daytime cognition, behavioral effects are also present, including moodiness, belligerence, and a decrease in attentiveness and drive. Another symptom of sleep apnea is sleep paralysis; in severe cases, the fear of sleep due to sleep paralysis can lead to insomnia. These effects can become very hard to live with, and depression may develop. There is also increasing evidence that sleep apnea may lead to liver function impairment, particularly fatty liver diseases (see steatosis). Finally, because many factors can produce the effects listed above, some patients are unaware that they suffer from sleep apnea and are either misdiagnosed or ignore the symptoms altogether.
The diagnosis of sleep apnea is based on the conjoint evaluation of clinical symptoms (e.g. excessive daytime sleepiness and fatigue) and the results of a formal sleep study (polysomnography, or a reduced-channel home-based test). The latter aims to establish an "objective" diagnostic indicator: the number of apneic events per hour of sleep (the Apnea Hypopnea Index (AHI) or Respiratory Disturbance Index (RDI)), compared against a formal threshold above which a patient is considered to suffer from sleep apnea; the severity of the sleep apnea can then be quantified. Mild OSA (obstructive sleep apnea) ranges from 5 to 14.9 events per hour of sleep, moderate OSA falls in the range of 15–29.9 events per hour of sleep, and severe OSA is over 30 events per hour of sleep.
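A minimal sketch of the severity banding just described (the function and its name are illustrative, not taken from any clinical guideline; only the cutoffs come from the text above):

```python
def osa_severity(ahi: float) -> str:
    """Map an Apnea Hypopnea Index (events per hour of sleep) to the
    severity bands described above: mild 5-14.9, moderate 15-29.9,
    severe 30 or more."""
    if ahi < 5:
        # Assumed label; the text only defines bands from "mild" upward.
        return "below diagnostic threshold"
    if ahi < 15:
        return "mild"
    if ahi < 30:
        return "moderate"
    return "severe"
```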
Nevertheless, due to the number and variability of the actual symptoms and the nature of apneic events (e.g., hypopnea vs apnea, central vs obstructive), the variability of patients' physiologies, and the intrinsic imperfections of experimental setups and methods, this field is open to debate. Within this context, the definition of an apneic event depends on several factors (e.g. the patient's age), and this variability is accounted for through a multi-criteria decision rule described in several, sometimes conflicting, guidelines. One example of a commonly adopted definition of an apnea (for an adult) includes a minimum 10-second interval between breaths, with either a neurological arousal (a 3-second or greater shift in EEG frequency, measured at C3, C4, O1, or O2), a blood oxygen desaturation of 3–4% or greater, or both arousal and desaturation.
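The commonly adopted adult definition quoted above can be read as a small decision rule. The sketch below is a simplification under that single definition (real scoring guidelines are more elaborate and sometimes conflicting, as noted); the function name and inputs are hypothetical:

```python
def is_scorable_apnea(pause_seconds: float,
                      eeg_arousal: bool,
                      desaturation_pct: float) -> bool:
    """One adult definition: a pause of at least 10 seconds between
    breaths, accompanied by a neurological arousal, a blood oxygen
    desaturation of roughly 3-4% or more, or both."""
    if pause_seconds < 10:
        return False
    # Either criterion (or both together) qualifies the event.
    return eeg_arousal or desaturation_pct >= 3.0
```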
Oximetry, which may be performed overnight in a patient's home, is an easier alternative to a formal sleep study (polysomnography). In one study, normal overnight oximetry was very sensitive, so a normal result made sleep apnea unlikely. In addition, home oximetry may be equally effective in guiding prescription of automatically self-adjusting continuous positive airway pressure.
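As a toy illustration of why overnight oximetry is attractive as a screen, a recording of SpO2 samples can be reduced to a single rate of desaturation episodes. This sketch is not a validated scoring method; the 3% drop and the whole-night median baseline are simplifying assumptions:

```python
from statistics import median

def desaturation_events_per_hour(spo2, sample_interval_s, drop_pct=3.0):
    """Count episodes in which SpO2 falls at least drop_pct percentage
    points below the overnight median, normalized to events per hour.
    Clinical oximetry scoring uses a moving baseline and minimum event
    durations; this is only a rough screen."""
    if not spo2:
        return 0.0
    baseline = median(spo2)
    events, in_event = 0, False
    for sample in spo2:
        below = sample <= baseline - drop_pct
        if below and not in_event:
            events += 1  # entering a new desaturation episode
        in_event = below
    hours = len(spo2) * sample_interval_s / 3600.0
    return events / hours if hours else 0.0
```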
Obstructive sleep apnea
Obstructive sleep apnea (OSA) is the most common category of sleep-disordered breathing. The muscle tone of the body ordinarily relaxes during sleep, and at the level of the throat the human airway is composed of collapsible walls of soft tissue which can obstruct breathing during sleep. Mild occasional sleep apnea, such as many people experience during an upper respiratory infection, may not be important, but chronic severe obstructive sleep apnea requires treatment to prevent low blood oxygen (hypoxemia), sleep deprivation, and other complications.
Individuals with low muscle tone and soft tissue around the airway (e.g., because of obesity) and structural features that give rise to a narrowed airway are at high risk for obstructive sleep apnea. The elderly are more likely to have OSA than young people. Men are more likely to suffer sleep apnea than women and children are, though it is not uncommon in the latter two groups.
The risk of OSA rises with increasing body weight, active smoking and age. In addition, patients with diabetes or "borderline" diabetes have up to three times the risk of having OSA.
Some treatments involve lifestyle changes, such as avoiding alcohol or muscle relaxants, losing weight, and quitting smoking. Many people benefit from sleeping at a 30-degree elevation of the upper body or higher, as if in a recliner. Doing so helps prevent the gravitational collapse of the airway. Lateral positions (sleeping on a side), as opposed to supine positions (sleeping on the back), are also recommended as a treatment for sleep apnea, largely because the gravitational component is smaller in the lateral position. Some people benefit from various kinds of oral appliances to keep the airway open during sleep. Continuous positive airway pressure (CPAP) is the most effective treatment for severe obstructive sleep apnea but oral appliances are considered a first line approach equal to CPAP for mild to moderate sleep apnea according to the AASM parameters of care. There are also surgical procedures to remove and tighten tissue and widen the airway.
Snoring is a common finding in people with this syndrome. Snoring is the turbulent sound of air moving through the back of the mouth, nose, and throat. Although not everyone who snores is experiencing difficulty breathing, snoring in combination with other conditions such as overweight and obesity has been found to be highly predictive of OSA risk. The loudness of the snoring is not indicative of the severity of obstruction, however. If the upper airways are tremendously obstructed, there may not be enough air movement to make much sound. Even the loudest snoring does not mean that an individual has sleep apnea syndrome. The sign that is most suggestive of sleep apneas occurs when snoring stops.
Other indicators include (but are not limited to): hypersomnolence, obesity (BMI >30), large neck circumference (16 in (410 mm) in women, 17 in (430 mm) in men), enlarged tonsils and large tongue volume, micrognathia, morning headaches, irritability/mood-swings/depression, learning and/or memory difficulties, and sexual dysfunction.
The term "sleep-disordered breathing" is commonly used in the U.S. to describe the full range of breathing problems during sleep in which not enough air reaches the lungs (hypopnea and apnea). Sleep-disordered breathing is associated with an increased risk of cardiovascular disease, stroke, high blood pressure, arrhythmias, diabetes, and sleep deprived driving accidents. When high blood pressure is caused by OSA, it is distinctive in that, unlike most cases of high blood pressure (so-called essential hypertension), the readings do not drop significantly when the individual is sleeping. Stroke is associated with obstructive sleep apnea.
In the June 27, 2008, edition of the journal Neuroscience Letters, researchers revealed that people with OSA show tissue loss in brain regions that help store memory, thus linking OSA with memory loss. Using magnetic resonance imaging (MRI), the scientists discovered that sleep apnea patients' mammillary bodies were nearly 20 percent smaller, particularly on the left side. One of the key investigators hypothesized that repeated drops in oxygen lead to the brain injury.
Central sleep apnea
In pure central sleep apnea or Cheyne–Stokes respiration, the brain's respiratory control centers are imbalanced during sleep. Blood levels of carbon dioxide, and the neurological feedback mechanism that monitors them, do not react quickly enough to maintain an even respiratory rate, with the entire system cycling between apnea and hyperpnea, even during wakefulness. The sleeper stops breathing and then starts again. There is no effort made to breathe during the pause in breathing: there are no chest movements and no struggling. After the episode of apnea, breathing may be faster (hyperpnea) for a period of time, a compensatory mechanism to blow off retained waste gases and absorb more oxygen.
While sleeping, a normal individual is "at rest" as far as cardiovascular workload is concerned. Breathing is regular in a healthy person during sleep, and oxygen levels and carbon dioxide levels in the bloodstream stay fairly constant. The respiratory drive is so strong that even conscious efforts to hold one's breath do not overcome it. Any sudden drop in oxygen or excess of carbon dioxide (even if tiny) strongly stimulates the brain's respiratory centers to breathe.
In central sleep apnea, the basic neurological controls for breathing rate malfunction and fail to give the signal to inhale, causing the individual to miss one or more cycles of breathing. If the pause in breathing is long enough, the percentage of oxygen in the circulation will drop to a lower than normal level (hypoxaemia) and the concentration of carbon dioxide will build to a higher than normal level (hypercapnia). In turn, these conditions of hypoxia and hypercapnia will trigger additional effects on the body. Brain cells need constant oxygen to live, and if the level of blood oxygen stays low enough for long enough, brain damage and even death will occur. Fortunately, central sleep apnea is more often a chronic condition that causes much milder effects than sudden death. The exact effects of the condition depend on how severe the apnea is and on the individual characteristics of the person having the apnea. Several examples are discussed below.
In any person, hypoxia and hypercapnia have certain common effects on the body. The heart rate will increase, unless there are such severe co-existing problems with the heart muscle itself or the autonomic nervous system that make this compensatory increase impossible. The more translucent areas of the body will show a bluish or dusky cast from cyanosis, which is the change in hue that occurs owing to lack of oxygen in the blood ("turning blue"). Overdoses of drugs that are respiratory depressants (such as heroin and other opiates) kill by damping the activity of the brain's respiratory control centers. In central sleep apnea, the effects of sleep alone can remove the brain's mandate for the body to breathe.
- Normal Respiratory Drive: After exhalation, the blood level of oxygen decreases and that of carbon dioxide increases. Exchange of gases with a lungful of fresh air is necessary to replenish oxygen and rid the bloodstream of built-up carbon dioxide. Oxygen and carbon dioxide receptors in the blood stream (called chemoreceptors) send nerve impulses to the brain, which then signals reflex opening of the larynx (so that the opening between the vocal cords enlarges) and movements of the rib cage muscles and diaphragm. These muscles expand the thorax (chest cavity) so that a partial vacuum is made within the lungs and air rushes in to fill it.
- Physiologic effects of central apnea: During central apneas, the central respiratory drive is absent, and the brain does not respond to changing blood levels of the respiratory gases. No breath is taken despite the normal signals to inhale. The immediate effects of central sleep apnea on the body depend on how long the failure to breathe endures. At worst, central sleep apnea may cause sudden death. Short of death, drops in blood oxygen may trigger seizures, even in the absence of epilepsy. In people with epilepsy, the hypoxia caused by apnea may trigger seizures that had previously been well controlled by medications. In other words, a seizure disorder may become unstable in the presence of sleep apnea. In adults with coronary artery disease, a severe drop in blood oxygen level can cause angina, arrhythmias, or heart attacks (myocardial infarction). Longstanding recurrent episodes of apnea, over months and years, may cause an increase in carbon dioxide levels that can change the pH of the blood enough to cause a metabolic acidosis.
Mixed apnea and complex sleep apnea
Some people with sleep apnea have a combination of both types. When obstructive sleep apnea syndrome is severe and longstanding, episodes of central apnea sometimes develop. The exact mechanism of the loss of central respiratory drive during sleep in OSA is unknown but is most commonly related to acid–base and CO2 feedback malfunctions stemming from heart failure. There is a constellation of diseases and symptoms relating to body mass and to cardiovascular, respiratory, and occasionally neurological dysfunction that have a synergistic effect in sleep-disordered breathing. In some cases, a side effect of the lack of sleep is mild excessive daytime sleepiness (EDS): the subject has had minimal sleep, and over time this extreme fatigue takes its toll. The presence of central sleep apnea without an obstructive component is a common result of chronic opiate use (or abuse), owing to the characteristic respiratory depression caused by large doses of narcotics.
Complex sleep apnea has recently been described by researchers as a novel presentation of sleep apnea. Patients with complex sleep apnea exhibit OSA, but upon application of positive airway pressure they exhibit persistent central sleep apnea. This central apnea is most commonly noted while on CPAP therapy, after the obstructive component has been eliminated. It has long been seen in sleep laboratories and has historically been managed either by CPAP or BiLevel therapy. Adaptive servo-ventilation (ASV) modes of therapy have been introduced to attempt to manage complex sleep apnea. Studies have demonstrated marginally superior performance of adaptive servo-ventilators in treating Cheyne–Stokes breathing; however, no longitudinal studies have yet been published, nor have any results been generated that suggest any differential outcomes versus standard CPAP therapy. At the AARC 2006 in Las Vegas, NV, researchers reported successful treatment of hundreds of patients on ASV therapy; however, these results had not been reported in peer-reviewed publications as of July 2007.
An important finding by Dernaika et al. suggests that the transient central apnea produced during CPAP titration (the so-called "complex sleep apnea") is "…transient and self-limited." The central apneas may in fact be secondary to sleep fragmentation during the titration process. As of July 2007, no convincing evidence had been produced that these central sleep apnea events associated with CPAP therapy for obstructive sleep apnea are of any significant pathophysiologic importance.
Research is ongoing, however, at the Harvard Medical School, including adding dead space to positive airway pressure for treatment of complex sleep-disordered breathing.
Treatment often starts with behavioral therapy. Many patients are told to avoid alcohol, sleeping pills, and other sedatives, which can relax throat muscles, contributing to the collapse of the airway at night.
Possibly owing to changes in pulmonary oxygen stores, sleeping on one's side (as opposed to on one's back) has been found to be helpful for central sleep apnea with Cheyne–Stokes respiration.
Oral appliances
General dentists can fabricate an oral appliance. The oral appliance, called a mandibular advancement splint, is a custom-made mouthpiece that shifts the lower jaw forward and opens the bite slightly, which opens up the airway. Oral appliance therapy (OAT) is usually successful in patients with mild to moderate obstructive sleep apnea. OAT is a relatively new treatment option for sleep apnea in the United States, but it is much more common in Canada and Europe.
Continuous positive airway pressure
For moderate to severe sleep apnea, the most common treatment is the use of a continuous positive airway pressure (CPAP) or automatic positive airway pressure (APAP) device, which 'splints' the patient's airway open during sleep by means of a flow of pressurized air into the throat. The patient typically wears a plastic facial mask, which is connected by a flexible tube to a small bedside CPAP machine. The CPAP machine generates the required air pressure to keep the patient's airways open during sleep. Advanced models may warm or humidify the air and monitor the patient's breathing to ensure proper treatment.
Although CPAP therapy is extremely effective in reducing apneas and less expensive than other treatments, some patients find it extremely uncomfortable. Many patients refuse to continue the therapy or fail to use their CPAP machines on a nightly basis, especially in the long term. One way to ensure CPAP therapy remains comfortable and effective for patients is to carefully consider the right CPAP face mask to be used. CPAP masks come in different shapes, sizes and materials to ensure effective treatment for obstructive sleep apnea. It is important to select the right mask to fit each patient.
It is not clear that CPAP reduces hypertension or cardiovascular events in patients who do not have daytime sleepiness; however, the lack of benefit may be partly due to noncompliance with therapy.
Several surgical procedures (sleep surgery) are used to treat sleep apnea, although they are normally a second line of treatment for those who reject CPAP treatment or are not helped by it. Surgical treatment for obstructive sleep apnea needs to be individualized in order to address all anatomical areas of obstruction. Often, correction of the nasal passages needs to be performed in addition to correction of the oropharynx passage. Septoplasty and turbinate surgery may improve the nasal airway. Tonsillectomy and uvulopalatopharyngoplasty (UPPP or UP3) are available to address pharyngeal obstruction. Base-of-tongue advancement by means of advancing the genial tubercle of the mandible may help with the lower pharynx. Many other treatments are available, including hyoid bone myotomy and suspension and various radiofrequency technologies.
Other surgical options may attempt to shrink or stiffen excess tissue in the mouth or throat, in procedures done at either a doctor's office or a hospital. Small shots or other treatments, sometimes in a series, are used for shrinkage, while the insertion of a small piece of stiff plastic is used when the goal is to stiffen tissues.
The Pillar Procedure is a minimally invasive treatment for snoring and obstructive sleep apnea. The procedure received its FDA indication in 2004. During the procedure, three to six or more Dacron (the material used in permanent sutures) strips are inserted into the soft palate, using a modified syringe and local anesthetic. While the procedure was initially approved for the insertion of three "pillars" into the soft palate, a significant dose response to more pillars was found in appropriate candidates. After this brief and virtually painless outpatient operation, which usually lasts no more than 30 minutes, the soft palate is more rigid, and snoring and sleep apnea can be reduced. The procedure addresses one of the most common causes of snoring and sleep apnea: vibration or collapse of the soft palate (the soft part of the roof of the mouth). If other factors contribute to snoring or sleep apnea, such as the nasal airway or an enlarged tongue, the procedure will likely need to be combined with other treatments to be more effective.
The Stanford Center for Excellence in Sleep Disorders Medicine achieved a 95% cure rate among sleep apnea patients treated with surgery. Maxillomandibular advancement (MMA) is considered the most effective surgery for sleep apnea patients, because it increases the posterior airway space (PAS). The main benefit of the operation is that oxygen saturation in the arterial blood increases. In a study published in 2008, 93.3% of surgery patients achieved an adequate quality of life based on the Functional Outcomes of Sleep Questionnaire (FOSQ). Surgery led to a significant increase in general productivity, social outcome, activity level, vigilance, intimacy, and intercourse. Overall risks of MMA surgery are low: the Stanford University Sleep Disorders Center found 4 failures in a series of 177 patients, or about one out of 44 patients. However, health professionals are often unsure as to who should be referred for surgery and when: some factors in referral may include failed use of CPAP or an oral device, anatomy which favors rather than impedes surgery, or significant craniofacial abnormalities which hinder device use. Maxillomandibular advancement surgery is often combined with genioglossus advancement, as both are skeletal surgeries for sleep apnea.
Several inpatient and outpatient procedures use sedation. Many drugs and agents used during surgery to relieve pain and to depress consciousness remain in the body at low amounts for hours or even days afterwards. In an individual with either central, obstructive or mixed sleep apnea, these low doses may be enough to cause life-threatening irregularities in breathing or collapses in a patient’s airways. Use of analgesics and sedatives in these patients postoperatively should therefore be minimized or avoided.
Surgery on the mouth and throat, as well as dental surgery and procedures, can result in postoperative swelling of the lining of the mouth and other areas that affect the airway. Even when the surgical procedure is designed to improve the airway, such as tonsillectomy and adenoidectomy or tongue reduction, swelling may negate some of the effects in the immediate postoperative period. Once the swelling resolves and the palate becomes tightened by postoperative scarring, however, the full benefit of the surgery may be noticed.
A sleep apnea patient undergoing any medical treatment must make sure his or her doctor and anesthetist are informed about the sleep apnea. Alternative and emergency procedures may be necessary to maintain the airway of sleep apnea patients. Individuals who suspect they may have sleep apnea should ask their doctor about possible preprocedure screening.
Alternative treatments
Several studies have suggested that strengthening the muscles around the upper airway may combat sleep apnea. A 2001 study investigated changes in respiratory parameters during night-time sleep after tongue muscle training (ZMT®) in patients with an increased respiratory disturbance index. Forty sleep apnea patients, who until then had been treated with nCPAP, underwent electrostimulation of the suprahyoidal musculature for 5 weeks with a special EMS device. The apnea, hypopnea and desaturation indexes were reduced in 26 of the 40 patients (65%) by an average of approximately one half. A 2005 study in the British Medical Journal found that learning and practicing the didgeridoo helped reduce snoring and sleep apnea as well as daytime sleepiness. This appears to work by strengthening muscles in the upper airway, thus reducing their tendency to collapse during sleep. A 2009 study published in the American Journal of Respiratory and Critical Care Medicine found that patients who practiced a series of tongue and throat exercises for 30 minutes a day showed a marked decline in sleep apnea symptoms after three months, experiencing an average of 39% fewer apnea episodes after successfully completing the treatment.
Cannabis derivatives have also been studied in the treatment of sleep apnea. A 2002 study published in the journal Sleep found that orally administered THC stabilized respiration in rats and bulldogs during all sleep stages, decreasing apnea indexes during NREM and REM sleep by 42% and 58% respectively. A 2013 proof-of-concept trial published in Frontiers in Psychiatry found that dronabinol (synthetic THC) reduced apnea indexes by 32% on average in the 17 human subjects studied. Lead study author Dr. David Carley subsequently received a $5 million grant from the National Institutes of Health (NIH) to conduct a Phase II clinical trial.
The Wisconsin Sleep Cohort Study estimated in 1993 that roughly one in every 15 Americans was affected by at least moderate sleep apnea. It also estimated that in middle age as many as nine percent of women and 24 percent of men were affected, undiagnosed and untreated.
The costs of untreated sleep apnea reach further than just health issues. It is estimated that the average untreated sleep apnea patient in the U.S. incurs $1,336 more in annual health care costs than an individual without sleep apnea, which may amount to $3.4 billion per year in additional medical costs (taken together, these figures imply roughly 2.5 million untreated patients). Whether medical cost savings occur with treatment of sleep apnea remains to be determined.
A 2012 study showed that the hypoxia (an inadequate supply of oxygen) that characterizes sleep apnea promotes angiogenesis, which increases vascular and tumor growth, resulting in a 4.8 times higher incidence of cancer mortality.
The clinical picture of this condition has long been recognized as a character trait, without an understanding of the disease process. The term "Pickwickian syndrome," sometimes used for the syndrome, was coined by the famous early 20th-century physician William Osler, who must have been a reader of Charles Dickens. The description of Joe, "the fat boy" in Dickens's novel The Pickwick Papers, is an accurate clinical picture of an adult with obstructive sleep apnea syndrome.
The early reports of obstructive sleep apnea in the medical literature described individuals who were very severely affected, often presenting with severe hypoxemia, hypercapnia and congestive heart failure.
The management of obstructive sleep apnea was revolutionized with the introduction of continuous positive airway pressure (CPAP), first described in 1981 by Colin Sullivan and associates in Sydney, Australia. The first models were bulky and noisy, but the design was rapidly improved and by the late 1980s CPAP was widely adopted. The availability of an effective treatment stimulated an aggressive search for affected individuals and led to the establishment of hundreds of specialized clinics dedicated to the diagnosis and treatment of sleep disorders. Though many types of sleep problems are recognized, the vast majority of patients attending these centers have sleep-disordered breathing.
See also
- "Sleep Apnea: What Is Sleep Apnea?". NHLBI: Health Information for the Public. U.S. Department of Health and Human Services. 2009-05. Retrieved 2010-08-05.
- Morgenthaler TI, Kagramanov V, Hanak V, Decker PA (September 2006). "Complex sleep apnea syndrome: is it a unique clinical syndrome?". Sleep 29 (9): 1203–9. PMID 17040008. Lay summary – Science Daily (September 4, 2006).
- "Sleep Apnea: Key Points". NHLBI: Health Information for the Public. U.S. Department of Health and Human Services.
- El-Ad, Baruch; Lavie, Peretz (2005). "Effect of sleep apnea on cognition and mood". International Review of Psychiatry 17 (4): 577–582. doi:10.1080/09540260500104508.
- Aloia, M.S.; Sweet, L.H.; Jerskey, B.A.; Zimmerman, M.; Arnedt, T.J.; Millman, R.P. (2009). "Treatment effects on brain activity during a working memory task in obstructive sleep apnea". Journal of Sleep Research (Wiley-Blackwell) 18 (4): 404–410. doi:10.1111/j.1365-2869.2009.00755.x. Retrieved 17 February 2012.
- Sculthorpe LD, Douglass AB (July 2010). "Sleep pathologies in depression and the clinical utility of polysomnography". Can J Psychiatry 55 (7): 413–21. PMID 20704768.
- MH Ahmed, CD Byrne (2010). "Obstructive sleep apnea syndrome and fatty liver: association or causal link?". World J Gastroenterol 16 (34): 4243–52. PMC 2937104. PMID 20818807.
- H Singh, R Pollock, J Uhanova, M Kryger, K Hawkins, GY Minuk (2005). "Symptoms of Obstructive Sleep Apnea in Patients with Nonalcoholic Fatty Liver Disease". Digestive Diseases and Sciences 50 (12): 2338–2343. doi:10.1007/s10620-005-3058-y.
- F Tanne, F Gagnadoux, O Chazouilleres, B Fleury, D Wendum, E Lasnier, B Labeau, R Poupon, L Serfaty (2005). "Chronic Liver Injury During Obstructive Sleep Apnea". Hepatology 41 (6): 1290–1296. doi:10.1002/hep.20725.
- Redline S, Budhiraja R, Kapur V et al. (2007). "Reliability and validity of respiratory event measurement and scoring". J Clin Sleep Med 3 (2): 169–200. PMID 17557426.
- AASM Task Force (1999). "Sleep–Related Breathing Disorders in Adults – Recommendations for Syndrome Definition and Measurement Techniques in Clinical Research". SLEEP 22 (5): 667–689. PMID 10450601.
- Ruehland WR, Rochford PD, O'Donoghue FJ, Pierce RJ, Singh P, Thornton AT (2009). "The new aasm criteria for scoring hypopneas: Impact on the apnea hypopnea index". SLEEP 32 (2): 150–157. PMC 2635578. PMID 19238801.
- Sériès, F.; Marc, I.; Cormier, Y.; La Forge, J. (1993). "Utility of nocturnal home oximetry for case finding in patients with suspected sleep apnea hypopnea syndrome". Annals of internal medicine 119 (6): 449–453. PMID 8357109.
- Whitelaw WA, Brant RF, Flemons WW (2005). "Clinical usefulness of home oximetry compared with polysomnography for assessment of sleep apnea.". Am J Respir Crit Care Med 171 (2): 188–93. doi:10.1164/rccm.200310-1360OC. PMID 15486338. Review in: ACP J Club. 2005 Jul-Aug;143(1):21
- "Sleep Apnea: Who Is At Risk for Sleep Apnea?". NHLBI: Health Information for the Public. U.S. Department of Health and Human Services.
- Neill AM, Angus SM, Sajkov D, McEvoy RD (January 1997). "Effects of sleep posture on upper airway stability in patients with obstructive sleep apnea". American Journal of Respiratory and Critical Care Medicine 155 (1): 199–204. PMID 9001312.
- Xiheng, Guo; Chen, Wang; Hongyu, Zhang; Weimin, Kong; Li, An; Li, Liu; Xinzhi, Weng (2003). The Study Of The Influence Of Sleep Position On Sleep Apnea. Cardinal Health.
- Loord H, Hultcrantz E (August 2007). "Positioner--a method for preventing sleep apnea". Acta Oto-laryngologica 127 (8): 861–8. doi:10.1080/00016480601089390. PMID 17762999.
- Szollosi I, Roebuck T, Thompson B, Naughton MT (August 2006). "Lateral sleeping position reduces severity of central sleep apnea / Cheyne–Stokes respiration". Sleep 29 (8): 1045–51. PMID 16944673.
- Vennelle M, White S, Riha RL, Mackay TW, Engleman HM, Douglas NJ (February 2010). "Randomized controlled trial of variable-pressure versus fixed-pressure continuous positive airway pressure (CPAP) treatment for patients with obstructive sleep apnea/hypopnea syndrome (OSAHS)". Sleep 33 (2): 267–71. PMC 2817914. PMID 20175411.
- Morris LG, Kleinberger A, Lee KC, Liberatore LA, Burschtin O (November 2008). "Rapid risk stratification for obstructive sleep apnea, based on snoring severity and body mass index". Otolaryngology – Head and Neck Surgery 139 (5): 615–8. doi:10.1016/j.otohns.2008.08.026. PMID 18984252.
- Yan-fang S, Yu-ping W (August 2009). "Sleep-disordered breathing: impact on functional outcome of ischemic stroke patients". Sleep Medicine 10 (7): 717–9. doi:10.1016/j.sleep.2008.08.006. PMID 19168390.
- Bixler EO, Vgontzas AN, Lin HM, et al. (November 2008). "Blood pressure associated with sleep-disordered breathing in a population sample of children". Hypertension 52 (5): 841–6. doi:10.1161/HYPERTENSIONAHA.108.116756. PMID 18838624.
- Leung RS (2009). "Sleep-disordered breathing: autonomic mechanisms and arrhythmias". Progress in Cardiovascular Diseases 51 (4): 324–38. doi:10.1016/j.pcad.2008.06.002. PMID 19110134.
- Silverberg DS, Iaina A, Oksenberg A (January 2002). "Treating obstructive sleep apnea improves essential hypertension and life". American Family Physician 65 (2): 229–36. PMID 11820487.
- Grigg-Damberger M (February 2006). "Why a polysomnogram should become part of the diagnostic evaluation of stroke and transient ischemic attack". Journal of Clinical Neurophysiology 23 (1): 21–38. doi:10.1097/01.wnp.0000201077.44102.80. PMID 16514349.
- Yaggi HK, Concato J, Kernan WN, Lichtman JH, Brass LM, Mohsenin V (November 2005). "Obstructive sleep apnea as a risk factor for stroke and death". The New England Journal of Medicine 353 (19): 2034–41. doi:10.1056/NEJMoa043104. PMID 16282178.
- Kumar R, Birrer BV, Macey PM, et al. (June 2008). "Reduced mammillary body volume in patients with obstructive sleep apnea". Neuroscience Letters 438 (3): 330–4. doi:10.1016/j.neulet.2008.04.071. PMID 18486338.
- Kumar R, Birrer BV, Macey PM, et al. (June 2008). "Reduced mammillary body volume in patients with obstructive sleep apnea". Neuroscience Letters 438 (3): 330–4. doi:10.1016/j.neulet.2008.04.071. PMID 18486338. Lay summary – Newswise (June 6, 2008).
- Dernaika T, Tawk M, Nazir S, Younis W, Kinasewitz GT (July 2007). "The significance and outcome of continuous positive airway pressure-related central sleep apnea during split-night sleep studies". Chest 132 (1): 81–7. doi:10.1378/chest.06-2562. PMID 17475636.
- Thomas RJ (March 2005). "Effect of added dead space to positive airway pressure for treatment of complex sleep-disordered breathing". Sleep Medicine 6 (2): 177–8. doi:10.1016/j.sleep.2004.11.004. PMID 15716223.
- "How Is Sleep Apnea Treated?". National Heart, Lung, and Blood Institute.
- White DP, Zwillich CW, Pickett CK, Douglas NJ, Findley LJ, Weil JV (October 1982). "Central sleep apnea: Improvement with acetazolamide therapy". Archives of Internal Medicine 142 (10): 1816–9. doi:10.1001/archinte.142.10.1816. PMID 6812522.
- "Sleep Apnea". Diagnosis Dictionary. Psychology Today.
- Mayos M, Hernández Plaza L, Farré A, Mota S, Sanchis J (February 2001). "[The effect of nocturnal oxygen therapy in patients with sleep apnea syndrome and chronic airflow limitation]". Archivos de Bronconeumología (in Spanish) 37 (2): 65–8. PMID 11181239.
- Breitenbücher A, Keller-Wossidlo H, Keller R (November 1989). "[Transtracheal oxygen therapy in obstructive sleep apnea syndrome]". Schweizerische Medizinische Wochenschrift (in German) 119 (46): 1638–41. PMID 2609134.
- Machado MA, Juliano L, Taga M, de Carvalho LB, do Prado LB, do Prado GF (December 2007). "Titratable mandibular repositioner appliances for obstructive sleep apnea syndrome: are they an option?". Sleep & Breathing 11 (4): 225–31. doi:10.1007/s11325-007-0109-y. PMID 17440760.
- General Information about Sleep Apnea Machines
- Hsu AA, Lo C (December 2003). "Continuous positive airway pressure therapy in sleep apnoea". Respirology 8 (4): 447–54. doi:10.1046/j.1440-1843.2003.00494.x. PMID 14708553.
- Barbé F, Durán-Cantolla J, Sánchez-de-la-Torre M, et al. (May 2012). "Effect of continuous positive airway pressure on the incidence of hypertension and cardiovascular events in nonsleepy patients with obstructive sleep apnea: a randomized controlled trial". JAMA 307 (20): 2161–8. doi:10.1001/jama.2012.4366. PMID 22618923.
- Li KK, Riley RW, Powell NB, Troell R, Guilleminault C (November 1999). "Overview of phase II surgery for obstructive sleep apnea syndrome". Ear, Nose, & Throat Journal 78 (11): 851, 854–7. PMID 10581838.
- Prinsell JR (November 2002). "Maxillomandibular advancement surgery for obstructive sleep apnea syndrome". Journal of the American Dental Association 133 (11): 1489–97; quiz 1539–40. PMID 12462692.
- Lye KW, Waite PD, Meara D, Wang D (May 2008). "Quality of life evaluation of maxillomandibular advancement surgery for treatment of obstructive sleep apnea". Journal of Oral and Maxillofacial Surgery 66 (5): 968–72. doi:10.1016/j.joms.2007.11.031. PMID 18423288.
- Li KK, Powell NB, Riley RW, Troell RJ, Guilleminault C (2000). "Long-Term Results of Maxillomandibular Advancement Surgery". Sleep & Breathing 4 (3): 137–140. doi:10.1007/s11325-000-0137-3. PMID 11868133.
- MacKay, Stuart (June 2011). "Treatments for snoring in adults". Australian Prescriber (34): 77–79.
- Johnson, T. Scott; Broughton, William A.; Halberstadt, Jerry (2003). Sleep Apnea – The Phantom of the Night: Overcome Sleep Apnea Syndrome and Win Your Hidden Struggle to Breathe, Sleep, and Live. New Technology Publishing. ISBN 978-1-882431-05-2.
- National Heart, Lung, and Blood Institute (2012). "What is Sleep Apnea?". National Institutes of Health. Retrieved 15 February 2013.
- Gessmann HW et al: The Tongue Muscle Training (ZMT®) in nCPAP Patients with Obstructive Sleep Apnea Syndrome (OSAS). PIB Publisher Duisburg, Germany 2001
- Puhan MA, Suarez A, Lo Cascio C, Zahn A, Heitz M, Braendli O (February 2006). "Didgeridoo playing as alternative treatment for obstructive sleep apnoea syndrome: randomised controlled trial". BMJ 332 (7536): 266–70. doi:10.1136/bmj.38705.470590.55. PMC 1360393. PMID 16377643.
- Guimarães KC, Drager LF, Genta PR, Marcondes BF, Lorenzi-Filho G (May 2009). "Effects of oropharyngeal exercises on patients with moderate obstructive sleep apnea syndrome". Am. J. Respir. Crit. Care Med. 179 (10): 962–6. doi:10.1164/rccm.200806-981OC. PMID 19234106.
- Carley DW, Paviovic S, Janelidze M, Radulovacki M (June 2002). "Functional role for cannabinoids in respiratory stability during sleep.". Sleep 25 (4): 391–8. PMID 12071539.
- Prasad B, Radulovacki MG, Carley DW (Jan 2013). "Proof of concept trial of dronabinol in obstructive sleep apnea.". Front Psychiatry 4 (1). doi:10.3389/fpsyt.2013.00001. PMC 3550518. PMID 23346060.
- "Medical Marijuana: A Treatment For Sleep Apnea?". TruthOnPot.com. 16 February 2013. Retrieved 28 April 2013.
- Young T, Palta M, Dempsey J, Skatrud J, Weber S, Badr S (April 1993). "The occurrence of sleep-disordered breathing among middle-aged adults". The New England Journal of Medicine 328 (17): 1230–5. doi:10.1056/NEJM199304293281704. PMID 8464434.
- Lee W, Nagubadi S, Kryger MH, Mokhlesi B (June 1, 2008). "Epidemiology of obstructive sleep apnea: a population-based perspective". Expert Rev Respir Med 2 (3): 349–64. doi:10.1586/17476348.2.3.349. PMC 2727690. PMID 19690624.
- Young T, Peppard PE, Gottlieb DJ (May 2002). "Epidemiology of obstructive sleep apnea: a population health perspective". American Journal of Respiratory and Critical Care Medicine 165 (9): 1217–39. doi:10.1164/rccm.2109080. PMID 11991871.
- Kapur V, Blough DK, Sandblom RE, et al. (September 1999). "The medical cost of undiagnosed sleep apnea". Sleep 22 (6): 749–55. PMID 10505820.
- torontosun.com – Study links sleep apnea with higher cancer deaths, 2012-05-20
- Nieto FJ, Peppard PE, Young T, Finn L, Hla KM, Farré R (May 2012). "Sleep disordered breathing and cancer mortality: results from the Wisconsin Sleep Cohort Study". Am J Respir Crit Care Med. doi:10.1164/rccm.201201-0130OC. PMID 22610391.
- "Sleep apnea ups cancer death risk five-fold". The Times Of India. 2012-05-27. Retrieved 27 May 2012.
- Sullivan CE, Issa FG, Berthon-Jones M, Eves L. (April 1981). "Reversal of obstructive sleep apnoea by continuous positive airway pressure applied through the nares". Lancet 1 (8225): 862–5. doi:10.1016/S0140-6736(81)92140-1. PMID 6112294.
General references
- Kalra M, Chakraborty R (March 2007). "Genetic susceptibility to obstructive sleep apnea in the obese child". Sleep Medicine 8 (2): 169–75. doi:10.1016/j.sleep.2006.09.003. PMID 17275401.
- "Sleep-related breathing disorders in adults: recommendations for syndrome definition and measurement techniques in clinical research. The Report of an American Academy of Sleep Medicine Task Force". Sleep 22 (5): 667–89. August 1999. PMID 10450601.
- Bell RB, Turvey TA (March 2001). "Skeletal advancement for the treatment of obstructive sleep apnea in children". The Cleft Palate-craniofacial Journal 38 (2): 147–54. doi:10.1597/1545-1569(2001)038<0147:SAFTTO>2.0.CO;2. PMID 11294542.
- Caples SM, Gami AS, Somers VK (February 2005). "Obstructive sleep apnea". Annals of Internal Medicine 142 (3): 187–97. doi:10.1001/archinte.142.1.187. PMID 15684207.
- Cohen MM, Kreiborg S (September 1992). "Upper and lower airway compromise in the Apert syndrome". American Journal of Medical Genetics 44 (1): 90–3. doi:10.1002/ajmg.1320440121. PMID 1519659.
- de Miguel-Díez J, Villa-Asensi JR, Alvarez-Sala JL (December 2003). "Prevalence of sleep-disordered breathing in children with Down syndrome: polygraphic findings in 108 children". Sleep 26 (8): 1006–9. PMID 14746382.
- García Urbano, Jesús (2010). Orthoapnea: Snoring and Obstructive Apnea. Solutions to Sleeping Problems. Ripano Editorial Médica S.A.
- Mathur R, Douglas NJ (September 1994). "Relation between sudden infant death syndrome and adult sleep apnoea/hypopnoea syndrome". Lancet 344 (8925): 819–20. doi:10.1016/S0140-6736(94)92375-2. PMID 7916096.
- Mortimore IL, Douglas NJ (September 1997). "Palatal muscle EMG response to negative pressure in awake sleep apneic and control subjects". American Journal of Respiratory and Critical Care Medicine 156 (3 Pt 1): 867–73. PMID 9310006.
- Perkins JA, Sie KC, Milczuk H, Richardson MA (March 1997). "Airway management in children with craniofacial anomalies". The Cleft Palate-craniofacial Journal 34 (2): 135–40. doi:10.1597/1545-1569(1997)034<0135:AMICWC>2.3.CO;2. PMID 9138508.
- Sculerati N, Gottlieb MD, Zimbler MS, Chibbaro PD, McCarthy JG (December 1998). "Airway management in children with major craniofacial anomalies". The Laryngoscope 108 (12): 1806–12. doi:10.1097/00005537-199812000-00008. PMID 9851495.
- Shepard JW, Thawley SE (May 1990). "Localization of upper airway collapse during sleep in patients with obstructive sleep apnea". The American Review of Respiratory Disease 141 (5 Pt 1): 1350–5. PMID 2339852.
- Sher AE (August 1990). "Obstructive sleep apnea syndrome: a complex disorder of the upper airway". Otolaryngologic Clinics of North America 23 (4): 593–608. PMID 2199896.
- Shott SR, Amin R, Chini B, Heubi C, Hotze S, Akers R (April 2006). "Obstructive sleep apnea: Should all children with Down syndrome be tested?". Archives of Otolaryngology--Head & Neck Surgery 132 (4): 432–6. doi:10.1001/archotol.132.4.432. PMID 16618913.
- Shouldice RB, O'Brien LM, O'Brien C, de Chazal P, Gozal D, Heneghan C (June 2004). "Detection of obstructive sleep apnea in pediatric subjects using surface lead electrocardiogram features". Sleep 27 (4): 784–92. PMID 15283015.
- Andreoli, Thomas E.; Cecil, Russell La Fayette; Carpenter, Charles C. J.; Griggs, Robert C.; Loscalzo, Joseph (2001). "Disordered Breathing". Cecil essentials of medicine. Philadelphia: W.B. Saunders. pp. 210–1. ISBN 978-0-7216-8179-5.
- Strollo PJ, Rogers RM (January 1996). "Obstructive sleep apnea". The New England Journal of Medicine 334 (2): 99–104. doi:10.1056/NEJM199601113340207. PMID 8531966.
- Sullivan CE, Issa FG, Berthon-Jones M, Eves L (April 1981). "Reversal of obstructive sleep apnoea by continuous positive airway pressure applied through the nares". Lancet 1 (8225): 862–5. doi:10.1016/S0140-6736(81)92140-1. PMID 6112294.
- In May 2011, the VOA Special English service of the Voice of America broadcast a program on sleep apnea. A transcript and MP3 of the program, intended for English learners, can be found at Why Sleep Apnea Raises Risk of Stroke, Heart Attack.
Authentic assessment comprises a variety of assessment techniques that share the following characteristics: (1) direct measurement of skills that relate to long-term educational outcomes such as success in the workplace; (2) tasks that require extensive engagement and complex performance; and (3) an analysis of the processes used to produce the response. Authentic assessment is often defined by what it is not: its antonyms include norm-referenced standardized tests, fixed-choice multiple-choice or true/false tests, and fill-in-the-blank tests. Synonyms include performance assessment, portfolios, and projects. Dynamic (Lidz, 1991) or responsive assessment (Henning-Stout, 1991) are other terms associated with authentic assessment. Authentic assessment has been a popular method for assessing student learning among specific populations of students such as those with severe disabilities (Coutinho & Malouf, 1993), very young children (Grisham-Brown, Hallam, & Brookshire, 2006), and gifted students (Moore, 2005). In addition, specific disciplines such as the arts (Popovich, 2006), science (Oh, Kim, Garcia, & Krilowicz, 2005) and teacher education (Gatlin & Jacob, 2002) have embraced authentic assessment for its emphasis on process over product. Grant Wiggins described authentic assessments as “faithful representations of the contexts encountered in a field of study or in the real-life ‘tests’ of adult life” (1993, p. 206).
Authentic assessment was a significant component of the 1990s education reform zeitgeist, and Wiggins was one of its most prolific and convincing proponents (Terwilliger, 1997). Wiggins (1993) asserted that traditional methods of student assessment (i.e., forced-choice tests such as multiple-choice and true/false tests) fail to elicit the complex intellectual performance valued in real-life experiences and result in a narrowing of the curriculum to basic skills, including test-taking skills. At a time when standardized minimum-competency tests had been largely rejected for reducing or diminishing the curriculum, and content standards emphasizing higher-order thinking skills were articulated within many disciplines and states, authentic assessment gained considerable traction.
Subsequently, educators may have engaged in authentic assessment to rebel against the top-down accountability of high-stakes standardized testing (Salvia & Ysseldyke, 2004). Since the No Child Left Behind (NCLB) Act of 2002, there has been a greater focus on large-scale standardized testing, and there is a lack of connection between federal and state policy makers and public school educators. In an ideal educational setting, professional educators in all arenas would guide learners' movement toward the standards, which would be developed in an organic process with student, site, and community input. In current practice, however, standards are developed by remote government bureaucrats in state or federal buildings, far removed from the students and from those who are in contact with the students on a daily basis (Henning-Stout, 1996). The resulting feeling of imposition on school-site educators by state and federal officials compounds the challenges to the ideal development of authentic assessment.
Educators' desire for authenticity in assessment and learning is not free from the polemics of political climates that define the nature of modern education.
Assessment data are used for multiple purposes, including making accountability, eligibility, and instructional decisions. The purpose of the assessment directs the analyses. For example, authentic assessment data collected for determining whether a school, district, or state is sufficiently educating students will require data to be aggregated at the systems level, as well as disaggregated by various sub-populations of students, in order to make such accountability decisions. Authentic assessment to determine whether a student meets specific state or national special education criteria must be corroborated by other types of data given the significant ramifications for the student (Lidz, 1991). Data collected to inform instruction must be analyzed relative to the curriculum and instruction provided to the students in a particular class. Authentic assessment data can be analyzed by qualitative or quantitative methods.
A qualitative analysis of a student's performance typically describes skills that were demonstrated and errors that were made thereby providing a narrative of what the student knows and is able to do, and what the student needs to learn or improve upon. Narratives also allow the student's performance to be considered within the context of the assessment. For example, Alverno College is nationally recognized for its narrative assessments of eight core abilities in a manner that is contextually relevant for each discipline (Alverno College Faculty, 1994).
A quantitative analysis of authentic assessment data applies a scoring rubric or checklist to judge student responses relative to criteria within a restricted range of four or more proficiency levels (e.g., advanced proficient, proficient, partially proficient, and failure). Scoring rubrics can be either analytic or holistic. Analytic analyses require defining and assessing different dimensions of a task. For example, the spelling, sentence structure, vocabulary, accuracy, level of detail, and coherence of an essay may be judged independently. Holistic analysis assigns an overall score to a student's performance, like judging an Olympic gymnastics competition.
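To make the analytic/holistic distinction concrete, here is a small sketch; the dimensions follow the essay example above, while the four-point scale mirrors the proficiency levels just mentioned (the numeric coding itself is an assumption for illustration):

```python
# Analytic scoring: each dimension is judged separately on a 1-4 scale
# (1 = failure, 2 = partially proficient, 3 = proficient, 4 = advanced).
analytic_scores = {
    "spelling": 3,
    "sentence_structure": 2,
    "vocabulary": 4,
    "accuracy": 3,
    "level_of_detail": 2,
    "coherence": 3,
}

# A per-dimension profile can be reported, optionally with a composite.
composite = sum(analytic_scores.values()) / len(analytic_scores)

# Holistic scoring: one overall judgment of the same essay, as in
# judging an Olympic gymnastics competition.
holistic_score = 3  # "proficient"
print(composite, holistic_score)
```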
Three variations of authentic assessment most frequently discussed are dynamic (Hilliard, 1995; Lidz, 1991), performance, and portfolio assessment (Salvia & Ysseldyke, 2004). Proponents of authentic assessment (Hilliard, 1995; Lidz, 1991; Meyer, 1992) have observed that many people think they are conducting it when in fact they are not. The multiple purposes for assessments and the general nature of many of the terms associated with authentic assessment have resulted in variation among researchers and practitioners in what is considered authentic or dynamic assessment (Cumming & Maxwell, 1999; Newton, 2007).
Dynamic assessment is conducted within a test-intervene-retest format or process. For example, an educator first administers a test to a student; then the adult intervenes by asking questions about the child's incorrect or unexpected answers to improve the student's cognitive processes. Finally, the adult administers the same or a similar test to the child to see if the child has developed a new strategy for solving the problem. Thus, dynamic assessment attempts to measure the student's level of modifiability.
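A hedged sketch of the test-intervene-retest sequence as a simple record (the field names are invented, and "modifiability" is operationalized here merely as the pretest-to-retest gain, which is one simplistic reading of the description above):

```python
from dataclasses import dataclass

@dataclass
class DynamicAssessment:
    pretest_score: float      # initial test, before the adult intervenes
    retest_score: float       # same or similar test, after intervention
    intervention_notes: str   # questions asked about incorrect answers

    def modifiability_gain(self) -> float:
        """Crude index of how much the student's performance changed
        after the mediated intervention phase."""
        return self.retest_score - self.pretest_score
```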
Compared to dynamic assessment, performance and portfolio assessment are more commonly used in classroom settings (Salvia & Ysseldyke, 2004). Performance assessments require students to complete or demonstrate the behavior that educators want to measure (Meyer, 1992). For a performance task to be authentic, it must be completed within a real-world context, which includes shifting the locus of control to the student in that the student chooses the topic, the time needed for completion, and the general conditions under which the writing sample is generated (Meyer, 1992). Portfolio assessments are an accumulation of artifacts that demonstrate progress toward valued real-world outcomes, are often produced in collaboration, require student reflection, and are evaluated on multiple dimensions (Salvia & Ysseldyke, 2004).
A major strength of authentic assessment is its connection to real-life skills (Meyer, 1992). Proponents of authentic assessment are quick to point out that life is not a series of isolated multiple-choice questions but full of complex, embedded problems to be solved (Wiggins, 1993). Accordingly, authentic assessments require students to solve complex problems or produce multi-step projects, often in collaboration with others. In this way, higher-order learning skills such as synthesis, analysis, collaboration, and problem solving are assessed. In fact, the purpose of authentic assessment is to measure students' ability to apply their knowledge and thinking skills to solving tasks that simulate real-world events or activities (see Table 1 for examples; Wiggins, 1993).
Authentic assessments attempt to seamlessly combine teaching, learning, and assessment to promote student motivation, engagement, and higher-order learning skills (Eder, 2004). Because assessment is part of instruction, teacher and students share an understanding of the criteria for performance; in some cases, students even contribute to defining the expectations for the task. The assumption is that students perform better when they know how they will be judged. Often students are asked to reflect on and evaluate their own performance in order to promote deeper understanding of the learning objectives as well as to foster higher-order learning skills (i.e., self-reflection and evaluation).
Authentic assessments are often described as developmental because of the focus on students' burgeoning abilities to learn how to learn in the subject (Wiggins, 1993). For example, students' shortcomings in knowledge, and in how they apply their knowledge, can be examined through careful analysis of their log books or by asking probing questions, in order to identify what needs to be taught or re-taught. Thus, the process by which students arrived at their final response or product is assessed (Mehrens, 1992).
Authentic assessments also have limitations. These include subjectivity in scoring, the costliness of administration and scoring, and the narrow range of skills that are typically assessed (Mehrens, 1992). Because authentic assessment emphasizes complexity and relevance rather than structure and standardization, inter-rater reliability can be difficult to achieve. Inter-rater agreement is increased with clearly defined criteria, including exemplars and non-exemplars, and with initial and ongoing training of the evaluators. Unfortunately, educators rarely have adequate guidelines to help analyze and score student products (Salvia & Ysseldyke, 2004). The logistics and training demands of authentic assessment have made its widespread adoption in general education prohibitive. Selecting artifacts to include in a portfolio can also be a challenge. In order to avoid the portfolio's becoming a meaningless accumulation of student work, there needs to be some selection process that distinguishes critical works from mementos (Hass & Osborn, 2002). Lastly, the emphasis on assessing knowledge in depth or in application often limits the amount of content knowledge that is assessed. For example, an authentic assessment that requires students in a biology class to design the ideal zoo would not test what students know about photosynthesis. Terwilliger (1997) proposed that the specificity of authentic assessment evaluation criteria to a particular task may limit its value as a measure of general learning outcomes.
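Where a quantitative check on scorer consistency is wanted, the short Python sketch below computes percent agreement and Cohen's kappa for two raters. The ratings are invented for illustration, and this is offered as one common approach rather than a procedure prescribed by the sources cited above:

```python
from collections import Counter

# Hypothetical proficiency ratings from two raters for ten portfolios.
rater_a = ["proficient", "advanced", "partial", "proficient", "advanced",
           "partial", "proficient", "proficient", "advanced", "partial"]
rater_b = ["proficient", "advanced", "partial", "partial", "advanced",
           "partial", "proficient", "advanced", "advanced", "partial"]

n = len(rater_a)
observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n

# Expected chance agreement, from each rater's marginal frequencies.
freq_a, freq_b = Counter(rater_a), Counter(rater_b)
expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)

kappa = (observed - expected) / (1 - expected)
print(f"percent agreement: {observed:.2f}, Cohen's kappa: {kappa:.2f}")
```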
Henning-Stout (1996) stated, “Academic assessment is authentic when it reflects performance on tasks that are meaningful to the learner” (p. 234). One strength of authentic assessment is the strong connection to the development of lessons and interventions that have real-life applications. If the learners being assessed are aware of their ability to self-regulate (Dembo, 2004) and make the appropriate changes during the learning process, they will achieve the transfer of knowledge that is necessary for learning to occur (Lidz, 1991). More importantly, they should be able to solve real-world tasks and be able to process new information within the construct of that task.
When given clear standards (Henning-Stout, 1996) and reliable and valid methods (Salvia & Ysseldyke, 2004) for conducting authentic assessment, teachers can inform students of the level of expected performance and provide direct feedback about students' progress towards meeting those standards. With dynamic assessment, students receive immediate feedback about their process and their own problem-solving skills. Portfolio assessment provides individual students with an opportunity to physically and cognitively organize and monitor their learning process.
For educators concerned with social justice in the development of curriculum, pedagogy, and assessment, authentic assessment provides ways for students outside the norm of the standard assessment to express their understanding of material (Henning-Stout, 1996; Hilliard, 1995; Louise, 2007; Newfield, Andrew, Stein, & Maungedzo, 2003). For example, the government of South Africa has moved away from high-stakes standardized assessments for categorizing, labeling, and tracking students towards portfolio assessments that are developed in conjunction with local communities (Newfield et al., 2003).
Authentic assessment has also been used to train professionals. Portfolio assessments have been used to evaluate school administrators and teachers (Gatlin & Jacobs, 2002; Meadows & Dyal, 1999), as well as school psychology graduate students (Hass & Osborn, 2002; Prus, Matton, Thomas, & Robinson-Zañartu, 1996).
See also: Classroom Assessment
Alverno College Faculty. (1994). Student assessment-as-learning at Alverno College. Milwaukee, WI: Alverno Productions.
Coutinho, M., & Malouf, D. (1993). Performance assessment and children with disabilities: Issues and possibilities. Teaching Exceptional Children, 25(4), 63–67.
Cumming, J. J., & Maxwell, G. S. (1999). Contextualizing authentic assessment. Assessment in Education, 6(2), 177–194.
Dembo, M. H. (2004, April). Don't lose sight of the students. Principal Leadership, 37–42.
Eder, D. J. (2004). General education assessment within the disciplines. Journal of General Education, 53(2), 135–157.
Gatlin, L., & Jacob, S. (2002). Standards-based digital portfolios: A component of authentic assessment for preservice teachers. Action in Teacher Education, 23(4), 28–34.
Grisham-Brown, J., Hallam, R., & Brookshire, R. (2006). Using authentic assessment to evidence children's progress toward early learning standards. Early Childhood Education Journal, 34(1), 45–51.
Hass, M., & Osborn, J. (2002). Using formative portfolios to enhance graduate school psychology programs. California School Psychologist, 7, 75–84.
Hilliard, A. G. (1995). Testing African American Students (2nd ed.). Chicago: Third World Press.
Lidz, C. (1991). Practitioner's Guide to Dynamic Assessment. New York: Guilford Press.
Meadows, R. B., & Dyal, A.B. (1999). Implementing portfolio assessment in the development of school administrators: improving preparation for educational leadership. Education, 120(2), 304–314.
Mehrens, W. A. (1992, Spring). Using performance assessment for accountability purposes. Educational Measurement: Issues and Practice, 11(1), 3–20.
Meyer, C. (1992). What's the difference between authentic and performance assessment? Educational Leadership, 49(8), 39–40.
Moore, M. (2005). Meeting the educational needs of young gifted readers in the regular classroom. Gifted Child Today, 28(4), 40–47, 65.
Newfield, D., Andrew, D., Stein, P., & Maungedzo, R. (2003). ‘No number can describe how good it was’: Assessment issues in the multimodal classroom. Assessment in Education, 10(1), 61–81.
Oh, D. M., Kim, J. M., Garcia, R. E., & Krilowicz, B. L. (2005). Valid and reliable authentic assessment of culminating student performance in the biomedical sciences. Advances in Physiology Education, 29(2), 83–93.
Popovich, K. (2006). Designing and implementing ‘exemplary content, curriculum, and assessment in art education.’ Art Education, 59(6), 33–39.
Prus, J., Matton, L., Thomas, A., & Robinson-Zañartu, C. (1996). Using portfolios to assess the performance of school psychology graduate students. Paper presented at the meeting of the National Association of School Psychologists, Atlanta, Georgia.
Salvia, J., & Ysseldyke, J. E. (2004). Assessment in special and inclusive education (9th ed.). New York: Houghton Mifflin.
Terwilliger, J. (1997). Semantics, psychometrics and assessment reform: A close look at ‘authentic’ assessments. Educational Researcher, 26(8), 24–27.
Wiggins, G. (1993). Assessment: Authenticity, context and validity. Phi Delta Kappan, 75(3), 200–214.
This lesson teaches and reinforces the “economic way of thinking” along with the personal finance terms: spend, save, invest and donate--in the context of making economic decisions or choices with money. The concepts of philanthropy and contributing to the common good are integrated into the lesson and unit. Incentives relating to why people spend, save, invest and donate will also be explored.
Teacher Note: This unit is designed for use with Money Smart Choices: Financial Literacy and Philanthropy, http://www.learningtogive.org/moneysmartchoices/, an interactive web site created through a partnership between the National Endowment for Financial Education® or NEFE® and The League: Curriculum by Learning to Give. The unit can be used effectively even if Internet access is not available to students. All of the content of the web site is provided in the lesson’s Instructional Procedures or Attachments. Adapt this lesson, and all lessons in this unit, as needed for student level. Specific activities can be omitted or enhanced to meet learner needs.
Three or Four 45-50 Minute Class Periods
The learner will
- define philanthropy (philanthropist) as giving time, talent, or treasure, and taking action for the common good.
- describe the economic and financial concepts of: resources, scarcity, choice, benefits, costs, opportunity cost, interest, interest rate, principal, simple interest, compound interest, compounding.
- define the vocabulary words spend, save, invest, and donate.
- discuss motivations for giving, and options for donating.
- describe choices one can make with money.
Day One: Four Things You Can Do With Money
Display a $20 bill and ask the students what economic choices they would make with $20 if it were given to them. Ask students if they ever receive gifts of money for holidays or special occasions, or if they have other sources of income, such as an allowance or part-time jobs. Discuss income briefly with students and ask what they usually do with their own money.
Using an overhead transparency of the Economics and Money Visual Organizer (Attachment One), access student prior knowledge about economics. Tell the class that today, and in succeeding lessons, they will be learning about the four things people can do with money with the goal of becoming better money managers themselves.
Teacher note: Prior to class, review Personal Finance Definitions: Save, Invest, Spend, Donate (Attachment Two). Either print the definitions for students or make an overhead transparency for use as you discuss the four terms.
- Review the definitions using personal or student examples whenever possible.
- On four separate pieces of chart paper, list the words: Spend, Save, Invest, and Donate as headings. (Save these charts for later use)
- Group Activity (approximately 10 minutes): Arrange the class into four groups to read, research, and take notes regarding key points for each term. Hand out Attachment Three: Creating A Spending Plan to students. Ask students to read and highlight important information from the definitions and from Attachment Three for transfer to their group’s chart paper, using only the upper two-thirds of each chart paper. On the bottom third of each chart paper, leave room to make a simple T-chart showing Benefits on one side and Opportunity Cost on the other side.
- Groups prepare their chart for whole class reporting and viewing. Each group summarizes their findings for the class.
- Lead a brief follow-up class discussion after each group reports, generating new ideas/key points to remember for each word. Add words or short phrases to each group’s chart paper as based on whole group contributions.
Teacher Note: Remind students that opportunity costs are individually determined, depend on the ‘eye of the beholder’, and vary according to individual values, preferences, and perceptions.
- For each word, consider asking the following questions at the appropriate discussion time:
- Why save money? What are some benefits and costs of saving? What is a possible opportunity cost (the next best alternative you give up) of saving? Why should it be the first consideration to “pay yourself first”?
- What does it mean to spend money? Why is balance needed between wants and needs? What are some benefits of spending? Some costs? What is an opportunity cost of spending your income?
- What does it mean to invest money? When does saving become investing? What are some benefits and costs of investing? What is an opportunity cost of investing? (Money may not be readily available for use).
- What does it mean to donate money? What are some benefits and costs of donating? What is a possible opportunity cost for donating to a charity/nonprofit? Why is giving important?
- After the discussion, display the four charts in the following manner in the front of the classroom:
Spend      Save
Donate     Invest
- Use the charts’ visual positions to explain that:
- Donating is a subset of spending, and donating or giving wisely contributes to the common good.
- Investing is a subset of saving. Investing a portion of savings results in higher returns, through compounding of interest.
- (Optional) Hand out and review Attachment Six: Letter to Families.
Day Two: About Donating
Write the definition of common good, “for the benefit of all,” on a display area. Ask students: Who has a responsibility for the common good?
- Display the definition of philanthropy: giving time, talent, or treasure, and taking action for the common good. Challenge the class to pronounce “philanthropy” quickly three times.
- Discuss the idea that people can give time, talent, or treasure for the common good. They can be philanthropic without having to be wealthy in monetary terms.
- Ask what the students’ philanthropic treasure, time and talent might be (money, personal goods of value, time to offer help to someone, talents they might use to help someone in need, etc.)
- Discuss how people of all ages donate time, talent, and treasure to a cause, individuals or nonprofit organizations. Use personal, student, or local examples. If time permits, share one of the many inspiring stories of youth philanthropy (see Bibliographical References).
- Explain to students that they will be reading about philanthropy and will have an opportunity in the next few days to decide if they want to raise funds for a philanthropic cause. Hand out Attachment Four: Understanding Philanthropy and Nonprofits and read together as a class.
- Discuss with the class:
- Who benefits from philanthropy?
- What benefits does the school, neighborhood, community, nation, or world receive?
- Is philanthropy a choice? How valuable is this freedom to be philanthropic to our democracy? (If needed, explain that the “common good” is an important fundamental democratic principle.)
- (Optional) Use the Anne Frank quote from The Common Good section of Attachment Four: Understanding Philanthropy and Nonprofits to enhance the discussion:
“How wonderful it is that nobody needs to wait a single moment before starting to improve the world."
- Brainstorm with students a list of local examples of philanthropy or of charities/nonprofits working for the common good in your community. (Examples could include various school fund drives, local programs for hungry and homeless people, arts events, faith based programs, local parks, environmental groups, etc.)
- Assign Homework: Hand out and review the assignment using Attachment Five: Choosing Your Nonprofit. Writing assignment should be completed before beginning Day Three of this lesson.
Teacher Note: The text on Attachment Five: Choosing Your Nonprofit is available online at www.learningtogive.org/moneysmartchoices/ Researching specific local charities/nonprofits by your local zip code(s) can be done at: www.guidestar.org. Type in the zip code in the Find Nonprofits Search box for a list of nonprofits in your area. To locate further information about the nonprofit’s mission, register free for Guide Star Basic by submitting your e-mail address and a password.
- Ask students to reflect on:
- how what they have learned thus far might impact their attitudes and behavior regarding money.
- whether or not the act of philanthropy can be considered as an “investment.”
- Conclude Day Two by explaining that what you have done today is preparation for making a choice in Day Three of the charity(ies) the class wishes to support in a class fund-raising campaign.
Teacher Note: Prior to Day Three, research an Internet interest calculator (Search by using keyword: interest calculator, or compound interest calculator) to demonstrate to students how small amounts of savings can provide big returns when left alone with interest compounded.
Part One: Saving and Investing (20-35 minutes)
Part Two: Choosing a Nonprofit (10-15 minutes)
Teacher Note: Part One is designed to address the concepts of investing, interest, and computing simple and compound interest. Basic percentage and decimal computations are involved (using mental math, paper-and-pencil calculations, and calculators), so that students may better appreciate the power that compounding interest has for their own investing futures. Assess how much of this content is appropriate for your students and adjust the lesson accordingly. If more time is needed to effectively learn the concepts, the Part Two activities can be postponed to another day.
See Bibliographical References for other resources to teach compounding.
Write the following sentence on a display area and discuss:
“It isn’t what you make, but what you save and invest, that determines your wealth.”
Ask students to suggest what things affect how saved and invested money work for people in the long-term.
Lead the discussion to include these three things:
- the amount you save
- the rate of interest you receive
- the length of time you leave money and interest earnings in an account
- Refer to the “invest” chart paper from Day One and explain that when it comes to money invested, there are only two broad types of investments: you can either “loan it” or you can “own it”.
- When you “loan it”, you let someone use your money for a period of time. Your money grows by receiving additional money payments (interest) from individuals or groups such as banks, companies, and governments, who pay you for the privilege of being able to use the money you loaned (invested) with them. Examples are checking accounts, savings accounts, money market accounts, Certificates of Deposit (CDs), Treasury bills, notes, and bonds, and corporate and municipal bonds. Money payments received over and above what was originally loaned out (invested by you) are called interest. It is like being paid “rent” for the use of your money.
- When you “own it”, you exchange your money for something else, usually something like common stock in a company, a mutual fund, real estate property of some kind, gold, or collectible items such as rare coins, etc. When you “own it”, you are not promised a return on your money. To get your money back, you would have to sell it with the hope of getting more than you paid for it. There is no guarantee here.
- Explain that “interest is interesting” because it can go two ways: it can be a benefit or a cost. As well as being a source of income (a benefit), it might also have to be paid to someone else for the privilege of borrowing (a cost).
- Explain that interest is simply the money payment for being able to use someone else’s money. Interest is either “payments spent for the use of borrowed money or payments received (income) for invested money”, depending on which side of the transaction one is on. When one saves and then invests money, it is the money received. When one borrows, it is the additional money one spends for that privilege.
- Explain to students:
- Principal is the original amount of money set aside to invest such as in a savings account, without including interest earned. The interest rate (expressed as a percentage of the principal) is the price paid for using someone else’s money.
- Simple interest is paid to the depositor when it is earned and is not added to the principal.
- Compound interest is interest earned on savings that includes previously earned interest. Interest earned in any time period is added to the principal. Future interest calculations are made on the higher amount of the original principal plus the interest that was added to it. Over the long term, this is like a snowball that keeps getting bigger, as long as the interest earned is reinvested (a short code sketch after this list illustrates the calculation).
- Show on a display area or use an Internet interest calculator to show appropriate computations of simple interest earned on principal and compounding of interest differences.
- Ask students to calculate various simple interest rate percentages, properly converting percents to decimals, and correctly multiply to obtain the interest amount.
- Ask students what will happen, if interest is added to the principal from the first time period. (Interest earned will be higher because the principal has increased due to interest being added to it from the previous time period.)
- Now calculate a compound interest example through at least three time periods with the same interest rate and the same principal amount, so students can see and follow the compounding process. Students should notice that each time period they earn interest, they earn more than they did the last time.
- Assign a compound interest calculation for student practice over at least three time periods. Check their work to be sure they understand. Then check with calculators to see if they come up with the same answers.
- An online Compounding Calculator, such as the one found at http://www.themint.org/kids/compounding-calculator.html can be used to show students the power of compounding interest stretched over more years. Simply type in the amount saved each year, the interest rate earned, and the number of years invested and click on “Calculate” to see the power of investing and compounded interest.
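For teachers who want a reusable version of the board work above, the following Python sketch contrasts simple and compound interest. The principal, rate, and number of periods are arbitrary illustration values, not figures from the lesson:

```python
principal = 100.00   # original amount invested (hypothetical)
rate = 0.05          # 5% interest per period (hypothetical)
periods = 3

# Simple interest: interest is paid out each period and never added
# to the principal, so every period earns the same amount.
simple_total = principal + principal * rate * periods
print(f"simple interest after {periods} periods: ${simple_total:.2f}")

# Compound interest: each period's interest is added to the principal,
# so the next period's interest is calculated on a larger amount.
balance = principal
for period in range(1, periods + 1):
    interest = balance * rate
    balance += interest
    print(f"period {period}: earned ${interest:.2f}, balance ${balance:.2f}")
```

Students should notice that the simple-interest total grows by the same amount each period, while the compounded balance earns a little more every period, matching the snowball image used earlier.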
Part Two of Day Three: Choosing a Nonprofit (10-15 minutes)
- Ask the students to reflect on some reasons why people give time, talent, or treasure. Challenge the students to raise money to make a class donation. Review the benefits of donating from the chart. Talk about the costs of donating and remind students that every choice they make has an opportunity cost.
- Review the economic reality of scarcity, “the condition of not being able to have all of the goods and services that you want.” Tell the students that many nonprofits exist in response to scarcity.
- Remind students that because of scarcity, everyone is forced to make choices. Emphasize that every choice, even the choice to choose a nonprofit, has an opportunity cost.
- Refer to the Homework from the previous lesson. Ask the class to determine their top three philanthropic causes by taking a class opinion poll. Read the list of categories from Attachment Five: Choosing Your Nonprofit and ask students to raise their hands for their top three causes. Count the number of votes for each cause to determine the class’ priorities. (Add others as appropriate from student suggestions.)
- If time permits, brainstorm, using the original “donate” chart, local people, organizations, and/or charitable groups who could make good use of donations for those three charitable causes.
- Tell students they will decide on another day about a specific nonprofit charity (or possibly more than one) to benefit from their class fund-raising efforts by using an economic decision making model.
- Show the students the jar for collecting money. Talk with the students about where the money might come from. They are not to solicit money—it should come from them, families, peers, or from an organized class fund-raising activity. Students could donate spare change, offer to do jobs to earn money to donate, work with parents to come up with ideas, or conduct a fund-raiser in the school or community.
- Place a small financial contribution of your own in the jar so all can see. Tell students you are confident they will make a good decision in choosing a specific cause for the donation.
- If time permits, debrief Day Three by posing questions such as:
- Why do most young adults not save? (Most young adults do not save because they perceive that the opportunity cost -the next best alternative they give up- is too great when they choose to save). However, when they begin to understand the power of compound interest over time, they may decide that the benefits of saving are, in fact, greater than the benefits of spending the money immediately.
- How important is knowledge of basic math when it comes to saving and investing? (Knowing some basic mathematics computation skills, both paper and pencil and with calculators, makes it possible for anyone to make better economic and personal financial decisions. This can greatly impact how much money they have to spend, donate, save, or invest.)
- What do saving and math knowledge have to do with philanthropy?
(This knowledge allows a person to build wealth, to be more able to give something back to the community and society.)
Ask students to identify real life examples of scarcity and opportunity cost.
Ask students to reflect in writing on why people give, or why they personally think it is important to give or donate.
Interactive Parent/Student Homework:
Optional - Send home a note introducing the unit and explaining that the class will be raising money for a donation to a charitable cause. (See Attachment Six: Letter to Families.)
A short writing assignment, based on reading Attachment Five: Choosing Your Nonprofit, is included at the end of Day Two. The assignment is due before beginning Day Three activities.
Collect brochures, pamphlets, and flyers from local savings institutions and compare interest rates and features of various savings/investment plans. Consider inviting an appropriate community-based financial institution representative to speak to your class about saving and investing, how compound interest affects savings/investment plans, and the benefits of early and regular saving.
Lesson Developed By: John Noling
Spend: to pay out, trade money for goods or services, use money freely. Spending includes paying taxes, donating to charity, and spending on other wants and needs.
Save: to put by as a store or reserve (such as part of an allowance each week); to accumulate or put aside for a particular purpose or occasion in the short term (less than a year), for example, to purchase a portable listening device or to pay for a vacation trip. This is often done by placing money to be saved in a low-risk, low-return savings account.
Invest: a subset or form of saving where money is put someplace with the hope and intention of making a financial gain in the longer term. Money invested is money you can "put away" and not miss on a day-to-day basis. Saving becomes investing when the resource (money, property, human labor and talent, gifts of nature) is directed to a place where it will increase in value. Investing may also refer to people or businesses spending money to buy capital resources (factories, equipment, etc.) or human resources (people skills and abilities) with the idea of improving productivity and financial wealth or profit.
When it comes to money invested, you can either “loan it” or “own it”. If you “loan it” to others, you receive interest (additional money payments beyond repayment of the amount loaned out). Examples include checking, savings, and money market accounts, Certificates of Deposit (CDs), U.S. Savings Bonds, Treasury bills, notes, and bonds, corporate and municipal bonds, etc.
Investments that are “owned” include common stocks in companies, stock mutual funds, real estate, commodities such as corn and pork bellies, and collectibles such as gold, rare coins, etc. When you “own it”, you exchange your money for something else with no promise that you will get your money back. To get your money back and more, you must hope to sell it for more than you paid for it.
Donate: to voluntarily make a free gift or grant; to contribute or give, especially to a charity or charitable cause (money for a soup kitchen or food pantry) or toward a public-service institution (someone donates land for a park). Donate is a subset of spend.
Once you know your income, the first category to consider is savings. “What?” you might ask. “Isn’t a spending plan meant for planning what you spend?” It’s true that plans help you manage your spending. But they also help manage what you save.
Wise financial experts often say, “It’s not what you earn that matters, it’s what you keep.” All of the income in the world won’t help you if you spend every dime. If you make a million and spend a million, what are you left with? Zero. In a way, you’re no better off than the person who makes $100 and spends $100. You might have bought a lot of stuff in the process, but you’re still left with nothing.
Saving is important because it helps you care for yourself over the long term. If you’re lucky, you’ll live many years. There may be times when you’ll need extra money for an expense you didn’t expect. There may be times when you’ll need money for a special purchase. If you’re a good saver, you’ll have that money when you need it.
Plus, having savings helps you to feel secure. When you’ve got money saved, that’s just the point—you have it. It’s best to develop a balance between spending and saving. Both are important money management skills.
The easiest way to save money is to follow one simple principle: pay yourself first. Every time you receive any income, make a point to save some. A good rule of thumb is to save 10 percent of all you earn. Some people even save 20 percent or 50 percent!
When you create your budget, make “savings” your first expense category. Put your savings away before you spend any of your income. Saving is like writing a paycheck to yourself. It shows that your goals are important.
After savings, the next step in creating a spending plan is to consider your spending habits. Once you’ve set aside money to save, you’ll have a certain amount of income left. This is the money you can spend.
When planning your spending, you must make choices. You have a certain amount of money to work with. How will you divide it up? When making spending choices, it helps to know the difference between needs and wants. You should make sure your spending covers a balance of both.
Needs vs. Wants
“Needs” are items that you truly must have. These are things you can’t get along without. For instance, we all need a place to live and food to eat. We need water and clothing. We may not want to spend our money on these things, but they’re important. We have to make sure they’re covered first.
“Wants” are items that you would like to have. You could do without these items if you had to. For instance, you might want a new shirt or a certain CD. You might want to go to the movies or to buy a cool video game. You don’t really need these things like you need food and shelter. You just feel that they’re important because they appeal to you in some way.
When you plan your spending, you’re in charge. You get to decide how you will use your money to support yourself. If you can cover a balance of needs and wants, you’ll have the best results.
Sure, it’s exciting to spend your money on wants. Everyone loves to have fun new things. It’s important to give yourself some things you want, so that you enjoy your money. If you only spent your money on things you needed, that would be pretty dull. But you must make sure to cover your needs as well. If you don’t plan for your needs, you might miss something important.
When creating your spending plan, try for a balance of needs and wants. Consider your needs first. Set aside money for the important things, like school supplies or gas for your car. Then be sure to plan for some wants. If you’re like most people, you probably have a lot of wants, so you may not be able to buy all of them. That’s perfectly OK. Focus on the most important ones instead.
Along with covering your needs and wants, you might want to use some of your money for investing. Chances are, you may not have a lot of money to put toward investing right now. But you can start small and watch your money grow.
Investing is the process of earning money with your money. Investing wisely is the key to a secure future. Through investing, you can grow your money so eventually you can retire. This means you have enough money saved so that you no longer have to work. If you invest well, you can even retire early!
As a young person, you’ve got a huge advantage over adults. That advantage is time. Time is your best friend when you’re saving money. That’s especially true when you’re investing for the future. Here’s how it works:
Even a small amount of money can make a difference if you start early. The longer you invest, the more your money will grow.
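For illustration only (the amounts and the steady 5 percent annual return below are assumptions for the example, not a rate any investment can promise): using the future-value formula FV = P x ((1 + r)^n - 1) / r, saving $100 every year at 5 percent compounded annually grows to about $1,258 after 10 years but to about $12,080 after 40 years, even though you contribute only $1,000 in the first case and $4,000 in the second. The extra decades of compounding, not extra deposits, do most of the work.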
Finally, an important part of your spending plan involves the money that you choose to share with others. This type of giving is called donating. In the next section, we’ll talk about why giving is important. For now, we’ll look at the different ways to donate and how a spending plan can help you do so.
Your spending plan will help you know how much money you have to help support important causes. Some people choose to give 10 percent of their income away. Others give less than 10 percent, or more. The amount you give is up to you. What’s important is that you plan your giving wisely.
In your spending plan, you may wish to include a category for donating funds. This category includes money that you will give to organizations and even individuals. It’s a good idea to choose a set amount for giving each month. Make it an amount that you easily can afford. There’s no point to giving if it’s draining for you. Your giving should make you feel empowered.
Volunteering your time
In addition to giving money, you may wish to donate your time to support causes. Time donations aren’t factored into a spending plan. But they’re important to mention because they’re very valuable. Most organizations can use help in the form of financial support. Many also can use volunteer time.
When you volunteer, you give of your time to help a group or individual. You don’t charge any money for your services—you simply donate your time. Volunteering can go a long way to help many organizations. Plus, it can be especially rewarding. When you donate money, you may not see how your gifts are put to use. When you volunteer, you see your work in action! You get to experience how your time donation helps.
First, let’s take a look at why donating is important. Donating helps support vital organizations in our communities. These are called nonprofits.
Nonprofits are formed to achieve certain purposes. They contribute to and help support the common good. Nonprofits can serve whole communities. They also can serve specific groups. Usually nonprofits work toward a cause. They are established to help people, and communities, better themselves.
Nonprofits differ from businesses in one key way. Businesses are formed to earn money, known as profit. After a business pays its bills, it keeps any leftover income. This extra income, or profit, is usually given to the company’s owners or investors. The profit represents a reward for succeeding in business.
Unlike businesses, nonprofits do not give out profits. If they earn extra income, they use these funds to run their organization. Nonprofits also usually do not pay taxes. Most have what is known as tax-exempt status. Instead of paying taxes, they can devote their resources to helping the community.
Nonprofits meet many community needs that businesses and government do not. These needs range from education and health care to crime prevention. Nonprofit religious organizations also provide important functions to the community. Most of us benefit from the work of nonprofits throughout our lives. It is important to support these programs to promote the common good.
The Common Good
“How wonderful it is that nobody needs to wait a single moment before starting to improve the world.” —Anne Frank, German-Jewish teenager (1929-1945)
The “common good” is defined as conditions that benefit all people in society. These conditions benefit everyone equally. One example of such a condition is world peace. Another example of a common good is a health care system that all people can afford.
Though the common good benefits everyone, it may not happen automatically. People must cooperate to create the common good. When a common good is maintained, its benefits are enjoyed by the entire society. Reducing pollution, for example, enables all people to live in a healthier environment.
Most nonprofits depend on individual giving. Nonprofits may make some money through their programs, but they often need donations to survive. Businesses donate to nonprofits, and the government may give them money as well. But the majority of nonprofit donations are from individuals. The success of nonprofits depends on the generosity of people just like you.
In 2003, financial donations to organizations exceeded $240 billion. Most people would think that businesses or foundations gave the majority of that amount. However, the opposite is actually true. Individual donations made up 74.5 percent of it, or more than $179 billion. Source: Giving USA 2004.
The act of giving to charitable causes is known as philanthropy. A philanthropist is a person who donates time, talent and treasure and takes action to support the common good. Perhaps the greatest benefit of philanthropy is that it creates a positive impact. It can bring about very important changes. These changes create positive life experiences for others.
One way in which philanthropy helps others is through advocacy. To advocate is to speak up for something. Many nonprofits help society by fighting for important causes. A nonprofit may advocate for justice, for example. Through philanthropy, youth have the power to promote many causes, from equality to world peace. Philanthropy is a personal way to make the world a better place.
Throughout our history, Americans have benefited from the generosity of many individuals. These individuals were pioneers in donating their time and money:
The contributions of these individuals continue to affect us today. Many people now volunteer at libraries and fire departments. The Red Cross is an international organization. What started as a donation turned into a commitment. The rest, as they say, is history.
One in four Americans is under the age of 18. That amounts to about 70 million youth in our country. If you are part of this group, you may not be able to vote yet or even drive. Still, you do have power. You particularly have the power to make a difference in your community.
As a youth, how you spend your time matters. About 13 million U.S. teens volunteer three hours per week, on average. That adds up to over 2 billion hours per year! Youth volunteers are making a difference in their communities. Working together, they tackle problems such as pollution, poverty, and more.
Animal Rights Organization
|Mission||Our mission is to provide a natural sanctuary for rescued wild animals. We also educate the public regarding animal rights.|
|Program||We operate a wildlife sanctuary that serves as a permanent home for rescued animals. The sanctuary is open to the public. We also provide educational programs for groups that visit the sanctuary.|
|Type of Support||We seek financial donations from individuals. These donors are individuals who participate in our membership and animal adoption programs. We also welcome volunteers who help in our gift shop and admissions areas.|
Botanical Garden Organization
|Mission||Our mission is to provide the community with opportunities to connect with plants. We also educate the community about native plants and water-saving techniques.|
|Program||We maintain a 23-acre garden with many varieties of plants for the community to enjoy. We offer a wide variety of classes on specialty gardening. We also maintain an extensive library of reference materials.|
|Type of Support||We seek financial donations from individuals who become members of the organization. We also seek youth and adult volunteers to help with fundraising events, such as the annual plant sale and used book sale.|
Food Bank Organization
|Mission||Our mission is to provide food supplies to more than 800 hunger relief programs. Last year, we distributed 20.5 million pounds of food. This was enough to provide 43,000 meals each day to needy children, seniors and families.|
|Program||We serve as a central supplier to hunger-relief agencies. We have special programs for meeting the needs of children at schools. We also collect food supplies from hotels, restaurants, and supermarkets.|
|Type of Support||Donations of money are always welcome to support our program costs. We also seek volunteers (must be age 14 or over) to sort and package food. Volunteers help fill agency food orders, work in the office, or help in our break room.|
Community Music Organization
|Mission||Our mission is to connect the community with music through concerts and education.|
|Program||We provide concerts 50 weeks of every year. We offer a music school for children and adults. We also offer music outreach programs for schools.|
|Type of Support||Financial donations are welcome in the form of memberships. Volunteers are needed to help with concerts, special fundraising events, and school registration. Volunteers must be 14 or older.|
Human Rights Organization
|Mission||Our mission is to protect human rights in countries throughout the world. We focus on promoting justice and legal reform.|
|Program||Our organization runs three main programs. First, we develop publicity to raise awareness about human rights violations. Second, we organize campaigns to help free individuals who have been unfairly imprisoned. Third, we conduct research and publish research papers on human rights.|
Environmental Protection Organization
|Mission||Our mission is to educate youth about environmental issues. We focus on increasing youth efforts to protect the environment.|
|Program||We operate educational programs in schools on the subject of how to protect the environment. These programs teach students how to preserve natural resources.|
|Type of Support||We seek individual donations, which provide 75 percent of our revenue. We also welcome student volunteers on school projects that benefit the environment.|
Public Library Organization
|Mission||Our mission is to help the people in our community achieve their full potential. We provide access to printed material, electronic resources, and librarian assistance.|
|Program||We offer a lending library with tens of thousands of items. We offer reading programs for children and adults. We offer services for persons with disabilities. We also offer computer access and meeting rooms for the community.|
|Type of Support||Financial donations are welcome in the form of memberships. Volunteers are needed to help with special fundraising events, such as the used book sale. Volunteers must be 14 or older.|
Homeless Shelter Organization
|Mission||Our mission is to provide temporary emergency housing and services to those in need in our community.|
|Program||We offer a central 300-bed shelter in an inner-city location. Residents receive a bed, food, and various services. These include medical care, help with employment, and educational opportunities.|
|Type of Support||Financial donations to support our programs are always welcome. We also gladly accept non-perishable food and clothing donations.|
Cancer Research Organization
|Mission||Our mission is to eliminate cancer as a major health problem. We focus on preventing cancer, saving lives, and diminishing suffering from cancer.|
|Program||We offer research, education, advocacy, and service programs related to cancer.|
|Type of Support||Financial donations to support our programs are always welcome. Volunteers are needed at special events to raise cancer awareness. We also seek volunteers to work in our office locations.|
Youth Mentoring Program
|Mission||Our mission is to help children through mentoring relationships with a caring adult.|
|Program||We match children ages 6 to 18 with mentors. These matches meet to share activities for a few hours each month.|
|Type of Support||Financial donations to support our programs are always welcome. We also seek volunteer adult mentors to participate in our programs.|
Dear Family Members,
Our class has started a financial literacy and philanthropy unit called “Money Smart Teens.” Students will be asked to think about choices people make with their money, including spending, saving, investing, and donating.
To be successful in the long term, your son or daughter will probably have to know more about money than we or our parents ever had to know. Knowing how to manage money wisely will give your student freedom and choices in life that they would not otherwise have. Knowledge may help prevent somebody from taking advantage of them. Knowledge might even make them wealthy, or at least financially independent!
We will learn the differences between spending, saving, investing, and donating. We will focus on saving and investing, to examine the importance of saving early and regularly. The concept of philanthropy (voluntarily giving or sharing time, talents or treasure for the common good of everyone) will be introduced and practiced by our class. We will learn about the nonprofit charities that make life better for all of us in our community.
One of our projects involves collecting small change to donate to a special charity/nonprofit chosen by the students. The money that our class gathers until the date of _____________ will be collected in one large classroom bank. Students will not be directly soliciting money for this project, but may contribute personally, or as a family, if they wish.
Students will be asked to recommend a charity/nonprofit cause to benefit from our class donations. Then we will make our decision as a class, using an economic decision making model. Please discuss any nonprofit charities that are important to your family with your son or daughter.
We will present our donation to the chosen charity in a classroom presentation at a later date and you will be notified of this special day in case you can join us. If you would like to contribute any of your time, talent, or treasure to our efforts at any time in the coming days, we welcome your assistance!
A little financial knowledge will go a long way toward helping your son or daughter be an informed and responsible consumer, producer, and citizen. Thanks for any assistance or advice you may offer. Feel free to contact me with any questions or concerns.
E-mail Address if appropriate
All rights reserved. Permission is granted to freely use this information for nonprofit (noncommercial), educational purposes only. Copyright must be acknowledged on all copies.
G protein-coupled receptors (GPCRs), also known as seven-transmembrane domain receptors, 7TM receptors, heptahelical receptors, serpentine receptors, and G protein-linked receptors (GPLRs), constitute a large protein family of receptors that sense molecules outside the cell and activate internal signal transduction pathways and, ultimately, cellular responses. They are called transmembrane receptors because they pass through the cell membrane, and seven-transmembrane receptors because they pass through it seven times.
G protein-coupled receptors are found only in eukaryotes, including yeast, choanoflagellates, and animals. The ligands that bind and activate these receptors include light-sensitive compounds, odors, pheromones, hormones, and neurotransmitters, and vary in size from small molecules to peptides to large proteins. G protein-coupled receptors are involved in many diseases and are the target of approximately 40% of all modern medicinal drugs. The 2012 Nobel Prize in Chemistry was awarded to Brian Kobilka and Robert Lefkowitz for work that was "crucial for understanding how G-protein–coupled receptors function."
There are two principal signal transduction pathways involving the G protein-coupled receptors: the cAMP signal pathway and the phosphatidylinositol signal pathway. When a ligand binds to the GPCR it causes a conformational change in the GPCR, which allows it to act as a guanine nucleotide exchange factor (GEF). The GPCR can then activate an associated G-protein by exchanging its bound GDP for a GTP. The G-protein's α subunit, together with the bound GTP, can then dissociate from the β and γ subunits to further affect intracellular signaling proteins or target functional proteins directly depending on the α subunit type (Gαs, Gαi/o, Gαq/11, Gα12/13).
The exact size of the GPCR superfamily is unknown but nearly 800 different human genes (or ≈4% of the entire protein-coding genome) have been predicted from genome sequence analysis. Although numerous classification schemes have been proposed, the superfamily is classically divided into three main classes (A, B, and C) with no detectable shared sequence homology between classes. The largest class by far is class A, which accounts for nearly 85% of the GPCR genes. Of class A GPCRs, over half of these are predicted to encode olfactory receptors while the remaining receptors are liganded by known endogenous compounds or are classified as orphan receptors. Despite the lack of sequence homology between classes, all GPCRs share a common structure and mechanism of signal transduction.
- Class A (or 1) (Rhodopsin-like)
- Class B (or 2) (Secretin receptor family)
- Class C (or 3) (Metabotropic glutamate/pheromone)
- Class D (or 4) (Fungal mating pheromone receptors)
- Class E (or 5) (Cyclic AMP receptors)
- Class F (or 6) (Frizzled/Smoothened)
The very large rhodopsin A group has been further subdivided into 19 subgroups (A1-A19). More recently, an alternative classification system called GRAFS (Glutamate, Rhodopsin, Adhesion, Frizzled/Taste2, Secretin) has been proposed.
The human genome encodes thousands of G protein-coupled receptors, about 350 of which detect hormones, growth factors, and other endogenous ligands. Approximately 150 of the GPCRs found in the human genome have unknown functions.
Several web servers and bioinformatics prediction methods have been used to predict the classification of GPCRs from their amino acid sequence alone, by means of the pseudo amino acid composition approach.
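As a rough illustration of the kind of sequence-derived features such predictors consume, the Python sketch below computes the plain amino acid composition of a protein sequence. This is a deliberate simplification (full pseudo amino acid composition adds sequence-order correlation terms, omitted here), and the sequence shown is an arbitrary toy string, not a real GPCR:

```python
# Plain amino acid composition: the frequency of each of the 20 standard
# amino acids in a sequence. Classifiers built on pseudo amino acid
# composition extend this feature vector with sequence-order terms.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def composition(sequence: str) -> dict:
    sequence = sequence.upper()
    return {aa: sequence.count(aa) / len(sequence) for aa in AMINO_ACIDS}

# Arbitrary toy sequence, for demonstration only.
features = composition("MAGLTSLVWACDEKRPQH")
for aa, freq in sorted(features.items()):
    if freq > 0:
        print(f"{aa}: {freq:.3f}")
```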
Physiological roles
GPCRs are involved in a wide variety of physiological processes. Some examples of their physiological roles include:
- The visual sense: the opsins use a photoisomerization reaction to translate electromagnetic radiation into cellular signals. Rhodopsin, for example, uses the conversion of 11-cis-retinal to all-trans-retinal for this purpose
- The sense of smell: receptors of the olfactory epithelium bind odorants (olfactory receptors) and pheromones (vomeronasal receptors)
- Behavioral and mood regulation: receptors in the mammalian brain bind several different neurotransmitters, including serotonin, dopamine, GABA, and glutamate
- Regulation of immune system activity and inflammation: chemokine receptors bind ligands that mediate intercellular communication between cells of the immune system; receptors such as histamine receptors bind inflammatory mediators and engage target cell types in the inflammatory response
- Autonomic nervous system transmission: both the sympathetic and parasympathetic nervous systems are regulated by GPCR pathways, responsible for control of many automatic functions of the body such as blood pressure, heart rate, and digestive processes
- Cell density sensing: a novel role in which GPCRs help cells detect the density of the surrounding cell population.
- Homeostasis modulation (e.g., water balance).
- Involved in tumor growth and metastasis.
Receptor structure
GPCRs are integral membrane proteins that possess seven membrane-spanning domains or transmembrane helices. The extracellular parts of the receptor can be glycosylated. These extracellular loops also contain two highly conserved cysteine residues that form disulfide bonds to stabilize the receptor structure. Some seven-transmembrane helix proteins that resemble GPCRs, such as channelrhodopsin, may contain ion channels within the same protein.
Similar to GPCRs, the adiponectin receptors 1 and 2 (ADIPOR1 and ADIPOR2) also possess seven transmembrane domains. However, ADIPOR1 and ADIPOR2 are oriented oppositely to GPCRs in the membrane (i.e., cytoplasmic N-terminus, extracellular C-terminus) and do not associate with G proteins.
Early structural models for GPCRs were based on their weak analogy to bacteriorhodopsin, for which a structure had been determined by both electron diffraction (PDB 2BRD) and X-ray crystallography. In 2000, the first crystal structure of a mammalian GPCR, that of bovine rhodopsin, was solved. While the main feature, the seven transmembrane helices, is conserved, the relative orientation of the helices differs significantly from that of bacteriorhodopsin. In 2007, the first structure of a human GPCR was solved, followed immediately by a higher-resolution structure of the same receptor. This human β2-adrenergic receptor structure proved highly similar to bovine rhodopsin in terms of the relative orientation of the seven transmembrane helices. However, the conformation of the second extracellular loop is entirely different between the two structures. Since this loop constitutes the "lid" that covers the top of the ligand-binding site, this conformational difference highlights the difficulties in constructing homology models of other GPCRs based only on the rhodopsin structure.
The structures of activated and/or agonist-bound GPCRs have also been determined. These structures indicate how ligand binding at the extracellular side of a receptor leads to conformational changes in the cytoplasmic side of the receptor. The biggest change is an outward movement of the cytoplasmic part of the 5th and 6th transmembrane helices (TM5 and TM6). The structure of the activated β2-adrenergic receptor in complex with Gs confirmed that the Gα subunit binds to a cavity created by this movement.
Structure-function relationships
Structurally, GPCRs are characterized by an extracellular N-terminus, followed by seven transmembrane (7-TM) α-helices (TM-1 to TM-7) connected by three intracellular (IL-1 to IL-3) and three extracellular loops (EL-1 to EL-3), and finally an intracellular C-terminus. The GPCR arranges itself into a tertiary structure resembling a barrel, with the seven transmembrane helices forming a cavity within the plasma membrane that serves as a ligand-binding domain and is often covered by EL-2. Ligands may also bind elsewhere, however, as is the case for bulkier ligands (e.g., proteins or large peptides), which instead interact with the extracellular loops, or, as illustrated by the class C metabotropic glutamate receptors (mGluRs), with the N-terminal tail. The class C GPCRs are distinguished by their large N-terminal tail, which also contains a ligand-binding domain. Upon glutamate binding to an mGluR, the N-terminal tail undergoes a conformational change that leads to its interaction with residues of the extracellular loops and TM domains. The eventual effect of all three types of agonist-induced activation is a change in the relative orientations of the TM helices (likened to a twisting motion), leading to a wider intracellular surface and "revelation" of residues of the intracellular helices and TM domains crucial to signal transduction function (i.e., G-protein coupling). Inverse agonists and antagonists may also bind to a number of different sites, but the eventual effect must be prevention of this TM helix reorientation.
The structure of the N- and C-terminal tails of GPCRs may also serve important functions beyond ligand-binding. In particular, the C-terminus often contains serine (Ser) or threonine (Thr) residues that, when phosphorylated, increase the affinity of the intracellular surface for the binding of scaffolding proteins called β-arrestins (β-arr). Once bound, β-arrestins both sterically prevent G-protein coupling and may recruit other proteins leading to the creation of signaling complexes involved in extracellular-signal regulated kinase (ERK) pathway activation or receptor endocytosis (internalization). As the phosphorylation of these Ser and Thr residues often occurs as a result of GPCR activation, the β-arr-mediated G-protein-decoupling and internalization of GPCRs are important mechanisms of desensitization.
A final common structural theme among GPCRs is palmitoylation of one or more sites of the C-terminal tail or the intracellular loops. Palmitoylation is the covalent modification of cysteine (Cys) residues via addition of hydrophobic acyl groups, and has the effect of targeting the receptor to cholesterol- and sphingolipid-rich microdomains of the plasma membrane called lipid rafts. As many of the downstream transducer and effector molecules of GPCRs (including those involved in negative feedback pathways) are also targeted to lipid rafts, this has the effect of facilitating rapid receptor signaling.
GPCRs respond to extracellular signals mediated by a huge diversity of agonists, ranging from proteins to biogenic amines to protons, but all transduce this signal via a mechanism of G-protein coupling. This is made possible by virtue of a guanine-nucleotide exchange factor (GEF) domain primarily formed by a combination of IL-2 and IL-3 along with adjacent residues of the associated TM helices.
The G protein-coupled receptor is activated by an external signal in the form of a ligand or other signal mediator. This creates a conformational change in the receptor, causing activation of a G protein. Further effect depends on the type of G protein.
Ligand binding
GPCRs include receptors for sensory signal mediators (e.g., light and olfactory stimulatory molecules); adenosine, bombesin, bradykinin, endothelin, γ-aminobutyric acid (GABA), hepatocyte growth factor (HGF), melanocortins, neuropeptide Y, opioid peptides, opsins, somatostatin, GH, tachykinins, members of the vasoactive intestinal peptide family, and vasopressin; biogenic amines (e.g., dopamine, epinephrine, norepinephrine, histamine, glutamate (metabotropic effect), glucagon, acetylcholine (muscarinic effect), and serotonin); chemokines; lipid mediators of inflammation (e.g., prostaglandins, prostanoids, platelet-activating factor, and leukotrienes); and peptide hormones (e.g., calcitonin, C5a anaphylatoxin, follicle-stimulating hormone (FSH), gonadotropin-releasing hormone (GnRH), neurokinin, thyrotropin-releasing hormone (TRH), cannabinoids, and oxytocin). GPCRs that act as receptors for stimuli that have not yet been identified are known as orphan receptors.
Whereas in other types of receptors that have been studied, ligands bind externally to the membrane, the ligands of GPCRs typically bind within the transmembrane domain. However, protease-activated receptors are instead activated by cleavage of part of their extracellular domain.
Conformational change
The transduction of the signal through the membrane by the receptor is not completely understood. It is known that the inactive G protein is bound to the receptor in its inactive state. Once the ligand is recognized, the receptor shifts conformation and thus mechanically activates the G protein, which detaches from the receptor. The receptor can now either activate another G protein or switch back to its inactive state. This is an overly simplistic explanation, but it suffices to convey the overall set of events.
It is believed that a receptor molecule exists in a conformational equilibrium between active and inactive biophysical states. The binding of ligands to the receptor may shift the equilibrium toward the active receptor states. Three types of ligands exist: agonists are ligands that shift the equilibrium in favour of active states; inverse agonists are ligands that shift the equilibrium in favour of inactive states; and neutral antagonists are ligands that do not affect the equilibrium. It is not yet known how exactly the active and inactive states differ from each other.
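The two-state picture described above can be made concrete with a small numerical sketch. The following Python snippet is purely illustrative: the function name and all parameter values (basal equilibrium constant, state-specific dissociation constants) are hypothetical choices, not measured constants for any real receptor. It simply shows how an agonist, modeled as a ligand that binds the active state more tightly, shifts the population toward the active state, while an inverse agonist (K_A > K_I) would shift it the other way.

```python
# Minimal two-state receptor model (illustrative sketch only).
# The receptor equilibrates between inactive (R) and active (R*) states;
# a ligand L binds each state with a different dissociation constant.
# All numeric values below are hypothetical.

def fraction_active(L, K_eq=0.01, K_A=1e-9, K_I=1e-6):
    """Fraction of receptors in the active state at ligand concentration L (M).

    K_eq : basal [R*]/[R] equilibrium constant (no ligand)
    K_A  : ligand dissociation constant for the active state (M)
    K_I  : ligand dissociation constant for the inactive state (M)
    Agonist: K_A < K_I; inverse agonist: K_A > K_I; neutral antagonist: K_A == K_I.
    """
    active = K_eq * (1 + L / K_A)   # statistical weight of R* + LR*
    inactive = 1 + L / K_I          # statistical weight of R + LR
    return active / (active + inactive)

if __name__ == "__main__":
    for L in (0.0, 1e-9, 1e-7, 1e-5):
        print(f"[ligand] = {L:.0e} M -> fraction active = {fraction_active(L):.3f}")
```

With these made-up numbers, the basal fraction of active receptor is about 1% and rises toward roughly 90% as the agonist concentration saturates, which is exactly the equilibrium shift the paragraph above describes.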
G-protein activation/deactivation cycle
- See also: G protein
When the receptor is inactive, the GEF domain may be bound to an also-inactive α-subunit of a heterotrimeric G-protein. These "G-proteins" are trimers of α, β, and γ subunits (known as Gα, Gβ, and Gγ, respectively) that are rendered inactive when reversibly bound to guanosine diphosphate (GDP) (or, alternatively, no guanine nucleotide) but active when bound to guanosine triphosphate (GTP). Upon receptor activation, the GEF domain, in turn, allosterically activates the G-protein by facilitating the exchange of a molecule of GDP for GTP at the G-protein's α-subunit. The cell maintains a roughly 10:1 ratio of cytosolic GTP:GDP, so exchange for GTP is strongly favored. At this point, the subunits of the G-protein dissociate from the receptor, as well as from each other, to yield a Gα-GTP monomer and a tightly interacting Gβγ dimer, which are now free to modulate the activity of other intracellular proteins. The extent to which they may diffuse, however, is limited by the palmitoylation of Gα and by the hydrophobic lipid moiety covalently attached to the C-terminus of Gγ, which anchors the Gβγ dimer to the plasma membrane and, to some extent, to the local lipid raft. (Compare this to the effect of palmitoylation on GPCR localization discussed above.)
Because Gα also has slow GTP→GDP hydrolysis capability, the inactive form of the α-subunit (Gα-GDP) is eventually regenerated, thus allowing reassociation with a Gβγ dimer to form the "resting" G-protein, which can again bind to a GPCR and await activation. The rate of GTP hydrolysis is often accelerated due to the actions of another family of allosteric modulating proteins called Regulators of G-protein Signaling, or RGS proteins, which are a type of GTPase-Activating Protein, or GAP. In fact, many of the primary effector proteins (e.g. adenylate cyclases) that become activated/inactivated upon interaction with Gα-GTP also have GAP activity. Thus, even at this early stage in the process, GPCR-initiated signaling has the capacity for self-termination.
GPCR signaling
If a receptor in an active state encounters a G protein, it may activate it. Some evidence suggests that receptors and G proteins are actually pre-coupled. For example, binding of G proteins to receptors affects the receptor's affinity for ligands. Activated G proteins are bound to GTP.
Further signal transduction depends on the type of G protein. The enzyme adenylate cyclase is an example of a cellular protein that can be regulated by a G protein, in this case the G protein Gs. Adenylate cyclase activity is activated when it binds to a subunit of the activated G protein. Activation of adenylate cyclase ends when the G protein returns to the GDP-bound state.
Adenylate cyclases (of which nine membrane-bound forms and one cytosolic form are known in humans) may also be activated or inhibited in other ways (e.g., Ca2+/calmodulin binding), which can modify the activity of these enzymes in an additive or synergistic fashion along with the G proteins.
The signaling pathways activated through a GPCR are limited by the primary sequence and tertiary structure of the GPCR itself but ultimately determined by the particular conformation stabilized by a particular ligand, as well as by the availability of transducer molecules. Currently, GPCRs are considered to utilize two primary types of transducers: G-proteins and β-arrestins. Because β-arrs have high affinity only for the phosphorylated form of most GPCRs (see below), the majority of signaling is ultimately dependent upon G-protein activation. However, the possibility of such interactions does allow for G-protein-independent signaling to occur.
G-protein-dependent signaling
There are three main G-protein-mediated signaling pathways, mediated by four sub-classes of G-proteins distinguished from each other by sequence homology (Gαs, Gαi/o, Gαq/11, and Gα12/13). Each sub-class of G-protein consists of multiple proteins, each the product of multiple genes and/or splice variations that may imbue them with differences ranging from subtle to distinct with regard to signaling properties, but in general they appear reasonably grouped into four classes. Because the signal-transducing properties of the various possible βγ combinations do not appear to differ radically from one another, these classes are defined according to the isoform of their α-subunit.
While most GPCRs are capable of activating more than one Gα-subtype, they also show a preference for one subtype over another. When the subtype activated depends on the ligand that is bound to the GPCR, this is called functional selectivity (also known as agonist-directed trafficking, or conformation-specific agonism). However, the binding of any single particular agonist may also initiate activation of multiple different G-proteins, as it may be capable of stabilizing more than one conformation of the GPCR's GEF domain, even over the course of a single interaction. Additionally, a conformation that preferentially activates one isoform of Gα may activate another if the preferred isoform is less available. Furthermore, feedback pathways may result in receptor modifications (e.g., phosphorylation) that alter the G-protein preference. Regardless of these various nuances, the GPCR's preferred coupling partner is usually defined according to the G-protein most obviously activated by the endogenous ligand under most physiological and/or experimental conditions.
Gα signaling
- The effector of both the Gαs and Gαi/o pathways is the cyclic adenosine monophosphate (cAMP)-generating enzyme adenylate cyclase (AC). While there are ten different AC gene products in mammals, each with subtle differences in tissue distribution and/or function, all catalyze the conversion of cytosolic adenosine triphosphate (ATP) to cAMP, and all are directly stimulated by G-proteins of the Gαs class. Conversely, interaction with Gα subunits of the Gαi/o type inhibits AC from generating cAMP. Thus, a GPCR coupled to Gαs counteracts the actions of a GPCR coupled to Gαi/o, and vice versa. The level of cytosolic cAMP may then determine the activity of various ion channels as well as members of the Ser/Thr-specific protein kinase A (PKA) family. Thus cAMP is considered a second messenger and PKA a secondary effector.
- The effector of the Gαq/11 pathway is phospholipase C-β (PLCβ), which catalyzes the cleavage of membrane-bound phosphatidylinositol 4,5-bisphosphate (PIP2) into the second messengers inositol 1,4,5-trisphosphate (IP3) and diacylglycerol (DAG). IP3 acts on IP3 receptors found in the membrane of the endoplasmic reticulum (ER) to elicit Ca2+ release from the ER, while DAG diffuses along the plasma membrane, where it may activate any membrane-localized forms of a second Ser/Thr kinase called protein kinase C (PKC). Since many isoforms of PKC are also activated by increases in intracellular Ca2+, both these pathways can converge on each other to signal through the same secondary effector. Elevated intracellular Ca2+ also binds and allosterically activates proteins called calmodulins, which in turn bind and allosterically activate enzymes such as Ca2+/calmodulin-dependent kinases (CaMKs).
- The effectors of the Gα12/13 pathway are three RhoGEFs (p115-RhoGEF, PDZ-RhoGEF, and LARG), which, when bound to Gα12/13, allosterically activate the cytosolic small GTPase Rho. Once bound to GTP, Rho can then go on to activate various proteins responsible for cytoskeleton regulation, such as Rho-kinase (ROCK). Most GPCRs that couple to Gα12/13 also couple to other sub-classes, often Gαq/11.
Gβγ signaling
The above descriptions ignore the effects of Gβγ signaling, which can also be important, particularly in the case of activated Gαi/o-coupled GPCRs. The primary effectors of Gβγ are various ion channels, such as G-protein-regulated inwardly rectifying K+ channels (GIRKs) and P/Q- and N-type voltage-gated Ca2+ channels, as well as some isoforms of AC and PLC, along with some phosphoinositide 3-kinase (PI3K) isoforms.
G-protein-independent signaling
Although they are classically thought of as working only together, GPCRs may signal through G-protein-independent mechanisms, and heterotrimeric G-proteins may play functional roles independent of GPCRs. GPCRs may signal independently through many proteins already mentioned for their roles in G-protein-dependent signaling, such as β-arrs, GRKs, and Srcs. Additionally, further scaffolding proteins involved in the subcellular localization of GPCRs (e.g., PDZ-domain-containing proteins) may also act as signal transducers. Most often the effector is a member of the MAPK family.
In the late 1990s, evidence began accumulating to suggest that some GPCRs are able to signal without G proteins. The ERK2 mitogen-activated protein kinase, a key signal transduction mediator downstream of receptor activation in many pathways, has been shown to be activated in response to cAMP-mediated receptor activation in the slime mold D. discoideum despite the absence of the associated G protein α- and β-subunits.
In mammalian cells, the much-studied β2-adrenoceptor has been demonstrated to activate the ERK2 pathway after arrestin-mediated uncoupling of G-protein-mediated signaling. Therefore it seems likely that some mechanisms previously believed purely related to receptor desensitisation are actually examples of receptors switching their signaling pathway, rather than simply being switched off.
In kidney cells, the bradykinin receptor B2 has been shown to interact directly with a protein tyrosine phosphatase. The presence of a tyrosine-phosphorylated ITIM (immunoreceptor tyrosine-based inhibitory motif) sequence in the B2 receptor is necessary to mediate this interaction and subsequently the antiproliferative effect of bradykinin.
GPCR-independent signaling by heterotrimeric G-proteins
Although it is a relatively immature area of research, it appears that heterotrimeric G-proteins may also take part in non-GPCR signaling. There is evidence for roles as signal transducers in nearly all other types of receptor-mediated signaling, including integrins, receptor tyrosine kinases (RTKs), cytokine receptors (JAK/STATs), as well as modulation of various other "accessory" proteins such as GEFs, Guanine-nucleotide Dissociation Inhibitors (GDIs) and protein phosphatases. There may even be specific proteins of these classes whose primary function is as part of GPCR-independent pathways, termed Activators of G-protein Signalling (AGS). Both the ubiquity of these interactions and the importance of Gα vs. Gβγ subunits to these processes are still unclear.
Details of cAMP and PIP2 pathways
cAMP signal pathway
- Main article: cAMP-dependent pathway
The cAMP signal transduction pathway contains five main components: the stimulatory hormone receptor (Rs) or inhibitory hormone receptor (Ri); the stimulatory regulative G-protein (Gs) or inhibitory regulative G-protein (Gi); adenylyl cyclase; protein kinase A (PKA); and cAMP phosphodiesterase.
The stimulatory hormone receptor (Rs) is a receptor that can bind stimulatory signal molecules, while the inhibitory hormone receptor (Ri) is a receptor that can bind inhibitory signal molecules.
The stimulatory regulative G-protein (Gs) is linked to the stimulatory hormone receptor (Rs), and upon activation its α subunit can stimulate the activity of an enzyme or other intracellular metabolism. Conversely, the inhibitory regulative G-protein (Gi) is linked to an inhibitory hormone receptor, and upon activation its α subunit can inhibit the activity of an enzyme or other intracellular metabolism.
Adenylyl cyclase is a 12-transmembrane glycoprotein that catalyzes the conversion of ATP to cAMP with the help of the cofactor Mg2+ or Mn2+. The cAMP produced is a second messenger in cellular metabolism and an allosteric activator of protein kinase A.
Protein kinase A is an important enzyme in cell metabolism due to its ability to regulate cell metabolism by phosphorylating specific committed enzymes in metabolic pathways. It can also regulate specific gene expression, cellular secretion, and membrane permeability. The enzyme contains two catalytic subunits and two regulatory subunits. When cAMP is absent, the complex is inactive. When cAMP binds to the regulatory subunits, their conformation is altered, causing the dissociation of the regulatory subunits, which activates protein kinase A and allows further biological effects.
cAMP phosphodiesterase is an enzyme that can degrade cAMP to 5'-AMP, which terminates the signal.
Phosphatidylinositol signal pathway
In the phosphatidylinositol signal pathway, the extracellular signal molecule binds to a Gq-coupled receptor on the cell surface, activating phospholipase C, which is located on the plasma membrane. The lipase hydrolyzes phosphatidylinositol 4,5-bisphosphate (PIP2) into two second messengers: inositol 1,4,5-trisphosphate (IP3) and diacylglycerol (DAG). IP3 binds to receptors in the membrane of the smooth endoplasmic reticulum and mitochondria, helping open Ca2+ channels. DAG helps activate protein kinase C (PKC), which phosphorylates many other proteins, changing their catalytic activities and leading to cellular responses. The effects of Ca2+ are also notable: it cooperates with DAG in activating PKC, and it can activate the CaM kinase pathway, in which the calcium-modulated protein calmodulin (CaM) binds Ca2+, undergoes a change in conformation, and activates CaM kinase II, which has the unique ability to increase its binding affinity for CaM by autophosphorylation, making CaM unavailable for the activation of other enzymes. The kinase then phosphorylates target enzymes, regulating their activities. The two signal pathways are connected by Ca2+-CaM, which is also a regulatory subunit of adenylyl cyclase and phosphodiesterase in the cAMP signal pathway.
Receptor regulation
GPCRs become desensitized when exposed to their ligand for a prolonged period of time. There are two recognized forms of desensitization: 1) homologous desensitization, in which the activated GPCR is downregulated; and 2) heterologous desensitization, wherein the activated GPCR causes downregulation of a different GPCR. The key reaction of this downregulation is the phosphorylation of the intracellular (or cytoplasmic) receptor domain by protein kinases.
Phosphorylation by cAMP-dependent protein kinases
Cyclic AMP-dependent protein kinases (protein kinase A) are activated by the signal chain coming from the G protein (that was activated by the receptor) via adenylate cyclase and cyclic AMP (cAMP). In a feedback mechanism, these activated kinases phosphorylate the receptor: the longer the receptor remains active, the more kinases are activated and the more receptors are phosphorylated. In β2-adrenoceptors, this phosphorylation results in the switching of the coupling from the Gs class of G-protein to the Gi class. cAMP-dependent, PKA-mediated phosphorylation can cause heterologous desensitisation in receptors other than those activated.
Phosphorylation by GRKs
Phosphorylation of the receptor can have two consequences:
- Translocation: The receptor is, along with the part of the membrane it is embedded in, brought to the inside of the cell, where it is dephosphorylated within the acidic vesicular environment and then brought back. This mechanism is used to regulate long-term exposure, for example, to a hormone, by allowing resensitisation to follow desensitisation. Alternatively, the receptor may undergo lysosomal degradation, or remain internalised, where it is thought to participate in the initiation of signalling events, the nature of which depends on the internalised vesicle's subcellular localisation.
- Arrestin linking: The phosphorylated receptor can be linked to arrestin molecules that prevent it from binding (and activating) G proteins, effectively switching it off for a short period of time. This mechanism is used, for example, with rhodopsin in retinal cells to compensate for exposure to bright light. In many cases, arrestin binding to the receptor is a prerequisite for translocation. For example, β-arrestin bound to β2-adrenoreceptors acts as an adaptor for binding with clathrin and with the β-subunit of AP2 (clathrin adaptor molecules); thus, the arrestin here acts as a scaffold assembling the components needed for clathrin-mediated endocytosis of β2-adrenoreceptors.
Mechanisms of GPCR signal termination
As mentioned above, G-proteins may terminate their own activation due to their intrinsic GTP→GDP hydrolysis capability. However, this reaction proceeds at a slow rate (≈0.02 times/sec), so it would take around 50 seconds for any single G-protein to deactivate if other factors did not come into play. Indeed, there are around 30 isoforms of RGS proteins that, when bound to Gα through their GAP domain, accelerate the hydrolysis rate to ≈30 times/sec. This 1500-fold increase in rate allows the cell to respond to external signals with high speed, as well as with spatial resolution, due to the limited amount of second messenger that can be generated and the limited distance a G-protein can diffuse in 0.03 seconds. For the most part, RGS proteins are promiscuous in their ability to deactivate G-proteins, and which RGS is involved in a given signaling pathway seems determined more by the tissue and GPCR involved than by anything else. Additionally, RGS proteins have the further function of increasing the rate of GTP-GDP exchange at GPCRs (i.e., acting as a sort of co-GEF), further contributing to the time resolution of GPCR signaling.
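Because deactivation of a single Gα-GTP is a first-order process, the rates quoted above translate directly into lifetimes, and the 1500-fold figure can be verified in a few lines. The sketch below is only an arithmetic illustration using the approximate rate constants from the text; it is not a kinetic model of any specific Gα isoform or RGS protein.

```python
import math

# First-order deactivation of Galpha-GTP: d[A]/dt = -k * [A].
# Rate constants are the approximate figures quoted in the text.
k_intrinsic = 0.02   # GTP hydrolysis events per second (no RGS bound)
k_with_rgs = 30.0    # events per second with an RGS protein bound

for label, k in (("intrinsic", k_intrinsic), ("with RGS", k_with_rgs)):
    mean_lifetime = 1.0 / k          # seconds; ~50 s vs ~0.03 s
    half_life = math.log(2) / k      # seconds
    print(f"{label:9s}: mean lifetime {mean_lifetime:8.3f} s, "
          f"half-life {half_life:8.3f} s")

print(f"fold acceleration: {k_with_rgs / k_intrinsic:.0f}x")  # -> 1500x
```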
In addition, the GPCR may be desensitized itself. This can occur as:
- a direct result of ligand occupation, wherein the change in conformation allows recruitment of G-protein-coupled receptor kinases (GRKs), which go on to phosphorylate various serine/threonine residues of IL-3 and the C-terminal tail. Upon GRK phosphorylation, the GPCR's affinity for β-arrestin (β-arrestin-1/2 in most tissues) is increased, at which point β-arrestin may bind and act both to sterically hinder G-protein coupling and to initiate the process of receptor internalization through clathrin-mediated endocytosis. Because only the liganded receptor is desensitized by this mechanism, it is called homologous desensitization.
- the affinity for β-arr may be increased in a ligand-occupation- and GRK-independent manner through phosphorylation of different Ser/Thr sites (but likewise of IL-3 and the C-terminal tail) by PKC and PKA. These phosphorylations are often sufficient to impair G-protein coupling on their own as well.
- PKC/PKA may, instead, phosphorylate GRKs, which can also lead to GPCR phosphorylation and β-arrestin binding in an occupation-independent manner. These latter two mechanisms allow for desensitization of one GPCR due to the activities of others, or heterologous desensitization. GRKs may also have GAP domains and so may contribute to inactivation through non-kinase mechanisms as well. A combination of these mechanisms may also occur.
Once β-arrestin is bound to a GPCR, it undergoes a conformational change allowing it to serve as a scaffolding protein for an adaptor complex termed AP-2, which in turn recruits another protein called clathrin. If enough receptors in the local area recruit clathrin in this manner, they aggregate and the membrane buds inwardly as a result of interactions between the molecules of clathrin, forming a clathrin-coated pit. Once the pit has been pinched off from the plasma membrane, due to the actions of two other proteins called amphiphysin and dynamin, it becomes an endocytic vesicle. At this point, the adapter molecules and clathrin have dissociated, and the receptor is either trafficked back to the plasma membrane or targeted to lysosomes for degradation.
At any point in this process, the β-arrestins may also recruit other proteins, such as the non-receptor tyrosine kinase (nRTK) c-SRC, which may activate ERK1/2 or other mitogen-activated protein kinase (MAPK) signaling through, for example, phosphorylation of the small GTPase Ras, or may recruit the proteins of the ERK cascade directly (i.e., Raf-1, MEK, ERK-1/2), at which point signaling is initiated due to their close proximity to one another. Other targets of c-SRC include the dynamin molecules involved in endocytosis. Dynamins polymerize around the neck of an incoming vesicle, and their phosphorylation by c-SRC provides the energy necessary for the conformational change allowing the final "pinching off" from the membrane.
GPCR cellular regulation
Receptor desensitization is mediated through a combination of phosphorylation, β-arr binding, and endocytosis, as described above. Downregulation occurs when an endocytosed receptor is embedded in an endosome that is trafficked to merge with an organelle called a lysosome. Because lysosomal membranes are rich in proton pumps, their interiors have low pH (≈4.8, vs. the pH ≈7.2 cytosol), which acts to denature the GPCRs. Additionally, lysosomes contain many degradative enzymes, including proteases, which can function only at such low pH, so the peptide bonds joining the residues of the GPCR together may be cleaved. Whether a given receptor is trafficked to a lysosome, detained in endosomes, or trafficked back to the plasma membrane depends on a variety of factors, including receptor type and magnitude of the signal. GPCR regulation is additionally mediated by gene transcription factors. These factors can increase or decrease gene transcription and thus increase or decrease the generation of new receptors (up- or down-regulation) that travel to the cell membrane.
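As a side note on the numbers quoted above, the pH scale is logarithmic ([H+] = 10^-pH), so the gap between the lysosomal interior (≈4.8) and the cytosol (≈7.2) corresponds to roughly a 250-fold difference in proton concentration. The two-line sketch below makes the arithmetic explicit:

```python
# Proton-concentration ratio implied by the pH values quoted above.
ph_lysosome, ph_cytosol = 4.8, 7.2
ratio = 10 ** (ph_cytosol - ph_lysosome)   # [H+]_lysosome / [H+]_cytosol
print(f"lysosomal [H+] is about {ratio:.0f}x the cytosolic [H+]")  # ~250x
```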
Receptor oligomerization
- Main article: GPCR oligomer
G-protein-coupled receptor oligomerisation is a widespread phenomenon. One of the best-studied examples is the metabotropic GABAB receptor, which is formed by heterodimerization of GABABR1 and GABABR2 subunits. Expression of GABABR1 without GABABR2 in heterologous systems leads to retention of the subunit in the endoplasmic reticulum. Expression of the GABABR2 subunit alone, meanwhile, leads to surface expression of the subunit, although with no functional activity (i.e., the receptor does not bind agonist and cannot initiate a response following exposure to agonist). Expression of the two subunits together leads to plasma membrane expression of functional receptor. It has been shown that GABABR2 binding to GABABR1 masks a retention signal, allowing the functional receptor to reach the cell surface.
Origin and diversification of the superfamily
Signal transduction mediated by the superfamily of GPCRs dates back to the origin of multicellularity. Mammalian-like GPCRs are found in fungi and have been classified according to the GRAFS classification system, which is based on GPCR fingerprints. Identification of superfamily members across the eukaryotic domain, and comparison of the family-specific motifs, have shown that the superfamily of GPCRs has a common origin. Characteristic motifs indicate that three of the five GRAFS families, Rhodopsin, Adhesion, and Frizzled, evolved from the Dictyostelium discoideum cAMP receptors before the split of Opisthokonts. Later, the Secretin family evolved from the Adhesion GPCR family before the split of nematodes.
Dictyostelium discoideum
See also
- Orphan receptor
- Pepducins, a class of drug candidates targeted at GPCRs
- G protein-coupled receptors database
- Metabotropic receptor
- ↑ King N, Hittinger CT, Carroll SB (2003). Evolution of key cell signaling and adhesion protein families predates animal origins. Science 301 (5631): 361–3.
- ↑ Filmore D (2004). It's a GPCR world. Modern Drug Discovery 2004 (November): 24–28.
- ↑ Overington JP, Al-Lazikani B, Hopkins AL (December 2006). How many drug targets are there?. Nat Rev Drug Discov 5 (12): 993–6.
- ↑ Royal Swedish Academy of Sciences. "The Nobel Prize in Chemistry 2012 Robert J. Lefkowitz, Brian K. Kobilka", 10 October 2012. Retrieved on 10 October 2012.
- ↑ Gilman AG (1987). G proteins: transducers of receptor-generated signals. Annu. Rev. Biochem. 56: 615–49.
- ↑ Wettschureck N, Offermanns S (October 2005). Mammalian G proteins and their cell type specific functions. Physiol. Rev. 85 (4): 1159–204.
- ↑ Bjarnadóttir TK, Gloriam DE, Hellstrand SH, Kristiansson H, Fredriksson R, Schiöth HB (September 2006). Comprehensive repertoire and phylogenetic analysis of the G protein-coupled receptors in human and mouse. Genomics 88 (3): 263–73.
- ↑ Attwood TK, Findlay JB (1994). Fingerprinting G-protein-coupled receptors. Protein Eng 7 (2): 195–203.
- ↑ Kolakowski LF Jr (1994). GCRDb: a G-protein-coupled receptor database. Receptors Channels 2 (1): 1–7.
- ↑ Foord SM, Bonner TI, Neubig RR, Rosser EM, Pin JP, Davenport AP, Spedding M, Harmar AJ (2005). International Union of Pharmacology. XLVI. G protein-coupled receptor list. Pharmacol Rev 57 (2): 279–88.
- ↑ InterPro
- ↑ Joost P, Methner A (2002). Phylogenetic analysis of 277 human G-protein-coupled receptors as a tool for the prediction of orphan receptor ligands. Genome Biol 3 (11): research0063.1–0063.16.
- ↑ Bjarnadottir TK, Gloriam DE, Hellstrand SH, Kristiansson H, Fredriksson R, Schioth HB (2006). Comprehensive repertoire and phylogenetic analysis of the G protein-coupled receptors in human and mouse. Genomics 88 (3): 263–73.
- ↑ Vassilatis DK, Hohmann JG, Zeng H, Li F, Ranchalis JE et al (2003). The G protein-coupled receptor repertoires of human and mouse. Proc Natl Acad Sci USA 100 (8): 4903–4908.
- ↑ Xiao X, Wang P, Chou KC (2009). A cellular automaton image approach for predicting G-protein-coupled receptor functional classes. Journal of Computational Chemistry 30 (9): 1414–1423.
- ↑ Qiu JD, Huang JH, Liang RP, Lu XQ (July 2009). Prediction of G-protein-coupled receptor classes based on the concept of Chou's pseudo amino acid composition: an approach from discrete wavelet transform. Anal. Biochem. 390 (1): 68–73.
- ↑ Gu Q, Ding YS, Zhang TL (May 2010). Prediction of G-Protein-Coupled Receptor Classes in Low Homology Using Chou's pseudo amino acid composition with Approximate Entropy and Hydrophobicity Patterns. Protein Pept. Lett. 17 (5): 559–67.
- ↑ Hazell GG, Hindmarch CC, Pope GR, Roper JA, Lightman SL, Murphy D, O'Carroll AM, Lolait SJ (July 2011). G protein-coupled receptors in the hypothalamic paraventricular and supraoptic nuclei - serpentine gateways to neuroendocrine homeostasis. Front Neuroendocrinol 33 (1): 45–66.
- ↑ Dorsam RT, Gutkind JS. (Feb 2007). G-protein-coupled receptors and cancer. Nat Rev Cancer 7 (2): 79–94.
- ↑ Yamauchi T, Kamon J, Ito Y, Tsuchida A, Yokomizo T, Kita S, Sugiyama T, Miyagishi M, Hara K, Tsunoda M, Murakami K, Ohteki T, Uchida S, Takekawa S, Waki H, Tsuno NH, Shibata Y, Terauchi Y, Froguel P, Tobe K, Koyasu S, Taira K, Kitamura T, Shimizu T, Nagai R, Kadowaki T (June 2003). Cloning of adiponectin receptors that mediate antidiabetic metabolic effects. Nature 423 (6941): 762–9.
- ↑ Grigorieff N, Ceska TA, Downing KH, Baldwin JM, Henderson R (1996). Electron-crystallographic refinement of the structure of bacteriorhodopsin. J. Mol. Biol. 259 (3): 393–421.
- ↑ Kimura Y, Vassylyev DG, Miyazawa A, Kidera A, Matsushima M, Mitsuoka K, Murata K, Hirai T, Fujiyoshi Y (1997). Surface of bacteriorhodopsin revealed by high-resolution electron crystallography. Nature 389 (6647): 206–11.
- ↑ Pebay-Peyroula E, Rummel G, Rosenbusch JP, Landau EM (1997). X-ray structure of bacteriorhodopsin at 2.5 angstroms from microcrystals grown in lipidic cubic phases. Science 277 (5332): 1676–81.
- ↑ Palczewski K, Kumasaka T, Hori T, Behnke CA, Motoshima H, Fox BA, Trong IL, Teller DC, Okada T, Stenkamp RE, Yamamoto M, Miyano M (2000). Crystal structure of rhodopsin: A G protein-coupled receptor. Science 289 (5480): 739–45.
- ↑ Rasmussen SG, Choi HJ, Rosenbaum DM, Kobilka TS, Thian FS, Edwards PC, Burghammer M, Ratnala VR, Sanishvili R, Fischetti RF, Schertler GF, Weis WI, Kobilka BK (2007). Crystal structure of the human β2-adrenergic G-protein-coupled receptor. Nature 450 (7168): 383–7.
- ↑ Cherezov V, Rosenbaum DM, Hanson MA, Rasmussen SG, Thian FS, Kobilka TS, Choi HJ, Kuhn P, Weis WI, Kobilka BK, Stevens RC (2007). High-resolution crystal structure of an engineered human β2-adrenergic G protein-coupled receptor. Science 318 (5854): 1258–65.
- ↑ Rosenbaum DM, Cherezov V, Hanson MA, Rasmussen SG, Thian FS, Kobilka TS, Choi HJ, Yao XJ, Weis WI, Stevens RC, Kobilka BK (2007). GPCR engineering yields high-resolution structural insights into β2-adrenergic receptor function. Science 318 (5854): 1266–73.
- ↑ Rasmussen SG, Choi HJ, Fung JJ, Pardon E, Casarosa P, Chae PS, Devree BT, Rosenbaum DM, Thian FS, Kobilka TS, Schnapp A, Konetzki I, Sunahara RK, Gellman SH, Pautsch A, Steyaert J, Weis WI, Kobilka BK (January 2011). Structure of a nanobody-stabilized active state of the β(2) adrenoceptor. Nature 469 (7329): 175–80.
- ↑ Rosenbaum DM, Zhang C, Lyons JA, Holl R, Aragao D, Arlow DH, Rasmussen SG, Choi HJ, Devree BT, Sunahara RK, Chae PS, Gellman SH, Dror RO, Shaw DE, Weis WI, Caffrey M, Gmeiner P, Kobilka BK (January 2011). Structure and function of an irreversible agonist-β(2) adrenoceptor complex. Nature 469 (7329): 236–40.
- ↑ Warne T, Moukhametzianov R, Baker JG, Nehmé R, Edwards PC, Leslie AG, Schertler GF, Tate CG (January 2011). The structural basis for agonist and partial agonist action on a β(1)-adrenergic receptor. Nature 469 (7329): 241–4.
- ↑ Xu F, Wu H, Katritch V, Han GW, Jacobson KA, Gao ZG, Cherezov V, Stevens RC (April 2011). Structure of an agonist-bound human A2A adenosine receptor. Science 332 (6027): 322–7.
- ↑ Rasmussen SG, Devree BT, Zou Y, Kruse AC, Chung KY, Kobilka TS, Thian FS, Chae PS, Pardon E, Calinski D, Mathiesen JM, Shah ST, Lyons JA, Caffrey M, Gellman SH, Steyaert J, Skiniotis G, Weis WI, Sunahara RK, Kobilka BK (July 2011). Crystal structure of the β(2) adrenergic receptor-Gs protein complex. Nature 477 (7366): 549–55.
- ↑ Lohse MJ, Benovic JL, Codina J, Caron MG, Lefkowitz RJ (June 1990). β-Arrestin: a protein that regulates β-adrenergic receptor function. Science 248 (4962): 1547–1550.
- ↑ Luttrell LM, Lefkowitz RJ (February 2002). The role of beta-arrestins in the termination and transduction of G-protein-coupled receptor signals. J. Cell. Sci. 115 (Pt 3): 455–65.
- ↑ Millar RP, Newton CL (January 2010). The year in G protein-coupled receptor research. Mol. Endocrinol. 24 (1): 261–74.
- ↑ Brass LF (September 2003). Thrombin and platelet activation. Chest 124 (3 Suppl): 18S–25S.
- ↑ Rubenstein, Lester A. and Lanzara, Richard G. (1998). Activation of G protein-coupled receptors entails cysteine modulation of agonist binding. Journal of Molecular Structure (Theochem) 430: 57–71.
- ↑ http://www.bio-balance.com/
- ↑ Kim JY, Haastert PV, Devreotes PN (April 1996). Social senses: G-protein-coupled receptor signaling pathways in Dictyostelium discoideum. Chem. Biol. 3 (4): 239–43.
- ↑ Duchene J, Schanstra JP, Pecher C, Pizard A, Susini C, Esteve JP, Bascands JL, Girolami JP (2002). A novel protein-protein interaction between a G protein-coupled receptor and the phosphatase SHP-2 is involved in bradykinin-induced inhibition of cell proliferation. J Biol Chem 277 (43): 40375–83.
- ↑ Chen-Izu Y, Xiao RP, Izu LT, Cheng H, Kuschel M, Spurgeon H, Lakatta EG (November 2000). G(i)-dependent localization of beta(2)-adrenergic receptor signaling to L-type Ca(2+) channels. Biophys. J. 79 (5): 2547–56.
- ↑ Tan CM, Brady AE, Nickols HH, Wang Q, Limbird LE (2004). Membrane trafficking of G protein-coupled receptors. Annu. Rev. Pharmacol. Toxicol. 44: 559–609.
- ↑ Krueger KM, Daaka Y, Pitcher JA, Lefkowitz RJ (1997). The role of sequestration in G protein-coupled receptor resensitization. Regulation of β2-adrenergic receptor dephosphorylation by vesicular acidification. J. Biol. Chem. 272 (1): 5–8.
- ↑ Laporte SA, Oakley RH, Holt JA, Barak LS, Caron MG (2000). The interaction of β-arrestin with the AP-2 adaptor is required for the clustering of β2-adrenergic receptor into clathrin-coated pits. J. Biol. Chem. 275 (30): 23120–6.
- ↑ Laporte SA, Oakley RH, Zhang J, Holt JA, Ferguson SS, Caron MG, Barak LS (1999). The beta2-adrenergic receptor/betaarrestin complex recruits the clathrin adaptor AP-2 during endocytosis. Proc. Natl. Acad. Sci. U.S.A. 96 (7): 3712–7.
- ↑ Margeta-Mitrovic M, Jan YN, Jan LY (2000). A trafficking checkpoint controls GABA(B) receptor heterodimerization. Neuron 27 (1): 97–106.
- ↑ White JH, Wise A, Main MJ, Green A, Fraser NJ, Disney GH, Barnes AA, Emson P, Foord SM, Marshall FH (1998). Heterodimerization is required for the formation of a functional GABA(B) receptor. Nature 396 (6712): 679–82.
- ↑ Krishnan A, Almén MS, Fredriksson R, Schiöth HB (2012). The Origin of GPCRs: Identification of Mammalian like Rhodopsin, Adhesion, Glutamate and Frizzled GPCRs in Fungi. PLoS ONE 7 (1): e29817.
- ↑ Nordström KJ, Sällman Almén M, Edstam MM, Fredriksson R, Schiöth HB (2011). Independent HHsearch, Needleman–Wunsch-Based, and Motif Analyses Reveal the Overall Hierarchy for Most of the G Protein-Coupled Receptor Families. Mol Biol Evol 28 (9): 2471–80.
- ↑ Bakthavatsalam D, Brazill D, Gomer RH, Eichinger L, Rivero F, Noegel AA (2007). A G protein-coupled receptor with a lipid kinase domain is involved in cell-density sensing. Curr Biol 17 (10): 892–7.
- MeSH: G-protein-coupled receptors
- GPCR Database. IUPHAR Database. International Union of Basic and Clinical Pharmacology. URL accessed on 2008-08-11.
- Vriend G, Horn F. GPCRDB: Information system for G protein-coupled receptors (GPCRs). Molecular Class-Specific Information System (MCSIS) project. URL accessed on 2008-08-11.
- G Protein-Coupled Receptors on the NET. URL accessed on 2010-11-10.
- The Nobel Prize in Chemistry 2012. URL accessed on 2012-10-10.
- A phylogenetic tree of all human GPCRs. Vassilatis DK, Hohmann JG, Zeng H, Li F, Ranchalis JE, Mortrud MT, Brown A, Rodriguez SS, Weller JR, Wright AC, Bergmann JE, Gaitanaris GA (2003). The G protein-coupled receptor repertoires of human and mouse. Proc Natl Acad Sci USA 100 (8): 4903–8. URL accessed on 2008-08-11.
- GPCR Reference Library. URL accessed on 2008-08-11.
- GPCR structures in the PDB
| http://psychology.wikia.com/wiki/G_protein_coupled_receptor | 13
60 | Overview | How do we know when the economy is in a recession? How do key economic indicators perform in a downturn? In this lesson, students create graphs of various economic measurements, using quantitative and qualitative reasoning skills to compare, contrast and correlate the performance of measures like gross domestic product, unemployment and personal income.
Materials | Student notebooks, graph paper, computers with Internet access, spreadsheet software like Excel (optional), graphing calculators (optional).
Warm-Up | In their notebooks, students write down some words and phrases they associate with the term recession. After a few minutes, have students pair up to share ideas, and then hold a brief whole-group discussion to identify some common responses.
Ask, how do we know if we are in a recession? Suggest that they consider the question from the perspective of an economist, a politician, a business person, or an average citizen. Ask, what kinds of economic indicators might be important in determining if an economy is in recession?
Generate a list of key indicators through a short discussion and jot ideas on the board, making sure the list includes measurements like gross domestic product (G.D.P.), personal income, unemployment, home prices, business profits, stock prices and poverty rates. Have students identify which of these measurements might rise during a recession, which might fall, and which might remain steady. They might also generate ideas for some unconventional, creative or informal economic indicators.
At this point, you might share with the class parts of the overview on the Times Topics page about recession, like these three paragraphs:
There is no official definition of recession, and no official body to decree that one has begun or ended. Indeed, a clear picture of the state of the economy usually comes months or even years later. Recessions are commonly described as two or more quarters of a declining gross domestic product.
That definition is not used by the National Bureau of Economic Research, a private, non-partisan group based in Cambridge, Mass., whose findings on swings in the business cycle have come to be generally accepted as the definitive dates for recessions and expansions. Its definition of recession is: “a significant decline in economic activity spread across the economy, lasting more than a few months, normally visible in real GDP, real income, employment, industrial production, and wholesale-retail sales.”
A recession is a significant decline in economic activity spread across the economy, lasting more than a few months, normally visible in real GDP, real income, employment, industrial production, and wholesale-retail sales. A recession begins just after the economy reaches a peak of activity and ends as the economy reaches its trough. Between trough and peak, the economy is in an expansion. Expansion is the normal state of the economy; most recessions are brief and they have been rare in recent decades. The postwar average, excluding the 2001 recession, is eleven months.
Show the class the infographic “Change in G.D.P. Since 1999,” and ask them to speculate as to what the other measurements might look like over the same time period. Tell them that they will soon graph that data, after reading a New York Times article about a new report about the level of poverty in the United States.
Related | The article “Poor Are Still Getting Poorer, but Downturn’s Punch Varies, Census Data Show” discusses the findings of the United States Census Bureau’s poverty report:
The discouraging numbers spilling from the Census Bureau’s poverty report this week were a disquieting reminder that a weak economy continues to spread broad and deep pain.
And so it does. But not evenly.
The Midwest is battered, but the Northeast escaped with a lighter knock. The incomes of young adults have plunged — but those of older Americans have actually risen. On the whole, immigrants have weathered the storm a bit better than people born here. In rural areas, poverty remained unchanged last year, while in suburbs it reached the highest level since 1967, when the Census Bureau first tracked it.
Yet one old problem has not changed: the poor have rapidly gotten poorer.
Read the entire article with your class, using the questions below.
Questions | For discussion and reading comprehension:
- Which segment of the American population has been least affected by the recent downturn? Why?
- In which regions has household income been falling the past three years?
- What is one reason immigrants with citizenship have seen their income decrease less than natural-born Americans?
- How many Americans are currently living below the poverty line?
- According to the accompanying infographic, “Who Suffered the Most,” who really did suffer the most?
From The Learning Network
- Get With the Programs: Exploring the Implications of Federal Budget Cuts
- Onward and Upward? Documenting Local Economic Conditions
- Teaching With Infographics: Social Studies, History, Economics
- Times Topics: United States Economy
- Economix: What Does ‘Economic Growth’ Mean for Americans?
- U.S. Economy Grew Slower in Spring Than Previously Reported
Around the Web
Activity | Tell students they will work in pairs or small groups to find data and graph data on economic indicators.
Assign or allow each group to choose one or more of the following economic measurements: Real G.D.P.; personal income; unemployment; corporate profits; poverty; home prices; wages. Have all groups work with the same time period, like the past 10 years, and have them use NYTimes.com and other reliable resources to find data on their measures.
For each measure, students create two graphs, showing the following information, as well as performing the associated analysis:
- The actual value of the measurement over time. Students then compare these graphs qualitatively, looking at the shapes of the graphs to see how closely their movements are aligned, thereby exploring the question "Which economic indicators move similarly during a recession?"
- The percentage change in that measurement over time. Percentage change from month-to-month, quarter-to-quarter, or year-to-year can be calculated with the following formula: Percent Change = ( New Value – Old Value ) / Old Value × 100. (A short computational sketch follows this list.) Students then compare these graphs quantitatively by identifying which indicators move up or down in sync, and whether the magnitude of their change is also similar. For example, students can investigate whether a 1% drop in Real G.D.P. means there will also be a 1% drop in corporate profits. Students can also explore the inverse relationship between, say, Real G.D.P. and unemployment, investigating how a decrease in G.D.P. is related to an increase in the unemployment rate.
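To make the percent-change calculation concrete, here is a minimal Python sketch students could adapt. The quarterly G.D.P. figures are made up for illustration; real values would come from the Bureau of Economic Analysis tables mentioned below.

```python
# Hypothetical quarterly real-GDP levels (illustrative numbers only),
# used to demonstrate the percent-change formula from the list above.
gdp = [14500, 14420, 14280, 14350, 14510]  # billions of dollars

def percent_change(old, new):
    """Percent change from old to new: (new - old) / old * 100."""
    return (new - old) / old * 100.0

for quarter, (old, new) in enumerate(zip(gdp, gdp[1:]), start=1):
    print(f"Q{quarter} -> Q{quarter + 1}: {percent_change(old, new):+.2f}%")
```

A run of consecutive negative values in this output is what the informal "two declining quarters" description of recession refers to.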
Furthermore, students can explore relationships between quantity and rate-of-change by considering the two graphs for a given measure. A positive percentage change means a positive slope in the original graph, and higher (or lower) percent change translates into steeper graphs.
For data, students can visit the Department of Commerce’s Bureau of Economic Analysis, including its page on the state of nation’s economy and its index of interactive data tables.
For example, students can look at Real G.D.P. from 2005 to 2011. Using the bureau’s interactive table system, students can select the desired years, choose their indicators and even graph and compare multiple measures simultaneously.
Students can find unemployment data at the Department of Labor’s Bureau of Labor Statistics, data on personal income at the Bureau of Economic Analysis, and home price data at the Federal Housing Finance Agency. In addition, internet resources such as Google’s Public Data Explorer and Wolfram Alpha can be used to search, collect, and represent economic data.
After creating their sets of graphs, have students compare their results with those of classmates who looked at different economic indicators. Have students summarize their findings by identifying which economic factors move in a similar fashion, and which have an inverse relationship.
Going Further | Once a picture of the country’s economy has been created through the above analysis, students explore regional and local economic indicators by performing the same kind of analyses. As a start, state-by-state G.D.P. data is available here.
Are we in a recession? Is it U-shaped? Is it V-shaped? Is it a double-dip recession? Students check out the various recession shapes and compare them with their graphs. For an example of a double-dip recession, use Wolfram Alpha to look at US Real G.D.P. between 1978 and 1983.
Play pundit by looking at President Obama’s deficit reduction plan and writing a letter to the editor or Op-Ed about how you think measures like G.D.P., unemployment and poverty will be affected by the new budget.
4. Understands and applies basic and advanced properties of the concept of measurement.
6. Understands and applies basic and advanced concepts of statistics.
9. Understands the general nature and uses of mathematics.
5. Understands employment, income, and income distribution in a market economy.
9. Understands how Gross Domestic Product and inflation and deflation provide indications of the state of the economy.
6. Understands the nature and uses of different forms of technology.
13. Analyzes and interprets data using common statistical procedures, charts, and graphs. | http://learning.blogs.nytimes.com/2011/09/28/nowhere-to-go-but-up-analyzing-economic-measures-in-a-downturn/ | 13 |
21 | Inequality is concerned with disparities in the distribution of a certain metric, which can be income, health or any other material or non-material asset. Inequality typically refers to within country inequality on individual or group level, such as between gender, urban and rural population, race etc. Inequality among countries are referred to as international inequality.
Inequality is closely linked to the idea of equity, which has two contrasting concepts: equality of opportunity and equality of outcome.
The first, equality of opportunity, is concerned with the equal potential of every individual to access public services and rights. Liberal thinking considers equality of opportunity, in particular in education, a fundamental precondition for a truly meritocratic system. Equality of opportunity in access to health care, education, or job openings is today considered fundamental in European welfare systems.
The second aspect of equity, equality of outcome, is concerned with the actual outcome of asset distribution within or among countries. Policies to reach equality of outcome often include direct and indirect redistribution.
The concepts cannot, however, always be seen in opposition. For example, equality of outcome, i.e. the effective assets of a family, affects the equality of opportunity of its children in many respects, as many studies have shown. Low household income has been shown to correlate across cultures with many indicators of child well-being.
Income inequality between the world's richest and poorest is higher than ever before, with the richest 5 percent of people receiving one-third of total global income, as much as the poorest 80 percent.
The most common measure of income inequality is the Gini coefficient. The coefficient varies between 0, which reflects complete equality, and 1, which indicates complete inequality. So, a country with a low Gini coefficient like Denmark (.24) is said to be more equal than a country with a high coefficient such as Namibia (.74).
Another measure of inequality is deciles. This indicator is fairly simple to understand and is one of the many inequality indicators used by organizations such as the OECD and the US government. The main idea is that deciles show how income is distributed: how much of the total income in a country is earned by lower-earning groups and how much is earned by higher-earning groups. If the people in the top and bottom groups earn the same proportion of the income, there is income equality. If the top groups earn a much higher percentage of the total income, while people in the bottom groups earn a much lower percentage, there is inequality. (Both measures are illustrated in the sketch below.)
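To illustrate both measures, here is a short Python sketch that computes a Gini coefficient and decile income shares from a small made-up income sample. The numbers and function names are hypothetical; real analyses would use household survey microdata such as the sources cited below.

```python
# Illustrative computation of the two inequality measures described above.

def gini(incomes):
    """Gini coefficient: 0 = complete equality, 1 = complete inequality."""
    xs = sorted(incomes)
    n = len(xs)
    # Standard formula on the ordered sample (i = 1..n):
    # G = 2 * sum(i * x_i) / (n * sum(x)) - (n + 1) / n
    weighted_sum = sum(i * x for i, x in enumerate(xs, start=1))
    return 2 * weighted_sum / (n * sum(xs)) - (n + 1) / n

def decile_shares(incomes):
    """Share of total income earned by each tenth of the population.

    For simplicity this assumes len(incomes) is a multiple of 10.
    """
    xs = sorted(incomes)
    total = sum(xs)
    step = len(xs) // 10
    return [sum(xs[i * step:(i + 1) * step]) / total for i in range(10)]

if __name__ == "__main__":
    sample = [12, 15, 18, 20, 24, 28, 35, 45, 70, 160]  # incomes, e.g. in $1000s
    print(f"Gini: {gini(sample):.3f}")
    for d, share in enumerate(decile_shares(sample), start=1):
        print(f"decile {d:2d}: {share:6.1%} of total income")
```

With a perfectly equal sample the Gini function returns 0; concentrating income in the top decile pushes it toward 1, matching the Denmark/Namibia comparison above.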
In December 2011 an OECD report, Divided We Stand: Why Inequality Keeps Rising, stated that the gap between rich and poor in OECD countries has reached its highest level for over 30 years. It specified that in the three decades prior to the economic downturn of the late 2000s, wage gaps widened and household income inequality increased in a large majority of OECD countries. This occurred even when countries were going through a period of sustained economic and employment growth.
This report analyses the major underlying forces behind these developments:
- Part I. How Globalisation, Technological Change and Policies Affect Wage and Earnings Inequalities
- Part II. How Inequalities in Labour Earnings Lead to Inequalities in Household Disposable Income
- Part III. How the Roles of Tax and Transfer Systems Have Changed
This media briefing for the report outlines the content and points to some of the key findings of the report. Extracts from the report are available in the documents below:
Measuring Gender (In)equality
The OECD Gender, Institutions and Development Database (GID-DB) is a new tool for researchers and policy makers to determine and analyse obstacles to women’s economic development. It covers a total of 160 countries and comprises an array of 60 indicators on gender discrimination.
In 2009, the OECD also developed the Social Insitutions and Gender Index (SIGI) which focuses on the root causes behind gender inequalities. It uses 12 innovative indicators on social institutions, which are grouped into 5 categories: Family Code, Physical Integrity, Son Preference, Civil Liberties and Ownership Rights. Each of the SIGI indicators is coded between 0, meaning no or very low inequality, and 1, indicating very high inequality.
For a range of studies on inequality, including global inequality, inequality and politics, openness and inequality, see the World Bank Inequality website.
Inequality, Poverty and Well-being, Mark McGillivary (ed), Palgrave McMillan, 2006. This book examines inequality, poverty and well-being concepts and corresponding empirical measures.
Worlds Apart: Measuring International and Global Inequality, Branko Milanovic, Princeton University Press, 2005. This publication from a leading World Bank economist analyzes income distribution worldwide using household survey data from more than 100 countries and addresses how to measure global inequality among individuals.
Inequality Re-examined, Amartya Sen, Oxford University Press, 1995. This book examines the claims of equality in social arrangements, stressing that we should be concerned with people's capabilities rather than either their resources or their welfare. Looks at "why equality" and "equality of what" and includes a chapter on "Freedom, Agency and Well-being".
The UC Atlas of Global Inequality integrates data, maps, and graphs to create an interactive website for accessing and analyzing information addressing global change and inequality.
The Quantity and Quality of Life and the Evolution of Global Inequality, 2008, Gary S. Becker, Tomas J. Philipson, Rodrigo R. Soares, The University of Chicago, United States. Lack of income convergence for the world as a whole has led to concerns about the impact of globalization of markets on world inequality. GDP per capita is usually used to proxy for the quality of life of individuals living in different countries. However, well-being is also affected by quantity of life, as represented by longevity. This paper incorporates longevity into an overall assessment of the evolution of cross-country inequality.
ESDS International Case Study: Gender Equality in the Labour Market in Central and Eastern Europe: Attitudes to Women’s Work, by Sylke V. Schnepf, (published in S. V. Schnepf (2007) Women in Central and Eastern Europe. Measuring Gender Inequality Differently. Saarbruecken: VDM, pp. 84-136. ISBN: 978-3-8364-2526-1).
- Progress for Children: Achieving the MDGs with Equity, UNICEF, MDG Progress Report Card 9, September 2010. http://www.unicef.org/publications/files/Progress_for_Children-No.9_EN_081710.pdf
- Worlds Apart: Measuring International and Global Inequality, Branko Milanovic. http://www.carnegieendowment.org/publications/
- Poverty Analysis: Measuring Inequality, The World Bank, 2010. http://go.worldbank.org/3SLYUTVY00
- Society at a Glance 2009: OECD Social Indicators, Equity Indicators, Income inequality. http://www.oecd-ilibrary.org/
- World Factbook, Definitions and Notes, CIA, 2010. Listed as "Household income or consumption by percentage share". https://www.cia.gov/library/publications/the-world-factbook/docs/notesanddefs.html
- Press Release, Divided We Stand: Why Inequality Keeps Rising, OECD, 5 December 2011.
- World Bank: Measuring Inequality
- World Bank: Inequality and Transition
- Carnegie Endowment for International Peace: Globalization and Inequality
- Social Watch
- Can the MDGs provide a pathway to social justice? The challenges of intersecting inequalities, Naila Kabeer, Institute of Development Studies
- Is Life Getting Better? A beginner's guide on measuring the progress of societies (economic indicators; what is inequality), The Global Social Change Research Project, March 2010
- Measuring Inequality: the World Bank's brief description of several measures of inequality.
- Michael Forster and Marco Mira D'Ercole, The OECD Approach to Measuring Income Distribution and Poverty: Strengths, Limits and Statistical Issues, OECD. Center for International Policy Exchanges, University of Maryland, School of Public Policy. Conference: Measuring Poverty, Income Inequality, and Social Exclusion: Lessons from Europe, March 16-17, 2009.
- Haughton, Jonathan, and Shahidur R. Khandker, Handbook on Poverty and Inequality, World Bank, 2009.
- UN's Millennium Development Goals Indicators. Indicator 1.3 is "Share of poorest quintile in national consumption" | http://wikiprogress.org/index.php/Inequality | 13
28 | The 15th century saw the waning of the Ashikaga Shogunate, followed by a Warring States period whose violence dominated most of the 16th century. Japan finally emerged from this era of turmoil through the conquests and reforms of the three great unifiers. The Tokugawa period (1600-1868 CE) ushered in a lasting age of peace and development, but also isolation and the codification of a social system too inflexible to withstand the inevitable challenges posed by internal economic growth and, externally, by Western imperialism. The role of government, socio-cultural ideals, and economic/class interactions in Japanese society all evolved during this four-hundred-year period; some into unexpected forms, while others, perhaps sentimentally, were restored after great intermediate transformations.
The Shogunate system established by Minamoto Yoritomo after his victory in the Gempei War (1180-1185 CE) lasted well beyond the Minamoto lineage’s rule during the Kamakura period (1185-1333 CE). In fact, with few interruptions, the Shogunate system persisted until the fall of Edo in 1868 CE and the Imperial restoration of the Meiji period. Though the basic model persisted for much of the 2nd millennium CE, its details shifted immensely.
Originally a military mirror-government established at Kamakura, the Kamakura Shogunate was bankrupted by the persistent threat of Mongol invasion, overthrown by a coalition of southern samurai, and replaced by the Ashikaga Shogunate, initiating the Muromachi period. The Muromachi period (1336-1573 CE), characterized by strong regional families (kenmon, headed by daimyo) that occupied the leadership roles within the three interdependent political institutions of the day – the court nobles, the warrior aristocracy, and the temple system (Mass 2002) – itself eventually collapsed during the Warring States period that followed the Onin War. Economic changes and the lack of regional autonomy had fueled dissatisfaction with the Ashikaga Shogunate at all levels of society, resulting in violent uprisings among peasant farmers, the militarization of merchants and temples, and, most destructively, organized military campaigns to overthrow the Shogunate by powerful daimyo coalitions. A series of weak shoguns (exemplified by Ashikaga Yoshimasa, 1435-1490, who ruled as shogun from 1449 to 1473) consistently failed to maintain, and later restore, order and central authority, and with only an impotent, symbolic emperor, civil war consumed Japan for more than a century (Sengoku period, 1467-1573).
The chaos finally subsided during the Azuchi-Momoyama period through the bloody conquests and shrewd political and economic reforms of the great unifiers – Oda Nobunaga, Toyotomi Hideyoshi, and Tokugawa Ieyasu. Hideyoshi, leveraging Nobunaga's successful consolidation of military power through economic reform, further stabilized Japan by enacting weapons confiscation and sumptuary laws. And by 1603, Ieyasu had established the Tokugawa Shogunate, perhaps the most successful – in terms of sustained stability, level of centralized control, and economic prosperity – of all periods preceding it. In the Baku-han political structure that characterized the Tokugawa period, the shogun held national authority while the daimyo held regional authority – including the rights to tax and create laws within their domains – with explicit conditions (sankin-kotai) designed to ensure that the daimyos' interests were bound to the shogun's, and that their families were bound to the capital city of Edo. This "centralized feudal" system was highly successful, and, if isolated from the socio-economic and international challenges to which it eventually succumbed, might have persisted indefinitely.
But political systems don't operate in isolation. In fact, socio-economic realities and class ideologies – the drivers of Japanese culture – had been ever-evolving during this period, sometimes following and sometimes triggering the political revolutions so far discussed.
Socio-cultural and Economic Change
By Yoshimasa’s administration, the sovereign status of elegance and cultural refinement that so defined the Heian period aristocracy’s sensibilities had been long superseded by the practical martial and managerial skills of the samurai class. But in Yoshimasa, the former Japanese worldview found both a powerful patron and a brilliant innovator obsessed with the cultivation of what became known as the Higashiyama Bunka. Yoshimasa’s political failures, as great as they were, are overshadowed by his cultural legacy (Keene 2003), which, largely based on Zen Buddhism and promoting Wabi-sabi, remains the definition of traditional Japanese culture and aesthetic. This reinvented and reinvigorated high culture was disseminated by skilled artisans and educated courtiers fleeing the destruction of the capital during the Onin War and adopted by the rural elites, who granted them tutelage (protection, support) in exchange for theirs (guidance).
But the arts weren’t the only significant developments in Japanese society during this period: agricultural technology was advancing as well, which fueled the growth of rural wealth and power; an emerging merchant class was demanding social and political accommodation, and monetization, interconnected road systems, and a booming population were all drivers of the political instability that ultimately collapsed under the weight of the Warring States.
Hideyoshi, one of the great unifiers before the Edo period, initiated a series of calculated policies that significantly altered Japanese society. First, in 1588, he enacted the sword hunt, effectively disarming the peasantry; in 1589, he crystallized and codified the Confucian-inspired class hierarchy through sumptuary laws. These and later reforms by Ieyasu were designed to freeze Japanese society in time; to create a permanent stability upon which to rule. The Tokugawa Shogunate also wanted to monopolize foreign contact and trade (sakoku policy), directing all foreign contact through Shogun-controlled Nagasaki. But despite these top-down efforts, a kind of economics-driven social inversion of the Confucian ideal (shi, no, ko, sho: warrior, farmer, artisan, merchant) had begun and couldn't be averted.
The unprecedented success of the Tokugawa Shogunate in political consolidation brought a peacetime that forced the samurai class from warrior to administrator. Domain nationalism emerged among the samurai, whose loyalties had for centuries been based on a personal and conditional relationship with their daimyo, but, through legislation and new political realities, were now unconditionally tied to a particular place. Their military skills ritualized, literacy a professional requirement, and stipends frozen in a booming economy, the samurai – who ideologically occupied the highest social class, but whom the Tokugawa system no longer served well – were slowly becoming disaffected. Musui's Story was written toward the end of the Tokugawa period and reflects the growing incongruity between the legislated social hierarchy and the economically prosperous professional classes (Kokichi and Craig 1991).
The farmer, on the other hand, was prospering. As the "fertilizer of civilization", the farmer had long been held in high regard in the Confucian system. Expansion of agricultural area, better irrigation techniques and fertilizers, and new rice strains from Southeast Asia during the Muromachi period had greatly increased farmers' productivity; population growth expanded their workforce and food demand, and a national road system allowed farmers to amass wealth by selling their surplus, via merchants, to urban centers. By the Tokugawa period, many farmers had become gono, wealthy enough to lend at interest themselves.
Artisans, too, flourished. Whether in urban centers or rural villages, their products and skills were in high demand. But the greatest class beneficiary of Japan's economic growth during the Muromachi and Edo periods was the merchant. The national road system had "paved the way" for the merchant class – the lowest social class in the Confucian system – to become the highest economic class. Merchants had originally been rice traders, and the emerging coin-based monetary system facilitated commerce at the point of exchange. Lending at interest, first in koku and later in currency, also increased merchant wealth and political influence. Osaka (a merchant city of 400,000 by 1800) had become Japan's economic capital, and education became common, even among merchants. At one point, merchants had become so militarized and organized that they had taken control of the nation's trade routes, which they leveraged into tax-exempt status.
End of an Era
The Japanese Empire ultimately emerged from the Edo period's failures, which were largely due to the system's inflexibility in the face of historic challenges both from within and without. The Tokugawa system had no national tax. The national governmental bureaucracy was supported entirely by contributions from the daimyo lords, who, already financially strained to the limit by sankin-kotai, in turn pressured their samurai (who were themselves financially strained by unemployment, frozen or even cut stipends, and inflation) for revenue. These financially-defined political realities further disintegrated traditional class barriers, and by 1800 CE the Tokugawa system was economically, socially, and intellectually unstable. Some Japanese concluded that Confucianist ideology had been the offending ingredient and was incompatible with the true Japanese spirit, which they asserted to be properly grounded in Shinto religion. But this analysis is incomplete, as external pressures on the Tokugawa system were at least as extreme. Following China's defeat by the British in the Opium War and subsequent exploitation by the Western imperial powers, Japan was forced to decide whether to engage the West in trade and mimic it in imperialism, or to wage war against the West to maintain its own isolation. Japan, again finding itself under a weak shogun in a time of crisis, submitted to foreign pressure for treaties. This move, internalized as a national humiliation, set the stage for the Shogunate system's definitive defeat, and the return, at least in name, of the Japanese Emperor to power.
- Keene, Donald. 2003. Yoshimasa and the Silver Pavilion: The Creation of the Soul of Japan. Columbia University Press.
- Kokichi, Katsu, and Teruko Craig. 1991. Musui’s Story: The Autobiography of a Tokugawa Samurai. University of Arizona Press.
- Mass, Jeffrey. 2002. The Origins of Japan’s Medieval World: Courtiers, Clerics, Warriors, and Peasants in the Fourteenth Century. 1st ed. Stanford University Press. | http://leeware.wordpress.com/2011/04/03/japan-seeds-of-the-empire/ | 13 |
15 | What You Will Learn
Student Learning Objectives
- Price and Quantity Relationship.
- Elasticities and their applications.
- Creation of supply and demand curves.
- Supply and demand shifters and effects.
- Knowledge of the Federal Reserve System and methods of monetary manipulation.
- Knowledge of various schools of economic thought (Keynesian, Classical, Monetarism, etc.) regarding the manipulation of the US economy through the use of monetary and fiscal policies.
- Understand how the use of monetary and fiscal policies impacts the areas of unemployment, inflation, government debt, and international trade.
- Understand how the use of monetary and fiscal policy will impact US agriculture and those individuals in the agricultural field.
- Business structure differences and implications on production levels.
- Contract options including hedging.
- Difference in pricing options.
- Deeds, co-ownership, and other legalities.
- Present value, future value, and investment weighting.
- Learn to calculate interest (a brief worked sketch follows this list).
- Learn how to create and analyze the major financial statements and how to calculate and interpret the financial ratios.
- Essentials of planning.
- Learn and apply the different methods of organizing.
- Understand how to lead and motivate different groups and different types of individuals.
- Understand and apply the essentials of controlling.
- Understand past US government agricultural policies and how they shape current US agricultural structures.
- Understand current US government agricultural policies, including the current farm bill, and how they impact US agriculture.
- Understand how possible future policy proposals could have implications for the future of US agricultural structures.
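Since the objectives above include present value, future value, and interest calculations, here is a minimal worked sketch of those ideas. It assumes annual compounding, and the 5% rate and dollar amounts are purely illustrative; it is not tied to any specific course material.

    def future_value(pv, rate, years):
        # Value of pv after compounding annually at `rate` for `years`.
        return pv * (1 + rate) ** years

    def present_value(fv, rate, years):
        # Value today of fv received `years` from now, discounted at `rate`.
        return fv / (1 + rate) ** years

    def simple_interest(principal, rate, years):
        # Interest earned with no compounding.
        return principal * rate * years

    fv = future_value(1000, 0.05, 10)
    print(f"FV of $1,000 at 5% for 10 years: ${fv:,.2f}")            # $1,628.89
    print(f"PV of that future amount: ${present_value(fv, 0.05, 10):,.2f}")  # $1,000.00
    print(f"Simple interest over the same term: ${simple_interest(1000, 0.05, 10):,.2f}")  # $500.00

Discounting is compounding run in reverse, which is why taking the present value of the computed future value returns the original principal.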
Agribusiness (Horticulture Option)
Agribusiness (Pest Management Option)
- The teacher knows how agricultural education relates to other disciplines.
- The teacher has knowledge of animal science and production.
- The teacher has knowledge of plant science and production.
- The teacher has knowledge of horticulture and floriculture.
- The teacher has knowledge of the area concepts of agriculture mechanics.
- The teacher has knowledge of greenhouse management.
- The teacher has knowledge of soil science.
- The teacher has knowledge of fruit and vegetable production.
- The teacher has knowledge of the role and practices of supervised agricultural education.
- The teacher has knowledge of the science of natural resources and conservation.
- The teacher has knowledge of the role, history and practices of Future Farmers of America (FFA) as an integral part of agricultural education.
- The teacher has knowledge of forestry.
- The teacher has knowledge of records and reports (inventory, cash flow statements, net-worth statements, etc.) related to farm production and agri-business.
- The teacher has knowledge of agri-business, processing, and marketing of agricultural products.
- The teacher has knowledge of agri-economics and entrepreneurship.
- The teacher has knowledge of parliamentary procedures, public speaking, and other leadership skills.
- The teacher has knowledge of trends and issues of agricultural education.
- The teacher has knowledge of the basic concepts of agricultural education and engages students in activities designed to improve understanding of agriculture and its role in today's society.
- The teacher has knowledge of the historical, philosophical, and legal basis of services for children both with and without special needs.
- The teacher has knowledge of new and emerging technology and its application to agriculture. | http://www.atu.edu/agriculture/student-learning.php | 13 |
14 | Fossil fuels, or mineral fuels, are fuels formed from fossil sources, that is, carbon or hydrocarbons found in the Earth's crust. Fossil fuels range from volatile materials with low carbon:hydrogen ratios, like methane, to liquid petroleum, to nonvolatile materials composed of almost pure carbon, like anthracite coal. Methane can be found in hydrocarbon fields alone, associated with oil, or in the form of methane clathrates. It is generally accepted that fossil fuels formed from the fossilized remains of dead plants and animals by exposure to heat and pressure in the Earth's crust over hundreds of millions of years. This biogenic theory was first introduced by Georg Agricola in 1556 and later by Mikhail Lomonosov in 1757.
Fossil fuels are non-renewable resources because they take millions of years to form, and reserves are being depleted much faster than new ones are being formed. Concern about fossil fuel supplies is one of the causes of regional and global conflicts. The production and use of fossil fuels raise environmental concerns. A global movement toward the generation of renewable energy is therefore under way to help meet increased energy needs.
The burning of fossil fuels produces around 21.3 billion tonnes (21.3 gigatonnes) of carbon dioxide per year, but it is estimated that natural processes can only absorb about half of that amount, so there is a net increase of 10.65 billion tonnes of atmospheric carbon dioxide per year (one tonne of atmospheric carbon is equivalent to 44/12, or about 3.7, tonnes of carbon dioxide). Carbon dioxide is one of the greenhouse gases that enhances radiative forcing and contributes to global warming, causing the average surface temperature of the Earth to rise in response, which climate scientists agree will cause major adverse effects, including reduced biodiversity.
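As a quick check of the arithmetic in the paragraph above, the sketch below simply restates the quoted figures; the emissions total and the one-half absorption share are the estimates given there, not independent data.

    EMISSIONS_GT_CO2 = 21.3   # annual CO2 from fossil fuel burning, gigatonnes
    ABSORBED_FRACTION = 0.5   # approximate share absorbed by natural processes

    net_increase = EMISSIONS_GT_CO2 * (1 - ABSORBED_FRACTION)
    print(f"Net atmospheric increase: {net_increase:.2f} Gt CO2 per year")  # 10.65

    # Carbon-to-CO2 conversion uses the ratio of molar masses:
    # CO2 is 44 g/mol, carbon is 12 g/mol, so multiply by 44/12 (about 3.7).
    carbon_tonnes = 1.0
    co2_tonnes = carbon_tonnes * 44 / 12
    print(f"{carbon_tonnes:.0f} tonne of carbon = {co2_tonnes:.1f} tonnes of CO2")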
Fossil fuels are of great importance because they can be burned (oxidized to carbon dioxide and water), producing significant amounts of energy per unit weight. The use of coal as a fuel predates recorded history. Coal was used to run furnaces for the melting of metal ore. Semi-solid hydrocarbons from seeps were also burned in ancient times, but these materials were mostly used for waterproofing and embalming.
Types of Fossil Fuels
Petroleum: Also called "crude oil," this liquid fuel is available in many parts of the world. Like coal, petroleum is formed from the biodegraded remains of animals that lived in the sea and died there. These remains settled into layers of fine sediment, known as silt, on the ocean bed. With time, pressure from the layers above compressed the organic material, forming the oil.
Coal: A solid fossil fuel that developed as land vegetation decayed over millions of years. Layer upon layer of vegetation became compacted and heated over time, eventually forming what we recognize as coal.
Oil: This liquid fossil fuel, better known as crude oil, is born of the remains of marine microorganisms deposited on the sea floor. Over millions of years, these deposits hardened into rock and sediment where oil lies trapped in small spaces. It is the most widely used fossil fuel. After refining, it has wide applications, from fuel for cars and jets to materials for roads and roofing.
Natural Gas: This gaseous fossil fuel is versatile, abundant and clean compared to coal and oil. Just like oil, it is formed from the remains of marine microorganisms. It consists largely of methane and is found highly compressed in small volumes at great depths in the earth.
Advantages of Fossil Fuels
- A major advantage of fossil fuels is their capacity to generate huge amounts of electricity in just a single location.
- Fossil fuels are very easy to find.
- When coal is used in power plants, it is very cost effective. Coal is also in abundant supply.
- Oil and gas can be transported to the power stations through pipelines, making delivery an easy task.
- Power plants that utilize gas are very efficient.
- Power stations that make use of fossil fuel can be constructed in almost any location. This is possible as long as large quantities of fuel can be easily brought to the power plants.
Disadvantages of Fossil Fuels
- Pollution is a major disadvantage of fossil fuels. They give off carbon dioxide when burned, thereby contributing to the greenhouse effect. This is also the main contributory factor in the global warming experienced by the earth today.
- Coal gives off even more carbon dioxide when burned than oil or gas does. Additionally, it gives off sulphur dioxide, a kind of gas that creates acid rain.
- Environmentally, the mining of coal results in the destruction of wide areas of land. Mining this fossil fuel is also difficult and may endanger the lives of miners. Coal mining is considered one of the most dangerous jobs in the world.
- Power stations that utilize coal need large amounts of fuel. In other words, they need not only truckloads but trainloads of coal on a regular basis to continue operating and generating electricity. This means that coal-fired power plants should keep large reserves of coal near the plant's location.
- Use of natural gas can cause unpleasant odors and some problems especially with transportation.
- Use of crude oil causes pollution and poses environmental hazards such as oil spills when oil tankers, for instance, experience leaks or sink deep under the sea. Crude oil contains toxic chemicals which cause air pollutants when combusted. | http://lifeofearth.org/fossil-fuels | 13
28 | In their political and commercial affairs the colonies felt their connection with the mother country chiefly in its burdens and restrictions, but they found some compensation in the protection which their connection with the British Empire assured them. Their peace and safety were constantly threatened from three allied sources. First there were enemy Indians whose presence was an ever threatening danger. Then the southern colonies in particular were never free from the menace of the Spaniards in Florida for, as Fiske graphically puts it, Carolina was "the border region where English and Spanish America marched upon each other." But greater than the danger from either Indians or Spaniards was the danger from the French. In 1608, one year after the founding of Jamestown, Champlain founded Quebec and secured for France the region drained by the St. Lawrence; in 1682 La Salle, inspired by dreams of a great continental empire, seized the mouth of the Mississippi and established the supremacy of France over all the region drained by the Father of Waters. Between these two distant heads, stretched the vast empire of New France. The interests of New France clashed with those of New England everywhere along their far-flung frontiers, and these clashing interests brought the two colonial empires into a century-long life-and‑death struggle for supremacy in North America. The several stages of this contest were marked by four wars known in American history as King William's War (1689‑1697), Queen Anne's War (1702‑1713), King George's War (1744-1748), and the French and Indian War (1754‑1763).
For North Carolina and South Carolina, the proximity of the Spanish and French settlements held a three-fold danger. There were, first, the danger of a direct attack upon their unprotected coast towns; second, the danger of an indirect attack through the Indians; and, third, the danger of being cut off entirely from farther westward expansion. The two colonies were fully alive to the seriousness of their situation and as we have seen freely assisted each other in meeting it. But they also realized that the menace was not to them alone, but to the whole of British-America and they long sought in vain to impress the home government with this view. St. Augustine afforded the enemy an excellent base for operations against the Carolinas both by land and by sea. In 1686 a Spanish force from St. Augustine invaded South Carolina and destroyed the colony at Port Royal. In 1702, upon the outbreak of Queen Anne's War, South Carolina sent an expedition against St. Augustine, but it ended in disaster. Four years later a combined French and Spanish squadron attacked Charleston, but was beaten off with heavy losses. During these wars, according to Governor Burrington, parties from French and Spanish privateers and men-of-war "frequently landed and plundered" the coast of North Carolina, and the colony was put to "great expenses" in "establishing a force to repell them." Two of the Lords Proprietors declared, "That in 1707 when Carolina was attacked by the French it cost the Province twenty thousand pounds and that neither His Majesty nor any of his predecessors had been at any charge from the first grant to defend the said Province against the French or other enemies."
It was, however, by their indirect attacks through the Indians that the Spaniards and the French inflicted the greatest losses upon the Carolinas. In 1715 they organized the great Indian conspiracy that resulted in the Yamassee War. These rival and generally hostile tribes, said a group of South Carolina merchants in a petition to the king for aid, "never yet had policy enough to form themselves into Alliances, and would not in all Probability have proceeded so far at this time had they not been incouraged, directed and supplied by the Spaniards at St. Augustine and the French at Moville [Mobile] and their other Neighbouring Settlements." In a letter to Lord Townsend, the king's secretary of state, Governor Craven declared that if South Carolina were destroyed, as at one time seemed not improbable, "the French from Moville, or from Canada, or from old France" would take possession and "threaten the whole British Settlements." The Carolina officials could not make the home government understand that the attack was not merely a local Indian outbreak, aimed at South Carolina alone, but that it was a phase of the general policy of the French in their struggle for supremacy in America and was aimed at all the British American dominions.
Even more serious than these wars, because if successful more permanent in their results, were the French plans in the Mississippi Valley. In a memorial to the Board of Trade, in 1716, Richard Beresford, of South Carolina, called attention to the fact that the French along the Mississippi River had already encroached "very far within the bounds of the Charter of Carolina" and had "settled themselves on the back of the improved part of that Province." If permitted to remain there they would become a permanent obstacle to the westward march of English settlements, confining them to the narrow region between the Atlantic and the Alleghanies. Yet all efforts to arouse the home authorities to a realization of the danger were vain. The Lords Proprietors could not, and as long as the Carolinas remained proprietary colonies, the Crown would not lift a hand in their defence. It was not until after South Carolina, in 1719, had thrown off the rule of the Lords Proprietors, largely because of their inability to aid in the defence of the colony, that the Board of Trade manifested any interest in the situation. In 1720 it advised the king that considering that the people of South Carolina "have lately shaken off the Proprietors Government, as incapable of affording them protection, [and] that the Inhabitants are exposed to incursions of the Barbarous Indians, [and] to the encroachments of their European neighbours," he should forthwith send a force for the defence of that colony. But this advice, like the repeated appeals of the colonies, was unheeded and the Carolinas were left to their own resources.
The home government, however, finally awaked to a realization of the stakes at issue and in the third of the series of wars for supremacy in America undertook to co-operate with the colonies on a large scale. The war really began in 1739 when England declared war on Spain, though France did not formally enter the struggle until five years later. In attacking Spain, England's purpose was to break down the Spanish colonial system and open Spanish-American ports to English commerce. The government accordingly planned to strike a blow at some vital point in Spain's American colonies with a combined force of British and American troops. In the summer of 1740, therefore, the king called upon the colonies for their contingents of men and money. This was the first call ever made upon them as a whole for co-operation in an imperial enterprise, and the colonies responded with enthusiasm. Throughout the summer preparations were actively pushed forward both in England and in America, and in October a fleet of thirty ships of the line and ninety transports, carrying 15,000 sailors and 12,000 soldiers sailed from Spithead, England, for Jamaica, where they were joined by American troops from all the colonies except New Hampshire, Delaware, South Carolina, and Georgia. Delaware's contingent was probably counted in that of Pennsylvania, while those from South Carolina and Georgia were probably kept at home to protect their frontiers from attack by the Spaniards of Florida. The other nine colonies sent thirty-six companies of 100 men each. Of these Massachusetts contributed six, Rhode Island two, Connecticut two, New York five, New Jersey two, Pennsylvania eight, Maryland three, Virginia four, and North Carolina four.
In July, 1740, Governor Gabriel Johnston received instructions from the king directing him to convene the Assembly and inform it of the government's plans. The king declared that he "had not thought fit to fix any particular quota" for the colony as he did not want to place any limitation on its zeal, but he expected it to exert itself in the common cause as much as its circumstances would allow. In reply to the governor's message, the Assembly promised to "contribute to the utmost" of its power and assured him that "no Colony hath with more chearfullness contributed than we shall to forward the intended descent upon some of the Spanish Colonies." This promise was promptly made good. The Assembly passed an act levying a tax of three shillings on each poll in the colony, payable, owing to the scarcity of money among the people, in "commodities of the country" at fixed rates, provided adequate machinery for its prompt collection, and directed that warehouses be erected for storing the proceeds. The governor expressed the "highest satisfaction" at the Assembly's action, saying: "You have now given evident proof of your unfeigned zeal for his Majesty's service and considering the circumstances of the country contributed as liberally as any of our neighbouring colonies." He estimated the levy authorized by the Assembly at £1,200 sterling, which was sufficient to equip and subsist four companies of 100 men each until they could join the army at Jamaica when they would be put on the payroll of the Crown.
The governor's call for recruits brought a prompt response. Four companies containing a total of 400 men, a force in proportion to population equivalent to 25,000 at the present time, were quickly enrolled. "I have good reason to believe," wrote the governor to the Duke of Newcastle, "that we could easily have raised 200 more if it had been possible to negotiate the bills of exchange in this part of the continent; but as that was impracticable, we were obliged to rest satisfyed with four companies." Three of these companies were recruited in the Albemarle section, the other at Cape Fear. The Albemarle companies were under command of Captains Halton, Coletrain, and Pratt, the Cape Fear company under Captain James Innes. The former embarked at Edenton early in November, 1740, and sailed for Wilmington where they were joined by Captain Innes' company. Says the Wilmington correspondent of the South Carolina Gazette, November 24, 1740: "The 15th Inst. Capt. James Innes, with his compleat Company of Men, went on board the Transport to proceed for the General Rendezvous. They were in general brisk and hearty, and long for Nothing so much as a favorable Wind, that they may be among the first in Action. Capt. Innes has taken out Letters of Marque and Reprisal, and if any Spanish Ship is to be met with, he doubts not of giving a proper account of them. * * * The Governor and Assembly of this Province proceeded with great Spirit on this Occasion, the lower House chearfully granted an Aid to his Majesty of £1500 Sterling, to assist in Victualling and Transporting their Quot of troops. When so poor a Province gives such Testimony of their zeal and Spirit against our haughty Enemy, it is to be hoped the Ministry at Home will be convinced that it is the Voice of all his Majesty's Subjects, both at home and abroad, Humble the proud Spaniard, bring down his haughty Looks."
From Wilmington the North Carolina companies sailed directly for Jamaica where they joined the united British and colonial forces. The squadron was under the command of Admiral Edward Vernon; the army was first under Lord Cathcart, and after his death under General Wentworth. Sir William Gooch, then governor of Virginia, was in immediate command of the "American Regiments." In February, 1741, the fleet sailed to attack Cartagena on the coast of present-day Colombia. From the first the expedition was doomed to failure. Ill-feeling and rivalry between the land forces and the naval forces thwarted every movement. The only successful effort made throughout the campaign was the assault on Boca-Chica (little mouth), the entrance to the harbor of Cartagena. North Carolina troops participated in this attack. The forts were carried, the fleet entered the harbor, and troops were landed to attack the forces defending the town. This attack on the forces was repulsed with severe losses, heavy rains set in, an epidemic of fever broke out among the troops, and within less than two days half of them were dead or otherwise incapacitated for service. Nothing was left but acknowledgment of defeat, re-embarkation and return to Jamaica. The lives of 20,000 men had been sacrificed to the incompetency and jealousy of the commanding officers. Of the North Carolina contingent but few survived. The Cape Fear company, originally 100 strong, reached Wilmington in January, 1743, reduced to 25 men.
North Carolina's losses on this expedition, however, were not comparable to those she suffered at home. For eight years Spanish and French privateers infested her waters, captured her ships, ravaged her coasts, plundered her towns, and levied tribute upon her inhabitants almost with impunity. In May, 1741, they captured two merchantmen out of Edenton "before they had been half an hour at sea," while the owner of one of them "had the Mortification to see his Vessel and Cargo taken before his face as he stood on the shore." Within the next ten days, four other ships fell victims to the same privateers. On May 12th, a sloop bound from North Carolina to Hull, England, was captured off Cape Fear. In July another merchantman was taken "within the Bar of Ocracoke;" the owner estimated his loss at £700 sterling. The same privateer had already taken six other prizes. In August reports from Wilmington mentioned the capture of a schooner and a sloop besides "many other vessels" bound for that port. The Indian Queen, North Carolina to Bristol, was taken in October. Similar reports run through the succeeding years. In June, 1747, it was reported "that there are now no less than 9 Spanish Privateers cruizing on this coast." The Molly, from Cape Fear to Barbados; the Rebecca, from Charleston to Cape Fear; the John and Mary, from Cape Fear to Bristol, "with a Cargo of Pitch, Tar and Turpentine;" and an unnamed vessel from London to Cape Fear, were but a few of their prizes. In July, 1748, three ships were "cut out of Ocracoke Inlet" by Spanish privateers. Of the great majority of captures no reports are now available, but some idea of the havoc wrought in colonial commerce may be gathered from the shipping reports of the South Carolina Gazette. That periodical reported as clearing between Charleston and North Carolina ports during the five years before the declaration of war, 1735-1739, inclusive, eighty vessels; during the five years, 1744 to 1748 inclusive, the same papers reported as clearing between the same ports only twenty-one vessels.
It is not without interest to note that as the privateersmen revived memories of the deeds of "Blackbeard," so also they made skillful use of the same inlets and harbors that had so often sheltered the famous pirate. "The Spaniards," it was reported, in 1741, "have built themselves Tents on Ocracoke Island; Two of the Sloops lie in Teache's Hole," where they found shelter from the British men-of-war. After cruising about Chesapeake Bay and ravaging the Virginia coast, says a report in July, 1741, they sought safety from the Hector, a 40-gun man-of-war, "in Teache's Hole in North Carolina where they landed, killed as many Cattle as they wanted, and tallowed their Vessels' Bottoms." Another favorite rendezvous was Lookout harbor "where they wood, water, kill Cattle, and carry their Prizes till they are ready to go (with them) to their respective Homes." Men-of-war were afraid to seek them in Lookout harbor because of their "Want of Knowledge of it."
Resistance to the Spaniards was feeble and spasmodic. The Assembly made appropriations for the erection of forts at Ocracoke, Core Sound, Bear Inlet, and Cape Fear, but none of them proved of any service. Fort Johnston, named in honor of the governor, afterwards played an important part in the history of the Cape Fear region, but during the Spanish War was ineffective as a defence against the enemy. In June, 1739, before the declaration of war and in anticipation of it, the king authorized Governor Johnston to issue letters of marque and reprisal against Spanish shipping, and a few privateers were fitted out at Wilmington, but the results of their work were negligible. For instance, in July, 1741, Wilmington merchants fitted out two privateers, one of twenty-four guns, Captain George Walker, the other a small sloop, Captain Daniel Dunbibin, "to go in quest of the Spanish Privateers which infest this Coast," but as late as September no news had been received of them. British men-of-war also patrolled the coast. There were the Hector, forty guns, Captain Sir Yelverton Peyton, the Tartar, Captain George Townsend, the S. Francesco, Captain Bladwell, the Cruizer, and another, name not mentioned, under command of Captain Peacock. But the merchants found grounds for complaining of the lack of vigilance even among the men-of-war, and it was openly charged that "the Spaniards were so encouraged by the Indolence, if not the Cce [cowardice] of Sir Yn [Yelverton], that they ravaged the coast with impunity." Other British commanders, however, were more active. In July, 1741, Captain Peacock compelled the Spaniards to abandon their shelter at Ocracoke and to burn "the Tents they had built on Ocracoke Island." May 26, 1742, the Swift after an all day chase overtook a privateer off Ocracoke Inlet and engaged her in battle. The privateer, however, got the best of the fight, shot away the mainstays and forestays of the Swift, compelling her to put back into Wilmington for repairs, and then escaped in the darkness. A few months later the Swift had better luck, capturing a large Spanish sloop which she brought into Wilmington and converted into a British privateer.
Emboldened by their success, the Spaniards became ambitious. In 1747 they attacked and captured the town of Beaufort which they held for several days and plundered before being driven out. The next year their audacity reached its climax in an attack on Brunswick. September 3, 1748, three Spanish privateers, the Fortune, a sloop of 130 tons, carrying ten 6‑pounders and fourteen swivels, Captain Vincent Lopez, the Loretta, carrying four 4‑pounders, four 6‑pounders, and twelve swivels, Captain Joseph Leon Munroe, and a converted merchantman, appeared off the Cape Fear bar. Two days later they dropped anchor off Brunswick and opened fire upon the shipping there. At the same time a force which they had landed below the town attacked from the land side. Taken by surprise, the inhabitants fled in confusion. The enemy thereupon seized five ships "and several small craft" that were in the harbor, captured the collector of the port and several other men, and "plundered and destroyed everything without fear of being disturbed."
But the inhabitants quickly recovering from their surprise organized a force of eighty men, under command of Captain William Dry, and returned to the attack. They in turn surprised their enemy in the midst of their plundering, killed or captured many of them, drove the others to the shelter of their ships, and were vigorously "pursuing their good fortunes till they were saluted with a very hot fire from the commodore sloop's great guns, which, * * * however, did not prevent their killing or taking all the stragglers." The Fortune continued the bombardment till suddenly "to our great amazement and (it may be believed) joy, she blew up." Most of her crew, including her commander and all of his officers, perished in the explosion or were drowned. Thereupon, the Loretta, which had gone up the river in pursuit of a prize, "hoisted bloody colours," dropped down the river again, and opened fire "pretty smartly" on the town. But this turned out to be mere bluster. Soon lowering her "bloody colours," she "hoisted white in her shroud" and sent a flag of truce ashore "desiring to have liberty to go off with all the vessels, and promising on that condition to do no further damage." But Captain Dry boldly replied "that they might think themselves well off to get away with their own vessel, that he could not consent to their carrying away any other, and would take care they should do no more damage; but he proposed to let them go without interruption if they would deliver up all the English prisoners they had, with everything belonging to the place." The Spaniard's only answer to this defiance was to abandon all of his prizes except the Nancy, which he had armed and manned with a Spanish crew, and to slip quietly down the river under a white flag. He anchored off Bald-Head and let it be known that he was ready to negotiate for an exchange of prisoners. This was soon effected through a commission sent by Major John Swann who had arrived from Wilmington with 130 men and taken command. The Spaniard then put to sea and disappeared.
In this attack, the Carolinians escaped without the loss of a man. They had two slightly wounded, none killed. Their property losses, however, were heavy for what the Spaniards "did not carry away they broke or cut to pieces." Nevertheless the Carolinians won a great triumph, for as they justly boasted, "notwithstanding our ignorance in military affairs, our want of arms and ammunition (having but 3 charges per man when we attacked them), the delay of our friends in coming to our assistance, and the small number [we] were composed of (many of which were negroes)," they had beaten off a much superior enemy consisting of 220 men and three armed ships, compelling them to abandon their prizes, and causing them a loss of 140 men, more than one-half of their force, including their commanding officer.
The attack on Brunswick was made more than two months after peace had been declared. On June 17, 1748, the Board of Trade wrote Governor Johnston, "Preliminaries for a Peace have been signed at Aix-la-Chapelle by the Ministers of all the Powers engaged in the war." This treaty, however, settled none of the questions at issue between the rivals in America; it merely afforded them a breathing spell in which to prepare for a greater struggle yet to come. The French, much more alive to the situation than their rivals, began at once to take advantage of this lull in the contest. Realizing that something more than mere assertion of title was necessary to secure to them the territory along the Ohio and the Mississippi, which formed so large a part of New France, they built a series of strong forts to connect the two distant heads of their empire. By the middle of the eighteenth century, therefore, the long frontier between Montreal and New Orleans was defended by more than sixty forts. Many of these forts stood on land claimed by New York, Pennsylvania, Virginia, and the Carolinas, yet in these colonies, only a few people clearly appreciated the significance of the French movements, or understood how to check them. The most significant of the English counter-movements was the organization in London and Virginia of the Ohio Land Company for planting English settlements on the east bank of the Ohio River. But this region was also claimed by the French and it was here that the first clash came. In 1753 Governor Robert Dinwiddie of Virginia learning that the French were encroaching upon this territory sent Major George Washington on his famous mission to demand their withdrawal. Upon their refusal, Dinwiddie ordered Washington to seize and fortify the point where the Alleghany and Monongahela rivers unite to form the Ohio. But Washington had scarcely begun his work when a superior force of Frenchmen appeared, drove him away and erected on the site he had chosen a strong fortress which they called Fort Duquesne. Thus began the great war which was to decide the mastery of North America.
In this contest the English had the advantage of numerical strength and interior lines, but these advantages were fully offset by the unity of command and purpose which prevailed with the French. From Quebec to New Orleans, all New France moved in obedience to a single autocratic will. The English on the other hand were divided into thirteen separate governments, politically independent of each other, and largely self-governing. Not a soldier could be enrolled, not a shilling levied in any English colony until a popular assembly had been persuaded of its wisdom; and no concerted movement could be undertaken until many different executives had been consulted and many different legislative bodies, jealous of their authority and hostile to every suggestion that conflicted with their local interests, had given consent. The French of course were aware of this situation and counted it as one of the strong elements in their favor. "The French," observed Governor Dinwiddie, in 1754, "too justly observe this want of connection in the Colonies, and from thence conclude (as they declare without reserve) that although we are vastly superior to them in Numbers, yet they can take and secure the Country before we can agree to hinder them." He thought that an act of Parliament might be necessary to cure the evil. The necessity for co-operation was clearly understood in England and the government urged it upon the colonies in almost every dispatch that crossed the Atlantic. In July, 1754, President Rowan of North Carolina received a rebuke from the government because of his "total Silence upon that part of His Majesty's orders which relate to a concert with the other Colonies." But except among a few far-sighted leaders no sentiment existed in any of the English colonies in favor of a closer union. In 1754, at the beginning of the great war, the colonies rejected with scant ceremony the Albany Plan of Union which, especially as a war measure, had many excellent features to recommend it.
The attitude of North Carolina toward the Albany Plan was typical of the attitude of the other colonies. Governor Dobbs laid it before the Assembly at its December session in 1754 and asked for its consideration saying that the king had instructed him "to promote a happy union among the provinces for their General Union and Defence." But the Assembly was not interested in it. It merely ordered the plan to be printed and distributed among its members "for their Mature Consideration," but postponed discussion to the next session and then forgot it. Other colonies gave it even less consideration. The colonies had to drink deep of the cup of bitter experience, of suffering and disaster, before they were ready for a real union.
In another respect, too, the French had an advantage over the English. The French settlements were little more than military outposts, garrisoned by trained soldiers, fully equipped with the best arms, and commanded by experienced officers. The English colonies on the other hand were industrial and agricultural communities, thoroughly non-militaristic and almost wholly unprepared for war. Here again the situation in North Carolina was typical. Although that colony had just gone through the Spanish War in which its troops had been defeated, its coasts ravaged and its towns plundered, the lessons of that experience had been lost upon both governor and people. Not a fort protected its long frontier, and the money appropriated for defences along the coast had been largely unspent. No fortifications had been erected at Ocracoke, Lookout, or Topsail Inlet. At Cape Fear, Fort Johnston was still unfinished and almost totally unmanned. Though the plan called for sixteen 9-pounders and thirty swivels, the fort contained only five 6-pounders and four 2-pounders, and had no regular garrison.
Preparations for offense were no better. On paper the militia numbered more than 15,000 infantry and 400 cavalry, but long neglect had destroyed its organization. President Rowan complained in 1753, that from the indolence of Governor Johnston, the militia had fallen into decay. One of the first acts of Governor Dobbs upon assuming the administration in 1754 was to call for a militia return. The result was alarming. There were twenty-two counties each of which was supposed to have a fully organized regiment. The returns showed that in most of them there were organizations in name only, and in many not even that. Beaufort had no colonel. In Bertie County eight companies were "without officers." Five of Edgecombe's fourteen companies reported their captains "removed, laid down, or dead." Every one of Granville's eight companies was without a captain. In New Hanover the major had "thrown up" his commission. In Orange the colonel had resigned, five captains had left the county or refused to serve, fourteen lieutenancies and ensigncies were vacant. Tyrrell reported: "The Coll. dead, the Lieut. Coll. and Major have neglected to act." Four counties made no returns.
The disorganization was bad, the equipment worse. Governor Dobbs stated that the militia were "not half armed" and that such arms as they had were "very bad." Great was his alarm upon finding "that there is not one pound of [public] gunpowder or shot in store in the Province, nor any arms;" nor were there "twelve barrels of gunpowder in the Province in Traders hands." He felt compelled to appeal to the king for ammunition because "at present we have no credit and must pay double price if any is imported by merchants." He afterwards learned that Beaufort County had on hand fifty pounds of public gunpowder. Beaufort also reported 150 pounds of large shot, but "no arms in the publick store." Chowan had 400 pounds of bullets and swan shot, but no powder and no arms. The militia of Johnston County were "indifferently armed," and without ammunition. Bladen, Carteret, Duplin, Edgecombe, Granville, New Hanover, Northampton, Onslow, Pasquotank, Perquimans, Tyrrell, all reported "no arms," or "no arms or ammunition." Six counties made no report on arms and ammunition, probably because they had none. In Granville County the men were drilled with wooden clubs! The situation was somewhat relieved by a gift from the king, in 1754, of 1,000 stand of arms which were distributed to the exposed counties on the western frontier, to the counties on the coast, and to the companies raised for service in Virginia. But even this relief was largely nullified by the conduct of the troops in Virginia, who, after Braddock's defeat, "deserted in great numbers," taking their arms and equipment away with them.
Currency Issued During French and Indian War
Anticipating hostilities with the French, the king in August, 1753, instructed the governors of all the English colonies "in case of Invasion" to co-operate with each other to the fullest extent. Immediately after the attack on Washington, therefore, Governor Dinwiddie hastened to call upon the governors of Pennsylvania, New York, Maryland, New Jersey, Massachusetts, South Carolina, and North Carolina for assistance in driving the French from Fort Duquesne. President Rowan, then acting-governor of North Carolina, met his Assembly February 19, 1754, and laid the situation before it. He felt sure, he said, that the people of North Carolina would not "sitt still and tamely see a formidable forreign Power" dispossess the English of their western territory, and he asked the Assembly to exert itself "to the utmost in the common cause" by voting at once "a good and seasonable supply" for the support of a military force to assist in the expulsion of the French and their allies. His appeal found a ready response. The Assembly declared that the action of the French "must fire the Breast of every true Lover of his Country with the warmest Resentments" and "certainly Calls for a speedy Remedy." It promised "to furnish as many forces as we can conveniently spare towards this so necessary an Expedition" and "to consider of such ways and means Immediately to supply the Treasury as the Circumstances of our Constituants will admitt" for their maintenance.
The Assembly acted promptly and liberally. Without a dissenting vote it appropriated £12,000 "for raising and providing for a regiment of 750 effective Men to be sent to the Assistance of Virginia." President Rowan did not expect the maintenance of these men to fall upon North Carolina after their arrival in Virginia, so when he ascertained later that each province must maintain its own soldiers, he realized that the £12,000 would be insufficient to support 750 men. Accordingly he was compelled to reduce the force to 450 men. But even this number was 150 more than Virginia raised for the same expedition although it was for the defence of her own soil. The regiment was placed under command of Colonel James Innes who had commanded the Cape Fear company in the Cartagena expedition. Governor Dinwiddie hailed his appointment with great satisfaction, saying to President Rowan, "I am glad Your Regiment comes under the Command of Colo. Innes, whose Capacity, Judgment and cool Conduct, I have great Regard for." He testified to the sincerity of his sentiments by appointing Innes commander-in-chief of the expedition. Colonel Innes hastened at once to the front, leaving his regiment to follow. He arrived at Winchester, Virginia, July 5th, two days after the defeat of Washington's Virginians at Great Meadows; thence he hurried on to Wills Creek, where he afterwards built Fort Cumberland, about 140 miles from Fort Duquesne, and there took formal command of the colonial forces.
North Carolina's response to Virginia's appeal for aid was liberal, but her liberality was nullified by extravagance and bad management. President Rowan fixed the pay of privates at three shillings a day and that of officers in proportion, an extravagance of which Dinwiddie very justly complained because of its effect on the Virginia troops who received only eight pence a day. Rowan also invested large sums in pork and beef to be sent to Virginia and sold for Virginia currency with which to pay the troops after their arrival in that colony, and on most of these transactions he lost heavily. The organization of the regiment proceeded slowly and this delay too added to the expense. Consequently the £12,000 appropriated by the Assembly was entirely expended before the troops ever reached the front, and when they arrived at Winchester, the place of rendezvous, they found that no provisions and no ammunition had been collected there for them. Their pay, too, was in arrears. Colonel Innes appealed to Governor Dinwiddie for advances, but Dinwiddie had no funds which he could use for this purpose. "I can give no orders for entertaining your regiment," he replied, "as this Dominion will maintain none but their own forces." Consequently the North Carolina regiment had scarcely reached Winchester before it was disbanded and sent home without having struck a blow at the enemy.
That the struggle had opened so unfavorably for the English was due primarily to their lack of preparation and of co-operation. In October, 1754, therefore, Governor Dinwiddie, Governor Horatio Sharpe of Maryland, and Governor Dobbs held a conference at Williamsburg to formulate plans for a joint attack on Fort Duquesne. Dobbs laid these plans before his Assembly in December and asked for men and money to carry them into execution. The Assembly responded by authorizing a company of 100 men for service in Virginia and another of fifty men for service on the North Carolina frontier, and by voting £8,000 for their subsistence. The company destined for Virginia was placed under the command of the governor's son, Captain Edward Brice Dobbs, formerly a lieutenant in the English army. But before the plans of the Williamsburg conference could be carried out, they were superseded by others on a much larger scale, arranged in April, 1755, at a conference held at Alexandria, Virginia, between several of the colonial governors and General Edward Braddock, who had been sent from England to take command of the forces in Virginia for the reduction of Fort Duquesne. These new plans called for simultaneous campaigns against the French on the Ohio, on the Niagara, and on Lake Champlain. Although North Carolina was not represented at this meeting, both governor and Assembly entered heartily into the arrangements. Captain Dobbs was ordered to move his company at once to Alexandria where Braddock was assembling a force for the expedition against Fort Duquesne. Three months later all British America was thrown into consternation by the disastrous ending of this expedition. Dobbs' North Carolinians, being absent at the time from the main army on a scouting expedition, escaped destruction, but many of them, sharing the general demoralization of the British forces, deserted and made their way back home. With what remained Captain Dobbs joined Colonel Innes at Fort Cumberland, where he continued for nearly a year helping to guard the Virginia frontier.
Immediately after Braddock's defeat, Governor Dobbs convened the Assembly in special session and in a sensible, well-written address pointed out the seriousness of the situation and suggested that "a proper sum cheerfully granted at once will accomplish what a very great sum may not do hereafter." The Assembly promptly voted a supply of £10,000 and authorized the governor to raise three new companies "to protect the Frontier of this Province and to assist the other Colonies in Defence of his Majesty's Territories." To command these companies, the governor commissioned Caleb Grainger, Thomas Arbuthnot, and Thomas McManus captains and sent them to New York to aid in the operations against the French at Niagara and Crown Point. At the same time he ordered Captain Dobbs to withdraw his company from Fort Cumberland and join the other North Carolina companies in New York. Captain Dobbs, promoted to the rank of major, was appointed to command the battalion. The governor declared that he took this action because he found that if Captain Dobbs' company remained in Virginia it would only do guard duty on the frontier, without making any attempt against Fort Duquesne, since the English there had no officers competent to make a plan of operations, nor any artillery; nor was there any likelihood of any assistance from either Maryland or Pennsylvania, "as they don't seem Zealous for the Common Cause of the Colonies." The North Carolina troops arrived at New York May 31st, and shared in the disasters which resulted in the loss of Oswego and the failure to wrest Crown Point from the French. Since the capture of Oswego threw open to the enemy the entire English frontier from New York to Georgia, problems of home defence so strained the resources of the colony that North Carolina was unable to continue to support her troops in New York; the governor accordingly directed their officers to try to induce the men to enlist either in the Royal American Regiment, or in the regulars. Those who took neither course were allowed to return to North Carolina.
After the loss of Oswego, the Earl of Loudoun, commander-in-chief of the British forces in America, notified the southern governors to prepare for the defence of their frontiers since the French then had free access by the Great Lakes to send troops to the Ohio, and also to attack them through their Indian allies. The situation was so serious that he called a conference at Philadelphia, March 15, 1757, of Dobbs, Dinwiddie, Sharpe, and Denny of Pennsylvania, that he might "concert in Conjunction with them a Plan for the Defence of the Southern Provinces." He informed the governors that since the greater part of the British troops in America would be needed in the northern campaign, he could give the southern colonies only 1,200 regulars; for the rest they would have to shift for themselves. It was agreed, therefore, that they should raise 3,800 men, distributed as follows: Pennsylvania 1,400, Maryland 500, Virginia 1,000, North Carolina 400, and South Carolina 500, making, with the regulars, 5,000 men. Of these, 2,000 men were to be used in defence of South Carolina and Georgia which were threatened with attack by sea as well as by land. Returning from this conference, Dobbs immediately convened the Assembly, and in a brief and pointed message explained the agreement he had made for the province and asked for the means to carry it out. The Assembly promised, in spite of the large debt already contracted in the common cause, to vote the necessary supplies. An act was accordingly passed appropriating £5,300 and providing for 200 men "to be imployed for the service of South Carolina or at home in case not demanded or wanted there." These troops were speedily raised and ordered to South Carolina under command of Colonel Henry Bouquet, the British officer assigned to command in the southern colonies. At the same time, Governor Dobbs ordered the militia in the counties along the South Carolina border to be ready to join Colonel Bouquet at his command without waiting for further orders from him. However, they were never called upon for active service.
The summer of 1757 was one of the gloomiest in the annals of the British Empire. Success everywhere crowned the arms of France. In Europe disasters followed each other so rapidly, and some of them were so disgraceful, that Lord Chesterfield exclaimed in despair, "We are no longer a nation!" In America, Braddock's army had been destroyed; Oswego had fallen, the Crown Point expedition had failed; Fort William Henry had been captured. New France "stretched without a break over the vast territory from Louisiana to the St. Lawrence,"1 and not an English fort or an English hamlet remained in the basin of the St. Lawrence, or in all the valley of the Ohio. In the wigwams of the red men the prestige of the British arms had been so utterly destroyed that the Indians called Montcalm, "the famous man who tramples the English under his feet."2 But a change was at hand. In July, a new force came into the contest which was destined to wrest from France every foot of her American empire and assure to men of the English-speaking race complete supremacy on the continent of North America. This force was the genius of William Pitt, "the greatest war minister and organizer of victory that the world has seen."3 Under his leadership the year 1758 was as glorious as that of 1757 had been gloomy. In every quarter of the globe the arms of England were victorious. In Europe and in Asia victory followed victory with dazzling rapidity. In America Louisburg fell, Fort Frontenac surrendered, and Fort Duquesne was captured. "We are forced to ask every morning," wrote Horace Walpole, "what new victory there is, for fear of missing one."
The Assembly of North Carolina had quarreled with Dobbs, but the words and spirit of Pitt inspired it, "notwithstanding the indigency of the country," to renewed efforts in support of the war. On December 30, 1757, Pitt called upon the province, together with other southern colonies, for a force to reduce Fort Duquesne. He appealed to their pride and patriotism by declaring that he would not "limit the Zeal and Ardor of any of His Majesty's Provinces" by suggesting the number of troops for it to raise, but asked each for "as large a Body of Men * * * as the Number of its Inhabitants may allow." The North Carolina Assembly, pleading as its excuse for not doing more that the colony's debts incurred in defense not of itself alone, but also of Virginia, New York, and South Carolina, amounted "to above forty Shillings each Taxable," which was "more than the Currency at present circulating among us," voted an aid of £7,000 and 300 men. It requested that these troops be sent to General John Forbes, whom Pitt had sent to Virginia to command the expedition, "without loss of time." Governor Dobbs placed this battalion under the command of Major Hugh Waddell, a young officer whose services on the North Carolina frontier had already attracted wide attention. Waddell raised, organized, and equipped his battalion with dispatch, and marched them to join the forces of General Forbes.
Very different was Forbes' course from that of Braddock. No foolish boastings of the superior prowess of British regulars, no equally foolish contempt for the prowess of his foe, no scorn of his provincial troops and their officers, no neglect of the principles of frontier warfare, betrayed him to his ruin. Among his colonial troops Hugh Waddell and his Carolinians stood high in his esteem. Waddell, wrote Governor Dobbs, "had great honour done him being employed in all reconnoitering parties; and dressed and acted as an Indian; and his Sergeant Rogers took the only Indian prisoner who gave Mr. Forbes certain intelligence of the Forces in Fort Duquesne upon which they resolved to proceed." The reference to Sergeant Rogers is to the following incident. Winter had set in and the British general, with his army in a mountainous region, ill prepared to pass the winter in such a wilderness, or to lay siege to a strongly fortified fort, and without accurate information of his enemy's force, was in a dilemma whether to retire to a more favorable position for the winter, or to push on. He therefore offered a reward of £50 to any one who would capture an Indian from whom information as to the enemy's situation could be obtained. Sergeant John Rogers, of Waddell's command, won this reward by bringing in an Indian who told Forbes that if he would push resolutely on, the French would evacuate Fort Duquesne. The British commander followed the red man's advice. Upon his approach, the French garrison fled, and Fort Duquesne, dismantled and partially destroyed, fell without a blow into the hands of the English general who immediately renamed it Fort Pitt, because as he said in a letter to Pitt, "it was in some measure the being actuated by your spirit that now makes me master of the place."
The victories of 1758, together with the fall of Quebec in 1759, removed the French as a serious factor in the war and brought peace with them in sight. But the war was not at an end, for the colonies still had to reckon with the Indians. In the North the confederated tribes under Pontiac continued to make war on the English, while in the South the Cherokee warriors who had acted as allies of the British against Fort Duquesne returned from that expedition to arouse their tribe to hostilities. In 1755 they could call to arms more than 2,500 warriors. Besides the Cherokee, the two Carolinas had also to reckon with the Catawba who had, in 1755, about 250 warriors. Both Cherokee and Catawba were nominally friends of the English, but for several years the French had been undermining the English influence with such success that at the outbreak of the French and Indian War the preference of the Indians for the French was but thinly veiled and nothing but policy prevented their joining forces with their new friends. The English were fully aware of this situation and took immediate steps to hold both nations to their allegiance.
The outbreak of war on the Ohio was accompanied by manifestations of hostility by the Carolina Indians. In December, 1754, therefore, the Assembly provided for a company of rangers for the protection of the frontier. Governor Dobbs entrusted this work to Hugh Waddell, a young Irishman, not yet twenty-one years of age, and but recently arrived in the province, who was, wrote Dobbs, "in his person and character every way qualified for such a command, as he was young, active, and resolute." The governor's choice was fully justified by the results. The young officer acted with energy in raising and organizing his company, and was soon scouting on the frontier where his presence tended to keep the Indians quiet. It soon became evident, however, that a larger force and some permanent forts would be necessary. In the summer of 1755, therefore, Governor Dobbs visited the western settlements to study the situation. He was on this tour when he received information of Braddock's defeat. Hastening to New Bern, he convened the Assembly, September 25, and in a forceful address set forth the defenceless condition of the province, the growing influence of the French over the Cherokee Indians, and the necessity for prompt action to defeat their schemes. Besides sending aid to New York this Assembly ordered that a fort be erected on the North Carolina frontier. The execution of this work was entrusted to Captain Waddell who, selecting a site "beautifully situated in the fork of Fourth Creek, a Branch of the Yadkin River about twenty miles west of Salisbury," erected there a fort which he named in honor of the governor. In 1756 a committee of the Assembly, of which Richard Caswell was a member, after an inspection reported that the fort was "a good and substantial Building" and that its garrison of forty-six men appeared to be well and in good spirits.
Besides his military duties, Captain Waddell was charged with diplomatic duties. In February, 1756, as the representative of North Carolina he was associated with Peyton Randolph and William Byrd, representatives of Virginia, in negotiating an offensive and defensive alliance with the Cherokee and Catawba nations. The noted chief, King Haiglar, represented the Catawba and Ata-kullakulla the Cherokee. Ata-kullakulla was one of the most remarkable Indians of whom we have any record. Bartram, the eminent botanist and traveller, described him as a man of small stature, slender build and delicate frame, but of superior abilities. Noted as an orator and a statesman, he was "esteemed to be the wisest man of the nation and the most steady friend of the English." The treaties signed by these representatives stipulated that the English should build three forts within the Indian reservations to protect them against the French while the Cherokee were to furnish 400 warriors to aid the English in the North. Accordingly South Carolina built Fort Prince George at Keowee on the headwaters of the Savannah and Virginia built Fort Loudoun on the Little Tennessee at the mouth of the Tellico. It fell to North Carolina to build a fort for the protection of the Catawba, but Captain Waddell had scarcely begun work on it, on the site of the present town of Old Fort, when he was ordered to stop as the Catawba had repented of their agreement and desired that no fort be built among them. The Cherokee also became alarmed when a garrison of 200 men was sent to Fort Loudoun, which Major Andrew Lewis of Virginia was building, and their great council at Echota ordered the work stopped and the garrison withdrawn, saying plainly that they did not want so many armed white men among them. Even Ata-kullakulla was now in opposition to the English. Despite the treaties, therefore, the situation was highly unsatisfactory and there were strong grounds for believing that several murders along the Catawba and Broad rivers in North Carolina were the joint work of "French Indians" and Cherokee.
Nevertheless, the Cherokee, in accordance with their agreement, sent a considerable body of warriors to aid the English against Fort Duquesne. This policy of calling in the aid of Indians in military affairs was, to say the least, always of doubtful wisdom; in this case it was disastrous. The trouble began in the spring of 1756 with an expedition which Major Andrew Lewis undertook against the hostile Shawano on the Ohio, with 200 white troops and 100 Cherokee. The expedition ended in disaster. Some of the Cherokee, returning home having lost their own horses, captured some horses which they found running loose and appropriated them to their own use. Thereupon the Virginia frontiersmen fell upon them, killing sixteen of their number. At this outrage the hot blood of the young warriors, who were none too friendly to the English at the best, flared up in a passion for immediate revenge. The chiefs, however, counseled moderation until reparation could be demanded of the colonial governments in accordance with their treaties. But Virginia, North Carolina, and South Carolina all refused to take any action in the matter. While the women in the wigwams of the slain warriors were wailing night and day for their unavenged kindred, and the Creeks, who were in alliance with the French, were taunting the Cherokee warriors with cowardice for submitting so tamely to their wrongs, came news of the fall of Oswego and other English disasters in the North. The Cherokee thirst for revenge was now mingled with contempt for English arms, and the young men could no longer be restrained. They fell upon the back settlements and spread terror far and wide until Governor Dobbs sent sufficient reinforcements to Captain Waddell to enable him to check the ravages of the enemy.
Thus the situation remained throughout 1757 and 1758. Murders by the Indians followed by prompt reprisals by the whites kept both in a state of constant suspicion. While they were in this inflammable state of mind, 150 Cherokee warriors were sent to join the English in defence of the Virginia frontier. They were unruly and dangerous allies, being, as Governor Dinwiddie said, "a dissatisfied set of people." The capture of Fort Duquesne, November 25, 1758, merely accentuated the danger, for the French driven from the Ohio immediately concentrated their intrigues upon the tribes on the Tennessee and the Catawba. Depredations on the back settlements by "French Indians" became more and more frequent, and their influence over the Cherokee became daily more apparent. In May, 1759, both the Carolinas were alarmed by reports of "many horrid murders" committed by the Lower Cherokee along the Yadkin and the Catawba. In July came another report of murders in the vicinity of Fort Dobbs by bands of Middle Cherokee. The white settlers, in great alarm, were abandoning their homes and "enforting themselves," some in Fort Dobbs, others among the Moravians at Bethabara. Governor Dobbs hastily withdrew sixty men from Fort Granville at Ocracoke and Fort Johnston and sent them with some small cannon to the defence of the West with orders to cooperate with the militia of Orange, Anson and Rowan counties. Hugh Waddell, promoted to the rank of colonel, was again sent to Fort Dobbs to take command on the frontier. He had scarcely reached his post when he received orders to hasten to the aid of Governor Lyttleton of South Carolina who was conducting an expedition against the Lower Cherokee, but while on the march with his rangers and 500 militia, he was halted by an express from Governor Lyttleton who had made peace with the enemy.
This peace, however, was of short duration. No sooner had Lyttleton withdrawn his forces from Fort Prince George than Oconostota, the young war chief, who had suffered personal injuries at the hands of Governor Lyttleton, attacked the fort after treacherously murdering its commanding officer. War immediately broke out along the whole frontier. On the night of February 27, 1760, the dogs at Fort Dobbs by "an uncommon noise" warned Colonel Waddell that something unusual was going on outside. Investigation showed that the fort was surrounded by Cherokee warriors. After a hot fight Waddell beat them off with serious losses. Another band preparing for a night assault on Bethabara was frightened away by the ringing of the church bells. Still others laid waste the settlement at Walnut Cove. Across the mountains, Oconostota laid siege to Fort Loudoun. In June, 1760, a relief expedition under Colonel Archibald Montgomery, consisting of 1,600 Scotch Highlanders and Americans, penetrated the Cherokee country as far as Echoee, near the present town of Franklin, where in a desperate engagement with the Cherokee, June 27, 1760, Montgomery was defeated and compelled to retreat to Fort Prince George. His retreat sealed the fate of Fort Loudoun. The garrison, after being reduced to the necessity of eating their horses and dogs, capitulated on the condition that they be allowed to retire unmolested with their arms and sufficient ammunition for the march, leaving to the enemy their remaining warlike stores. Unfortunately the commanding officer, Captain Demeré, failed to carry out these terms in good faith, and the Indians, discovering his breach of the treaty, fell upon the retreating soldiers, killed Demeré and twenty-nine others and took the rest prisoners.
Harrowing reports of atrocities and butcheries, which continued to spread throughout Virginia, North Carolina, and South Carolina, aroused those colonies to a grim determination to put an end to the power of their ruthless foes. A campaign was accordingly planned in which the three colonies were to have the assistance of Colonel James Grant and his regiments of Scotch Highlanders. In June, 1761, Grant assembled at Fort Prince George an army consisting of regulars, colonial troops, a few Chickasaw Indians and almost every remaining warrior of the Catawba, numbering 2,600 men. Refusing Ata-kullakulla's request for a friendly accommodation, Grant pushed rapidly forward into the Cherokee country along the trail followed the previous year by Montgomery, until he came within two miles of Montgomery's battlefield. There on June 10th he encountered the Cherokee upon whom he inflicted a decisive defeat. He drove them into the recesses of the mountains, destroyed their towns, burned their granaries, laid waste their fields, and "pushed the frontier seventy miles farther to the west." The Cherokee, compelled to sue for peace, sent Ata-kullakulla to Charleston where he signed a treaty that brought the war to an end. In the meantime, Virginia troops had invaded the country of the Upper Cherokee and on November 19th at the Great Island of Holston, now Kingsport, Tennessee, forced them to sign a treaty independently of the middle and lower towns. These blows broke the power of the Cherokee, who were never again strong enough to stay the westward march of the white race.
Although the fall of Quebec definitely decided the contest as between France and England, peace between the two powers was not signed until 1763. By this treaty France and Spain ceded to England all their North American possessions east of the Mississippi River. The probable effect on the Indians of the removal of their French and Spanish allies from this region was a problem which gave the British government serious concern; and to allay any possible suspicion and alarm which it might occasion among the southern tribes, the king instructed the governors of Virginia, North Carolina, South Carolina, and Georgia to hold a conference with them at Augusta, Georgia, and explain to them "in the most prudent and delicate Manner," the changes about to take place. This congress met November 5, 1763. Present were Lieutenant-Governor Francis Fauquier of Virginia, Governor Arthur Dobbs of North Carolina, Governor Thomas Boone of South Carolina, Governor James Wright of Georgia, John Stuart, Indian agent for the Southern Department, twenty-five chiefs and 700 warriors of the Chickasaw, Choctaw, Creek, Catawba, and Cherokee nations. Six days of oratory and feasting resulted in a treaty of "Perfect and Perpetual Peace and Friendship" between the Indians and the English, which provided for mutual oblivion of past offenses and injuries, the establishment of satisfactory trade relations, the punishment by each party of offenders of its own race for crimes against members of the other race, and the fixing of the boundaries of the Indian reservations. On November 10th the four governors and the Indian agent, on the part of the king, and the twenty-five chiefs, on the part of their tribes, signed the treaty. The event was celebrated by the booming of the guns of Fort Augusta and the distribution among the Indians of £5,000 worth of presents sent them by King George.
While these events were transpiring on the frontier, French privateers were busy along the coast. Immediately after the declaration of war, using French and Spanish ports in the West Indies as bases, they began to appear off the Carolina coast and to reenact the scenes of the Spanish War. The defenseless state of the coast gave them ample opportunity for carrying on their work. On one occasion, "for want of a Fort to defend the entrance and Channel" of the Cape Fear, "the Privateers seeing the masts of the Ships at anchor in the road within the Harbour, over the sandy Islands, went in and cut out the ships and carried them to Sea." Such coast fortifications as had been constructed were "Incapable of Defence for want of Artillery," which both governor and Assembly vainly begged the home government to supply, but some protection to shipping was afforded by American privateers. A few, sailing under letters of marque and reprisal issued by Governor Dobbs, were fitted out at Wilmington and Brunswick. In the spring of 1757 the brigantine Hawk, armed with 16 carriage guns and 20 swivels, manned with 120 men, Thomas Wright captain, and the sloop Franklin, armed with 6 carriage guns and 10 swivels, manned with 50 men, Robert Ellis captain, sailed out of Cape Fear River. Some months later came a report that the Hawk sailing into "a French port in Hispaniola" had taken there "a pretended Danish Vessel with 135 Hogsheads of Sugar [and] 30 Barrels of Coffee." Occasionally, too, a British man-of-war cruising off the coast would look in at Cape Fear and other North Carolina ports. But they were not as assiduous as they might have been in the performance of their duty. On March 22, 1757, Governor Dobbs declared that H. M. S. Baltimore, which was supposed to be stationed at Cape Fear, had not been at her station three weeks all told since his arrival in North Carolina; and at another time he charged that her captain spent the winter months at Charleston because there were "no balls or entertainments" at Cape Fear. It is not surprising, therefore, that merchants complained that "notwithstanding our great superiority in the West Indies," French privateers had captured seventy-eight English and American vessels, some of which were owned by North Carolina merchants, and carried them as prizes to Martinique. But after 1757 the navy, like the army, coming under the spell of Pitt's genius, began to display greater zeal and activity in running down the enemy. Captain Hutchins, H. M. S. Tartar, reported in June, 1759, that during a cruise of three days off Ocracoke he had neither seen nor heard of a French privateer. Three months later, Wolfe's triumph at Quebec put an end to privateering in American waters.
News of the fall of Quebec reached Brunswick October 24th. "Our Governour upon this occasion," wrote the Brunswick correspondent of the South Carolina Gazette, "ordered a tripple discharge of all the cannon at this town and Fort Johnston, all the Shipping displayed their colours and fired 3 rounds; and yester evening was spent in an entertainment at his excellency's in illuminations, bonfires and all kinds of acclamations and demonstrations of joy. Today's rejoicings are repeated at Wilmington."
The war had borne heavily on North Carolina both in men and money. It is impossible to say how many soldiers the colony raised, as no accurate returns exist; indeed, none were ever made. At various times, however, the Assembly authorized the recruiting of more than 2,000 men and there is no reason to suppose that they were not enrolled; there were indeed probably more, for many a settler took down his musket and went forth to war on the frontier whose name was never entered on any muster roll. Nor does this number include the militia who were called into active service but of whose service no records exist. More than half of the 2,000 provincials authorized by the Assembly were sent into service in other colonies. Of North Carolina's financial contributions, more accurate information is available. On November 24, 1764, Treasurer John Starkey reported to the Assembly that since 1754 the colony had issued £72,000 of proclamation money, current as legal tender at the rate of four for three of sterling. Of this amount, £68,000 were still in circulation in 1764. The Assembly also issued for war purposes treasury notes bearing interest at 6 per cent to the amount of £30,776, of which in 1764 £7,000 were still out. The war, therefore, had cost North Carolina £102,776, of which £27,776 had been paid, leaving a debt of £75,000. Reckoning the population at 130,000, the public debt contracted in support of the war amounted to upwards of 15s per capita. For the redemption of this war debt the Assembly levied a tax of 4s on the poll and a duty of 4d a gallon on spirituous liquors. During the war Parliament appropriated £200,000 to reimburse all the colonies for their expenditures, and an additional £50,000 for Virginia, North Carolina, and South Carolina. A quarrel between the governor and the Assembly over the control of this fund resulted in North Carolina's receiving only £7,789 from both funds, which certainly was much less than her just share.
Over against the colony's losses and expenditures, however, may be placed the benefits resulting from the expulsion of the French from her western territory and the removal of the Cherokee from the path of her westward expansion. To these material results must be added the even greater moral benefits, viz., the breaking down of many of the barriers of local prejudices due to her former isolation and the germination of a sense of her common interest and common destiny with the rest of British America which, like the other colonies, she brought out of her experience in this first continental event in American history.
1 Green: Short History of the English People. Revised edition, p. 748.
2 Parkman: Montcalm and Wolfe, Vol. I, p. 489.
3 Fiske: New France and New England, p. 315.
| http://penelope.uchicago.edu/Thayer/E/Gazetteer/Places/America/United_States/North_Carolina/_Texts/CBHHNC/1/15*.html | 13
23 | HISTORY OF ZIMBABWE
Mapungubwe and Great Zimbabwe: 11th – 15th c. AD
The plateau between the rivers Zambezi and Limpopo, in southeast Africa, offers rich opportunities for human settlement. Its grasslands make excellent grazing for cattle. The tusks of dead elephants provide an easy basis for a trade in ivory. A seam of gold, running along the highest ridge, shows signs of having been worked in at least four places before 1000 AD.
The earliest important trading centre is at Mapungubwe, on the bank of the Limpopo. The settlement is established by a cattle-herding people, whose increasing prosperity leads to the emergence of a sophisticated court and ruling elite.
In 1075 the ruler of Mapungubwe separates his own dwelling from those of his people. He moves his court from the plain to the top of a sandstone hill, where he rules from a palace with imposing stone walls.
It is the first example of the zimbabwe of this region – a word in Shona, the local Bantu language, meaning literally ‘stone houses’. These zimbabwe become the characteristic dwellings of chieftains, and about 100 hilltop ruins of this kind survive. Easily the most impressive is the group known as Great Zimbabwe, which in the 13th century succeeds Mapungubwe as the dominant Shona power – with a kingdom stretching over the whole region between the Limpopo and the Zambezi.
Great Zimbabwe is not close to the local gold seam, but its power derives from controlling the trade in gold. By this period mine shafts are sunk to a depth of 100 feet. Miners (among them women and children) descend these shafts to bring up the precious metal. As much as a ton of gold is sometimes extracted in a year.
The buildings of Great Zimbabwe are evidence of equally great labour. Massive stone walls enclose a palace complex with a great conical tower, while impressive dry-stone granite masonry is used in a fortress or acropolis at the top of a nearby hill. The buildings date from the 13th and 14th centuries, the peak of Great Zimbabwe’s power.
In the 15th century Great Zimbabwe is eclipsed by two other kingdoms, one to the south at Khami (near modern Bulawayo) and one to the north, near Mount Darwin. This latter kingdom is established by a ruler who is known as the Munhumutapa – a title adopted by all his successors.
The Munhumutapa is the potentate of whom word is sent home to Europe by new arrivals on the African coast in the early 16th century. His court is first reached by a Portuguese traveller in about 1511.
The Ndebele kingdom: 19th century AD
Although Portuguese missionaries and traders occasionally make their way inland from the coast, they have little effect on the African tribes living in the region of modern Zimbabwe. It is Europeans from southern Africa who later exert a profound influence. In 1837 the Boers, pressing north, drive the Ndebele out of the Transvaal and across the Limpopo.
North of the river the Ndebele chief, Mzilikazi, establishes a powerful kingdom. As warriors and cattle-breeders the Ndebele easily subdue the agricultural Shona, long resident in the region. But in the 1880s the Ndebele are unable to resist a new onslaught from the south, this time led by the British community of south Africa.
Cecil Rhodes: AD 1871-1891
In the last quarter of the 19th century the driving force behind British colonial expansion in Africa is Cecil Rhodes. He arrives in Kimberley at the age of eighteen in 1871, the very year in which rich diamond-bearing lodes are discovered there. He makes his first successful career as an entrepreneur, buying out the claims of other prospectors in the region.
In the late 1880s he applies these same techniques to the gold fields discovered in the Transvaal. By the end of the decade his two companies, De Beers Consolidated Mines and Gold Fields of South Africa, dominate the already immensely valuable South African export of diamonds and gold.
Rhodes is now rich beyond the reach of everyday imagination, but he wants this wealth for a very specific purpose. It is needed to fulfil his dream of establishing British colonies north of the Transvaal, as the first step towards his ultimate grand vision – a continuous strip of British empire from the Cape to the mouth of the Nile.
The terms of incorporation of both Rhodes’s mining companies include clauses allowing them to invest in northern expansion, and in 1889 he forms the British South Africa Company to fulfil this precise purpose. Established with a royal charter, its brief is to extend British rule into central Africa without involving the British government in new responsibility or expense.
The first step north towards the Zambezi has considerable urgency in the late 1880s. It is known that the Boers of the Transvaal are interested in extending their territory in this direction. In the developing scramble for Africa the Portuguese could easily press west from Mozambique. So could the Germans, who by an agreement of 1886 have been allowed Tanganyika as a sphere of interest.
Rhodes has been preparing his campaign for some years before the founding of the British South Africa Company in 1889. In 1885 he persuades the British government to secure Bechuanaland, which will be his springboard for the push north. And in 1888 he wins a valuable concession from Lobengula, whose kingdom is immediately north of the Transvaal.
Lobengula is the son of Mzilikazi, the leader of the Ndebele who established a new kingdom (in present-day Zimbabwe) after being driven north by the Boers in 1837. Fifty years later, in 1888, Lobengula grants Rhodes the mining rights in part of his territory (there are reports of gold) in return for 1000 rifles, an armed steamship for use on the Zambezi and a monthly rent of £100.
With these arrangements satisfactorily achieved, Rhodes sends the first party of colonists north from Bechuanaland in 1890. In September they settle on the site which today is Harare and begin prospecting for gold. In support of Rhodes’s scheme, the government declares the area a British protectorate in 1891.
The growth of the Rhodesias: AD 1890-1900
The population of settlers rapidly increases in the territory administered by Rhodes’s British South Africa Company. There are as many as 1500 Europeans in the region by 1892. More soon follow, thanks partly to developments in transport.
The railway from the Cape has reached Kimberley in 1885, at a fortuitous time just before the start of Rhodes’s ambitious venture (one of the stated aims of his company is to extend the line north to the Zambezi). Trains reach Bulawayo as early as 1896. Victoria Falls is the northern terminus by 1904. Meanwhile the territory has been given a name in honour of its colonial founder. From 1895 the region up to the Zambezi is known as Rhodesia.
During the early 1890s the company has considerable difficulty in maintaining its presence in these new territories. Lobengula himself tries to maintain peace with the British, but many of his tribe are eager to expel the intruders. The issue comes to a head when Leander Jameson, administering the region for Rhodes, finds a pretext in 1893 for war against Lobengula.
With five Maxim machine guns, Jameson easily fights his way into Lobengula’s kraal at Bulawayo. Lobengula flees, bringing to an end the Ndebele kingdom established by his father. There is a strong tribal uprising against the British in 1896-7, but thereafter Rhodes’s company brings the entire region up to the Zambezi under full control.
A settlers’ colony: AD 1890-1953
As with the founding fathers of early American colonies, the first European settlers in Rhodesia feel from the start that government should be in their hands. They insist on having a voice in the colony’s legislative assembly, which by 1903 consists of seven officials of the British South Africa Company and seven elected settlers.
Four years later they have a majority of the seats. And in 1914, when the company’s 25-year charter is due to expire, it is their wishes which prevail. Self-government is their ambition. So their immediate concern is not to accept the embrace of their large neighbour, South Africa, which is eager to absorb this rich territory. They persuade the British government to extend the company’s charter for another ten years.
Eight years later, with the end of the new charter approaching, a referendum is held on the issue (limited to Rhodesia’s European population). Of the votes cast, 60% are for full internal self-government against 40% wishing to become the fifth province of the Union of South Africa.
On 12 September 1923 (thirty-three years to the day after the arrival of the first settlers at Harare) Rhodesia becomes a self-governing crown colony. It proves prosperous and successful, with the European population rising from 34,000 at the time of the referendum to 222,000 thirty years later.
By the 1950s the political future of all African colonies is under intense discussion. Among the European population of the two regions first settled by Rhodes’s company there is a general assumption that sooner or later Rhodesia and Northern Rhodesia will merge to form a single independent nation.
But this is resisted by the Africans, now beginning to find a political voice. Black opposition is strongest in the northern colony, with its much smaller white minority. Here, from the African point of view, the danger of union seems all too evident. Northern Rhodesia will be overshadowed by the strong European culture of Rhodesia, postponing perhaps indefinitely the ideal of independence under black majority rule.
Federation: AD 1953-1963
Confronted with conflicting demands, and aware of its responsibilities for Nyasaland as well as the two Rhodesias, the British government imposes in 1953 an awkward compromise in the form of the Federation of Rhodesia and Nyasaland. This is to be a self-governing colony, with its own assembly and prime minister (first Lord Malvern, and from 1956 Roy Welensky).
The intention is to derive the greatest economic benefit from the larger unit while minimizing political tension between the three parts of the federation, each of which retains its existing local government.
The federated colonies are at differing stages in their political development. All they have in common is an almost complete absence of any African voice in the political process.
Rhodesia has been a self-governing colony for three decades, but with no African suffrage (a tiny ‘B roll’ of African voters is added to the electorate in 1957). Northern Rhodesia has a legislative council with, since 1948, two seats reserved for African members. At the time of federation there are no Africans on Nyasaland’s legislative council. Two years later, in 1955, places are found for five members.
The intended economic benefits materialize during the early years of the federation, helped by a world rise in copper prices, but this is not enough to stifle increasing political unrest – particularly as British colonies elsewhere in Africa win independence (beginning with Ghana in 1957).
In the early 1960s African politicians in Northern Rhodesia and Nyasaland win increasing power in their legislative councils. The pressure grows to break up the federation. In March 1963, by which time all three colonies are demanding independence, the British government finally concedes. The federation is formally dissolved on 31 December 1963.
Before and after UDI: AD 1957-1979
During the years of federation the parties are formed which will subsequently fight the bitter struggle for the future of an independent Rhodesia.
On the African side the first leader to emerge is Joshua Nkomo. In 1957 he is elected president of the local branch of the African National Congress. After this is banned in Rhodesia, he founds in 1960 the National Democratic Party. When this in turn is proscribed, in 1961, he replaces it with ZAPU (the Zimbabwe African People’s Union). His colleagues in ZAPU include Ndabaningi Sithole and Robert Mugabe. Together they split from ZAPU in 1963 and form the rival ZANU (Zimbabwe African National Union).
This political pressure from Rhodesia’s African majority, combined with support for their cause from the United Nations, causes the federal government in 1961 to introduce a new constitution, allowing for African representation in Rhodesia’s parliament.
But the proposal creates its own backlash, prompting Ian Smith to found a new party, the Rhodesian Front, committed to white supremacist policies and offering the promise of an independent Rhodesia governed by the European minority. In elections in 1962 the new party wins a surprise victory, replacing the more moderate United Federal Party. Winston Field becomes prime minister, with Ian Smith as his deputy.
In April 1964, four months after the end of the federation, Smith replaces Field as prime minister of Rhodesia, now once again a separate self-governing colony. His first act in office is to order the arrest of Nkomo and Mugabe. Each remains in detention until 1974 (Sithole joins them from November 1965).
Smith now tries to persuade the British government to grant the Rhodesian Front’s single overriding demand – independence on the basis of white minority rule. Meeting a flat refusal on this issue, he takes matters into his own hands. On 11 November 1965 he publishes a Unilateral Declaration of Independence (UDI).
The first response of the British government is patient diplomacy (including two meetings between Harold Wilson and Smith on warships off Gibraltar, the Tiger in 1966 and the Fearless in 1968), but this is met by intransigence on Smith’s part. The result is economic sanctions, imposed by the United Nations with British approval in 1968.
The sanctions take a long time to bite. Meanwhile guerrilla activity by separate ZAPU and ZANU forces from across the borders is having a rather more unsettling effect – particularly after Nkomo and Mugabe settle their differences in 1976 and form a united Patriotic Front.
By 1978 Smith recognizes the need for concessions. He comes to an agreement with a moderate African leader, bishop Abel Muzorewa, leader of the UANC (United African National Council). In return for guarantees securing white political and economic interests, multi-racial elections will be held in 1979. With the Patriotic Front banned from participating, Muzorewa emerges as prime minister of a transitional government. But nothing is solved. The Patriotic Front continues its guerrilla campaign.
The situation is finally resolved at talks in London in December 1979, attended by all three African leaders. UDI is overturned and Rhodesia reverts briefly to the status of a British colony. Britain agrees to provide funds to purchase the land of British farmers willing to sell, for a much-needed land distribution programme. Elections are organized for February 1980.
Zimbabwe: from AD 1980
In the election Mugabe’s ZANU party wins a decisive victory over Nkomo and ZAPU. The newly independent nation takes the ancient name Zimbabwe. Mugabe rules at the start in a conciliatory manner. The provisions to protect European political rights are respected (Smith continues to serve as a member of parliament until 1987). And Nkomo is brought into the cabinet.
However there is an underlying conflict between ZANU and ZAPU. The former draws its support from the majority Shona people, while ZAPU is linked with the minority (but historically dominant) Ndebele. Tribal hostilities become a noticeable feature of Zimbabwe’s political life after Mugabe dismisses Nkomo from his cabinet in 1982, just two years after independence.
In 1987 the two leaders make a new attempt to resolve the nation’s divisions by merging their parties as ZANU-PF, making Zimbabwe effectively a one-party state. At the same time the constitution is changed to give Mugabe the role of executive president. Nkomo subsequently serves as a vice president (until his death in 1999).
During the 1980s Mugabe’s Marxist policies do harm to the economy, but in the changing fashion of the 1990s there is a move towards a market system. There is also a token gesture towards multiparty democracy, though this does nothing to prevent ZANU-PF winning 98% of the seats in parliament in 1995. In 1996 Mugabe is elected unopposed for a new six-year term as president.
Several factors cause widespread unease about Zimbabwe after twenty years of independence. Political opponents are persecuted. Sithole, for example, is evicted from his farm in 1994 and is arrested in 1995 for allegedly plotting to assassinate Mugabe. It is widely suspected that the underlying purpose in each case is to dissuade him from standing as a presidential candidate in 1996.
The white community is unsettled by frequently announced plans to appropriate many of their farms without compensation, for redistribution to Africans. And there are allegations of financial corruption in senior government circles.
The underlying tensions flare up in dramatic fashion during the first half of 2000. In February Mugabe is defeated in a referendum designed to increase his hold on power. His immediate response is to escalate his long-standing campaign to appropriate the larger commercial farms owned by white Rhodesians. Mugabe’s armed supporters, described as veterans of the war for independence, forcibly occupy some 500 farms (out of a total of 4500 owned by whites).
Meanwhile a new opposition party – the MDC (Movement for Democratic Change), formed in January and led by a trade unionist, Morgan Tsvangirai – shows signs of being able to mount a very serious challenge to ZANU-PF in forthcoming elections.
The election campaign is marred by high levels of violence and intimidation from Mugabe supporters, resulting in thirty or more deaths. Even so, the result is close. ZANU-PF wins 62 seats in the new assembly, with MDC just short of victory with 57.
Immediately after the election, in June 2000, Mugabe publishes a list of 804 large commercial farms (most, but not all, white-owned) which are to be appropriated by the state for the resettlement of peasants. He insists that compensation is the responsibility of the British government.
This is something which in principle is agreed in London, since it is widely recognized that the ancestors of the British farmers claimed dubious ownership over these lands a mere hundred years ago. On independence in 1980 there was an agreed scheme for compensation. It was discontinued by Britain in 1988 on the grounds that the benefit was accruing not to Zimbabwe’s peasants but to the political elite (of 2000 farms acquired by the government in this way, 420 were transferred to the ownership of prominent ZANU-PF supporters).
The land problem is likely to remain on Zimbabwe’s political agenda rather longer than Mugabe himself, whose dictatorial behaviour and attempts to cling to power become increasingly extreme as the new millennium progresses.
In 2007, in the run up to the 2008 general and presidential elections, Tsvangirai is arrested on his way to a Harare prayer meeting and is severely beaten and tortured in prison. But with great courage he emerges from hospital to continue his political campaign against Mugabe, in a context in which the Zimbabwean economy has collapsed with inflation running at a level unheard of since Germany in the 1920s.
When the elections are held, at the end of March 2008, it is announced that in the parliamentary contest Tsvangirai’s party has defeated Mugabe’s (MDC 99 seats, ZANU-PF 97 seats in the assembly). And exit polls suggest that, in spite of intimidation of MDC supporters, Tsvangirai has defeated Mugabe in the presidential election. But in spite of mounting international pressure Mugabe refuses to release the presidential results, saying merely that he will be contesting a second round. Tsvangirai, convinced that he has won, says that he will refuse to participate in an illegal second round. | http://www.magmire.net/history-of-zimbabwe.html | 13 |
28 | OUR AMERINDIAN ANCESTORS
Aruna Sharma, St. Augustine Senior Comprehensive
THE MIGRATION OF ARAWAKS AND CARIBS
THE EARLIEST INHABITANTS OF THE WEST INDIES
THE WEST INDIES
THE AMERINDIAN WAY OF LIFE
PEOPLED BY IMMIGRANTS
THE COMING OF THE SPANIARDS
A PEOPLE DISAPPEARS
The Mongoloids were a people who lived in Central East Asia. They were nomadic hunters who hunted the buffalo and deer. When the herds moved away from the grazing area, the hunters had to follow them in order to keep up their food supply.
In this way, the herds also probably led the people out of central Asia, crossing the Bering Strait into North America - although they had no knowledge that they were moving from one continent to another.
The Amerindians settled throughout North America, and they became the ancestors of the Red Indian tribes we know today, as well as of the Eskimos in the far north. Even though they were nomadic, some took up settled agriculture and developed agricultural civilisations of their own.
The migrations continued southward into South America, from where the Arawaks and Caribs then migrated to the West Indies.
The Arawaks can still be traced through their language to two different lands in South America where the Indians speak related languages. In appearance, the ancestors of the Arawaks looked as though they came from somewhere on the borderland between Bolivia, Peru and Brazil.
They eventually migrated on to the West Indies. The land of the Caribs was further south than that of the Arawaks. They migrated across Brazil to the interior of Guyana, then north to the coast of Venezuela, and so on to the West Indies, possibly about 2,000 years ago.
If we go far enough into history, we come to a time when the West Indies were uninhabited.
This was thousands of years before the writing of the history of the West Indies was even thought about, bearing in mind that the written history of the West Indies is not even 500 years old. We also have an idea that the islands had been inhabited by people before the arrival of Christopher Columbus, who thought he had discovered the 'Indies'. Although our perception may not be accurate, we can still sketch a picture of these inhabitants of the islands before the coming of the Spaniards.
There were people called the 'SIBONEYS' in Cuba and the other large islands before the coming of the Arawaks and Caribs but our knowledge of them is not as extensive as that of the Arawaks and Caribs, also known as the Amerindians.
They were indigenous to the West Indies but were part of a migration from central east Asia which had begun about 35,000 years ago. The Arawaks and Caribs were part of a large family that inhabited North and South America.
Europeans found many American Indian communities throughout the continent as they explored. Here are some European impressions of the Amerindians after their encounters in the Caribbean Area:
They were young men; none of them more than thirty years old. They had very hard bodies and a very good build; good faces, not handsome, but very well made; very manlike. They had coarse hair, almost like the hair of a horse's tail, and quite short. They wore it over their eyebrows, except for a hank at the back which was never cut.
This seemed to be part of their tradition. They were neither black nor white, and some painted parts of their bodies. The reason for painting themselves like that was to have the complexion of the Canary Islanders.
They had no weapons and they knew nothing of them: when they were shown swords they grabbed them by the blades, cutting themselves in the process. They had no iron. Their darts were a kind of rod without iron, some tipped with a fish's tooth, and these and other such things served them as weapons.
They were generally fairly tall. Some had marks on their bodies. They understood none of our languages, and so signs had to be made to them to communicate what we meant. They seemed to describe how people had come from the mainland to take them as slaves.
They appeared to be skilled servants, quickly repeating whatever was said to them. No doubt they could easily be converted to Christianity, as they seemed to have no religion of their own.
Even on his deathbed Christopher Columbus still believed that the long chain of islands that he "discovered" - stretching from the tip of Florida southward toward the South American coast of Venezuela - were the Indies.
When Columbus' mistake was realised, Spain labeled this island arc that separates the Caribbean Sea from the Atlantic Ocean the 'West' Indies, to distinguish it from the Spice Islands of the Pacific, the 'East' Indies.
At the time of their discovery, the archipelago of islands which has become known as the West Indies was inhabited principally by two Amerindian tribes. They had a close link with the Amerindians of Guiana on the South American mainland. The first set was the Arawaks, one branch of which - the Tainos - was concentrated in the Greater Antilles and the Bahamas, while the second - the Igneris - dominated the Lesser Antilles. Apart from the Arawaks, there was a second principal group, the Caribs. A third variant of the Amerindian pattern - the Siboneys - was located on a smaller scale in Western Cuba, possibly representing a pre-Arawak strain originating in Florida.
The outstanding general work on the Amerindian culture is a Swedish publication, "Origins of the Tainan Culture, West Indies" by Sven Lovén - the 1935 English translation and expansion of his Swedish treatise of 1924. With respect to Trinidad itself, our knowledge comes from a little masterpiece, "The Aborigines of Trinidad," by J.A. Bullbrook, Associate Curator of the Royal Victoria Institute and Museum, in 1960, representing the results of his excavations in 'middens' - which were both refuse dumps and burial grounds - of the Amerindians in Cedros, Palo Seco and Erin.
Useful information is also available from Surinam, not only from archaeological investigations - a brief account of which is available in English in the work of D.C. Geijskes - but also from direct study of living Arawak tribes, which have retreated further and further into the interior of the country with the onset of Western civilisation. Examples of the arts and crafts, and of the life and work of the Amerindians, can be seen in the Royal Victoria Institute and Museum in Trinidad and the Surinam Museum in Paramaribo.
The West Indies now comprises more than 30 countries with a regional population of approximately 33 million people scattered along some 2,000 miles (3,200 kilometers) of ocean. Since World War II the term 'Caribbean' has been favoured as a general name for the region. In addition to the island territories, four mainland countries are considered part of the Caribbean, or West Indies: Belize (formerly British Honduras) in Central America, and the three Guianas in South America - Guyana (formerly British Guiana), Suriname (formerly Dutch Guiana), and French Guiana. Common social and historical legacies tie these continental enclaves to their sister islands.
Island territories range in size from 100 square miles (260 square kilometers) to thousands of square miles, but most - more than two-thirds - are tiny. The continental Guianas are relatively larger. Cuba, by far the largest island at 44,000 square miles (114,000 square kilometers), is smaller than the state of Ohio. Grenada, much more typical at 133 square miles (344 square kilometers), is barely larger than the District of Columbia.
Groupings of minute islands that form administrative domains are common but often stretch the geographical imagination. There is the Commonwealth of the Bahamas, with a quarter of a million people spread over an archipelago of more than 700 islands and more than 80 minute cays - together constituting some 5,000 square miles (13,000 square kilometers).
Another odd legacy of colonial history is the amalgamation of the Netherlands Antilles, two groups of islands some 500 miles (800 kilometers) apart - Curacao and Bonaire in the far southern Caribbean off the northwestern coast of Venezuela, and a Leeward Island group east of Puerto Rico: Sint Maarten (shared with France as St Martin), Sint Eustatius, and Saba.
Except for the Siboneys - with their primitive shell culture, their ignorance of stone, pottery and axe blades, and their use of shell vessels - the Amerindian civilisation of the Arawaks and Caribs was essentially agricultural, representing an important advance in the scale of civilisation over the paleolithic period of human history. They cultivated the soil by constructing mounds of earth: firstly to loosen the soil, secondly to protect the roots against the dry season, and thirdly for composting with shovelled ashes.
The national food was cassava. The Arawaks developed the technique of changing the poisonous prussic acid of cassava juice into a kind of non-poisonous vinegar by cooking it. They called this 'cassareep'. Cassareep, together with one of the known spices, the chilli pepper, made the pepper pot - the Carib 'tomali' - which greatly enriched the diet and made the consumption of cassava cakes easier.
The Arawaks further developed the 'grater' for making cassava cakes. In their development of graters, juice squeezers and large flat ovens of coarse clay on which the cassava cakes were baked, as well as of cassareep and the pepper pot, Arawak culture represented essentially an annex to the Amerindian civilisation of eastern Venezuela and Guiana.
The Arawaks grew just enough food for themselves and their families, including maize, cassava, sweet potato, yautia and groundnuts. No storing or trading of food took place. They did not lack protein but, compared with the Caribs, placed less emphasis on high-protein foods and balanced their diet with more vegetables. Among the foods they ate were fish, shellfish, turtle and manatee (seacow). Fishing was done mainly with nets made of fibres, together with bone hooks and harpoons. The Arawak method of catching the turtle shows some ingenuity: a remora was caught and tied on a long line to a canoe. The remora would dive for the turtle and attach itself to the turtle's back with its suckers. The turtle would then be pulled into the canoe by the fishermen.
The Arawaks hunted very small animals, whose meat they enjoyed very much. To help them hunt they kept small dogs called 'alcos', which could not bark but made a growling noise. They also ate ducks, doves and parrots, and a great many fruits and vegetables such as pineapples, mammee apples, star apples, guavas and cashews.
The Arawaks' food was carefully prepared, and they knew about stewing, baking and roasting, techniques which they used in their food preparation - they stewed iguana, baked cassava, and smoked fish.
One of the most important crops grown by our Amerindian ancestors was maize, from which, in certain places, a species of beer was brewed. They also knew the sweet potato and a variety of tropical fruits such as the guava, custard apple, mammee apple, pawpaw, alligator pear, star apple and pineapple. Columbus stated that he saw beans being cultivated in Hispaniola; and among the spices the Amerindians knew cinnamon and wild pimento. They introduced peanuts to the Spaniards, and it would appear that these were eaten regularly with cassava in Hispaniola.
The Amerindians also knew of, and cultivated, two additional crops which facilitated a further development of what we would today call 'civilised' existence. They cultivated cotton, which they used on the one hand for petticoats and on the other for the manufacture of hammocks for sleeping purposes. Dr Bullbrook found a bone needle and buttons in his Trinidad researches. The Amerindians knew also of tobacco, which was exceedingly popular among them; possibly in its origin it was connected in some way with religious rites. The Arawaks used it both for snuff and for smoking, generally in the form of cigars (though the pipe was not unknown); while in the form of chewing tobacco in rolls it was used as currency by the Caribs.
Fishing played some part in the economy of Amerindian society, and the Amerindians developed the canoe and the pirogue, which enabled them to move from island to island in the sheltered waters of the Gulf of Paria. The canoes even appeared to have cabins for the women. Molluscs or shellfish figured prominently in the Amerindian diet, particularly the chip chip, as Dr Bullbrook's investigations of the middens indicate. Bones of fish and tortoise have also been found. In comparison with the shellfish and the fish, however, bird bones are extremely scarce.
These Amerindians had no knowledge of metals. Their tools were of polished stone, bone, shell, coral or wood - some of their wooden artifacts have fortunately been preserved through accidental burial in the pitch lake of Trinidad. They made pottery and wore ornaments. Dr Bullbrook's exhumations of over twenty burials indicate evidence of arthritis and a high incidence of dental caries, but not of rickets. Their lifespan would appear not to have exceeded forty years, and their height no more than five feet seven inches. They seem, however, to have been a people of great physical strength.
The Amerindians had a simple but well-established family life in which, as in most underdeveloped societies, there was a division of labour along sexual lines. Possibly as a result of religious beliefs, Arawak men alone could collect gold. The women prepared the cassava, cared for the poultry, brought water from the river, wove cloth and mats, and shared in the agricultural work using the primitive implement of the Amerindians, the 'digging stick'.
It is not clear whether the Amerindian women of Trinidad and Tobago displayed as much readiness as has been noticed of the Amerindian women in Hispaniola for promiscuity in their sexual relations as a form of welcome to strangers. Nor is it clear whether in Trinidad, as in Hispaniola, there was the same accentuation of feminine tendencies among male Amerindians which has been noted of them in comparison with the Negroes of Africa.
And the records do not permit us positively to involve Trinidad in the 470-year-old argument as to whether syphilis was an export from the Old World to the West Indies, or an importation from the West Indies into Spain and thence into Europe. Dr Bullbrook did, however, find evidence of syphilis in the exhumations.
What is certain is that syphilis appears to have been as prevalent in Guiana and Venezuela as in the Greater Antilles and Mexico, and that the Arawaks developed a peculiar remedy for the disease.
The Amerindian tribe was governed by a cacique, very much as a father governs his family. If Columbus is to be believed, fighting between Amerindians was rare, and so was adultery. The only crime punished by the community was theft, for which the punishment in Hispaniola, even where petty theft was concerned, was death - the culprit being pierced to death with a pole or pointed stick.
The Arawaks were a relatively peaceful people; the Caribs essentially warlike. While both painted their bodies with roucou, partly no doubt to present a terrifying appearance in time of war, the Caribs were distinguished from the Arawaks by their use of poisoned arrows. The Caribs have also conventionally been described as 'cannibals'.
As far as Trinidad is concerned, there would appear to have been several distinct tribes of Amerindians present in the island towards the end of the fifteenth century. The Caribs tended to settle for the most part in the north and west, around what is today Port-of-Spain; two of their principal settlements were located at Arima and Mucurapo. The Arawaks seem to have concentrated above all in the south-east, and it is recorded that on one occasion the Arawaks took Tobago from the Caribs.
Dr Bullbrook, however, challenged the view that there were any Caribs in Trinidad. He based this on the absence of two features customarily associated with the Caribs. First, he found no evidence of the use of bows and arrows, which, in his view, is confirmed by the relative scarcity of bird bones in the middens.
But he admitted the possibility that the spines of the sting ray and eagle ray, found in large numbers in the middens - in some cases obviously worked by man - might have been used as arrow or lance heads. In the second place, he emphatically denied any evidence of cannibalism in the remnants of the animal foods found in the middens: not a single human bone was found among them.
The Mayas were middle American Indians who produced one of the finest civilisations in the western world - far more advanced than the relatively primitive Arawak culture.
Government and Politics:
The Maya developed the city state, a small unit ruled by the Halach Uinic ('real man'), whose office was absolute and hereditary. Each village was controlled by a 'Batab' (village chief), who was responsible to the Halach Uinic.
The free population was divided into farmers, artisans and merchants. The lowest class in this society were the slaves.
The Mayas were a polytheistic people and their religion influenced their whole lives. They had as many as 166 gods, each of whom could be considered good or bad, so that they needed constant worship. Among them were Hunab Ku, the chief god; Kinich Ahau, the sun god; Chac, the rain god; Yum Kax, the corn god; and Ah Kinchil, the god of the earth.
The Ah Kin (priests) were so important in Mayan society that early historians mistook them for rulers. They set and organised festivals, made sacrifices, and decided on the auspicious days in the calendar for planting and harvesting. Human sacrifice was also an important element in their religion. Even their famous ball game, 'Pok a Tok', a kind of basketball, had a ritual significance, and the losers could be sacrificed.
The Mayas lived in round huts with central wooden poles supporting their thatched roofs. The walls were woven, with no windows. Set apart was a ceremonial area containing their famous massive stone structures, which archaeologists have uncovered. From their size it has been concluded that the leading Mayan city states had populations of between eight and ten thousand.
Arts and Crafts:
The Mayas knew nothing of metal tools and had none. Wooden hoes and fire-hardened wooden ploughs were used in the fields, and even limestone blocks were cut without metal. Women wore boldly patterned cotton clothes, and quetzal head-dresses were highly prized.
Their craftsmen fashioned lifelike and symbolic figurines in jade, wood, copper and gold. One of their favourite objects was the figurine whistle, found at several sites. Their excellent artists painted lifelike and sometimes abstract pictures. Although the Mayas knew of gold and copper, they used cacao beans for money.
Writing, Mathematics and Calendars:
By about 300 AD the Mayas were using a hieroglyphic script with about 850 stylised characters. Their books were made of bark, folded in concertina fashion. The Spaniards destroyed Mayan literature as pagan, but three legible books have survived - these, however, have not yet been deciphered. Most existing Mayan writing is on stelae, pottery and ornaments.
The Mayans could add, subtract, multiply and divide in columns, working from top to bottom. Their symbols were a dot for 1, a bar for 5 and a shell for 0. The famous Mayan calendar was very accurate but complicated, and it is not known exactly how its dates correspond to dates on the Christian calendar. It involved revolving, interlocking circles and showed a well-developed knowledge of astronomy.
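To make the notation concrete, here is a small Python sketch - my own illustration, not a reconstruction of actual Mayan practice - that writes a number in this dot-bar-shell style, one base-20 digit per line with the most significant digit at the top. (The true Mayan calendar count modified the third place to 18 x 20; this sketch uses pure base 20 throughout.)

def maya_digits(n):
    """Split a non-negative integer into base-20 digits, most significant first."""
    digits = []
    while True:
        digits.append(n % 20)
        n //= 20
        if n == 0:
            break
    return digits[::-1]

def render_digit(d):
    """Render one digit (0-19): a shell for 0, otherwise bars (5s) and dots (1s)."""
    if d == 0:
        return "(shell)"
    bars, dots = divmod(d, 5)
    return ("----- " * bars + ". " * dots).strip()

for digit in maya_digits(1987):   # 1987 = 4*400 + 19*20 + 7, i.e. digits 4, 19, 7
    print(render_digit(digit))

For example, 1987 splits into the base-20 digits 4, 19 and 7, and so prints as four dots, then three bars with four dots, then one bar with two dots.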
When the Spaniards crossed the Atlantic in the late fifteenth century, the Aztecs were the rulers of a huge American empire that stretched across the continent from present-day Guatemala in the south to Texas in the north.
The Aztecs were governed by an emperor, Montezuma, who was assisted by nobles. The emperor had two important jobs to do: he was the ruler of his empire but he was also the highest priest in the Aztec religion. He had to make sure that the gods favoured his people. The Aztecs were famous as warriors and traders, and their empire had grown by means of war and enforced trade. Tenochtitlan, their capital city, was built on an island in a lake that once existed in the highland valley of Mexico.
Mexico City now stands on the site of Tenochtitlan, which was destroyed by the Spaniards. The Aztec empire was rich in gold and silver. Its craftsmen worked with these metals, as well as with stone, cotton and fine feathers.
The life of the Aztecs was governed by their observation of the movement of the sun, moon and stars. They were, fundamentally, a pessimistic people. Their religion and beliefs made them constantly afraid of doom and destruction.
They sought to strengthen their gods by 'feeding' them with the hearts and blood of warriors, believing that if they strengthened their gods, then the gods would be better able to preserve them and their empire. Thus their religion led them to war; and war, by extending their empire, led them to greater efforts to strengthen their gods and themselves against the forces of destruction.
Perhaps because of this very feeling of doom, the Aztecs evolved a busy, sparkling way of life - frightful though it may seem to us: 'Life might be short; therefore it was to be enjoyed.' They loved poetry, dancing, games and ceremonies. They built magnificent temples and palaces. They were proud of their wealth and their great accomplishments, and they fought the Spaniards fiercely to preserve their way of life.
In 1519, when the Spaniards attacked them, they were perhaps at the height of their power, and they almost defeated the invaders from Europe.
Further south, covering almost the entire area of South America west of the Andes, from Ecuador to northern Chile, and spreading eastwards over the high ranges to the borders of the Amazonian and Bolivian forests, lay the empire of the Incas. It had been founded in the twelfth century AD by warrior-kings, and by the early sixteenth century it was at its peak. The capital and seat of government was Cuzco in Peru - 11,500 feet up in the Andes.
This empire was governed by a god-king, known as the Lord Inca. (The term 'god-king' is used because the Incas believed that their kings were gods.) There was a thorough and detailed organisation of society. In return for loyalty and service to the Lord Inca, every citizen was assured of the means of subsistence 'from the cradle to the grave'. Indeed, the Inca empire has been described as a 'welfare state'.
In works of creative art the Incas were less gifted than the Mayas or the Aztecs, but in civil and military engineering, and in matters of administration, they were far more advanced. The Incas had neither alphabet nor numerals, but they had developed an amazing ability to keep records and accounts on the 'quipu', a device of knotted strings of different lengths, thicknesses and colours, arranged in different combinations to record different things. They occupied areas rich in gold and silver, and in the working of these metals their craftsmen were supreme.
This wealth of gold and silver sharpened the Spaniards' eagerness to attack them, and in 1532-1533 the Inca empire was attacked and conquered by a small band of Spaniards led by Francisco Pizarro. In part, the empire had been weakened by internal disputes.
When the Spaniards arrived, a civil war between two rival princes for the Inca throne had just ended. The story of the conquest of this vast empire by a company of Spanish soldiers is a gripping tale of heroism, treachery and brutality.
In contrast with the Mayas, the Aztecs and the Incas, all of whom had built large empires, the Chibchas, who occupied the mountain valleys of the northern Andes, were still an undeveloped people; their separate tribes had not yet come together to form a single nation. However, the tribes centred around Bogota, their chief town, had begun to exercise some authority over the others.
They cultivated maize and potatoes, but their skills were still limited. They had no writing or means of keeping accurate records, and they had produced no great monuments or works of art. But they were formidable fighters, and they seem to have been at the beginning of a period of political development and territorial growth at the time of the Spanish arrival. Although far, far behind the Incas and the Aztecs in political development, they were more advanced than the majority of the Indians of northern and eastern South America.
Estimates of the original Amerindian inhabitants of the West Indies vary between 200,000 and several million. Prominent among these native peoples were the Arawaks (Tainos) and the Siboneys on the larger northern islands of the Greater Antilles, the Bahamas, and the Leeward Islands. They were relatively easy to enslave.
In the Windward chain were the Caribs, who demonstrated strong resistance until the 18th century but nevertheless failed to prevent European penetration and their own annihilation. A legacy of Carib ferocity, and descriptive of their treatment of Arawak captives, is the term 'cannibal', which is derived from their Spanish name, 'Caribal'.
European colonisation by the Spanish, followed by the British, French, and Dutch - together with the ensuing wars, buccaneering and mercantile adventuring - ended in the wholesale depopulation of the West Indies of its native Amerindian inhabitants.
Each European power encouraged pioneer settlement of its possessions, but trade and commercial interests - piracy among them - dominated their appreciation of the region.
In the 1640s Portuguese Jews emigrated from Brazil to Barbados, bringing with them the techniques and methods of raising sugarcane on plantations. For the next 150 years there was unparalleled economic prosperity, with each colonial territory developing its own plantation economy based mainly on sugarcane, or 'brown gold'.
An essential base of this enterprise was the plentiful labour supply in the form of slaves. It is estimated that as many as 10 million slaves were brought from Africa to work on the plantations. The West Indies was thus repopulated by the forced transportation of African peoples. The transformation from European to African dominance occurred everywhere except in Puerto Rico.
The West Indies region now contains 17 politically independent countries, most having achieved independence since 1960. A diminishing number of these mini-states are still colonial dependencies of their European mother countries.
France administers its possessions of Martinique, Guadeloupe and dependencies, and French Guiana as Départements d'Outre-Mer, or overseas departments of France. The Netherlands coordinates the administration of its remaining scattered tourist islands in the Netherlands Antilles, having relinquished Dutch Guiana as independent Suriname in 1975. After emancipation (1834 in the British colonies, later in others) plantation labour was sought from other sources.
Ex-slaves who wanted to get away from the plantations in the smaller Leeward and Windward islands were recruited, but the largest numbers came from India as indentured labourers. These East Indian West Indians (called Hindustanis in Suriname) were attracted by indenture contracts that paid their passage and granted them options to acquire land. The plantation economies of Trinidad and the Guianas prospered from their immigration.
They now make up 40 percent or more of the population in Trinidad and Tobago, Guyana, and Suriname.
These majority groups, however, do not complete the range of ethnic diversity. There are minorities of Chinese origin, Portuguese Madeirans, Levantines, Jews, and Danes who still cling to their identity in certain territories. North Americans have relocated in increasing numbers to West Indian homes, particularly in the Bahamas, the United States Virgin Islands, and Puerto Rico.
The discovery of the West Indian islands by Christopher Columbus, acting as agent of the Spanish monarchy, in 1492 and subsequent years was the culmination of a series of dramatic events and changes in European society in the 15th century.
Behind the voyages of Columbus lay the lure of the East, with its fabled stories of gold and spices popularised by the famous travelogues of Marco Polo and Ibn Battuta and by the persistent legend of Prester John. The disruption of the conventional Mediterranean-cum-overland route by the Turks, followed by the domination of the Mediterranean by the Italian cities of Venice and Genoa, stimulated the desire to find a westerly route to the East.
The development of nautical technology brought this desire within reach of realisation. New maps of the world and new theories of the nature of the universe exploded ancient beliefs, fallacies and superstitions. The compass and the quadrant had been devised, making possible longer voyages out of sight of land and outside sheltered seas. Larger ships had appeared, notably the Venetian galleys; as early as 1417, Chinese junks of a colossal size for those days had sailed all the way from China to East African ports.
In the economic sense, Europe in 1492 was ready for overseas expansion; it had the experience, the organisation - or, to use the contemporary vulgarism, the 'know-how'.
Europe in 1492 knew all about colonisation. The Italian republic of Genoa had long before established colonies in the Crimea, on the Black Sea and on the coasts of Asia Minor. A Catalan protectorate was established over Tunisia in 1280.
The Portuguese had conquered Ceuta in 1415, an inversion of the Moslem conquest of the Iberian Peninsula. Thereafter they had penetrated all along the coast of West Africa until, in 1487, Diaz made his memorable voyage rounding the Cape of Good Hope. The Portuguese were thus set for their long colonial reign in Asia, which ended only with their expulsion from Goa by India in 1961.
Europe in 1492 knew, too, about slavery, which was the normal method of production in the medieval colonies of the Levant. Slavery existed also on European soil - in Portugal, Spain, and southern France - and the African slave trade in its origins was a transport of slaves from Africa to Europe. The slaves were used in agriculture, industry and mining. To such an extent did slavery dominate the Portuguese economy before the voyages of Columbus that the Portuguese verb 'to work' became modified to mean 'to work like a Moor'.
When the Spaniards enslaved the Amerindians in the West Indies and later introduced Negro slaves from Africa, they were merely continuing in the New World the slavery with which Europe was sufficiently familiar.
European society in 1492 was also conversant with sugar cultivation and manufacture. Sugar manufacture originated in India, whence it spread to Asia Minor and, through the Arabs, to the Mediterranean. The early literature of India is full of references to sugar - for example, the Ramayana - and a sugar factory and its machines are used to illustrate maxims of Buddhist philosophy. The Law Book of Manu, some two centuries before the Christian era, prescribed corporal punishment for stealing molasses, with fasting for three days and nights as penance; a Brahmin was not to be forced to sell sugar; and a man caught stealing sugar would be reborn as a flying fox.
With Arab expansion, the art of growing cane and manufacturing sugar spread to Syria, Egypt, Italy, Cyprus, Spain, Malta and Rhodes. But the Arab sugar industry was differentiated in one important particular from that of Christian Europe: it was not based on organised slavery.
The European sugar industry in the Mediterranean, developed after Europe's contact with sugar during the crusades, contained from the outset the germ of the colonial system familiar to all West Indians.
It was an industry established in one country but financed by bankers of another. One of its principal centres was Sicily, where we find the University of Palermo in 1419 studying and advising on irrigation of the cane, while the industry itself was dominated by Italian financiers. Another important centre was Cyprus, of whose sugar industry and sugar planters an account survives from 1449. Large merchant houses in Italy distributed the sugar throughout Europe.
Europe, too, was politically ready for overseas expansion. The European state, with its theories of protection and its grants of charters and monopolies, had developed the economic doctrine of the balance of trade and of the need to conserve bullion and the precious metals by encouraging exports and reducing imports of luxury goods.
The monarchy, with the aid of the great commercial cities and their foot soldiers, had established control over the feudal aristocracy with its mounted armies, and the nation state had begun to emerge. Generations of war between Christians and non-Christians in the crusades had developed a militant crusading Church, not yet split by schism and not yet riven by dissenting sects.
Of all the countries of Europe in 1492, apart from Portugal which preceded it, Spain was best fitted, physically and psychologically, for the initiation of overseas colonialism. The union of the heirs of the rival Crowns of Castile and Aragon had brought peace to the country and developed a centralised monarchy, backed by the cities and the lawyers.
The expulsion of the Jews and the Moors had left the church militant and triumphant. Thus secure, the Spanish monarchy was ready to receive Columbus when he arrived with his new theories and his proposals of discovery.
There was nothing strange in the approach by Columbus, an Italian from Genoa, to the Spanish monarchy. The Genoese were no strangers in Spain. From the 12th century they had been established in a quarter in Seville, whence they would be well placed to participate in the subsequent Spanish trade with the West Indies. Convoys from Genoa, as well as from Venice, Florence and the Kingdom of Naples, called regularly at Spain on their way to England and Flanders.
The Genoese, and the Italians generally, were equally well known in Portugal. They served as admirals in the Portuguese navy, and they served also, like Columbus' father-in-law, in the Portuguese trade with the Canary Islands. Columbus himself approached the Portuguese Court without success; and the great Italian geographer, Paolo Toscanelli, was consulted by the King of Portugal on Columbus' proposals.
These proposals were ultimately rejected by Portugal, either because the Portuguese had no faith in Columbus or because they had evidence, which Diaz was soon to confirm, that the true route to India was by way of Africa.
Thus did Columbus turn to Spain, and to the service of Spain. In those days adventure and geographical scholarship knew no national boundaries. The Cabots, Venetians, served England. Verrazzano, a Florentine, served France. Magellan, a Portuguese, served Spain. Hudson, an Englishman, served Holland. Columbus was ready to serve England, or France, or Portugal, or Spain. Spain accepted his proposals; the others temporised, or studied them, or rejected them.
The sovereigns of Spain signed the discovery contract with Columbus in 1492, by which they agreed to finance the voyage in return for royal control of the lands discovered, and a high proportion of the profits of the voyage.
This contract opened the door to the introduction of medieval society into the West Indies, with its grants of titles and large tracts of land. Columbus was the mouthpiece of the medieval tradition, which was to be followed in the seventeenth century by French concessions inspired by the feudal system in France, and by the wholesale grants of islands to favourites of the British monarchy.
The Spanish monarchy, after the success of Columbus had been established, secured a religious title to the entire Western Hemisphere by the Papal Donation of 1493, ratified by the Treaty of Tordesillas between Spain and Portugal in 1494. This Treaty was a diplomatic triumph for Portugal. Spain, led astray by Columbus's decisions, agreed to rectify the Papal boundary in a way that confirmed to Portugal not only the true route to India but the whole of the South Atlantic with Brazil.
It was against this background that Columbus set out on his third voyage on May 30th, 1498, and sighted the island which he christened 'Trinidad' at noon on Tuesday, July 31st. He touched at a harbour which he called Point Galera, and then sailed westward until he entered the Gulf of Paria through the entrance which he named the Serpent's Mouth.
Searching for an exit from the Gulf of Paria, in which he sailed north, south and west, he noted the narrowness of the strait separating Venezuela from Trinidad. After escaping the furious currents both north and south, he eventually found an exit into the Caribbean Sea, which he called the Dragon's Mouth. One can well understand the difficulties these tricky passages posed for the sailing ships of Columbus' day when one reads, in a Dutch report on Trinidad as late as 1637 - 139 years after Columbus - that the Spaniards, before going through the Dragon's Mouth, promised a mass to St Anthony so that he might guard them as they passed.
On this same voyage Columbus is alleged to have sighted Tobago. What is certain is that he did not land there but proceeded from Trinidad to Hispaniola. Tobago therefore remained virtually isolated and undiscovered - an Amerindian island untouched for many decades by any Europeans, retaining its name 'Tobacco', which signifies the importance of tobacco in the Amerindian economy.
The story of the Spaniards' treatment of the Indians in the Americas is one of the saddest in the whole history of mankind. The enslavement of the Indians was begun by Christopher Columbus himself. His first act was to carry a number of them back to Spain on his ships. In Hispaniola he forced a system of tribute on the Indians who lived in the areas where gold could be found.
Every Indian over fourteen years of age was to bring him a hawk's bell full of gold every three months. The chief, or 'cacique', had to bring a calabash full. Indians in other areas had to pay tributes of cotton. Those who did not bring these tributes were punished by death.
The Indians, in the hope of leading the Europeans away, told them that the gold mines were far off. Dogs were used to round up the Indians. These animals, brought from Spain, were greatly feared by the Arawaks, who were unaccustomed to such fiercely trained creatures. The Indians were not used to hard labour, and it became quite difficult for them to secure gold.
Some Indians killed themselves rather than work for the Spaniards. Others who escaped to the mountains were hunted once more; and some took to the sea in their canoes. Many died from hopelessness and grief. The way of life they knew was over.
No longer could they simply lie idle in the sun, catch fish, pick fruit and wander where they pleased over land and sea. The only real fear they had known had been the attacks of the Caribs from the south. But under the Spaniards the Indian population rapidly decreased. According to many historians, one-third of the natives of Hispaniola were dead by 1497.
Fortunately, not all the Amerindian tribes became extinct. Amerindian descendants survive in the Americas, in several Caribbean islands and in the Guianas. Although small in number, these descendants enjoy full citizenship in the Republic of Trinidad and Tobago: in the town of Arima lives a 'Carib Queen', although some historians believe she may be an Arawak.
These descendants, in the Guianas and in Caribbean islands including St. Vincent, Jamaica, Dominica, Montserrat and the Bahamas, have begun communicating with each other, and annual festivals are held at which they display and share their foods, jewellery, clothing and even parts of their cultures.
Most of them today are literate in the languages of the land, and some, though professionals, have retained their inherited traditions, customs and cultures. They are as proud of their heritage as we are of them, living and sharing equally, strong in the knowledge and experience of both the new and the old worlds.
Much of their individual identities may be lost over the years to come, but these 'hunters led by the hunted' made no mean contribution to our own coming. | http://harrysharma.com/tandt/amerindi.htm | 13 |
32 | When a resource or asset is borrowed, the borrower pays interest to the lender for the use of it. The interest rate is the price paid for the use of money for a period of time. One type of interest rate is the yield on a bond; another is the amount (expressed as a percentage of the total sum lent) that a bank pays someone who deposits money with it.
When money is loaned, the lender defers consumption (or other use of the money) for a specific period of time. The lender does this in exchange for an expected increase in future income. The expected increase in interest payments (relative to the amount loaned) is the nominal interest rate, which can be defined in terms of the face value of money received by the lender, or paid by the borrower.
Nominal interest rate
For example, suppose a person deposits £100 with a bank for 1 year and receives interest payments totalling £10. In this case, the nominal interest rate is 10% per annum. This is also known as the annual percentage rate (APR).
Real interest rate
The real interest rate, which measures the purchasing power of interest receipts, is calculated by adjusting the actual rate received (the nominal interest rate) to take inflation into account.
A first approximation for the real interest rate for a one-year loan is:
real interest rate = nominal interest rate - expected inflation rate
After the fact, there is the realised or ex post real interest rate:
ex post real interest rate = nominal interest rate - p
where p = the actual inflation rate over the year.
Thus, if the (expected) inflation rate is 5% and the nominal interest rate is 7%, the (expected) real interest rate is 2%.
If financial markets have adjusted for the effects of expected inflation and the real interest rate is given, then the nominal rate approximately equals:
nominal interest rate = real interest rate + expected inflation rate
Thus, if the real interest rate is 3% and the inflation rate equals 5%, the nominal interest rate = 8%. The theory of rational expectations is sometimes applied to say that this equation applies in most cases. Most economists would agree that it applies over several years, as financial markets adjust: higher inflation leads to higher nominal rates, all else being equal.
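These relationships are easy to check numerically. The short Python sketch below - the function names are my own, for illustration - implements the two approximations above, together with the exact ex post relation, which divides (1 + nominal rate) by (1 + inflation rate) instead of subtracting:

def real_rate_approx(nominal, inflation):
    """First approximation: real rate = nominal rate minus inflation."""
    return nominal - inflation

def nominal_rate_approx(real, expected_inflation):
    """Nominal rate = real rate plus expected inflation."""
    return real + expected_inflation

def real_rate_exact(nominal, inflation):
    """Exact ex post real rate: (1 + nominal) / (1 + inflation) - 1."""
    return (1.0 + nominal) / (1.0 + inflation) - 1.0

# The article's worked examples:
print(real_rate_approx(0.07, 0.05))     # 7% nominal, 5% inflation -> ~0.02 (2% real)
print(nominal_rate_approx(0.03, 0.05))  # 3% real, 5% inflation -> ~0.08 (8% nominal)
print(real_rate_exact(0.07, 0.05))      # ~0.0190, slightly below the approximation

On the article's figures the approximation gives 2% and 8% exactly, while the exact ex post real rate for a 7% nominal year with 5% inflation is about 1.90% - close to, but a little below, the subtraction approximation.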
| http://www.trade2win.com/traderpedia/Interest_rate | 13
40 | Jul 21, 2007
Bronchitis is an inflammation of the bronchial tubes, or bronchi, that bring air into the lungs. The bronchi are the air passages that connect the windpipe (trachea) with the tiny air sacs (alveoli) in the lungs. Bronchitis is usually caused by viruses or bacteria and may last for several days or weeks. It can be acute or chronic. Simply said, bronchitis is an inflammation of the bronchi.
TYPES OF BRONCHITIS:
Bronchitis may be classified into two types. These include
Acute bronchitis:
This is the most common type of bronchitis. It affects the nose, sinuses and throat and then spreads to the lungs. It is caused by viruses, which attack the insides of our airways and infect them; the same viruses cause the common cold. Acute bronchitis is usually caused by a viral infection, but it can also be caused by a bacterial infection, and it can heal without complications.
Chronic bronchitis:
This is another type of bronchitis, a long-term condition also known as chronic obstructive pulmonary disease. With chronic bronchitis, the bronchial tubes continue to be inflamed (red and swollen) and irritated, and they produce excessive mucus over time. The most common cause of chronic bronchitis is smoking. Chronic bronchitis is an inflammation of the bronchi, the main air passages in the lungs, which persists for a long period or repeatedly recurs.
SYMPTOMS OF BRONCHITIS:
Symptoms of bronchitis usually begin with the symptoms of a cold, such as a runny nose, sneezing, and dry cough. The following are the common symptoms of bronchitis. These include:
*Cough that produces mucus; if yellow-green in color, you are more likely to have a bacterial infection
*Shortness of breath worsened by exertion or mild activity
*Frequent respiratory infections (such as colds or the flu)
*Ankle, feet, and leg swelling
*Blue-tinged lips from low levels of oxygen
CAUSES OF BRONCHITIS:
Bronchitis can also be caused by exposure to smoke, chemicals, or air pollution, all of which can irritate the bronchial tubes, or it can develop from accidentally inhaling (aspirating) food, vomit, or mucous material.The following are the main causes of bronchitis. These are:
*Smoking:
It is the main cause of bronchitis. Excessive smoking irritates the bronchial tubes and lowers their resistance, so that they become vulnerable to germs breathed in from the atmosphere.
*Working in a stuffy atmosphere, use of drugs and heredity:
Other causes of bronchitis are living or working in a stuffy atmosphere, use of drugs to suppress earlier diseases, and hereditary factors.
Changes in weather and environment hasten the onset of the disease.
Bronchitis occurs when the trachea (windpipe) and the large and small bronchi (airways) in the lungs become inflamed because of infection or other causes.
*The thin mucous lining of these airways can become irritated and swollen.
*The cells that make up this lining may leak fluids in response to the inflammation.
*Coughing is a reflex that works to clear secretions from your lungs. Often the discomfort of a severe cough leads you to seek medical treatment.
*Both adults and children can get bronchitis. Symptoms are similar for both.
*Infants usually get bronchiolitis, which involves the smaller airways and causes symptoms similar to asthma.
PREVENTIVE MEASURES FOR BRONCHITIS:
Good hygiene can reduce the spread of viral infection. Immunizations against influenza and pertussis can reduce the risk for bacterial bronchitis. The following steps help to get relief from bronchitis. These include:
*Wash hands frequently to avoid spreading viruses and other infections.
*Do not smoke.
*Get an annual flu vaccine and a pneumococcal vaccine as directed by your doctor.
*Minimize exposure to air pollutants.
*Take aspirin or acetaminophen (Tylenol) if you have a fever. Do not give aspirin to children.
*Drink plenty of fluids.
*Use a humidifier or steam in the bathroom.
HERBAL CURE FOR BRONCHITIS:
Herbal treatment is the best treatment for bronchitis. It has no side effects, so bronchitis sufferers can get relief through it. The following herbs help to treat bronchitis. These are:
It is the most effective home remedy for bronchitis. It reduces inflammation and helps to clear the bronchial passages.
It is an antibiotic herb, used to strengthen the immune system.
It is a most useful herb for bronchitis, used to soothe coughing.
It is a popular herb for bronchitis, used to stimulate the immune system and shorten the duration of illness.
This type of herb is used to reduce fever and nasal congestion.
This herb helps to clear mucus from the lungs.
This herb is a most effective home remedy for bronchitis, used to relieve bronchial spasms and nasal congestion.
This herb is used to reduce the flow of mucus.
This herb is used to prevent free-radical damage in the lungs.
This herb has antibiotic properties and relieves inflammation in the throat, nasal passages and sinuses.
This herb helps to clear bronchial congestion and stop wheezing.
This is a popular home remedy for bronchitis, used to relieve bronchial congestion and increase lung circulation.
This is a most useful herb for bronchitis, used to fight bacterial infections.
Onions have been used as a remedy for bronchitis for centuries. They are said to possess expectorant properties.
Another effective remedy for bronchitis is a mixture comprising half a teaspoon each of powdered ginger, pepper, and cloves. The mixture of these three ingredients also has antipyretic qualities and is effective in reducing the fever accompanying bronchitis.
One of the most effective home remedies for bronchitis is the use of turmeric powder. Half a teaspoon of this powder should be administered with half a glass of milk, two or three times daily.
Posted at 11:36 am by supercrazy
The word gastritis is derived from the Greek roots 'gastro', meaning 'of the stomach', and 'itis', meaning 'inflammation'. Gastritis is inflammation of the gastric mucosa - a mild irritation, inflammation, or infection of the stomach lining. It may be a sudden attack or chronic. Gastritis can lead to ulcers and an increased risk of stomach cancer. Gastritis isn't one disease but a group of conditions, all of which are characterized by inflammation of the lining of the stomach.
TYPES OF GASTRITIS:
The following are the common types of gastritis. These include:
Erosive gastritis:
This type of gastritis involves both inflammation and wearing away of the stomach lining. It results from irritants such as drugs, especially aspirin and other nonsteroidal anti-inflammatory drugs.
Acute stress gastritis:
It is a form of erosive gastritis. This type of gastritis is caused by a sudden illness or injury. The injury may not even be to the stomach.
Radiation gastritis:
This type of gastritis can occur if radiation is delivered to the lower left side of the chest or upper abdomen, where it can irritate the stomach lining.
Postgastrectomy gastritis:
This type of gastritis occurs in people who have had part of their stomach surgically removed (a procedure called partial gastrectomy). The inflammation usually occurs where tissue has been sewn back together.
Atrophic gastritis:
This type of gastritis can occur in people who are chronically infected with H. pylori bacteria. It also tends to occur in those who have had part of their stomach removed.
Eosinophilic gastritis:
This type may result from an allergic reaction to an infestation with roundworms. In it, eosinophils (a type of white blood cell) accumulate in the stomach wall.
Ménétrier's disease:
In this type of gastritis the stomach wall develops thick, large folds, enlarged glands, and fluid-filled cysts. The disease may be due to an abnormal immune reaction and has also been associated with H. pylori infection.
Lymphocytic gastritis:
In this type, lymphocytes (another type of white blood cell) accumulate in the stomach wall and other organs. This lymphocyte accumulation also occurs in celiac sprue (a malabsorptive disorder).
CAUSES OF GASTRITIS:
Gastritis is not a single disease but several different conditions that all involve inflammation of the stomach lining. It can be caused by many factors, including infection, injury, certain drugs, and disorders of the immune system. The most common cause is infection with Helicobacter pylori bacteria. The following are the common causes of gastritis. These include:
*Bacterial or viral infection (infection by a virus is contagious)
*Excess stomach acid caused by heavy smoking, alcohol use, caffeine, improper diet such as spicy, greasy foods.
*Use of drugs such as Aspirin, non-steroidal anti-inflammatories, cortisone.
*Fungal infection (typically in people with AIDS).
*Parasitic infection (often from poorly cooked seafood)
*Certain types of radiation
SYMPTOMS OF GASTRITIS:
Gastritis usually causes no symptoms. When symptoms do occur, they vary depending on the cause and may include pain or discomfort (dyspepsia), or nausea or vomiting - problems often simply referred to as indigestion. The symptoms of gastritis are:
*Upper abdominal pain or discomfort
PREVENTIVE MEASURES FOR GASTRITIS:
We can prevent gastritis by following the useful tips. These include:
*Eat regularly and moderately
*Limit or avoid alcohol and caffeine
*If possible avoid drugs that are irritating to your stomach
*Avoid foods that you don't digest easily.
TREATMENT FOR GASTRITIS:
The treatment of gastritis depends on the cause of the problem. We can reduce or avoid gastritis by taking appropriate treatment for the underlying disease. The treatment approaches are:
MEDICATIONS FOR GASTRITIS:
Antacids:
These neutralise stomach acid. They include calcium carbonate and magnesium hydroxide with aluminum salts.
H2 blockers:
These reduce stomach acid secretion and help to protect against or treat ulcers. They include ranitidine, cimetidine, nizatidine, and famotidine.
Proton pump inhibitors:
This medicine also reduces stomach acid secretion. It includes omeprazole and lansoprazole.
Misoprostol:
This drug helps to protect against ulcers; it can reduce the formation of ulcers.
Sucralfate:
This drug helps to heal ulcers in the stomach.
DIETARY SYSTEM FOR GASTRITIS:
We can get relief from gastritis by following the best dietary system.
The following foods can be consumed by the gastritis sufferers. These include:
The following foods should be avoided by the gastritis sufferers. These include:
*Acidic drinks such as coffee (with and without caffeine)
*Spices and peppers (for some people this is important, while for others such foods do not seem to cause symptoms or inflammation)
HERBAL TREATMENT FOR GASTRITIS:
We can easily get remedy for gastritis through the following herbs. These include:
Astragalus (Astragalus membranaceus):
It is used traditionally to treat stomach ulcers. It may also prevent the damage from radiation or chemotherapy that can lead to gastritis.
Barberry (Berberis vulgaris):
This herb contains active substances called berberine alkaloids. These substances have been shown to combat infection and bacteria. Barberry is used to ease inflammation and infection of the gastrointestinal tract. Barberry has also been used traditionally to improve appetite.
Bilberry (Vaccinium myrtillus):
Bilberry fruits help prevent stomach ulcers caused by a variety of factors including stress, medications, and alcohol.
Cat's Claw (Uncaria tomentosa):
This herb is used to treat a variety of health problems including ulcers and other gastrointestinal disorders. The benefits of this herb may be due to its ability to reduce inflammation.
Chamomile, Roman (Chamaemelum nobile):
This herb has been used to treat nausea, vomiting, heartburn, and excess intestinal gas.
Devil's Claw (Harpagophytum procumbens):
It can be useful for upset stomach and loss of appetite.
Ginger (Zingiber officinale):
This is the best herb for gastritis. Ginger has been used to aid digestion and treat stomach upset as well as nausea. This herb is also thought to reduce inflammation.
SIMPLE TIPS TO AVOID GASTRITIS:
The following tips help to maintain our health without any diseases. These are:
*Practice good eating habits
*Maintain a healthy weight
*Get plenty of exercise
Posted at 11:34 am by supercrazy
Crohn's disease is a disease which affects the lower part of the small intestine, called the ileum. It is a chronic illness that causes irritation in the digestive tract, most often in the last portion of the intestine (the ileum). Crohn's disease is an ongoing disorder that causes inflammation of the digestive tract, also referred to as the gastrointestinal (GI) tract, and it can affect any area of the GI tract, from the mouth to the anus. It is an inflammatory bowel disease, the general name for diseases that cause swelling in the intestines. Crohn's disease can occur in people of all age groups, but it is most often diagnosed in people between the ages of 20 and 30. Other names for Crohn's disease are ileitis and enteritis.
TYPES OF CROHN’S DISEASE:
The following are the five types of crohn’s disease. These include:
Ileocolitis:
This is the most common type of Crohn's disease. It affects the ileum and colon. Symptoms include diarrhea and cramping or pain in the right lower part or middle of the abdomen.
Ileitis:
This type affects the ileum only. Symptoms include diarrhea and cramping or pain in the right lower part or middle of the abdomen.
Gastroduodenal Crohn's disease:
This type affects the stomach and duodenum (the first part of the small intestine). Symptoms include loss of appetite, weight loss, and nausea.
Jejunoileitis:
This type affects the upper half of the small intestine. Symptoms include abdominal pain (ranging from mild to intense) and cramps following meals, as well as diarrhea.
Crohn's (granulomatous) colitis:
It affects the colon only. The Symptoms include diarrhea, rectal bleeding, and disease around the anus (abscess, fistulas, ulcers). Skin lesions and joint pains are more common in this form of Crohn's than in others.
SYMPTOMS OF CROHN’S DISEASE:
The symptoms of Crohn’s disease depend on what part of the intestinal tract is inflamed. The most common symptoms of crohn’s disease include:
*Diarrhoea - most people with Crohn's disease get this. It may contain blood, pus or mucus, and may occur up to 10 or 20 times a day, as well as at night.
*Pain - may be felt anywhere in the abdomen, and is often described as cramping or colicky. The abdomen may be sore to the touch and swollen.
*Loss of appetite.
*Weight loss - the ongoing symptoms of Crohn's disease, such as diarrhoea, can lead to weight loss.
*Fever - people with severe Crohn's disease may sometimes have a high temperature.
*Rectal bleeding - this may be serious and persistent and can lead to anaemia (too few red blood cells in the blood, meaning the body does not get enough oxygen).
*Painful tears (fissures), ulceration or pus-filled areas (abscesses) around the anus.
TREATMENT APPROACH FOR CROHN’S DISEASE:
Treatment for Crohn's disease depends on the location and severity of the disease. It may include medications, surgery, dietary adjustments, herbs and mind/body techniques.
MEDICATIONS FOR CROHN’S DISEASE:
The following medications are commonly used to treat CD:
SULFASALAZINE:
It reduces inflammation during acute flare-ups. Side effects include abdominal discomfort, nausea, and a lowered sperm count.
MESALAMINE:
It reduces inflammation during acute flare-ups and helps to prevent recurrences.
OLSALAZINE AND BALSALAZIDE:
They reduce inflammation during acute flare-ups and help prevent recurrences, with fewer side effects than sulfasalazine.
CORTICOSTEROIDS (SUCH AS BUDESONIDE, PREDNISONE, AND PREDNISOLONE):
They reduce inflammation by decreasing the production of prostaglandins (substances in the body that contribute to the development of pain and inflammation). Side effects include acne and an increased risk of infection, osteoporosis, high blood pressure, excessive hair growth, diabetes, and disorders of the eye including glaucoma and cataracts.
ANTIBIOTICS (SUCH AS CIPROFLOXACIN AND METRONIDAZOLE):
They may be prescribed for individuals who undergo surgical resection or who have an excess accumulation of pus and bacterial overgrowth. Side effects include nausea and anorexia.
SURGERY FOR CROHN’S DISEASE:
Crohn's disease patients require surgery, either to relieve symptoms that do not respond to medical therapy or to correct complications such as blockage, perforation, abscess, or bleeding in the intestine.
DIETARY TREATMENT FOR CROHN'S DISEASE:
Elemental diets help prevent symptom recurrence and may be as effective as certain medications in treating CD. Crohn's sufferers should follow these dietary guidelines:
*Regular intake of fruits and vegetables, and lowered fat and sugar consumption may reduce the risk of developing CD.
*Certain foods may aggravate symptoms of CD (for example, dairy products, fats, spicy foods, and artificial sweeteners) and should be avoided by people with the condition.
*After surgery, people with CD should avoid foods high in organic acids known as oxalates (for example, spinach, rhubarb, blackberries and blueberries, red currants, beets, celery, cucumbers, potatoes, coffee, tea, diet sodas, tofu, and chocolate) because oxalates can increase the risk of kidney stones.
HERBAL TREATMENT FOR CROHN'S DISEASE:
Crohn's disease sufferers can get quick remedy from the disease by taking the following herbs. These include:
CAT'S CLAW (UNCARIA TOMENTOSA):
It helps to treat intestinal disorders such as diarrhea, ulcers, and inflammatory bowel diseases.
GINKGO (GINKGO BILOBA):
It contains substances that act as antioxidants and therefore, may protect the gastrointestinal tract from the damaging effects of CD.
GOLDENSEAL (HYDRASTIS CANADENSIS):
It reduces the ability of bacteria to stick to the intestinal wall, thereby protecting against CD.
GREEN TEA (CAMELLIA SINENSIS):
It has anti-inflammatory properties and may also reduce the risk of cancer (a potential complication of CD).
SLIPPERY ELM (ULMUS FULVA):
It relieves gastrointestinal irritation
TURMERIC (CURCUMA LONGA):
It has anti-inflammatory and antioxidant properties, and reduces the possibility of cancerous changes in cells
WILD INDIGO (BAPTISIA TINCTORIA):
It contains substances that act as antioxidants. It also has properties that protect against infection and reduce inflammation.
Hypnosis and Other Relaxation Techniques:
Hypnosis may improve immune function, increase relaxation, decrease stress, and ease feelings of anxiety. Many healthcare practitioners and people with CD have reported that symptoms of the disease improve with relaxation methods such as hypnosis, meditation, and biofeedback.
Posted at 11:33 am by supercrazy
Jul 20, 2007
Amebiasis is a disease caused by an amoeba, a single-celled microscopic organism that has no solid body structure. Amebiasis is contracted by consuming contaminated food or water containing the cyst stage of the parasite, and it can also be spread by person-to-person contact. Simply said, amebiasis is an intestinal illness caused by a microscopic parasite called Entamoeba histolytica. Other names for amebiasis are amebic dysentery and intestinal amebiasis.
TYPES OF AMEBIASIS:
The following are the two types of Amebiasis.These include:
The cyst:
Its other name is the encapsulated form. The cyst can survive outside the human body because of its protective covering. In the digestive tract the cysts are transported to the intestine, where their walls are broken open by digestive secretions, releasing the mobile trophozoites.
The trophozoite:
This is the second form. The trophozoite form can't survive once excreted in the stool and therefore can't infect others. The trophozoites may remain inside the intestine, in the intestinal wall, or may break through the intestinal wall and be carried by the blood to the liver, lungs, brain, or other organs.
CAUSES OF AMEBIASIS:
Amebiasis is an infection caused by the protozoal organism E. histolytica, which causes colitis and liver abscess. Amebiasis occurs when a person swallows microscopic cysts containing the parasites. The cysts may be in contaminated food or water; transmission generally occurs through ingestion of cysts from food or water contaminated by feces. All household members should have their stools examined, because person-to-person transmission can occur. During its life cycle, the ameba exists in the two very different forms described above.
SYMPTOMS OF AMEBIASIS:
Most of the time, amebiasis symptoms do not occur. When symptoms do occur, the parasites have already invaded deep into the wall of the intestine. Common symptoms include loose stools, abdominal cramping, and abdominal pain.
TREATMENT FOR AMEBIASIS:
Anyone can get amebiasis, but it can be cured with proper treatment. The choice of drug depends on the type of clinical presentation and the site of drug action (in the intestinal wall versus inside the intestine itself). Commonly used drugs include metronidazole or tinidazole for invasive disease, usually followed by a luminal agent such as paromomycin.
Amebiasis sufferers must use these drugs as per their physician's suggestion, because the drugs have side effects.
HERBAL TREATMENT FOR AMEBIASIS:
Herbal treatment is another option for amebiasis; it is said to have few side effects, so some sufferers seek relief through it.
Chaparro amargo:
It is considered an excellent herb for amebiasis. Sufferers take 30 drops of chaparro amargo with water in the morning and 30 drops before the last meal of the day, for seven days straight. After a seven-day break from the treatment, it is resumed for seven days. Some mild cramping may be felt; this is said to mean that the amoebas are dying and will be expelled from the body.
PREVENTIVE MEASURES FOR AMEBIASIS:
We can avoid amebiasis by following some preventive measures. These are:
Avoid unsanitary water supplies.
When traveling, avoid food that is not cooked or peeled.
Protect food from feces, flies, and contaminated water.
When camping, boil water for 5 minutes or treat it with disinfectant tablets. (Adding chlorine to the water will not kill the parasite, but Globaline tablets and iodine will.)
Wash hands after defecation and before preparing or eating food.
SIMPLE TIPS TO REDUCE AMEBIASIS:
The following tips help to prevent amebiasis infections. These are:
*Thoroughly cook all raw foods.
*Thoroughly wash raw vegetables and fruits before eating.
*Reheat food until the internal temperature of the food reaches at least 167º Fahrenheit.
*Uncooked foods must be avoided, particularly vegetables and fruit, which cannot be peeled before eating.
*Unpacked drinks and ice should also be avoided.
Food handlers should always use disposable paper towels or an air dryer to dry their hands. Generally, cloth towels are not recommended as they can spread germs from one person to another.
Arthritis ('arth' meaning joint, 'itis' meaning inflammation) is derived from the Greek word "arthron," meaning joint, and literally means joint inflammation. Arthritis is a joint disorder featuring inflammation. A joint is an area of the body where two different bones meet; it functions to move the body parts connected by its bones. Arthritis sufferers include men and women, children and adults. Arthritis is the leading cause of disability in people over the age of 65.
CAUSES OF ARTHRITIS:
The causes of arthritis depend on the form of arthritis. The following are some common causes. These include:
*Structural changes in the articular cartilage in the joints
SYMPTOMS OF ARTHRITIS:
Arthritis is a group of conditions that affect the health of the bone joints in the body. Arthritis can begin very gradually, or it can strike quickly. Symptoms of arthritis include pain and limited function of joints. The following are the common symptoms of Arthritis. These include:
*Weight loss
*Loss of appetite
*Joint pain
*Stiffness in joints
*Deformed hands and feet
*Constipation, etc.
TYPES OF ARTHRITIS:
Arthritis is a painful inflammation of a joint or joints of the body. The types of arthritis are as follows:
Inflammatory arthritis:
Inflammatory arthritis is the most common type of arthritis. It is characterized by inflammation of tissues associated with joints. Examples include connective tissue diseases, crystal deposition diseases, infectious arthritis, and spondyloarthropathies.
Nonarticular rheumatism:
Nonarticular rheumatism is a group of diseases, also called soft-tissue rheumatisms. It includes tendonitis, bursitis, tenosynovitis, and fibrositis. The etiology is unclear, but the disorder may relate to psychobiologic or sleep disturbances or muscular and soft-tissue abnormalities.
Rheumatoid arthritis:
Rheumatoid arthritis is the most common variety of inflammatory arthritis. It occurs in younger and middle-aged persons. It is characterized by noninfectious inflammation of the synovium (joint-lining membrane), frequently associated with extraarticular manifestations beyond the joints. Treatment includes non-drug measures such as rest and physiotherapy; drugs may also be required to control symptoms of the disease.
Degenerative joint disease (osteoarthritis):
Also called osteoarthritis, it is a ubiquitous joint disease characterized pathologically by deterioration of the cartilage lining the joints and new bone formation beneath the cartilage. Degenerative joint disease is marked by progressive stiffness, loss of function, and destruction of the larger, weight-bearing joints of the body. Primary osteoarthritis is mostly related to aging. The main symptom of osteoarthritis is pain that worsens during activity and eases during rest.
Juvenile rheumatoid arthritis:
Juvenile rheumatoid arthritis (JRA) is a form of arthritis in children ages 16 or younger that causes inflammation and stiffness of joints for more than six weeks. The treatment of juvenile rheumatoid arthritis centers on decreasing joint inflammation, suppressing pain, and preserving movement.
Septic arthritis:
It is also known as infectious arthritis or pyogenic arthritis. It is an infection in the joint (synovial) fluid and joint tissues. Septic arthritis requires immediate treatment.
Psoriatic arthritis:
It is a chronic inflammation of the joints that occurs in some people with a chronic skin and nail condition known as psoriasis. The cause of psoriatic arthritis is unknown; it is triggered by an attack of the body's own immune system on itself.
Gout (gouty arthritis):
Gout, or gouty arthritis, is a form of arthritis caused by the accumulation of uric acid crystals (due to hyperuricemia) in joints. The goals of treatment for gout consist of alleviating pain, avoiding severe attacks in the future, and preventing long-term joint damage.
TREATMENT FOR ARTHRITIS:
A variety of treatments has been recommended for patients with arthritis. The objectives in treating arthritis are controlling inflammation, preserving joint function, and, where possible, curing the disease. Treatment options vary depending on the type of arthritis. These include:
Medications (symptomatic or targeted at the disease process causing the arthritis):
These help to decrease inflammation and to treat pain.
Nonsteroidal anti-inflammatory drugs (NSAIDs):
These include naproxen (Naprosyn), ibuprofen (Advil, Medipren, Motrin), and etodolac (Lodine). These medications help provide relief from arthritis.
Joint surgery is generally very effective, and more than 90% of patients are very satisfied; for severe arthritis it can be the best treatment.
Exercise and relaxation programs have been used by arthritis patients to promote relaxation, relieve stress, and improve flexibility.
Heat and cold therapy can help to greatly reduce pain and inflammation. Moist heat is more effective than dry heat, and cold packs are useful during acute flare-ups.
ACUPUNCTURE AND ACUPRESSURE MASSAGE:
These can be extremely helpful in treating arthritis. They can aggressively promote the movement of blood to relieve pain and restore normalcy. Arthritis sufferers may find relief through acupuncture and acupressure massage.
MOXIBUSTION (HEAT TREATMENT):
It is another common treatment for arthritis. It is extremely beneficial in treating cold conditions such as cold bi, damp bi, and third-stage injuries. It is not appropriate in hot conditions, such as when a joint appears red or feels warm to the touch, or in cases of rheumatoid arthritis.
HERBS FOR ARTHRITIS:
Herbal treatment is a popular option for arthritis. Arthritis sufferers may find relief from the following herbs. These include:
Ginger:
Ginger is a fantastic herb that has been used for the treatment of many ailments. It is very beneficial in relieving arthritis; its anti-inflammatory properties make it a well-known arthritis treatment.
Cayenne Pepper (Capsicum frutescens):
Cayenne pepper, or red pepper, is another wonderful herb, with a wide range of medicinal properties to heal the body. Cayenne can be very hot, and some people are sensitive to it. Take cayenne as a tincture for fast-acting absorption, and include the spice in your food.
Garlic and Ginkgo Biloba:
Garlic and Ginkgo Biloba are the best herbal medicines for Arthritis. Both garlic and ginkgo biloba have been shown to help with circulation and improve blood flow. This is important where arthritis is concerned.
Devil's Claw (Harpagophytum procumbens):
It is a tuber found in South Africa that contains a glycoside called harpagoside, which helps to reduce inflammation in joints.
Stinging Nettle (Urtica dioica):
Stinging nettle is an official remedy for rheumatism in Germany. It is the most important herb to consider for treating early-onset arthritis. Nettle juice contains an anti-inflammatory component similar to that of steroid drugs.
Yucca:
Yucca is an excellent herb for arthritis and has long been used to reduce arthritic pain. A double-blind clinical trial indicated that a saponin extract of yucca demonstrated a positive therapeutic effect.
Another herb has been used historically in the treatment of inflammation, allergy, asthma, and other conditions that put added stress on the adrenals.
The word anorexia is derived from the Greek prefix "an-" (denoting absence) and "orexis" (appetite); it literally means loss of appetite. Anorexia is the decreased sensation of appetite, a loss or reduction in the appetite for food. Appetite is the desire to eat; a decreased appetite is a reduced desire to eat that occurs despite the body's basic caloric (energy) needs. Anorexia causes people to lose interest in eating. Other names for anorexia are loss of appetite and decreased appetite.
TYPES OF ANOREXIA:
The following are the two types of anorexia. These include:
Restricting type:
This is the first type of appetite loss. This type of anorexia is caused by the patient adopting harmful habits, like fasting.
Binge eating or purging type:
This is the second type of appetite loss. This type is characterised by the use of self-induced vomiting, or misuse of laxatives or diuretics to help prevent weight gain.
CAUSES OF ANOREXIA:
Anorexia is an eating disorder in which people starve themselves. It usually begins in young people around the onset of puberty. Individuals suffering from anorexia have severe weight loss. There are many causes associated with anorexia. These are as follows:
*Fever - any fever may cause temporary loss of appetite.
*Prolonged fever - may affect appetite long enough to cause weight loss
*Changes in taste and smell
SYMPTOMS OF ANOREXIA:
The following conditions are some of the possible symptoms of poor appetite. These are as follows:
*Emotional stress - may lead to either under-eating or over-eating
*Grief or loss
* Relationship problems
HERBAL TREATMENT FOR ANOREXIA:
The herbal treatment is the best treatment for Anorexia. Anorexia can be reduced by using the following herbs. These include:
BITTER GREENS:
Bitter greens are a good herbal remedy for anorexia. They include arugula, radicchio, collards, kale, endives, escarole, mizuna, sorrel, dandelions, watercress, and red/green mustard. Bitter foods also stimulate the gallbladder to contract and release bile, which helps break fatty foods into particles small enough for enzymes to finish breaking them apart for absorption.
WATER:
Water is also a good remedy for anorexia. The wonders of water never cease; water helps to control the appetite.
BARBERRY AND GOLDENSEAL (HYDRASTIS CANADENSIS):
They are useful remedies for anorexia and have very similar therapeutic uses, because both herbs contain active substances called berberine alkaloids. These substances have been shown to combat infection and bacteria, stimulate the activity of the immune system, and lower fever.
LEMON BALM (MELISSA OFFICINALIS):
A member of the mint family, lemon balm has long been considered a "calming" herb. It has been used since the Middle Ages to reduce stress and anxiety, promote sleep, and improve appetite.
YARROW (ACHILLEA MILLEFOLIUM):
It was named after Achilles, the Greek mythical figure who used it to stop the bleeding wounds of his soldiers. Popular in European folk medicine, yarrow has traditionally been used to treat wounds, menstrual ailments, bleeding hemorrhoids, and appetite loss.
The appendix is a small, tube-like structure attached to the first part of the large intestine, also called the colon. The appendix is located in the lower right portion of the abdomen. It has no known function. Removal of the appendix appears to cause no change in digestive function.
Appendicitis is an inflammation of the appendix. Once it starts, there is no effective medical therapy, so appendicitis is considered a medical emergency. When treated promptly, most patients recover without difficulty. If treatment is delayed, the appendix can burst, causing infection and even death. Appendicitis is the most common acute surgical emergency of the abdomen. Anyone can get appendicitis, but it occurs most often between the ages of 10 and 30.
CAUSES OF APPENDICITIS:
Appendicitis is one of the most common causes of emergency abdominal surgery in the United States. Appendicitis usually occurs when the appendix becomes blocked by feces, a foreign object, or rarely, a tumor.
The cause of appendicitis relates to blockage of the inside of the appendix, known as the lumen. The blockage leads to increased pressure, impaired blood flow, and inflammation. If the blockage is not treated, gangrene and rupture (breaking or tearing) of the appendix can result.
Most commonly, feces blocks the inside of the appendix. Also, bacterial or viral infections in the digestive tract can lead to swelling of lymph nodes, which squeeze the appendix and cause obstruction. This swelling of lymph nodes is known as lymphoid hyperplasia. Traumatic injury to the abdomen may lead to appendicitis in a small number of people. Genetics may be a factor in others. For example, appendicitis that runs in families may result from a genetic variant that predisposes a person to obstruction of the appendiceal lumen.
SYMPTOMS OF APPENDICITIS:
The symptoms of appendicitis vary. It can be hard to diagnose appendicitis in young children, the elderly, and women of childbearing age.
Typically, the first symptom is pain around your navel. (See: abdominal pain) The pain initially may be vague, but becomes increasingly sharp and severe. You may have reduced appetite, nausea, vomiting, and a low-grade fever.
As the inflammation in the appendix increases, the pain tends to move into your right lower abdomen and focuses directly above the appendix at a place called McBurney's point.
Appendicitis is accompanied by the following signs and symptoms:
*Pain on the right side of the abdomen, usually beginning near the navel and moving down and to the right. The pain worsens when moving, taking deep breaths, coughing, sneezing, or being touched in this area.
*Loss of appetite
*Change in bowel movements, including diarrhea or inability to have a bowel movement or to pass gas
*Low fever that begins after other symptoms
*Urinating frequently or difficult or painful urination
PREVENTIVE MEASURES FOR APPENDICITIS:
There is no way to prevent appendicitis. However, appendicitis is less common in people who eat foods high in fiber, such as fresh fruits and vegetables.
TREATMENT FOR APPENDICITIS:
For uncomplicated cases, a surgical procedure called an appendectomy is performed to remove the appendix soon after the diagnosis. An appendectomy can be done as an "open" procedure, where fairly large surgical cuts are made in your abdomen. The surgery can also be done as a laparoscopic procedure, which uses a camera and small incisions.
If the operation reveals that the appendix is normal, the surgeon will remove the appendix and explore the rest of the abdomen for other causes of your pain.
If a CT scan reveals an abscess from a ruptured appendix, the patient may be treated and the appendix removed later, after the infection and inflammation have gone away.
Appendicitis is most often treated with a combination of surgery and antibiotics. In addition to antibiotics, you will receive intravenous fluids and, if nauseated, medication to control vomiting. If you have symptoms of appendicitis, you will be evaluated for surgery. When the diagnosis is not clear from tests such as an ultrasound or CT scan, exploratory surgery is performed. If appendicitis is confirmed, either from the tests or the exploratory surgery, the appendix is removed in a procedure called an appendectomy.
The doctor may also prescribe medications, such as drugs taken to ease nausea.
DIAGNOSIS, SURGERY, AND OTHER PROCEDURES:
Doctors can usually diagnose appendicitis from a description of the symptoms, the physical exam, and laboratory tests alone. In some cases, additional tests may be needed. These may include:
*Abdominal CT scan
Blood tests are used to check for signs of infection, such as a high white blood cell count. Blood chemistries may also show dehydration or fluid and electrolyte disorders. Urinalysis is used to rule out a urinary tract infection. Doctors may also order a pregnancy test for women of childbearing age (those who have regular periods).
X rays, ultrasound, and computed tomography (CT) scans can produce images of the abdomen. Plain x rays can show signs of obstruction, perforation (a hole), foreign bodies, and in rare cases, an appendicolith, which is hardened stool in the appendix. Ultrasound may show appendiceal inflammation and can diagnose gall bladder disease and pregnancy. By far the most common test used, however, is the CT scan. This test provides a series of cross-sectional images of the body and can identify many abdominal conditions and facilitate diagnosis when the clinical impression is in doubt. All women of childbearing age should have a pregnancy test before undergoing any testing with x rays.
An appendectomy is the surgical removal of the appendix through an incision in your abdomen that can be several inches long. A laparoscopic appendectomy involves making several tiny cuts in the abdomen and inserting a miniature camera and surgical instruments. The surgeon then removes the appendix through one of the small incisions. The advantage of laparoscopic appendectomy is that recovery is usually faster than with traditional surgery. However, not everyone is a candidate for the laparoscopic procedure.
Acute appendicitis is treated by surgery to remove the appendix. The operation may be performed through a standard small incision in the right lower part of the abdomen, or it may be performed using a laparoscope, which requires three to four smaller incisions. If other conditions are suspected in addition to appendicitis, they may be identified using laparoscopy. In some patients, laparoscopy is preferable to open surgery because the incision is smaller, recovery time is quicker, and less pain medication is required. The appendix is almost always removed, even if it is found to be normal. With complete removal, any later episodes of pain will not be attributed to appendicitis.
Recovery from appendectomy takes a few weeks. Doctors usually prescribe pain medication and ask patients to limit physical activity. Recovery from laparoscopic appendectomy is generally faster, but limiting strenuous activity may still be necessary for 4 to 6 weeks after surgery. Most people treated for appendicitis recover excellently and rarely need to make any changes in their diet, exercise, or lifestyle.
A person who's had an appendectomy will feel better soon, and he or she won't feel any different without an appendix.
The way acupuncture works on acute abdominal conditions is complex. In Chinese medical terms, appendicitis is thought to be caused by blockages in the circulation of blood and the flow of vitality. Acupuncture appears to help relieve pain, control peristalsis (the wave-like movements of muscles in the intestines), and improve blood flow. Case reports from China suggest that acupuncture has been used for mild appendicitis. Electroacupuncture (sending electric current through needles) has also been used.
Belladonna and Bryonia:
These are classic homeopathic remedies often used for an inflamed appendix. Using the appropriate homeopathic remedy along with conventional Western medicine may relieve your symptoms and help clear up appendicitis more quickly. However, no scientific literature supports the use of homeopathy for appendicitis. An experienced homeopath would assess each individual's constitution and any current symptoms before recommending a remedy.
HERBAL TREATMENT FOR APPENDICITIS:
Traditional Chinese herbal therapies may help treat appendicitis. There is not yet enough scientific research on Chinese or Western herbs to be sure, but there are some case reports from a TCM perspective. In a report of 425 patients with acute appendicitis treated with Chinese herbal preparations, either with or without antibiotics, the majority of patients did extremely well and did not require surgery. Of the 425 cases, 93% were cured with TCM alone, 4% with TCM and antibiotics together, and 3% with surgery after medicine failed. Only thirty patients had acute relapse of appendicitis shortly after recovery. Given that appendicitis sometimes resolves but then recurs, a subset of the people who had not had surgery was followed for 1 year; 85% of them experienced complete recovery without recurrence during that period.
Some examples of herbal therapies used in TCM include: detoxifying and fever-reducing herbs (Flos lonicerae, Fructus forsythiae, Herba taraxaci, Patrinia scabioseafolia, Gypsum fibrosum), circulation-enhancing herbs (Semen persicae, Radix paeoniae rubra, Squama manitis, Spina gleditsiae), and laxatives (Rhizoma rhei, Mirabilitum depuratum).
Here are some home remedies for appendicitis:
Green gram:
It is an excellent home remedy for appendicitis. Consume 1 tsp of green gram thrice a day.
Fenugreek seed tea:
In 1 litre of cold water, add 1 tbsp of fenugreek seeds and cook on a low flame. Simmer for half an hour and then strain. Allow it to cool a bit before drinking. As part of home remedy treatment for appendicitis, tea made from fenugreek seeds is considered valuable.
Beet and cucumber juices:
Take about 100 ml each of beet and cucumber juices and mix with 300 ml carrot juice. This mixture has to be taken two times on a daily basis.
Buttermilk:
For chronic appendicitis, buttermilk is of great value. Consume 1 litre of buttermilk every day.
Echinacea and goldenseal:
Echinacea and goldenseal in combination are helpful to the immune system and can act to prevent infection in the wound. The daily dose for children and adults is one to two teaspoons, twice or thrice daily.
It has a rich combination of minerals and is also abundant in micronutrients, aiding greatly in bolstering immune function. It can also be given to children in appropriate doses.
This herb is good for strengthening the immune system and is also an excellent source of trace minerals and micronutrients; it will greatly speed immune function and the body's recovery. This herb should not be used when a fever is present or if a fever persists during the recovery phase.
These are rich in minerals and micronutrients and are considered very good general tonics in the treatment of all surgery patients. However, both of these herbs are unsuitable as supplements for children below four years of age, as they can easily upset children's stomachs. The herbs must be discontinued immediately as a supplement should such symptoms occur.
This Chinese herbal concoction helps in the recovery of strength in the body, and in boosting resistance to infection; it is however not advised for children who have fever or other signs of persistent infection while they recover from surgery.
Vitamin E oil, castor oil, or evening primrose oil:
Once recovery is complete and medical discharge is obtained, these oils can be rubbed on the area to prevent scarring and minimize scar tissue formation.
Points to Remember
*The appendix is a small, tube-like structure attached to the first part of the colon. Appendicitis is an inflammation of the appendix.
*Appendicitis is considered a medical emergency.
*Symptoms of appendicitis include pain in the abdomen, loss of appetite, nausea, vomiting, constipation or diarrhea, inability to pass gas, low-grade fever, and abdominal swelling. Not everyone with appendicitis has all the symptoms.
*Physical examination, laboratory tests, and imaging tests are used to diagnose appendicitis.
*Acute appendicitis is treated by surgery to remove the appendix.
*The most serious complication of appendicitis is rupture, which can lead to peritonitis and abscess.
DIET CHART FOR APPENDICITIS:
A - DIET
I. An all-fruit diet for 2 or 3 days, with three meals a day of fresh juicy fruits at five-hourly intervals.
II. Fruit and milk diet for further 3 days. In this regimen, milk may be added to each fruit meal.
III. Thereafter, adopt a well-balanced diet on the following lines:
1. Upon arising:
A glass of lukewarm water with half a freshly squeezed lime and a teaspoon of honey.
2. Breakfast:
Fruits and milk, followed by nuts, if desired.
3. Lunch:
Steamed vegetables, 2 or 3 whole-wheat chapattis and a glass of buttermilk.
4. Mid-afternoon:
A glass of fresh fruit or vegetable juice or sugarcane juice.
5. Dinner:
A bowl of fresh green vegetable salad, with lime juice dressing, sprouted seeds and fresh homemade cottage cheese or a glass of buttermilk.
6. At bedtime:
A glass of fresh milk or an apple.
Avoid:
Meat, fried foods, condiments, spices, white sugar, white flour and products made from white flour, sugar, tea, coffee, refined cereals and tinned and canned foods.
B - OTHER MEASURES
1. Abdominal packs, 2 or 3 times daily, for a duration of one hour each.
2. Massage to abdomen.
3. Adequate rest.
4. Hot fomentation to painful area several times daily.
Jul 19, 2007
Bell's palsy is paralysis or weakness of the facial muscles on one side only. It comes on suddenly and has no obvious cause. It is the most common cause of paralysis affecting the face. Bell's palsy is a condition in which one side of the face becomes paralysed. It is usually temporary.
Bell's palsy was named after Sir Charles Bell, a 19th century doctor who first described the condition and linked it to a problem with the facial nerve.
The incidence of Bell's palsy in males and females, as well as in the various races, is approximately equal. The chances of the condition being mild or severe and the rate of recovery are also equal. Bell's palsy should not cause any other part of the body to become paralyzed, weak or numb. It is not contagious. People with Bell's palsy can return to work and resume normal activity as soon as they feel up to it.
Bell's palsy is a disorder caused by damage to cranial nerve VII, involving sudden facial drooping and decreased ability to move the face.
CAUSES OF BELL'S PALSY:
Bell's palsy occurs when the nerve that controls the facial muscles is swollen, inflamed, or compressed, resulting in facial weakness or paralysis. Exactly what causes this damage, however, is unknown. When Bell's palsy occurs, the function of the facial nerve is disrupted, causing an interruption in the messages the brain sends to the facial muscles. This interruption results in facial weakness or paralysis. A specific cause of Bell's palsy is unknown; however, it has been suggested that the disorder may be inherited. It also may be associated with:
*High blood pressure
Bell's palsy afflicts approximately 40,000 Americans each year. It affects men and women equally and can occur at any age, but it is less common before age 15 or after age 60. It disproportionately attacks pregnant women and people who have diabetes or upper respiratory ailments such as the flu or a cold.
Other causes of facial paralysis include:
*Pressure on the facial nerve (eg caused by a tumour)
*Infections (eg Lyme disease)
*Sarcoidosis (a condition of the immune system)
*Disorders which affect the immune system, such as HIV/AIDS
After recovery, some people experience lingering effects, such as:
*Spontaneous twitches or spasms (called synkinesis), such as the corner of the mouth turning up in a "smile" when blinking.
*Tears forming in one eye while eating.
SYMPTOMS OF BELL'S PALSY:
The following are the most common symptoms of Bell's palsy. However, each individual may experience symptoms differently. Symptoms may include:
*Loss of feeling in the face
*Loss of the sense of taste on the front two-thirds of the tongue
*Hypersensitivity to sound in the affected ear
*Inability to close the eye on the affected side of the face
*Affects the muscles that control facial expressions such as smiling, squinting, blinking, or closing the eyelid.
The symptoms of Bell's palsy are likely to come on very quickly - often in a matter of hours or overnight. The main symptom is likely to be paralysis or weakness on one side of the face, along with a sagging eyebrow and difficulty closing the eye. Mild earache or pain behind the ear is sometimes the first sign of Bell's palsy. There are several other possible symptoms including:
*A sagging mouth
*Dribbling of saliva and drinks
*Difficulty in speaking
*Alteration or loss of taste at the front of the tongue
*Dryness or watering of the affected eye
*A turned-out lower eyelid
*Unusually sharp hearing on the affected side
Bell's palsy is not preventable.
TREATMENT FOR BELL'S PALSY:
In order to be sure that this is the cause of the facial weakness, and not something else, a special set of questions will be asked. After an examination of the head, neck, and ears, a series of tests may be performed. The most common tests are:
Hearing test:
Determines if the cause of damage to the nerve has involved the hearing nerve, inner ear, or delicate hearing mechanism.
Balance test:
Evaluates balance nerve involvement.
Tear test:
Measures the eye's ability to produce tears. Eye drops may be necessary to prevent drying of the surface of the eye (the cornea).
Imaging:
CT (computerized tomography) or MRI (magnetic resonance imaging) determines if there is infection, tumor, bone fracture, or other abnormality in the area of the facial nerve.
Nerve stimulation test:
Stimulates the facial nerve to assess how badly the nerve is damaged. This test may have to be repeated at frequent intervals to see if the disease is progressing.
The results of diagnostic testing will determine treatment.
*If infection is the cause, then an antibiotic to fight bacteria (as in middle ear infections) or antiviral agents (to fight syndromes caused by viruses like Ramsay Hunt) may be used.
*If simple swelling is believed to be responsible for the facial nerve disorder, then steroids are often prescribed.
*In certain circumstances, surgical removal of the bone around the nerve (decompression) may be appropriate.
There is no cure or standard course of treatment for Bell's palsy. The most important factor in treatment is to eliminate the source of the nerve damage.
Bell's palsy affects each individual differently. Some cases are mild and do not require treatment as the symptoms usually subside on their own within 2 weeks. For others, treatment may include medications and other therapeutic options.
Bell's palsy may make it hard to close your eyelid. These safeguards can help stop the surface of your eyeball drying out.
*Regularly close the eye by pulling the upper lid down with your finger.
*Wear protective glasses or an eye patch.
*Tape the eye closed before you go to sleep.
*Use artificial tears (eye drops) to keep the eye moist - ask a pharmacist for advice.
Treatment to stimulate the facial nerve and help maintain muscle tone may be beneficial to some.
Facial massage and exercises:
It may help prevent permanent contractures (shrinkage or shortening of muscles) of the paralyzed muscles before recovery takes place. Moist heat applied to the affected side of the face may help reduce pain.
The other therapies include relaxation techniques, acupuncture, electrical stimulation, biofeedback training, and vitamin therapy (including vitamin B12, B6, and zinc), which may help nerve growth.
Decompression surgery for Bell's palsy:
It helps to relieve pressure on the nerve-is controversial and is seldom recommended.
Cosmetic or reconstructive surgery:
It may be needed to reduce deformities and correct some damage such as an eyelid that will not fully close or a crooked smile.
Acupuncture is started on the side of the face that is not affected by the palsy. It usually takes two weeks of daily treatment to see changes in symptoms. Acupuncturists in Kunming have developed extremely effective approaches to treating this disease. These approaches involve the technique of "pause and regress," in which needles are inserted, withdrawn, and replaced on acupuncture points over the parts of the face served by the facial nerve.
Herbal treatment is designed to stop pain and improve symptoms, correct imbalance and adjust the immune system, and, most importantly, to boost energy and strengthen the body for better health and quality of life. The purpose of herbal treatment is not to take the place of necessary orthodox medical treatment; a combination of the two is a better choice.
HERBS FOR BELL'S PALSY:
Clove oil is a good remedy for Bell's palsy. Sufferers may take 5-10 drops in 1/4 cup of water 3 times daily. It increases the effectiveness of acyclovir (Zovirax).
Kudzu tablets relieve muscle tension in the muscles of the face and neck not affected by the palsy. Sufferers may take 10 mg 3 times daily.
A good dietary source of B vitamins is nutritional yeast, which can be used as a bread spread, added to all kinds of soups and sauces, or sprinkled directly on salads. All these food items should be included in the diet to provide the maximum amount of the B-complex vitamins.
*Vitamin B1 supports the nerves and nervous-system function in general, besides its role in alleviating depression.
*Vitamin C intake must also be maintained, as it is an effective agent against all forms of inflammation that affect patients.
Use about 50 mg of standardized ginkgo extract mixed in water every day until the symptoms disappear.
Rue was first mentioned by Turner in his Herbal of 1562, and has since become one of the best-known medicines for Bell's palsy.
Strychnine was discovered in 1818 (Motion, 2000), although nux vomica, the unpurified plant extract in which it is the active component, had long been known and used for Bell's palsy.
Food poisoning is a common, usually mild, but sometimes deadly illness. Foodborne illness results from eating food contaminated with bacteria (or their toxins) or other pathogens such as parasites or viruses. The illnesses range from upset stomach to more serious symptoms, including diarrhea, fever, vomiting, abdominal cramps, and dehydration. Food poisoning comes from eating foods that contain germs like bad bacteria or toxins, which are poisonous substances. Bacteria are all around us, so mild cases of food poisoning are common.
Food poisoning is the result of eating organisms or toxins in contaminated food. Most cases of food poisoning are from common bacteria like Staphylococcus or E. coli.
CAUSES OF FOOD-POISONING:
Food poisoning can affect one person or it can occur as an outbreak in a group of people who all ate the same contaminated food. Food poisoning tends to occur at picnics, school cafeterias, and large social functions. These are situations where food may be left unrefrigerated too long or food preparation techniques are not clean. Food poisoning often occurs from undercooked meats or dairy products (like mayonnaise mixed in coleslaw or potato salad) that have sat out too long.
Food poisoning can be caused by:
*E. coli enteritis
*Foods from animals, raw foods, and unwashed vegetables all can contain germs that cause food poisoning.
Some of the most common bacteria are:
*Salmonella (say: sal-meh-neh-luh)
*Listeria (say: lis-tir-ee-uh)
*Campylobacter (say: kam-pye-low-bak-tur)
*E. coli (say: ee kole-eye)
SYMPTOMS OF FOOD-POISONING:
Food poisoning from bacteria causes nausea, vomiting, abdominal cramping, and diarrhea. Specific bacteria may cause these signs and symptoms:
Clostridium botulinum (C. botulinum, or botulism): weakness, blurred vision, sensitivity to light, double vision, paralyzed eye nerves, difficulty speaking, trouble swallowing, paralysis that spreads downward, respiratory failure, death
C. botulinum in infants: impaired physical growth (failure to thrive), constipation, paralysis, sudden infant death
Vibrio cholerae (V. cholerae, or cholera): stools that are liquid with a whitish tinge
Salmonella spp., Shigella spp., and Campylobacter jejuni (C. jejuni): fever, chills, bloody diarrhea
Escherichia coli (E. coli): hemorrhagic colitis (bleeding from inflamed large intestine)
Yersinia spp.: symptoms similar to appendicitis; delayed immune reaction including arthritis and/or red, tender bumps under the skin (erythema nodosum); sometimes bloody stool
In most cases of foodborne illness, symptoms resemble intestinal flu and may last a few hours or even several days. Symptoms can range from mild to serious and include diarrhea, nausea, vomiting, abdominal cramps, and fever.
TIPS TO PREVENT FOOD-POISONING:
Take the following steps when preparing food:
*Carefully wash your hands and clean dishes and utensils.
*Use a thermometer when cooking. Cook beef to at least 160°F, poultry to at least 180°F, and fish to at least 140°F.
*DO NOT place cooked meat or fish back onto the same plate or container that held the raw meat, unless the container has been thoroughly washed.
*Promptly refrigerate any food you will not be eating right away. Keep the refrigerator set to around 40°F and your freezer at or below 0°F.
DO NOT eat meat, poultry, or fish that has been refrigerated uncooked for longer than 1 to 2 days.
*DO NOT use outdated foods, packaged food with a broken seal, or cans that are bulging or have a dent.
*DO NOT use foods that have an unusual odor or a spoiled taste.
STEPS TO KEEP FOOD SAFE:
We can keep our food safe by following these steps:
*Wash fruits and vegetables well before eating them.
*Only eat foods that are properly cooked. If you cut into chicken and it looks pink and raw inside, tell a grown-up.
*Look at what you're eating and smell it, too. If something looks or smells different than normal, check with an adult before eating or drinking it. Milk is a good example. If you've ever had a sip of sour milk, you know you never want to taste that again! Mold (which can be green, pink, white, or brown) is also often a sign that food has spoiled.
*If you're going to eat leftovers, ask a grown-up for help heating them up. By heating them, you can kill bacteria that grew while it was in the fridge.
*Check the date. Lots of packaged foods have expiration dates or "sell by" dates. Don't eat a food if today's date is after the expiration date. Use it before it expires. Some of these dates are "sell by," which means that the food should leave store shelves by that time. Ask an adult for help deciding if it's past the sell by date.
*Cover and refrigerate food right away. Sitting at room temperature, bacteria get a good chance to grow. By putting food in the fridge, you're putting the chill on those bad germs!
STEPS TO PREVENT FOOD POISONING:
Follow these tips to prevent food poisoning:
*Always wash your hands thoroughly before preparing food, after going to the toilet and after handling pets.
*Keep kitchen work surfaces clean.
*Make sure food is defrosted completely before cooking.
*Keep pets away from food.
*Ensure food is cooked thoroughly before eating. Meat shouldn't have any pink bits.
*Serve reheated food piping hot.
*Keep raw meat and fish covered and store at the bottom of the fridge.
*Store all perishable foods at 5°C (41°F) or less.
*Keep raw food covered up.
*Rinse fruit and vegetables under running water before eating.
TREATMENT PLAN FOR FOOD POISONING:
Treatment is meant to help support recovery and relieve symptoms. For instance, treatment may help replace fluids and electrolytes (such as sodium, potassium, magnesium, and chloride), help the person breathe, or stop vomiting or diarrhea.
For the most common causes of food poisoning, the doctor would not prescribe antibiotics. Antibiotics can actually prolong diarrhea and keep the organism in your body longer.
If you have eaten toxins from mushrooms or shellfish, you will need to be seen right away. The doctor will take steps to empty out your stomach and remove the toxin.
Most infections last 24 to 48 hours, during which time fluid is often lost from both ends. To prevent dehydration, drink plenty of cooled boiled water and use re-hydration powders if the symptoms continue. Sometimes, antibiotic treatment is necessary; this can be determined by testing for the micro-organism responsible.
It's especially important that anyone whose work involves handling or preparing food stays away from work while they have symptoms to avoid passing the illness to others. They must also notify, and seek advice from, their local environmental health department.
If someone suspects that food bought from, or eaten in, a specific shop, takeaway or restaurant is responsible, they should also inform their local environmental health department, so the standards of food hygiene can be investigated.
HERBS FOR FOOD-POISONING:
Milk Thistle (Silybum marianum):
It is one of the most effective herbs for liver disorders and is widely used in Europe to treat Amanita mushroom poisoning.
Bittervine (Mikania micrantha):
It is a good herbal medicine against several types of bacteria, including S. aureus and E. coli.
Tea Tree Oil (Melaleuca alternifolia):
The essential oil of the tea tree has activity against E. coli.
Thyme (Thymus vulgaris):
The essential oil of thyme has killed the bacteria Salmonella typhimurium in laboratory tests; additional lab studies also suggest that thymol (a component of thyme oil) has activity against S. aureus.
Barberry (Berberis vulgaris):
It has also been used traditionally to treat diarrhea from infectious causes such as E. coli and V. cholerae and, therefore, may help ease this symptom in some people with food poisoning.
HOMEOPATHIC TREATMENT FOR FOOD-POISONING:
Arsenicum album:
It may also be used to prevent diarrhea when traveling. This remedy is most appropriate for individuals who feel exhausted yet restless and whose symptoms tend to worsen in the cold and improve with warmth; vomiting may also occur.
Chamomilla:
It is used primarily for children, especially those who are irritable, argumentative, and difficult to console.
Calcarea carbonica:
It is used for children who fear being in the dark or alone and who perspire heavily while sleeping.
Podophyllum:
It is a good remedy for explosive, gushing, painless diarrhea that becomes worse after eating or drinking.
Pulsatilla:
It is a useful remedy for irritable and weepy children.
*Antibiotics, such as ampicillin, TMP-SMX, doxycycline, or ciprofloxacin, are given to prevent or treat traveler's diarrhea.
*Antitoxin to neutralize toxins from C. botulinum.
*Amitriptyline to control the numbness and tingling from ciguatera poisoning.
*Apomorphine or ipecac syrup to cause vomiting and help rid the body of toxin.
*Atropine for mushroom poisoning.
*Diphenhydramine and cimetidine for fish poisoning.
*Mannitol for nerve-related symptoms of ciguatera poisoning.
Posted at 06:06 pm by supercrazy | http://supercrazy.blogdrive.com/archive/cm-7_cy-2007_m-07_d-21_y-2007_o-0.html | 13 |
14 | Note that there is just one price where this is true! The equilibrium price is the price that will generally prevail in a perfectly competitive market that is not subject to governmental intervention. As you might remember from your chemistry classes, a system is in equilibrium when there is no tendency for it to change under existing conditions. When a market is in equilibrium, there is no tendency for the market price to change. In other words, the equilibrium price is stable under the existing market conditions.
Consider, for example, the soybean market depicted in the accompanying figure.
Under the existing conditions of supply and demand (the existing incomes of consumers, prices of related goods, state of technology, input prices, and other conditions), the market price of soybeans will be $2.50 per bushel. When farmers hear the farm report on the radio in the morning, the price of soybeans will be quoted as $2.50 per bushel. Goods such as apples, bread, and other items generally sell at about their equilibrium prices.
If the equilibrium price is the price that is stable under existing conditions, that must mean other prices will tend to be unstable. That is exactly the case. Consider what happens when the market price is below the equilibrium price. At low prices, producers supply less and consumers want to buy more than at the equilibrium price. This creates an excess demand and causes a shortage of the product. Imagine the situation at your local market: the minute supplies of the product came in, frantic consumers would immediately scoop them up! What is the manager of the store likely to do in such a situation? The manager would most likely raise prices. When the market price is below the equilibrium price, consumers compete with each other in order to grab the good deals. This puts upward pressure on the market price. Such pressure will cease when the market price reaches the equilibrium price.
The shortage resulting from the price being below the equilibrium level can be seen in the figure at the price of $1.75. The amount of the shortage is the difference between quantity demanded and quantity supplied at that price. In this case, there is a shortage of 25,000 bushels (50,000 - 25,000).
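To make the shortage arithmetic concrete, here is a minimal Python sketch. The linear supply and demand coefficients are illustrative assumptions, not taken from the text or the figure; they are chosen only so that the equilibrium price works out to $2.50 per bushel.

```python
# Hypothetical linear supply and demand for soybeans (illustrative
# coefficients, chosen so the equilibrium price is $2.50 per bushel).
def quantity_demanded(price):
    return 100_000 - 24_000 * price  # bushels

def quantity_supplied(price):
    return 4_000 + 14_400 * price    # bushels

# Equilibrium: solve Qd = Qs for price (linear, so solve directly).
# 100_000 - 24_000 p = 4_000 + 14_400 p  ->  p = 96_000 / 38_400
equilibrium_price = 96_000 / 38_400  # = 2.50

for price in (1.75, equilibrium_price, 3.25):
    qd, qs = quantity_demanded(price), quantity_supplied(price)
    if qd > qs:
        state = f"shortage of {qd - qs:,.0f} bushels -> upward price pressure"
    elif qs > qd:
        state = f"surplus of {qs - qd:,.0f} bushels -> downward price pressure"
    else:
        state = "equilibrium: no pressure on price"
    print(f"price ${price:.2f}: Qd={qd:,.0f}, Qs={qs:,.0f}, {state}")
```

At any price below $2.50 the sketch reports a shortage and upward pressure; at any price above it, a surplus and downward pressure.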
Now consider what happens when the market price is above the equilibrium level. In this case there is an excess supply, or surplus, of the product. At high prices, producers are willing to produce more of the product, but consumers are willing to buy less than at the equilibrium price. Excess supply, the condition where quantity supplied exceeds quantity demanded at the current price, will result. Now imagine what happens at your local store. As inventories pile up on the back shelves, managers will put the product on sale in order to unload some of it.
As a result, market forces will pull the price down toward the equilibrium price. The surplus resulting from the price being above the equilibrium level is shown in the corresponding figure.
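The pull toward equilibrium can also be sketched as a toy price-adjustment loop, again assuming the same illustrative linear curves; the adjustment speed is an arbitrary parameter, not a claim about how fast real markets move.

```python
# A toy price-adjustment process: the price rises when there is excess
# demand and falls when there is excess supply, settling at equilibrium.
qd = lambda p: 100_000 - 24_000 * p   # same illustrative demand curve
qs = lambda p: 4_000 + 14_400 * p     # same illustrative supply curve

price, speed = 1.75, 0.00001          # start below equilibrium; small step
for step in range(200):
    excess_demand = qd(price) - qs(price)
    if abs(excess_demand) < 1:        # close enough to Qd == Qs
        break
    price += speed * excess_demand    # shortage -> raise, surplus -> cut
print(f"converged to ${price:.2f} after {step} steps")  # ~ $2.50
```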
For more practice in understanding market equilibrium, try the Active Graph exercises on the companion website. | http://wps.prenhall.com/bp_casefair_econf_7e/30/7931/2030537.cw/index.html | 13
19 | By researching and analyzing each economic indicator and sharing their results, students develop a sense of how to analyze economic recovery.
How do we know when the economy is recovering?
What key indicators do economists and politicians use to judge the health of our economic system?
Students use New York Times resources to analyze economic recovery by investigating and charting the current state of key economic indicators like household income and unemployment.
Lesson plan for how to analyze economic recovery:
Materials | Student journals, computers with Internet access, scientific or graphing calculators, graph paper.
Warm-Up | Tell students to write down in their journals some examples of how they can tell when economic times are tough. Ideas might include closed stores in their neighborhoods, higher prices for things they want or need, or friends or relatives who are out of work or struggling to find jobs.
Next, tell them to analyze economic recovery by writing down some ways they can tell when the economy is getting better. These could include new stores opening, friends and relatives finding jobs, new family outings and the like.
Invite students to share ideas, and write them on the board. Ask: What signs do you think economists and politicians look for in order to identify an economic recovery?
Show students the infographic “Where the Comeback Has and Hasn’t Taken Hold,” which displays data on key economic indicators like household income, net worth, employment, consumer confidence and housing prices.
Have the students compare their personal observations with these broad economic indicators and match each personal observation to the appropriate economic indicator, as in the charting sketch below.
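For classes with computer access, a small charting script can support this step. The sketch below assumes matplotlib is installed; the unemployment figures are placeholder sample data that students would replace with the values they research.

```python
# A minimal indicator-charting sketch (placeholder data, not real statistics).
import matplotlib.pyplot as plt

years = [2007, 2008, 2009, 2010, 2011]
unemployment_rate = [4.6, 5.8, 9.3, 9.6, 8.9]  # hypothetical percentages

plt.plot(years, unemployment_rate, marker="o")
plt.title("Unemployment rate (sample data)")
plt.xlabel("Year")
plt.ylabel("Percent of labor force")
plt.grid(True)
plt.show()
```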
Ask: Which of these key indicators do you think are most important in determining if our economy is improving? Do you believe we are currently in recovery? Explain that today they will explore these questions more deeply by conducting research and analyzing these key economic indicators. | http://www.howtolearn.com/2012/04/tips-for-teaching-students-how-to-analyze-economic-recovery | 13 |
Most molecular bonds involve pairs of electrons. The two nuclei involved satisfy valence requirements by sharing two electrons. The sharing of an odd number of electrons is a relatively rare phenomenon. In special cases, two atoms may form a one-electron bond. This is the kind of interaction found in boron hydrides, for example. Another type of odd-electron orbital configuration involves three electrons in a three-electron bond.
The three-electron bond may be best thought of as a resonance of two structures:
A.:B and A:.B
sometimes represented as A...B.
It has been found both by calculation and experiment that this interaction has about half the bond strength of a regular bond. For the bond to be stable, A and B must be similar, if not identical, such that the two resonance structures are energetically somewhat symmetric.
The simplest example of a three-electron bond is the helium molecular ion, He2+. The bond has a strength of approximately 58 kcal/mol, with an equilibrium distance between the two helium nuclei of 1.09 Å. The He...He+ bond energy is the same as that of the one-electron bond in H...H+ and about half the energy of regular diatomic hydrogen, H2.
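A quick arithmetic check of the "about half" comparison, using the 58 kcal/mol figure from the text and an approximate literature value of roughly 104 kcal/mol for the ordinary H-H electron-pair bond (the latter is supplied here for comparison, not stated in the text):

```python
# Compare the three-electron bond strength with an ordinary electron-pair bond.
KCAL_TO_KJ = 4.184

three_electron_bond = 58.0     # kcal/mol, He...He+ (and H...H+), per the text
h2_electron_pair_bond = 104.0  # kcal/mol, ordinary H-H bond (approx. value)

print(f"ratio: {three_electron_bond / h2_electron_pair_bond:.2f}")  # ~0.56
print(f"in SI units: {three_electron_bond * KCAL_TO_KJ:.0f} kJ/mol")
```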
A well-known molecule that also forms a three-electron bond as one of its resonance structures is nitric oxide (NO). It is one of the most stable of the odd-bond molecules. NO has a double bond and a three-electron bond between the two atoms. This explains some of the physical properties of nitric oxide. The internuclear distance of 1.14 Å lies between that of a double bond (1.18 Å) and a triple bond (1.06 Å). The electric dipole moment is very small as a result of the resonance distribution of the odd electron across both atoms.
Many other small molecules also have three-electron bonds, such as complexes of sulfur, which have two three-electron bonds per molecule. Similar structures have been assigned for diatomic selenium.
An aggregate in economics is a summary measure describing a market or economy. The aggregation problem refers to the difficulty of treating an empirical or theoretical aggregate as if it reacted like a less-aggregated measure, say, about behavior of an individual agent as described in general microeconomic theory. Examples of aggregates in micro- and macroeconomics relative to less aggregated counterparts are:
- food vs. apples
- the price level and real GDP vs. the price and quantity of apples
- the capital stock for the economy vs. the value of computers of a certain type and the value of steam shovels
- the money supply vs. paper currency
- the general unemployment rate vs. the unemployment rate of civil engineers.
Standard theory uses simple assumptions to derive general, and commonly accepted, results such as the law of demand to explain market behavior. An example is the abstraction of a composite good. It considers the price of one good changing proportionately to the composite good, that is, all other goods. If this assumption is violated and the agents are subject to aggregated utility functions, restrictions on the latter are necessary to yield the law of demand. The aggregation problem emphasizes:
- how broad such restrictions are in microeconomics
- that use of broad factor inputs ('labor' and 'capital'), real 'output', and 'investment', as if there was only a single such aggregate is without a solid foundation for rigorously deriving analytical results.
Aggregate consumer demand curve
The aggregate consumer demand curve is the summation of the individual consumer demand curves. The aggregation process preserves only two characteristics of individual consumer preference theory: continuity and homogeneity. Aggregation introduces three additional non-price determinants of demand: (1) the number of consumers, (2) "the distribution of tastes among the consumers," and (3) "the distribution of incomes among consumers of different taste." Thus if the population of consumers increases, ceteris paribus, the demand curve will shift out. If the proportion of consumers with a strong preference for a good increases, ceteris paribus, the demand for the good will change. Finally, if the distribution of income changes in favor of those consumers with a strong preference for the good in question, demand will shift out. It is important to remember that factors that affect individual demand can also affect aggregate demand. However, net effects must be considered. For example, a good that is a complement for one person is not necessarily a complement for another. Further, the strength of the relationship would vary among persons.
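A minimal sketch of this horizontal summation, with two made-up individual demand functions, shows that aggregate quantity demanded at each price is just the sum of the individual quantities demanded at that price:

```python
# Horizontal summation of individual demand curves (illustrative functions).
def demand_a(price):               # consumer A's demand curve
    return max(0.0, 10 - 2 * price)

def demand_b(price):               # consumer B's demand curve
    return max(0.0, 6 - 1 * price)

def aggregate_demand(price, consumers):
    # Aggregate quantity demanded = sum over consumers at the same price.
    return sum(d(price) for d in consumers)

for p in (1, 2, 3, 4, 5):
    print(p, aggregate_demand(p, [demand_a, demand_b]))
```

Adding a third demand function to the list shifts the aggregate curve out, which is exactly the "number of consumers" determinant described above.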
Aggregating individual consumer demand curves presents several problems.
Independence assumption
First, to sum the demand functions it must be assumed that they are independent, that is, that one consumer's demand decisions are not influenced by the decisions of another consumer. For example, A is asked how many pairs of shoes he would buy at a certain price. A says, "At that price I would be willing and able to buy 2 pairs of shoes." B is asked the same question and says 4 pairs. The questioner goes back to A and says, "B is willing to buy four pairs of shoes; what do you think about that?" A says, "If B has any interest in those shoes, then I have none." Or A, not to be outdone by B, says, "Then I'll buy five pairs." And on and on. This problem can be eliminated by assuming that consumers' tastes are fixed in the short run. This assumption can be expressed as assuming that each consumer is an independent, idiosyncratic decision maker.
No interesting properties
This second problem is the most serious. As David Kreps notes in his text, A Course in Microeconomic Theory (Princeton, 1990), "...total demand will shift about as a function of how individual incomes are distributed even holding total (societal) income fixed. So it makes no sense to speak of aggregate demand as a function of price and societal income." Since any change in relative prices effects a redistribution of real income, the result is that there is a separate demand curve for every relative price. Kreps goes on to say, "So what can we say about aggregate demand based on the hypothesis that individuals are preference/utility maximizers? Unless we are able to make strong assumptions about the distribution of preferences or income throughout the economy (everyone has the same homothetic preferences, for example), there is little we can say..." The strong assumptions are that everyone has the same tastes and that each person's tastes remain the same as income changes, so each additional dollar of income is spent exactly the same way as all previous dollars. As Keen notes, the first assumption amounts to assuming that there is a single consumer, and the second that there is a single good. Keen further states that because of the aggregation problem you cannot draw conclusions about social welfare, that there is no invisible hand, and that Adam Smith was wrong. Varian, a leading expert on microeconomic analysis, reaches a more muted conclusion: "The aggregate demand function will in general possess no interesting properties..." However, Varian went on to say, "the neoclassical theory of the consumer places no restriction on aggregate behavior in general." Among other things this means the preference conditions (with the possible exception of continuity) simply don't apply to the aggregate function.
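Kreps's point can be illustrated with a small sketch. Assume two Cobb-Douglas consumers who spend different fixed shares of their incomes on good x (all numbers are made up). Holding prices and total income fixed, redistributing income changes aggregate demand for x:

```python
# With Cobb-Douglas consumers who spend different fixed shares of income
# on good x, aggregate demand for x depends on WHO holds the income,
# not just on total income. All numbers are illustrative.
P_X = 1.0

def demand_x(income, share):
    return share * income / P_X    # Cobb-Douglas: spend `share` of income on x

shares = (0.8, 0.2)                # consumer 1 loves x, consumer 2 does not
for incomes in ((50, 50), (90, 10), (10, 90)):   # same total income: 100
    total_x = sum(demand_x(m, a) for m, a in zip(incomes, shares))
    print(f"incomes {incomes}: aggregate demand for x = {total_x:.0f}")
# -> 50, 74, 26: same prices, same total income, different aggregate demand
```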
See also
- Franklin M. Fisher (1987). "aggregation problem," The New Palgrave: A Dictionary of Economics, v. 1, p. 54. [Pp. 53-55.]
- Franklin M. Fisher (1987). "aggregation problem," The New Palgrave: A Dictionary of Economics, v. 1, p. 55.
- Besanko and Braeutigam, (2005) p. 169.
- Kreps (1990) p. 63.
- Keen, Steve (2001). Debunking Economics.
- Varian (1992) p. 153.
- Franklin M. Fisher (1987). "aggregation problem," The New Palgrave: A Dictionary of Economics, v. 1, pp. 53–55.
- Jesus Felipe and Franklin M. Fisher (2008). "aggregation (production)," The New Palgrave Dictionary of Economics, 2nd Edition. Abstract.
- John R. Hicks (1939, 2nd ed. 1946). Value and Capital.
- Werner Hildenbrand (2008). "aggregation (theory)," The New Palgrave Dictionary of Economics, 2nd Edition. Abstract.
- Thomas M. Stoker (2008). "aggregation (econometrics)," The New Palgrave Dictionary of Economics, 2nd Edition. Abstract.
- Douglas W. Blackburn and Andrey D. Ukhov (2008) "Individual vs. Aggregate Preferences: The Case of a Small Fish in a Big Pond," Abstract. | http://en.wikipedia.org/wiki/Aggregation_problem | 13 |
15 | Long Division of Polynomials
When attempting to find the roots of a polynomial, it will be useful to be able to divide that polynomial by other polynomials. Here we'll learn how.
Long division of polynomials is a lot like long division of real numbers. If the polynomials involved were written in fraction form, the numerator would be the dividend, and the denominator would be the divisor. To divide polynomials using long division, first divide the first term of the dividend by the first term of the divisor. This is the first term of the quotient. Multiply the new term by the divisor, and subtract this product from the dividend. This difference is the new dividend. Repeat these steps, using the difference as the new dividend, until the first term of the divisor is of a greater degree than the new dividend. The last "new dividend" whose degree is less than that of the divisor is the remainder. If the remainder is zero, the divisor divided evenly into the dividend. In the example below, f(x) = x^4 + 4x^3 + x - 10 is divided by g(x) = x^2 + 3x - 5.
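The procedure can also be expressed in a few lines of code. The implementation below is illustrative (it is not from the source) and represents each polynomial as a list of coefficients from highest to lowest degree; running it on the example above gives quotient x^2 + x + 2 with remainder 0:

```python
# Illustrative implementation of the long-division procedure, with
# polynomials as coefficient lists, highest degree first.
def poly_long_division(dividend, divisor):
    """Return (quotient, remainder) as coefficient lists."""
    dividend = list(dividend)
    quotient = []
    # Loop while the running dividend's degree >= the divisor's degree.
    while len(dividend) >= len(divisor):
        coef = dividend[0] / divisor[0]   # divide the leading terms
        quotient.append(coef)
        for i, d in enumerate(divisor):   # subtract coef * divisor
            dividend[i] -= coef * d
        dividend.pop(0)                   # the leading term cancels to zero
    return quotient, dividend             # what is left is the remainder

# f(x) = x^4 + 4x^3 + x - 10 divided by g(x) = x^2 + 3x - 5:
q, r = poly_long_division([1, 4, 0, 1, -10], [1, 3, -5])
print(q, r)  # [1.0, 1.0, 2.0] [0.0, 0.0] -> quotient x^2 + x + 2, remainder 0
```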
Two important theorems pertain to long division of polynomials.
The Remainder Theorem states the following: if a polynomial f(x) is divided by the polynomial g(x) = x - c, then the remainder is the value of f at c, f(c).
The Factor Theorem states the following: let f(x) be a polynomial; (x - c) is a factor of f if and only if f(c) = 0. This means that if a given value c is a root of a polynomial, then (x - c) is a factor of that polynomial.
Synthetic division is an easy way to divide polynomials by a polynomial of the form (x - c). It is both a way to calculate the value of a function at c (Remainder Theorem) and a way to check whether c is a root of the polynomial (Factor Theorem). Synthetic division is a shortcut to long division. It requires only three lines -- the top line for the dividend and divisor, the second line for the intermediate values, and the third line for the quotient and remainder. It is done this way. Let the dividend have degree n. 1) In line one write the coefficients of the polynomial as the dividend, and let c be the divisor. 2) In line three rewrite the leading coefficient of the dividend directly below its position in the dividend. 3) Multiply it by the divisor, and write the product in line two directly below the coefficient of x^(n-1). 4) Add this product to the number directly above it in the dividend (this number is the coefficient of x^(n-1)) to get a new number. Repeat steps three and four until the entire polynomial has been divided. The quotient will be one degree less than the dividend. The coefficients of the quotient are the first n numbers in line three. The remainder is the last number in line three. Below, a polynomial is divided by a polynomial of the form (x - c) using long division, and then using synthetic division. Study it carefully. | http://www.sparknotes.com/math/precalc/polynomialfunctions/section4.rhtml | 13
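A compact code sketch of the three-line procedure follows; the cubic used here is an illustrative example of my own, not one from the source:

```python
# Sketch of synthetic division by (x - c), mirroring the steps above.
def synthetic_division(coeffs, c):
    """Divide by (x - c); coeffs are highest degree first.
    Returns (quotient coefficients, remainder)."""
    line3 = [coeffs[0]]                   # bring down the leading coefficient
    for a in coeffs[1:]:
        line3.append(a + c * line3[-1])   # multiply by c, add next coefficient
    return line3[:-1], line3[-1]

# Divide f(x) = x^3 - 6x^2 + 11x - 6 by (x - 1):
q, r = synthetic_division([1, -6, 11, -6], 1)
print(q, r)  # [1, -5, 6] 0 -> quotient x^2 - 5x + 6
# r == 0 equals f(1) (Remainder Theorem), so (x - 1) is a factor
# of f (Factor Theorem).
```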
35 | According to the National Renewable Energy Laboratory,1 60 million square kilometers (23 million square miles) of tropical oceans daily absorb solar radiation equal in heat content to about 250 billion barrels of oil (i.e., 1,450 Quads). This is an order of magnitude greater than the expected total U.S. energy consumption through 2030, and twice the projected global energy demand. Even using present maximum estimates for steady-state sustainable energy harvesting2 (i.e., 3 to 5 TW, or 90 to 150 Quads), this resource still provides enough clean, non-GHG-emitting energy to supply 15 to 20 percent of the global energy demand in 2030.
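The headline resource figure can be checked with standard conversion factors; the per-barrel heat content used below is an assumed conventional value, not a figure from the source:

```python
# Rough check of the resource figure quoted above. Assumed conversion
# factors: ~5.8 million BTU per barrel of oil, and 1 Quad = 1e15 BTU.
barrels = 250e9            # barrels of oil equivalent absorbed daily
btu_per_barrel = 5.8e6
quads = barrels * btu_per_barrel / 1e15
print(round(quads))        # -> 1450, matching the ~1,450 Quads cited
```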
Ocean thermal energy conversion (OTEC) technologies convert the solar radiation that heats the surface of the ocean into electrical power by exploiting the temperature difference between the surface and the depths. This thermal gradient in the Tropics (see Figure 1) can be 20 degrees C (36 degrees F) or more between the warm surface water and the cold deep seawater, which is sufficient to produce usable power, albeit not very efficiently in thermodynamic terms (i.e., 3 to 5 percent). It should be noted that lying within the Tropical zone - the area most favorable for OTEC - are some 29 territories and 66 developing nations, as well as portions of Australia and Hawaii, all of which are natural markets for OTEC-generated energy and other side-products.
This enormous resource merits a closer look as policy makers consider alternative technologies for serving future energy demands. Achieving viability, however, will require more supportive and stable regulatory policies as well as funding for research and development.
The science behind OTEC was first described in 1881, when a French physicist, Jacques Arsene d’Arsonval, proposed using what came to be known as a closed-cycle plant to tap the thermal energy of the ocean;3 however, it took almost 50 years for the concept to be applied. In 1930, Georges Claude, a student of d’Arsonval, built the first open-cycle OTEC plant in Matanzas Bay, Cuba, which produced 22 kilowatts (kW) of electricity before being destroyed by inclement weather and waves. Undeterred, Claude constructed another open-cycle plant aboard a 10,000-ton cargo vessel moored off the coast of Brazil in 1935; however, it too was destroyed by weather and waves before he could produce net power.4 Twenty-one years later, another French team attempted to build a 3-MW open-cycle (Claude-cycle) plant for Abidjan, then the capital of Côte d’Ivoire; however, the plant never was completed because it couldn’t compete economically with local fossil-fueled power plants.
Mini-OTEC, the world’s first net-power-producing floating OTEC plant, and part of the first foray by the United States into this technology, was deployed in 1979 on a barge at Keahole Point on the Kona coast of the island of Hawaii. This proof-of-concept demonstration facility was developed by the Natural Energy Laboratory of Hawaii (NELHA) and several private firms, including Lockheed Ocean Systems. It operated for three months, generating approximately 50 kW of gross power with net power ranging from 10 to 17 kW.5 Based on the results of Mini-OTEC, it was estimated that a 10-MW OTEC facility could achieve net-to-gross power production efficiency upwards of 75 percent, which would make it more commercially viable than the Mini-OTEC unit. Moreover, OTEC appears to be directly scalable with the applicable economies-of-scale that it implies; that is, the larger the OTEC plant, the more energy can be harvested, and the more cost-effective it is.
The results were so favorable that, in 1980, the U.S. Department of Energy (DOE) built OTEC-1, a non-power test-bed, on a converted U.S. Navy tanker to identify methods for designing commercial-scale heat exchangers, and demonstrated that OTEC systems can operate from slowly moving ships with marginal impact on the marine environment. This facility wasn’t designed to produce electricity, but rather to certify necessary technologies. DOE spent approximately $260 million on OTEC research and development between 1975 and 1982.6
The U.S. Congress, in order to support and promote the commercial development and deployment of the nascent OTEC industry, passed Public Law (PL) 96-320, the Ocean Thermal Energy Conversion Act of 1980, as amended by PL 98-623, National Fishing Enhancement Act of 1984, to, among other actions, “...establish a legal regime which will permit and encourage the development of ocean thermal energy conversion as a commercial energy technology” [42 USC 9101(a)(4)]. The U.S. Congress also enacted the Ocean Thermal Energy Conversion Research, Development, and Demonstration Act, PL 96-310, which stated that “it is in the national interest to accelerate efforts to commercialize ocean thermal energy conversion by building pilot and demonstration facilities and to begin planning for the commercial demonstration of ocean thermal energy conversion technology” [42 USC 9100(a)(5)]. PL 96-310 established “as a national goal ten thousand megawatts [10,000 MWe] of electrical capacity or energy product equivalent from ocean thermal energy conversion systems by the year 1999” [42 USC 9100(b)(4)].
However, following the Mini-OTEC testing, energy costs worldwide plummeted, and continued OTEC research was no longer economically justifiable. In the 1990s, while some domestic companies and NELHA performed testing on complementary technologies, most of the research and development work was supported by such countries as Japan and India. For instance, in 1981, Japan demonstrated a land-based, 100-kW closed-cycle power plant on the island nation of Nauru, which exceeded engineering expectations by producing 31.5 kW of net electric power during continuous operating tests. Testing of an open-cycle OTEC plant at NELHA in 1993 produced 50 kW during a net power-producing experiment. In 1996, Japan’s Saga University entered into an agreement with the National Institute of Ocean Technology of India to collaborate on the design and construction of a 1-MW plant to be located off the coast of Tamil Nadu in India. The facility, built in 2002 with Xenesys Inc., was unsuccessful due to a failure of the deep-sea cold water pipe and has since been decommissioned.
Renewed interest in OTEC occurred in the second half of the first decade of the 21st century, when fossil-fuel prices began to climb and concerns were raised about the environmental impacts of continued usage of carbon-based fuels, as well as about energy-related security issues. Further, the component technologies needed for a viable OTEC infrastructure have benefited from complementary research and development undertaken in the past several decades for other purposes.
As part of the comprehensive energy legislation Congress passed in 2007, the Marine and Hydrokinetic Renewable Energy Research and Development Act was enacted. This law created a DOE program to support renewed research into OTEC (as well as tidal, wave and other marine or hydrokinetic energy technologies), and authorized spending $50 million a year through 2012 for technology R&D, as well as grants to universities to establish marine energy R&D centers.8 Management of this program is within the DOE office that also oversees wind and hydropower. Congress provided $10 million for “Water Power R&D” in Fiscal Year (FY) 2008 and another $40 million for FY 2009,9 which enabled the DOE to fund grants under this program. In September 2008, DOE announced the first grants, totaling up to $7.3 million, released under this program, which included two grants related to OTEC. Up to $0.6 million over a possible two years was awarded to Lockheed Martin to “validate manufacturing techniques for coldwater pipes critical to OTEC in order to help create a more cost-effective OTEC system,” and another $1.25 million over as many as five years went to the University of Hawaii to establish the National Renewable Marine Energy Center, which in part will “assist the private sector in moving ocean thermal energy conversion systems beyond proof-of-concept to pre-commercialization, long-term testing.”10 Furthermore, funding committed to other ocean-energy research may have cross-pollination opportunities with OTEC, particularly as it relates to controlling corrosion, mitigating damage from ocean forces, and developing high-voltage undersea electrical cables. Finally, funding has been made available by Congress for OTEC research in recent Defense Department spending bills.11
Unfortunately, federal support for renewable R&D has been highly volatile and, in the context of the energy market, very low.12 As UC Berkeley Professor Dan Kammen wrote, “Many R&D programs have exhibited roller-coaster funding cycles, at times doing more harm than good to the sustainable development and deployment of specific technologies.”13
While President Obama and his administration have indicated their intent to increase federal support for renewable research - as reflected in the significant funding made available in the economic stimulus and recovery law passed in early 2009 - sustaining these amounts is important to signal to private and institutional researchers that OTEC research will continue to be funded. Despite the need for expanded R&D funding, the existing authorizing laws, grant programs and university-based research centers will help lay the institutional foundation not only for developing this technology, but also for growing the knowledge and workforce capacity necessary for long-term domestic development of OTEC.
With the above caveats, the time appears ripe for revisiting the development and deployment of commercial OTEC plants.
There are three types of OTEC plants: closed-cycle, open-cycle, and hybrid-cycle. The main difference between the first two is that the closed cycle uses the warm surface waters to heat a low-boiling-point fluid, such as ammonia, that is used to drive the turbine-generator, while the open cycle uses a vacuum to flash sea water to steam, which then turns the turbine-generator. The hybrid cycle uses features of both the closed and open cycles.
In d’Arsonval’s closed-cycle OTEC system (see Figure 3), the warm surface water is sent through a heat exchanger (evaporator) where the low-boiling-point working fluid is vaporized. This vapor turns the turbine-generator, generating electricity. The vapor is then sent to a condenser, where the cold sea water from the depths removes the remaining heat, condensing it back to liquid. A pump sends the fluid back to the heat exchanger, completing the closed loop and ensuring that the working fluid remains continuously circulated in a closed system; this arrangement achieves relatively high efficiencies at a smaller scale than the open-cycle system. It is essentially the same technology as is used in standard refrigeration systems, and the technology is well understood and fairly mature, allowing for a straightforward scale-up to commercial sizes.
For the open-cycle system pioneered by Claude (see Figure 4), the warm surface sea water is pumped into a low-pressure (vacuum) flash evaporator, causing it to boil into desalinated water vapor. This low-quality steam drives a low-pressure turbine-generator and is then condensed into potable water in the condenser. The system thus allows not only the generation of electricity but also the production of fresh water. In 1984, the Solar Energy Research Institute (now the National Renewable Energy Laboratory, NREL) developed an evaporator for open-cycle plants that had conversion efficiencies as high as 97 percent.
The hybrid OTEC system combines the features of both the closed- and open-cycle systems (see Figure 5). As in the open-cycle process, warm sea water is flash-evaporated into steam in a vacuum chamber; this steam is then used to vaporize a low-boiling-point fluid, as in the closed-cycle system, which in turn drives a turbine to produce electricity. The major advantage of the hybrid system is that it is considered a more efficient producer of both electricity and side products like desalinated water.
OTEC facilities can be built on: 1) land or near the shore; 2) deep-water platforms moored to the continental shelf within a nation’s Exclusive Economic Zone (EEZ); or 3) free-floating facilities in deep ocean water (either within or beyond the EEZ). Site selection for OTEC facilities involves three technical considerations - thermal gradient, sea water depth, and offshore distance, which affects the efficiency of electrical and other side-product transmission - as well as the territorial sovereignty that applies to adjacent waters. In a nation’s inland and territorial waters, rights of regulatory competence and judicial oversight are unquestioned. This right assumes less prominence as the offshore distance increases until, eventually, national sovereignty disappears completely and the law of the high seas takes hold. OTEC site selection will be governed by the political and legal realities of operating outside territorial waters. If desirable near-shore sites don’t present favorable thermal conditions, OTEC operators will be compelled to locate in international waters. In OTEC markets, this calculation is particularly acute for land-locked nations, and for states bordered by colder waters in the temperate north and south.
The main advantages of land-based and near-shore facilities are that they don’t require sophisticated mooring or lengthy power cables and side-product piping, and they offer ease of access. However, there are additional expenses in the extended warm- and cold-water piping infrastructure (which is exposed to the additional stresses of the shallow-water environment). This siting allows only the smallest scale of operations (hence it is less cost-effective), carries the potential for local environmental issues not seen with the other two siting choices, and, in order to minimize pump head losses, requires the heat exchangers to be located below sea level, adding further site expenses.
Using the state-of-the-art construction techniques of the present-day offshore industry in building and siting deep-water oil platforms, deep-water and open-sea OTEC plants can easily be built in a shipyard, towed to the site, and - for moored deep-water platforms - fixed to the sea bottom away from shore (either by pilings or cables), thus avoiding the negative effects of the surf zone and coming closer to cold waters. Among the other advantages of moored deep-water OTEC plants are easy access to sea-water resources and a larger scale of operations, making them more cost-effective. The challenges they face include the need for sophisticated mooring and cabling systems; increased lengths of power cables and side-product piping; and the impacts of open-ocean storm conditions.
Free-floating OTEC facilities could be preferable if the plant isn’t intended to deliver electricity to shore, but is designed for production of other side-products (e.g., fresh water, liquid fuels and mineral extraction). The advantages the free-floating OTEC plant offers include siting in areas not subject to hurricanes; largest scale of operations; and no mooring or stabilization issues. However, free-floating facilities present the difficulty of having to ship the side-products to shore, although the shipping distances would be considerably shorter than those already accomplished by the oil industry.
Besides providing a source of clean, renewable baseload electrical energy, OTEC has the potential to provide many useful side-products, such as desalinated fresh water for industrial, agricultural, and residential uses; liquid fuels (e.g., hydrogen, ammonia, and biofuels); foods from mariculture and greenhouses (including cold-weather crops utilizing chilled-soil agriculture to provide the right growing conditions); resources extracted from the brine (i.e., lithium, molybdenum and uranium may be profitably extracted from seawater, given the flow rates needed to operate the OTEC plant); and, if close enough to shore, moderate-temperature refrigeration and air-conditioning for buildings or for on-board facilities. In addition, utilizing the “Energy Island” concept14 developed by Dominic Michaelis, the OTEC plant could incorporate other renewable energy-gathering technologies (e.g., wind, photovoltaics, concentrating solar, wave, current, and reverse-pumped energy storage (PES)) to increase the overall energy production capabilities. Finally, the OTEC plant could be co-located with other industrial facilities (e.g., computer server farms, cargo transhipment facilities, shipping refueling facilities) in order to provide additional revenue streams.
While there are challenges to bringing OTEC to commercial viability, building this energy infrastructure would offer many advantages.
First, OTEC plants provide clean, renewable, and independent baseload energy production. Unlike other sources of renewable energy that vary depending on weather and time of day, OTEC power plants can produce electricity 24 hours a day, 365 days a year,15 providing customers with enough power and water to make them independent of costly fuel imports. OTEC has a virtually non-existent carbon footprint, which leads to little if any adverse environmental impact, particularly when compared with other energy sources. Since OTEC isn’t exothermic (unlike fossil-fueled and nuclear power plants), and since the cold or mixed water will be discharged at depth, it doesn’t contribute directly to global warming.
Second, OTEC can produce fresh water for various purposes. Both open-cycle and hybrid plants can directly produce potable water as well as electricity (at a rate of about 700,000 gallons/MW) that is suitable for human consumption, as well as for agriculture and livestock, which can be significant for areas that have little rainfall or increasing fresh-water needs.16, 17
Third, OTEC plants can produce fuels in addition to heat and electricity: hydrogen (through electrolysis of water), ammonia, or biofuels (e.g., from grown algae), which could be transported virtually anywhere. Alternately, an OTEC plant can be used as a deep-water refueling station for ships.
OTEC facilities can serve mariculture and agriculture production. The large quantities of cold ocean waters (around 4 degrees C) pumped from 1,000 meters deep are nutrient-rich and relatively pathogen-free, which provides an excellent medium for growing phytoplankton (microalgae), which is the feedstock for the production of a variety of commercially valuable fish and shellfish,18 as well as growing other algae that can be turned into biofuels. Further, the cold waters can support greenhouses growing cold-weather fruits and vegetables if suitably mixed for the ideal growth temperature either ashore or afloat.19
Additionally, these plants can provide air-conditioning and refrigeration capacity. The deep-ocean cold water can be used as a cooling medium in air-conditioning systems. For example, only 1 cubic meter per second (1 m3/s) of water at a temperature of 7 degrees C (~45 degrees F) is required to produce 5,800 tons of cooling - roughly sufficient to cool 5,800 rooms. Using a 1-meter pipe and about 360 kW of pumping power (compared to 5,000 kW for a conventional AC system) would give an investment payback period of three to four years.20,21 In the case of a co-located computer server farm, this payback period would be considerably shorter, since the largest energy costs for such farms are those associated with cooling.
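The quoted cooling figure is consistent with a simple heat-balance estimate; the temperature rise across the exchanger assumed below is illustrative, not a figure from the source:

```python
# Back-of-the-envelope check of the air-conditioning figure. Assumed
# values: the cold water warms by ~5 degrees C in the heat exchanger;
# 1 ton of refrigeration = 3.517 kW.
flow = 1.0          # m^3/s of 7 degree C deep water
density = 1000.0    # kg/m^3, approximate density of water
cp = 4186.0         # J/(kg*K), specific heat of water
delta_t = 5.0       # K, assumed temperature rise across the exchanger
watts = flow * density * cp * delta_t   # ~20.9 MW of cooling capacity
tons = watts / 3517.0
print(round(tons))  # -> 5951, close to the 5,800 tons quoted above
```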
OTEC plants can perform mineral extraction. Most economic analyses show that dissolved mineral extraction from ocean water is prohibitively expensive due to energy requirements to pump the large volume of water needed and to separate the minerals from seawater; however, because OTEC plants already will be pumping the water, the cost of the extraction process is the only remaining factor. Investigations are underway to determine the feasibility of combining the extraction of elements dissolved in seawater with ocean energy production.22
Like hydroelectric dams, most of the costs of an OTEC plant are up-front—once the infrastructure is in place, the fuel (solar energy) costs are essentially zero, and day-to-day expenses are only those associated with routine operations and maintenance (O&M).23
OTEC plants offer economic advantages not only on a plant level, but also in terms of the broader economy. Investment in the RD&D for an OTEC infrastructure will create many new employment opportunities, not just directly but also in complementary and spin-off industries, similar to those seen in the Apollo and Space Shuttle programs.24
OTEC would reduce, both domestically and globally, dependence on fossil fuels, especially petroleum, of which about half of the world’s proven reserves are located in nations that are sponsors of, or allied with, terrorist groups.25,26 Thus, while OTEC-generated electricity and liquid-fuel side-products won’t eliminate oil usage, their extensive use could impact the financial resources of these terrorist groups.
Finally, since OTEC could supply clean and competitively priced energy globally, engaging in international partnerships to perform the necessary RD&D would help ensure U.S. leadership in ocean, energy, and environmental issues, and could aid in reasserting among developing nations the influence that was squandered during the past Administration. Further, working to ensure that developing nations have access to OTEC would reduce their need to develop other energy sources, such as nuclear power programs, with their attendant proliferation and accident risks.27
Leaving aside the technical concerns inherent in developing and commercializing any new technologies, there are also several significant challenges to OTEC, not least among them the cost of generating the electricity; on a per-kilowatt-hour basis, OTEC electricity is expensive compared to coal, hydroelectric, and nuclear power. However, as the technology matures, this cost is expected to drop into the range that will make it competitive with technologies that already have very high energy costs.
Additional challenges include low thermodynamic efficiency. The greater the temperature difference between the heat source and the heat sink, the greater the thermal efficiency of an energy-conversion system; the small temperature difference between the source (warm surface water) and the sink (cold deep water) gives OTEC plants a typical thermal-to-electrical conversion efficiency of less than 3 percent. In comparison, conventional oil- or coal-fired steam plants, which may have temperature differences of over 200 degrees C, have thermal efficiencies around 30 to 35 percent. To compensate for its low thermal efficiency, an OTEC plant has to move significant quantities of water, which increases the power it must feed back into the plant’s pumps before any OTEC-generated electricity can be made available to the power grid. For plants larger than about 10 MW, about 25 to 40 percent of the generated power will go to pumping the water through the intake and discharge pipes.
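For a sense of why the efficiency ceiling is so low, the Carnot limit for typical OTEC temperatures can be computed directly; the 25 C / 5 C pair below is illustrative of tropical conditions:

```python
# Sketch of the Carnot (ideal) efficiency limit for OTEC temperatures.
t_warm = 25 + 273.15    # K, warm surface water (illustrative)
t_cold = 5 + 273.15     # K, cold deep water (illustrative)
carnot = 1 - t_cold / t_warm
print(f"{carnot:.1%}")  # -> 6.7%; real plants reach roughly half of that
```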
Another major challenge OTEC faces is the high capital cost of initial construction. About half of the capital cost of current OTEC designs will be for the heat exchangers, followed by the costs of the platform and its moorings, and of the sea water pumps and deep seawater pipes, which must extend to around 1,000 m (3,300 ft.) and withstand the pumping of very large volumes of water. For example, a 100-MW OTEC plant will require about 215 m3/s (3.4 million gal/min) of deep sea water, necessitating a minimum pipe diameter of 10 m (32.8 ft.). Such large pipelines would be composed of very expensive materials. In addition, the very large pumps, heat exchangers, and low-efficiency turbines all add considerably to the construction cost. It should be noted, however, that these low-efficiency turbines could also be retrofitted to existing power plants in order to increase their power output and reduce thermal pollution, thus increasing their overall efficiency and profitability.
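The quoted pipe diameter is consistent with a simple continuity-equation check; the flow velocity assumed below is a plausible design value of my own, not a source figure:

```python
import math

# Consistency check on the cold-water pipe diameter: area = Q / v,
# then diameter from the circular cross-section.
flow = 215.0     # m^3/s of deep sea water for a 100-MW plant
velocity = 2.7   # m/s, assumed speed of water in the pipe
area = flow / velocity                    # required cross-section, m^2
diameter = math.sqrt(4 * area / math.pi)
print(round(diameter, 1))                 # -> 10.1 m, near the 10 m minimum cited
```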
OTEC plants pose potential ecological consequences. A 100-MW OTEC plant would pump a volume of water similar to that of a major river (i.e., equivalent to the nominal flow of the Colorado River into the Pacific Ocean, or about 3 percent of the Mississippi, 10 percent of the Danube, or 20 percent of the Nile). This will require ensuring that sea water discharges occur at a depth below the bottom of the surface thermocline layer, in order to avoid contaminating the surface water and causing potential negative impacts on the local ecology, as well as on the overall thermal efficiency of the OTEC plant.
Perhaps the biggest challenge to eventual OTEC deployment is making the financial case - will it be economical to build and operate? Having in place a consistent regulatory infrastructure, which provides the necessary predictability through an orderly, timely, and efficient review of OTEC license applications and operations, along with a supportive legal climate and financial support from the federal government - at least for the initial plants, until the technology is proven - will help assure the economic viability of the technology in its infancy. Absent a legal framework and supporting regulatory infrastructure, financing and insuring commercial OTEC operations in the United States may be impossible.
There are several areas that need to be addressed if OTEC is to become a viable energy option. Unresolved challenges in any of these areas could curtail U.S. work in this field, which could lead the United States to continue its dependence on energy imports and to lose market share in an emerging industry. Obviously, such a situation could have severe repercussions for long-term national energy and economic security. It would behoove the United States to make progress in the areas of: 1) OTEC technologies; 2) enabling legislation; 3) a consistent and predictable regulatory infrastructure for constructing, operating and interfacing with new and existing energy infrastructures; 4) the financial underpinnings for adequately funding the development, construction and operation of these systems; and 5) ensuring an adequate international legal framework is in place to support the peaceful development and commercialization of OTEC technologies.
If these factors are met, and if the technologies can be demonstrated, then the financial support will become more likely and OTEC can begin delivering on its promise.
1. Solar Energy Research Institute, 1989; Ocean Thermal Energy Conversion: An Overview; SERI/SP-220-3024; Golden, CO: Solar Energy Research Institute; 36 pp.
2. Nihous, Gérard C., 2005; “An Order-of-Magnitude Estimate of Ocean Thermal Energy Conversion Resources,” Journal of Energy Resources Technology; Vol. 127, December 2005.
3. The first published reference to the concept of using ocean thermal differences to generate electricity is found in Jules Verne’s “Twenty Thousand Leagues Under the Sea,” published in 1870.
4. Net power is the amount of power generated after subtracting power needed to run the system.
5. Owens, W.L. and L.C. Trimble, 1980; “Mini-OTEC Operational Results;” Proceedings: Seventh Ocean Energy Conference, Washington, D.C., p. 14.1:1-9.
6. Avery, William H. and Walter G. Berl, 1997; “Solar Energy from the Tropical Oceans,” Issues in Science and Technology, Winter 1997.
- 7. World Energy Council, 2007; 2007 Survey of Energy Resources; p. 557; www.worldenergy.org.
8. Energy Independence and Security Act of 2007, P.L. 110-140, Subtitle C, Sections 631-636, Dec. 19, 2007, http://frwebgate.access.gpo.gov/cgi-bin/getdoc.cgi?dbname=110_cong_bills&docid=f:h6enr.txt.pdf (last accessed March 29, 2009).
9. FY 2008 Consolidated Appropriations Act, P.L. 110-161, Dec. 26, 2007, Joint Explanatory Statement, p. 558; FY 2009 Omnibus Appropriations Act, P.L. 111-8, March 11, 2009, Joint Explanatory Statement, p. 647.
10. U.S. Department of Energy, 2008; Press Release: “DOE Selects Projects for Up to $7.3 Million for R&D Clean Technology Water Power Projects,” Sept. 18, 2008 (last accessed March 29, 2009).
11. Conference report to accompany the Defense Department Appropriations Act for Fiscal Year 2008, Report 110-434, Nov. 6, 2007, p. 479.
12. Laird, Frank, 2009; “A Full Court Press for Renewable Energy,” Issues in Science and Technology, Winter 2009, p. 55.
13. Kammen, Daniel M., 2004; “Renewable Energy Options for the Emerging Economy: Advances, Opportunities, and Obstacles,” Background Paper for “The 10-40 Solution: Technologies and Policies for a Low-Carbon Future,” Pew Center and NCEP Conference, Washington, D.C., March 25-26, 2004.
14. Gizmag.com, 2008; “Energy Island: Unlocking the Potential of the Ocean as a Renewable Power Source.”
15. Baird, M. and D. Hayhoe, 1993; Energy Fact Sheet, The International Council for Local Environmental Initiatives (ICLEI) information.
16. U.S. Department of Energy, 1990; “The Potential of Renewable Energy: An Interlaboratory White Paper;” SERI/TP-260-3674
17. Craven, John P., and Patrick K. Sullivan, 1998; “Utilization of Deep Ocean Water for Seawater Desalination;” International OTEC/DOWA Association (IOA) Newsletter, Vol. 9, No. 4, Winter 1998.
18. Daniel, T.H., 1985; “Aquaculture Using Cold OTEC Water;” Oceans ‘85 Conference Record; Nov. 12-14, San Diego, CA. Sponsored by Marine Technology Society & IEEE Oceanic Engineering Society.
19. It should be noted that the Natural Energy Laboratory of Hawaii Authority (NELHA) has several commercial tenants making use of deep-sea cold water for various mariculture and nutraceutical products (see http://www.nelha.org/tenants/commercial.html for additional details).
20. Van Ryzin, J.R., and T. Leraand, 1992; “Air Conditioning with Deep Seawater: A Cost-Effective Alternative;” Sea Technology Magazine, Sept., 1992, p. 37
21. Cornell University installed a “Lake Cooling” system in 1999 that uses 100 m deep water from Cayuga Lake to cool the campus. This 20,000 ton system saves Cornell over 20 million kw-hrs annually, even though the air conditioning is only needed in the summer time. Cornell University Lake Source Cooling (LSC) project, Humphreys Service Building, Ithaca, NY.
22. Daniel, T.H., 1993; “An Overview of Ocean Thermal Energy Conversion and its Potential By-Products;” Recent Advances in Marine Science and Technology, ‘92, PACON International, p. 263-272.
23. The operations and maintenance (O&M) of facilities covers all that broad spectrum of services required to assure the built environment is available to, and will, perform the functions for which they were designed and constructed. O&M is comprised of the day-to-day activities necessary for the built entities to perform their intended function. Operations and maintenance are combined into the one term O&M because an entity cannot operate without being maintained.
24. NASA, 1994; “What is the Value of Space Exploration? A Symposium Sponsored by the Mission From Planet Earth Study Office, Office of Space Science, NASA Headquarters, and the University of Maryland at College Park, July 18-19, 1994; National Geographic Society, Washington, D.C.; http://cmex.ihmc.us/CMEX/data/vse/session2.html; (accessed Dec. 1, 2008).
25. Of the fourteen top world oil net exporters listed on the U.S. Department of Energy Web site (www.eia.doe.gov/emeu/cabs/topworldtables1_2.html), two are listed as state sponsors of terrorism by the U.S. Department of State (www.state.gov/s/ct/c14151.htm). The State Department lists Iran, Sudan, and Saudi Arabia as areas of concern for breeding terrorists. Further, Venezuela, which is the third largest supplier of oil to the United States, has a regime that is actively hostile to our interests.
- 26. Kraemer, Thomas D., Commander, U.S. Navy; 2006; Addicted to Oil: Strategic Implications of American Oil Policy; U.S. Army War College.
- 27. Included among the nations with no existing commercial nuclear power infrastructure that are considering building nuclear power plants are Algeria, Australia, Chile, Estonia, Israel, Kazakhstan, Latvia, Poland, Switzerland, Thailand, Turkey, and the United Arab Emirates. There are also nations with an existing commercial nuclear power infrastructure that could benefit from additional assistance, including Argentina, Belarus, Brazil, Bulgaria, Lithuania, and Slovenia. | http://www.fortnightly.com/print/13677 | 13
15 | Greenland ice sheet
The Greenland ice sheet (Greenlandic: Sermersuaq) is a vast body of ice covering 1,710,000 square kilometres (660,235 sq mi), roughly 80% of the surface of Greenland. It is the second largest ice body in the world, after the Antarctic Ice Sheet. The ice sheet is almost 2,400 kilometres (1,500 mi) long in a north-south direction, and its greatest width is 1,100 kilometres (680 mi) at a latitude of 77°N, near its northern margin. The mean altitude of the ice is 2,135 metres (7,005 ft). The thickness is generally more than 2 km (1.24 mi) and over 3 km (1.86 mi) at its thickest point. It is not the only ice mass of Greenland – isolated glaciers and small ice caps cover between 76,000 and 100,000 square kilometres (29,344 and 38,610 sq mi) around the periphery. Some scientists predict that climate change may be near a "tipping point" where the entire ice sheet will melt in about 2000 years. If the entire 2,850,000 cubic kilometres (683,751 cu mi) of ice were to melt, it would lead to a global sea level rise of 7.2 m (23.6 ft).
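The sea-level equivalence quoted above can be reproduced with a rough calculation; the densities and ocean area used below are standard approximate values of my own choosing, and the check ignores ice already grounded below sea level:

```python
# Rough check of the 7.2 m sea-level figure. Assumed constants: ice
# ~917 kg/m^3, seawater ~1027 kg/m^3, ocean area ~3.61e14 m^2.
ice_volume = 2.85e6 * 1e9    # 2,850,000 km^3 expressed in m^3
ice_density, sea_density = 917.0, 1027.0
ocean_area = 3.61e14         # m^2
rise = ice_volume * ice_density / sea_density / ocean_area
print(round(rise, 1))        # -> 7.0 m, close to the 7.2 m cited
```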
The Greenland Ice Sheet is also sometimes referred to under the term inland ice, or its Danish equivalent, indlandsis. It is also sometimes referred to as an ice cap. "Ice sheet" is considered the more correct term, as "ice cap" generally refers to less extensive ice masses.
The ice in the current ice sheet is as old as 110,000 years. The presence of ice-rafted sediments in deep-sea cores recovered off of northeast Greenland, in the Fram Strait, and south of Greenland indicated the more or less continuous presence of either an ice sheet or ice sheets covering significant parts of Greenland for the last 18 million years. From just before 11 million years ago to a little after 10 million years ago, the Greenland Ice Sheet appears to have been greatly reduced in size. The Greenland Ice Sheet formed in the middle Miocene by coalescence of ice caps and glaciers. There was an intensification of glaciation during the Late Pliocene.
The weight of the ice has depressed the central area of Greenland; the bedrock surface is near sea level over most of the interior of Greenland, but mountains occur around the periphery, confining the sheet along its margins. If the ice disappeared, Greenland would most probably appear as an archipelago, at least until isostasy lifted the land surface above sea level once again. The ice surface reaches its greatest altitude on two north-south elongated domes, or ridges. The southern dome reaches almost 3,000 metres (9,843 ft) at latitudes 63°–65°N; the northern dome reaches about 3,290 metres (10,794 ft) at about latitude 72°N. The crests of both domes are displaced east of the centre line of Greenland. The unconfined ice sheet does not reach the sea along a broad front anywhere in Greenland, so that no large ice shelves occur. The ice margin just reaches the sea, however, in a region of irregular topography in the area of Melville Bay southeast of Thule. Large outlet glaciers, which are restricted tongues of the ice sheet, move through bordering valleys around the periphery of Greenland to calve off into the ocean, producing the numerous icebergs that sometimes occur in North Atlantic shipping lanes. The best known of these outlet glaciers is Jakobshavn Isbræ (Greenlandic: Sermeq Kujalleq), which, at its terminus, flows at speeds of 20 to 22 metres or 65.6 to 72.2 feet per day.
On the ice sheet, temperatures are generally substantially lower than elsewhere in Greenland. The lowest mean annual temperatures, about −31 °C (−23.8 °F), occur on the north-central part of the north dome, and temperatures at the crest of the south dome are about −20 °C (−4 °F).
During winter, the ice sheet takes on a clear blue/green color. During summer, the top layer of ice melts leaving pockets of air in the ice that makes it look white.
The ice sheet as a record of past climates
The ice sheet, consisting of layers of compressed snow from more than 100,000 years, contains in its ice today's most valuable record of past climates. In the past decades, scientists have drilled ice cores up to 4 kilometres (2.5 mi) deep. Scientists have, using those ice cores, obtained information on (proxies for) temperature, ocean volume, precipitation, chemistry and gas composition of the lower atmosphere, volcanic eruptions, solar variability, sea-surface productivity, desert extent and forest fires. This variety of climatic proxies is greater than in any other natural recorder of climate, such as tree rings or sediment layers.
The melting ice sheet
Positioned in the Arctic, the Greenland ice sheet is especially vulnerable to climate change. Arctic climate is now rapidly warming and much larger Arctic shrinkage changes are projected. The Greenland Ice Sheet has experienced record melting in recent years and is likely to contribute substantially to sea level rise as well as to possible changes in ocean circulation in the future. The area of the sheet that experiences melting has increased about 16% from 1979 (when measurements started) to 2002 (most recent data). The area of melting in 2002 broke all previous records. The number of glacial earthquakes at the Helheim Glacier and the northwest Greenland glaciers increased substantially between 1993 and 2005. In 2006, estimated monthly changes in the mass of Greenland's ice sheet suggest that it is melting at a rate of about 239 cubic kilometers (57 cu mi) per year. A more recent study, based on reprocessed and improved data between 2003 and 2008, reports an average trend of 195 cubic kilometers (47 cu mi) per year. These measurements came from the US space agency's GRACE (Gravity Recovery and Climate Experiment) satellite, launched in 2002, as reported by BBC. Using data from two ground-observing satellites, ICESAT and ASTER, a study published in Geophysical Research Letters (September 2008) shows that nearly 75 percent of the loss of Greenland's ice can be traced back to small coastal glaciers.
If the entire 2,850,000 km3 (683,751 cu mi) of ice were to melt, global sea levels would rise 7.2 m (23.6 ft). Recently, fears have grown that continued climate change will make the Greenland Ice Sheet cross a threshold where long-term melting of the ice sheet is inevitable. Climate models project that local warming in Greenland will be 3 °C (5.4 °F) to 9 °C (16.2 °F) during this century. Ice sheet models project that such a warming would initiate the long-term melting of the ice sheet, leading to a complete melting of the ice sheet (over centuries), resulting in a global sea level rise of about 7 metres (23.0 ft). Such a rise would inundate almost every major coastal city in the world. How fast the melt would eventually occur is a matter of discussion. According to the IPCC 2001 report, such warming would, if kept from rising further after the 21st Century, result in 1 to 5 meter sea level rise over the next millennium due to Greenland ice sheet melting (see image below).
Some scientists have cautioned that these rates of melting are overly optimistic as they assume a linear, rather than erratic, progression. James E. Hansen has argued that multiple positive feedbacks could lead to nonlinear ice sheet disintegration much faster than claimed by the IPCC. According to a 2007 paper, "we find no evidence of millennial lags between forcing and ice sheet response in paleoclimate data. An ice sheet response time of centuries seems probable, and we cannot rule out large changes on decadal time-scales once wide-scale surface melt is underway."
The melt zone, where summer warmth turns snow and ice into slush and melt ponds of meltwater, has been expanding at an accelerating rate in recent years. When the meltwater seeps down through cracks in the sheet, it accelerates the melting and, in some areas, allows the ice to slide more easily over the bedrock below, speeding its movement to the sea. Besides contributing to global sea level rise, the process adds freshwater to the ocean, which may disturb ocean circulation and thus regional climate. In July 2012, this melt zone covered 97 percent of the ice cover. Ice cores show that events such as this occur approximately every 150 years on average. The last time a melt this large happened was in 1889. This particular melt may be part of cyclical behavior; however, Lora Koenig, a Goddard glaciologist, suggested that "...if we continue to observe melting events like this in upcoming years, it will be worrisome."
Meltwater, which moves to the sea under the ice in contact with the land surface, may transport solids or dissolved material such as iron to the ocean. Measurements of the amount of available iron in meltwater from the Greenland ice sheet shows that extensive melting of the ice sheet might add an amount of iron to the Atlantic Ocean equivalent to that added by airborne dust. This would increase biological activity in the Atlantic.
Satellite image of dark blue melt ponds.
Recent ice loss events
- Between 2000 and 2001: Northern Greenland's Petermann glacier lost 33 square miles (85 km2) of floating ice.
- Between 2001 and 2005: Sermeq Kujalleq broke up, losing 36 square miles (93 km2) and raised awareness worldwide of glacial response to global climate change.
- July 2008: Researchers monitoring daily satellite images discovered that an 11-square-mile (28 km2) piece of Petermann broke away.
- August 2010: A sheet of ice measuring 260 square kilometres (100 sq mi) broke off from the Petermann Glacier. Researchers from the Canadian Ice Service located the calving from NASA satellite images taken on August 5. The images showed that Petermann lost about one-quarter of its 70 km-long (43 mile) floating ice shelf.
- July 2012: An iceberg twice the size of Manhattan (100 square mi) broke away from the Petermann glacier in northern Greenland.
Ice sheet acceleration
Two mechanisms have been proposed to explain the change in velocity of the Greenland Ice Sheet's outlet glaciers. The first is the enhanced meltwater effect, which relies on additional surface melting, funneled through moulins to the glacier base, reducing friction through higher basal water pressure. (It should be noted that not all meltwater is retained in the ice sheet; some moulins drain into the ocean, with varying rapidity.) This mechanism was observed to cause a brief seasonal acceleration of up to 20% on Sermeq Kujalleq in 1998 and 1999 at Swiss Camp; the acceleration lasted two to three months and was less than 10% in 1996 and 1997, for example. The investigators concluded that the "coupling between surface melting and ice-sheet flow provides a mechanism for rapid, large-scale, dynamic responses of ice sheets to climate warming". Examination of recent rapid supraglacial lake drainage documented short-term velocity changes due to such events, but these had little significance for the annual flow of the large outlet glaciers. The second mechanism is a force imbalance at the calving front due to thinning, causing a substantial non-linear response. In this case, an imbalance of forces at the calving front propagates up-glacier. Thinning makes the glacier more buoyant and reduces frictional back forces as the glacier becomes more afloat at the calving front; the reduced friction allows an increase in velocity, akin to releasing an emergency brake. The reduced resistive force at the calving front is then propagated up-glacier via longitudinal extension. For ice-streaming sections of large outlet glaciers (in Antarctica as well), there is always water at the base of the glacier that helps lubricate the flow; this water is, however, generally from basal processes, not surface melting.
If the enhanced meltwater effect is the key, then, since meltwater is a seasonal input, velocity would show a seasonal signal and all glaciers would experience this effect. If the force-imbalance effect is the key, the velocity change will propagate up-glacier, there will be no seasonal cycle, and the acceleration will be focused on calving glaciers. Helheim Glacier, East Greenland, had a stable terminus from the 1970s to 2000. In 2001-2005 the glacier retreated 7 km (4.3 mi) and accelerated from 20 to 33 m (65.6 to 108.3 ft) per day, while thinning up to 130 meters (430 ft) in the terminus region. Kangerdlugssuaq Glacier, East Greenland, had a stable terminus from 1960 to 2002. The glacier's velocity was 13 m (42.7 ft) per day in the 1990s. In 2004-2005 it accelerated to 36 m (118 ft) per day and thinned by up to 100 m (328 ft) in its lower reach. On Sermeq Kujalleq the acceleration began at the calving front and spread up-glacier 20 km (12 mi) in 1997 and up to 55 km (34 mi) inland by 2003. On Helheim the thinning and velocity change propagated up-glacier from the calving front. In each case the major outlet glaciers accelerated by at least 50%, much more than the impact attributed to summer meltwater increases. On each glacier the acceleration was not restricted to the summer, persisting through the winter when surface meltwater is absent.
An examination of 32 outlet glaciers in southeast Greenland indicates that the acceleration is significant only for marine-terminating outlet glaciers - that is, glaciers that calve into the ocean. It was further noted that the thinning of the ice sheet is most pronounced for marine-terminating outlet glaciers. As a result, these studies concluded that the only plausible sequence of events is that increased thinning of the terminus regions of marine-terminating outlet glaciers ungrounded the glacier tongues and subsequently allowed acceleration, retreat and further thinning. Enhanced meltwater-induced acceleration does exist, but it is of a notably smaller magnitude and duration.
Warmer temperatures in the region have brought increased precipitation to Greenland, and part of the lost mass has been offset by increased snowfall. However, there are only a small number of weather stations on the island, and though satellite data can cover the entire island, they have only been available since the early 1990s, making the study of trends difficult. It has been observed that there is more precipitation where it is warmer - up to 1.5 meters per year on the southeast flank - and less precipitation or none on the 25-80% (depending on the time of year) of the island that is cooler. Actual figures for precipitation are available in "New precipitation and accumulation maps for Greenland", A. Ohmura and N. Reeh, Journal of Glaciology, 1991.
Data from NASA's Polar program confirms that the average elevation change above 2,000 m (6,562 ft) "was not significant".
Rate of change
Several factors determine the net rate of growth or decline. These are
- Accumulation of snow in the central parts
- Melting of ice along the sheet's margins (runoff) and basal hydrology,
- Iceberg calving into the sea from outlet glaciers also along the sheet's edges
In its third assessment report (2001), the IPCC estimates accumulation at 520 ± 26 gigatonnes of ice per year, runoff and bottom melting at 297 ± 32 Gt/yr and 32 ± 3 Gt/yr, respectively, and iceberg production at 235 ± 33 Gt/yr. On balance, the IPCC estimates a net change of −44 ± 53 Gt/yr, which means that the ice sheet may currently be melting. The most recent research, using data from 1996 to 2005, shows that the ice sheet is thinning even faster than supposed by the IPCC. According to the study, in 1996 Greenland was losing about 96 km3 (23.0 cu mi) per year in mass from its ice sheet. In 2005, this had increased to about 220 km3 (52.8 cu mi) a year due to rapid thinning near its coasts, while in 2006 it was estimated at 239 km3 (57.3 cu mi) per year. It was estimated that in 2007 Greenland ice sheet melting was higher than ever, at 592 km3 (142.0 cu mi). Snowfall was also unusually low, which led to an unprecedented negative surface mass balance of −65 km3 (−15.6 cu mi). If iceberg calving occurred at its average rate, Greenland lost 294 Gt of its mass during 2007 (one km3 of ice weighs about 0.9 Gt).
According to the 2007 report from the IPCC, it is hard to measure the mass balance precisely, but most results indicate accelerating mass loss from Greenland during the 1990s up to 2005. Assessment of the data and techniques suggests a mass balance for the Greenland Ice Sheet ranging between growth of 25 Gt/yr and loss of 60 Gt/yr for 1961 to 2003, loss of 50 to 100 Gt/yr for 1993 to 2003 and loss at even higher rates between 2003 and 2005.
Analysis of gravity data from GRACE satellites indicates that the Greenland ice sheet lost approximately 2900 Gt (0.1% of its total mass) between March 2002 and September 2012. The mean mass loss rate for 2008-2012 was 367 Gt/year.
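The arithmetic behind these figures is straightforward; the sketch below uses the central estimates quoted above, plus the standard conversion of roughly 361.8 Gt of water per millimetre of global sea-level rise (an assumed conversion factor, not a figure from the source):

```python
# Arithmetic behind the mass-balance figures above (central estimates).
accumulation, runoff, basal_melt, calving = 520, 297, 32, 235  # Gt/yr, IPCC TAR
print(accumulation - runoff - basal_melt - calving)  # -> -44 Gt/yr net balance

# GRACE: ~2900 Gt lost over March 2002 - September 2012, ~10.5 years:
print(round(2900 / 10.5))                            # -> 276 Gt/yr on average

# Converting mass loss to sea-level rise, assuming ~361.8 Gt of water
# per millimetre of global sea level:
print(round(367 / 361.8, 2))                         # -> ~1.01 mm/yr for 2008-2012
```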
A paper on Greenland's temperature record shows that the warmest year on record was 1941, while the warmest decades were the 1930s and 1940s. The data used came from stations on the south and west coasts, most of which did not operate continuously for the entire study period.
While Arctic temperatures have generally increased, there is some discussion concerning temperatures over Greenland. First of all, Arctic temperatures are highly variable, making it difficult to discern clear trends at a local level. Also, until recently, an area in the North Atlantic including southern Greenland was one of the only areas in the world showing cooling rather than warming in recent decades, but this cooling has now been replaced by strong warming in the period 1979–2005.
- Encyclopaedia Britannica. 1999 Multimedia edition.
- Pete Spotts (March 13, 2012). "Greenland's ice sheet: Climate change outlook gets a little more dire". Christian Science Monitor. Retrieved 2012-06-01.
- Climate Change 2001: The Scientific Basis. Contribution of Working Group I to the Third Assessment Report of the Intergovernmental Panel on Climate Change (IPCC) [Houghton, J.T., Y. Ding, D.J. Griggs, M. Noguer, P.J. van der Linden, X. Dai, K. Maskell, and C.A. Johnson (eds.)]. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA, 881 pp.
- Meese, DA, AJ Gow, RB Alley, GA Zielinski, PM Grootes, M Ram, KC Taylor, PA Mayewski, JF Bolzan (1997) The Greenland Ice Sheet Project 2 depth-age scale: Methods and results. Journal of Geophysical Research. C. Oceans. 102(C12):26,411-26,423.
- Thiede, JC Jessen, P Knutz, A Kuijpers, N Mikkelsen, N Norgaard-Pedersen, and R Spielhagen (2011) Millions of Years of Greenland Ice Sheet History Recorded in Ocean Sediments. Polarforschung. 80(3):141-159.
- Impacts of a Warming Arctic: Arctic Climate Impact Assessment, Cambridge University Press, 2004.
- Earth Observatory at Columbia University "Glacial Earthquakes Point to Rising Temperatures in Greenland"
- ScienceDaily, 10 October 2008: "An Accurate Picture Of Ice Loss In Greenland"
- BBC News, 11 August 2006: "Greenland melt 'speeding up' "
- Small Glaciers Account for Most of Greenland's Recent Ice Loss Newswise, Retrieved on September 15, 2008.
- Climate change and trace gases. James Hansen, Makiko Sato, et al. Phil.Trans.R.Soc.A (2007)365,1925–1954, doi:10.1098/rsta.2007.2052. Published online 18 May 2007,
- Greenland enters melt mode; Island-wide thaw is one for the record books August 25, 2012; Vol.182 #4 (p. 8) Science News
- Wall, Tim. "Greenland Hits 97 Percent Meltdown in July". Discovery News.
- "Glaciers Contribute Significant Iron to North Atlantic Ocean" (news release). Woods Hole Oceanographic Institution. March 10, 2013. Retrieved March 18, 2013.
- "Images Show Breakup of Two of Greenland's Largest Glaciers, Predict Disintegration in Near Future". NASA Earth Observatory. August 20, 2008. Retrieved 2008-08-31.
- Huge ice sheet breaks from Greenland glacier
- Iceberg breaks off from Greenland's Petermann Glacier 19 July 2012
- "Surface Melt-Induced Acceleration of Greenland Ice-Sheet Flow by Zwally et al., "
- "Fracture Propagation to the Base of the Greenland Ice Sheet During Supraglacial Lake Drainage by Das. et al.,"
- "Thomas R.H (2004), Force-perturbation analysis of recent thinning and acceleration of Jakobshavn Isbrae, Greenland, Journal of Glaciology 50 (168): 57–66. "
- "Thomas, R. H. Abdalati W, Frederick E, Krabill WB, Manizade S, Steffen K, (2003) Investigation of surface melting and dynamic thinning on Jakobshavn Isbrae, Greenland. Journal of Glaciology 49, 231–239."
- "Letters to Nature Nature 432, 608–610 (2 December 2004) | doi:10.1038/nature03130; Received 7 July 2004; Accepted 8 October 2004 Large fluctuations in speed on Greenland's Jakobshavn Isbræ glacier by Joughin, Abdalati and Fahnestock"
- "Rates of southeast Greenland ice volume loss...by Howat et al.,, "
- "Greenland Ice Sheet: is land-terminating ice thinning at anomalously high rates by Sole et al.,"
- "Rapid and synchronous ice-dynamic changes in East Greenland by Luckman, Murray. de Lange and Hanna"
- "Greenland Ice Sheet: is land-terminating ice thinning at anomalously high rates by Sole et al.,"
- "Rates of southeast Greenland ice volume loss...by Howat et al.,"
- "Moulins calving fronts and Greenland outletglacier acceleration by Pelto"
- "Modelling Precipitation over ice sheets: an assessment using Greenland", Gerard H. Roe, University of Washington,
- "Greenland Icesheet elevation changes"
- "Greenland Ice Loss Doubles in Past Decade, Raising Sea Level Faster". Jet Propulsion Laboratory News release, Thursday, 16 February 2006.
- Science nature
- Climate Change 2007: The Physical Science Basis. Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change [Solomon, S., D. Qin, M. Manning, Z. Chen, M. Marquis, K.B. Averyt, M. Tignor and H.L. Miller (eds.)]. Chapter 4 Observations: Changes in Snow, Ice and Frozen Ground.IPCC, 2007. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA, 996 pp.
- "Arctic Report Card: Update for 2012; Greenland Ice Sheet".
- "A Greenland temperature record spanning two centuries" JOURNAL OF GEOPHYSICAL RESEARCH, VOL. 111, D11105, doi:10.1029/2005JD006810, 2006. Vinther, Anderson, Jones, Briffa, Cappelen.
- see Arctic Climate Impact Assessment (2004) and IPCC Second Assessment Report, among others.
- IPCC, 2007. Trenberth, K.E., P.D. Jones, P. Ambenje, R. Bojariu, D. Easterling, A. Klein Tank, D. Parker, F. Rahimzadeh, J.A. Renwick, M. Rusticucci, B. Soden and P. Zhai, 2007: Observations: Surface and Atmospheric Climate Change. In: Climate Change 2007: The Physical Science Basis. Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change [Solomon, S., D. Qin, M. Manning, Z. Chen, M. Marquis, K.B. Averyt, M. Tignor and H.L. Miller (eds.)]. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA.
|Wikimedia Commons has media related to: Greenland ice sheet|
- Real Climate the Greenland Ice
- Geological Survey of Denmark and Greenland (GEUS) GEUS has much scientific material on Greenland.
- Emporia State University - James S. Aber Lecture 2: Modern Glaciers and Ice Sheets.
- Arctic Climate Impact Assessment
- Lamont-Doherty Earth Observatory at Columbia University "Glacial Earthquakes Point to Rising Temperatures in Greenland"
- GRACE ice mass measurement: "Recent Land Ice Mass Flux from SpaceborneGravimetry"
- Greenland ice cap melting faster than ever, Bristol University | http://en.mobile.wikipedia.org/wiki/Greenland_ice_sheet | 13 |
History of the Polish–Lithuanian Commonwealth (1569–1648)
History of the Polish–Lithuanian Commonwealth (1569–1648) covers a period in the history of Poland and Lithuania before their joint state was subjected to devastating wars in the middle of the 17th century. The Union of Lublin of 1569 established the Polish–Lithuanian Commonwealth, a more closely unified federal state that replaced the previously existing personal union of the two countries. The Union was largely run by the Polish and increasingly Polonized Lithuanian and Ruthenian nobility, through the system of the central parliament and local assemblies, but from 1573 it was led by elected kings. The formal rule of the nobility, proportionally more numerous than in other European countries, constituted a sophisticated early democratic system, in contrast to the absolute monarchies prevalent at that time in the rest of Europe.
The beginning of the Commonwealth coincided with a period of Poland's great power, civilizational advancement and prosperity. The Polish–Lithuanian Union had become an influential player in Europe and a vital cultural entity, spreading Western culture eastward. In the second half of the 16th and the first half of the 17th century, the Polish–Lithuanian Commonwealth was a huge state in central-eastern Europe, with an area approaching one million square kilometers.
Following the gains of the Reformation (the Warsaw Confederation of 1573 was the culmination of religious toleration processes unique in Europe), the Catholic Church embarked on an ideological counter-offensive, and the Counter-Reformation claimed many converts from Protestant circles. Disagreements over, and difficulties with, the assimilation of the eastern Ruthenian populations of the Commonwealth became clearly discernible. At an earlier stage (from the late 16th century), they manifested themselves in the religious Union of Brest, which split the Eastern Christians of the Commonwealth, and, on the military front, in a series of Cossack uprisings.
The Commonwealth, assertive militarily under King Stephen Báthory, suffered from dynastic distractions during the reigns of the Vasa kings Sigismund III and Władysław IV. It had also become a playground of internal conflicts, in which the kings, powerful magnates and factions of nobility were the main actors. The Commonwealth fought wars with Russia, Sweden and the Ottoman Empire. At the Commonwealth's height, some of its powerful neighbors experienced difficulties of their own, and the Polish-Lithuanian state sought domination in Eastern Europe, in particular over Russia. Allied with the Habsburg Monarchy, it did not directly participate in the Thirty Years' War.
In 1577 Tsar Ivan IV of Russia opened hostilities in the Livonian region, which resulted in his takeover of most of the area and drew Poland-Lithuania into the Livonian War. The successful counter-offensive led by King Báthory and Jan Zamoyski resulted in the peace of 1582 and the retaking of much of the territory contested with Russia, with the Swedish forces establishing themselves in the far north (Estonia). Estonia was declared a part of the Commonwealth by Sigismund III in 1600, which gave rise to a war with Sweden over Livonia; the war lasted until 1611 without producing a definite outcome.
In 1600, as Russia was entering a period of instability, the Commonwealth proposed a union with the Russian state. This failed move was followed by many other similarly unsuccessful, often adventurous attempts, some involving military invasions, others dynastic and diplomatic manipulation and scheming. While the differences between the two societies and empires proved in the end too formidable to overcome, the Polish-Lithuanian state emerged from the Truce of Deulino in 1619 with the greatest territorial extent in its history. At the same time it was weakened by the huge military effort involved.
In 1620 the Ottoman Empire under Sultan Osman II declared war on the Commonwealth. At the disastrous Battle of Ţuţora Hetman Stanisław Żółkiewski was killed, and the Commonwealth's situation with respect to the Turkish-Tatar invasion forces became very precarious. A mobilization in Poland-Lithuania followed, and when Hetman Jan Karol Chodkiewicz's army withstood fierce enemy assaults at the Battle of Khotyn (1621), the situation on the southeastern front improved. More warfare with the Ottomans followed in 1633–1634, and vast expanses of the Commonwealth were subjected to Tatar incursions and slave-taking expeditions throughout the period.
War with Sweden, now under Gustavus Adolphus, resumed in 1621 with his attack on Riga, followed by the Swedish occupation of much of Livonia, control of the Baltic coast up to Puck and the blockade of Danzig. The Commonwealth, exhausted by the warfare that had taken place elsewhere, mustered a response in 1626–1627, utilizing the military talents of Hetman Stanisław Koniecpolski and help from Austria. Under pressure from several European powers, the campaign was stopped and ended in the Truce of Altmark, leaving in Swedish hands much of what Gustavus Adolphus had conquered.
Another war with Russia followed in 1632 and was concluded without much change in the status quo. King Władysław IV then proceeded to recover the lands lost to Sweden. At the conclusion of the hostilities, Sweden evacuated the cities and ports of Royal Prussia but kept most of Livonia. Courland, which had remained with the Commonwealth, took over the servicing of Lithuania's Baltic trade. After Frederick William's last Prussian homage before the Polish king in 1641, the Commonwealth's position in regard to Prussia and its Hohenzollern rulers steadily weakened.
Elective monarchy and republic of nobility
At the outset of the Polish–Lithuanian Commonwealth, in the second half of the 16th century, Poland-Lithuania became an elective monarchy, in which the king was elected by the hereditary nobility. This king would serve as the monarch until he died, at which time the country would have another election.
In 1572, Sigismund II Augustus, the last king of the Jagiellon dynasty, died without any heirs. The political system was not prepared for this eventuality, as there was no method of choosing a new king. After much debate it was determined that the entire nobility of Poland and Lithuania would decide who the king was to be. The nobility were to gather at Wola, near Warsaw, to vote in the royal election.
The election of Polish kings lasted until the Partitions of Poland. The elected kings in chronological order were: Henry of Valois, Anna Jagiellon, Stephen Báthory, Sigismund III Vasa, Władysław IV, John II Casimir, Michael Korybut Wiśniowiecki, John III Sobieski, Augustus II the Strong, Stanisław Leszczyński, Augustus III and Stanisław August Poniatowski.
The first Polish royal election was held in 1573. The four men running for the office were Henry of Valois, brother of King Charles IX of France; Tsar Ivan IV of Russia; Archduke Ernest of Austria; and King John III of Sweden. Henry of Valois emerged as the winner. But after serving as the Polish king for only four months, he received the news that his brother, the King of France, had died. Henry of Valois then abandoned his Polish post and went back to France, where he succeeded to the throne as Henry III of France.
A few of the elected kings left a lasting mark on the Commonwealth. Stephen Báthory was determined to reassert the deteriorated royal prerogative, at the cost of alienating the powerful noble families. Sigismund III, Władysław IV and John Casimir were all of the Swedish House of Vasa; preoccupation with foreign and dynastic affairs prevented them from making a major contribution to the stability of Poland-Lithuania. John III Sobieski commanded the allied Relief of Vienna operation in 1683, which turned out to be the last great victory of the "Republic of Both Nations". Stanisław August Poniatowski, the last of the Polish kings, was a controversial figure. On the one hand, he was a driving force behind the substantial and constructive reforms belatedly undertaken by the Commonwealth. On the other, by his weakness and lack of resolve, especially in dealing with imperial Russia, he doomed the reforms together with the country they were supposed to help.
The Polish-Lithuanian Commonwealth, following the Union of Lublin, became a counterpoint to the absolute monarchies gaining power in Europe. Its quasi-democratic political system of Golden Liberty, albeit limited to nobility, was mostly unprecedented in the history of Europe. In itself, it constituted a fundamental precedent for the later development of European constitutional monarchies.
However, the series of power struggles between the lesser nobility (szlachta), the higher nobility (magnates) and the elected kings undermined citizenship values and gradually eroded the government's authority and its ability to function and provide for national defense. The infamous liberum veto procedure was used to paralyze parliamentary proceedings beginning in the second half of the 17th century. After the series of devastating wars in the middle of the 17th century (most notably the Chmielnicki Uprising and the Deluge), Poland-Lithuania stopped being an influential player in the politics of Europe. During the wars the Commonwealth lost an estimated one-third of its population (proportionally higher losses than during World War II). Its economy and growth were further damaged by the nobility's reliance on agriculture and serfdom, which, combined with the weakness of the urban burgher class, delayed the industrialization of the country.
By the beginning of the 18th century, the Polish-Lithuanian Commonwealth, one of the largest and most populous European states, was little more than a pawn of its neighbors (the Russian Empire, Prussia and Austria), who interfered in its domestic politics almost at will. In the second half of the 18th century, the Commonwealth was repeatedly partitioned by the neighboring powers and ceased to exist.
The agricultural trade boom in Eastern Europe showed the first signs of the approaching crisis in the 1580s, when food prices stopped increasing. What followed was a gradual decline in the prices of agricultural products, a price depression initially present in Western Europe. The negative consequences of this process for the folwark economies of the East reached their culmination in the second half of the 17th century. Further economic aggravation resulted from the Europe-wide devaluation of currency around 1620, caused by the influx of silver from the Western Hemisphere. At that time, however, massive amounts of Polish grain were still exported through Danzig (Gdańsk). The Commonwealth nobility took a variety of steps to combat the crisis and keep up high production levels, burdening in particular the serfs with further heavy obligations. The nobles were also forcibly buying or taking over the properties of the hitherto more affluent peasant categories, a phenomenon especially pronounced from the mid-17th century.
The capital and energy of urban entrepreneurs drove the development of mining and metallurgy during the earlier Commonwealth period. There were several hundred hammersmith shops at the turn of the 17th century. Great ironworks furnaces were built in the first half of that century. Mining and metallurgy of silver, copper and lead were also developed, and salt production expanded in Wieliczka, Bochnia and elsewhere. After about 1600 some of the industrial enterprises were increasingly taken over by landowners who used serf labor, which led to their neglect and decline in the second half of the 17th century.
Danzig remained practically autonomous and adamant about protecting its status and foreign trade monopoly. The Karnkowski Statutes of 1570 gave Polish kings control over maritime commerce, but not even Stephen Báthory, who resorted to an armed intervention against the city, was able to enforce them. Other Polish cities remained stable and prosperous through the first half of the 17th century. The war disasters of the middle of that century devastated the urban classes.
A rigid legal system of social separation, intended to prevent any inter-class mobility, matured around the first half of the 17th century. But the nobility's goal of becoming self-contained and impermeable to newcomers was never fully realized, as in practice even peasants on occasion acquired noble status. Numerous later Polish szlachta clans had such "illegitimate" beginnings. Szlachta found justification for their self-appointed dominant role in a peculiar set of attitudes they had adopted, known as sarmatism.
The Union of Lublin accelerated the massive Polonization of the Lithuanian and Rus' elites and general nobility in Lithuania and the eastern borderlands, a process that retarded the national development of the local populations there. In 1563, Sigismund Augustus belatedly allowed the Eastern Orthodox Lithuanian nobility access to the highest offices in the Duchy, but by that time the act was of little practical consequence, as there were few Orthodox nobles of any standing left and the encroaching Catholic Counter-Reformation would soon nullify the gains. Many magnate families of the east were of Ruthenian origin; their inclusion in the enlarged Crown made the magnate class much stronger politically and economically. The regular szlachta, increasingly dominated by the great landowners, lacked the will to align themselves with the Cossack settlers in Ukraine to counterbalance magnate power, and in the matter of Cossack acceptance, integration and rights resorted to delayed and ineffective half-measures. The peasantry was subjected to heavier burdens and more oppression. For these reasons, the way in which the Polish-Lithuanian Commonwealth's expansion took place and developed aggravated both social and national tensions, introduced a fundamental instability into the system, and ultimately contributed to the future crises of the "Republic of Nobles".
Western and Eastern Christianity: Counter-Reformation, Union of Brest
The increasingly uniform szlachta of the Commonwealth, Polonized in the case of ethnic minorities, for the most part returned to the Roman Catholic religion, or, if already Catholic, remained so, in the course of the 17th century.
Already the Sandomierz Agreement of 1570, an early expression of the Protestant irenicism later prominent in Europe and Poland, had a self-defensive character, prompted by the intensification of Counter-Reformation pressure at that time. The agreement strengthened the Protestant position and made possible the religious freedom guarantees of the Warsaw Confederation in 1573.
At the height of the Reformation in the Commonwealth, at the end of the 16th century, there were about one thousand Protestant congregations, nearly half of them Calvinist. Half a century later, only half of them had survived, with burgher Lutheranism suffering the lesser losses and the szlachta-dominated Calvinism and Nontrinitarianism (Polish Brethren) the greatest. The closing of the Brethren's Racovian Academy and printing facility in Raków on charges of blasphemy in 1638 forewarned of more trouble to come.
This Counter-Reformation offensive succeeded somewhat mysteriously in a country where there were no religious wars and where the state had not cooperated with the Catholic Church in eradicating or limiting competing denominations. Among the factors cited are low Protestant involvement among the masses, especially the peasantry; the pro-Catholic position of the kings; the low level of involvement of the nobility once religious emancipation had been accomplished; internal divisions within the Protestant movement; and the rising intensity of Catholic Church propaganda.
The ideological war between the Protestant and Catholic camps at first enriched the intellectual life of the Commonwealth. The Catholic Church responded to the challenges with internal reform, following the directions of the Council of Trent, officially accepted by the Polish Church in 1577 but not implemented until after 1589 and throughout the 17th century. There had been earlier efforts at reform, originating from the lower clergy and, from about 1551, from Bishop Stanisław Hozjusz of Warmia, at that time a lone but ardent reformer among the Church hierarchy. At the turn of the 17th century, a number of Rome-educated bishops took over the Church administration at the diocese level, clergy discipline was enforced, and Counter-Reformation activity rapidly intensified.
Hozjusz brought the Jesuits to Poland and founded a college for them in Braniewo in 1564. Numerous Jesuit educational institutions and residences were established in the following decades, most often in the vicinity of centers of Protestant activity. Jesuit priests were carefully selected, well educated, and of both noble and urban origin. They soon became highly influential with the royal court, while working hard within all segments of society. The Jesuit educational programs and Counter-Reformation propaganda utilized many innovative media techniques, often custom-tailored for the particular audience at hand, as well as time-tried methods of humanist instruction. The preacher Piotr Skarga and the Bible translator Jakub Wujek count among the prominent Jesuit personalities.
Catholic efforts to win the population countered the Protestant idea of a national church with Polonization, or nationalization of the Catholic Church in the Commonwealth, introducing a variety of native elements to make it more accessible and attractive to the masses. The Church hierarchy went along with the notion. The changes that took place during the 17th century defined the character of Polish Catholicism for centuries to come.
The apex of Counter-Reformation activity fell at the turn of the 17th century, during the earlier years of the reign of Sigismund III Vasa (Zygmunt III Waza), who in cooperation with the Jesuits and some other Church circles attempted to strengthen the power of his monarchy. The King tried to limit access to higher offices to Catholics. Anti-Protestant riots took place in some cities. During the Sandomierz Rebellion of 1606 the Protestants supported the anti-King opposition in large numbers. Nevertheless, the massive wave of szlachta returns to Catholicism could not be stopped.
Despite attempts made at the joint Protestant-Orthodox congregations in Toruń in 1595 and in Vilnius in 1599, the Protestant movement failed to form an alliance with the Eastern Orthodox Christians, the inhabitants of the eastern portion of the Commonwealth, and this failure contributed to the Protestants' downfall. The Polish Catholic establishment would not miss the opportunity to form a union with the Orthodox, although its goal was rather the subjugation of the Eastern Rite Christians to the pope (the papacy solicited help in bringing the "schism" under control) and to the Commonwealth's Catholic centers of power. The Orthodox establishment was perceived as a security threat because of the Eastern Rite bishops' dependence on the Patriarchate of Constantinople at a time of aggravating conflict with the Ottoman Empire, and because of a recent development, the establishment of the Moscow Patriarchate in 1589. The Patriarchate of Moscow then claimed ecclesiastical jurisdiction over the Orthodox Christians of the Polish-Lithuanian Commonwealth, which to many of them was a worrisome development, motivating them to accept the alternative option of union with the West. The union idea had the support of King Sigismund III and the Polish nobility in the east; opinions were divided among the church and lay leaders of the Eastern Orthodox faith.
The Union of Brest act was negotiated and solemnly concluded in 1595–1596. It did not merge the Roman Catholic and Eastern Orthodox denominations, but led to the establishment of the Slavic-liturgy Uniate Church, which was to become an Eastern Catholic Church, one of the Greek Catholic Churches (presently the Ukrainian Greek Catholic and Belarusian Greek Catholic churches). The new church, of the Byzantine Rite, accepted papal supremacy while retaining in most respects its Eastern Rite character. The compromise union was flawed from the beginning, because despite the initial agreement the Greek Catholic bishops were not, like their Roman Catholic counterparts, seated in the Senate, and the Eastern Rite participants of the union were not granted the full general equality they expected.
The Union of Brest increased antagonisms among the Belarusian and Ukrainian communities of the Commonwealth, within which the Orthodox Church remained the most potent religious force. It added to the already prominent ethnic and class fragmentation and became one more reason for the internal infighting that was to impair the Republic. The Eastern Orthodox nobility, branded "Disuniates" and deprived of legal standing, commenced a fight for their rights, led by Konstanty Ostrogski. Prince Ostrogski had been a leader of an Orthodox intellectual revival in Polish Ukraine. In 1576, he founded an elite liberal arts secondary and academic school, the Ostroh Academy, with trilingual instruction. In 1581, he and his academy were instrumental in the publication of the Ostroh Bible, the first scholarly Orthodox Church Slavonic edition of the Bible. As a result of these efforts, parliamentary statutes of 1607, 1609 and 1635 again recognized the Orthodox religion as one of the two equal Eastern churches. The restoration of the Orthodox hierarchy and administrative structure proved difficult (most bishops had become Uniates, and their Orthodox replacements of 1620 and 1621 were not recognized by the Commonwealth) and was officially accomplished only during the reign of Władysław IV. By that time many of the Orthodox nobles had become Catholics, and the Orthodox leadership fell into the hands of townspeople and lesser nobility organized into church brotherhoods, and of the new power in the east, the Cossack warrior class. Metropolitan Peter Mogila of Kiev, who organized an influential academy there, contributed greatly to the rebuilding and reform of the Orthodox Church.
The Uniate Church, created for the Ruthenian population of the Commonwealth, gradually switched to the use of Polish in its administrative dealings. From about 1650, the majority of the Church's archival documents were generated in Polish, rather than in the previously used Ruthenian (its Chancery Slavonic variety).
Culture of Early Baroque
The Baroque style dominated Polish culture from the 1580s to the mid-18th century, building on the achievements of the Renaissance and for a while coexisting with it. Initially Baroque artists and intellectuals, torn between the two competing views of the world, enjoyed wide latitude and freedom of expression. Soon, however, the Counter-Reformation instituted a binding point of view that invoked the medieval tradition, imposed censorship in education and elsewhere (an index of prohibited books was in force in Poland from 1617), and straightened out their convoluted ways. By the middle of the 17th century the doctrine had been firmly reestablished, and sarmatism and religious zealotry had become the norm. The artistic tastes of the epoch acquired an increasingly Oriental character. In contrast with the integrative tendencies of the previous period, the burgher and nobility cultural spheres went their separate ways. The Renaissance publicist Stanisław Orzechowski had already provided the foundations for Baroque szlachta political thinking.
At that time there were about forty Jesuit colleges (secondary schools) scattered throughout the Commonwealth. They educated mostly szlachta, and burgher sons to a lesser degree. Jan Zamoyski, Chancellor of the Crown, who built the town of Zamość, established an academy there in 1594; after Zamoyski's death it functioned only as a gymnasium. The first two Vasa kings were well known for patronizing both the arts and the sciences. After that the Commonwealth's science experienced a general decline, which paralleled the wartime decline of the burgher class.
By the mid-16th century Poland's university, the Academy of Kraków, had entered a crisis stage, and by the early 17th century it regressed into Counter-Reformational conformism. The Jesuits took advantage of the infighting and established a university college in Vilnius in 1579, but their efforts aimed at taking over the Academy were unsuccessful. Under the circumstances many elected to pursue their studies abroad. Jan Brożek, a rector of the Kraków University, was a multidisciplinary scholar who worked on number theory and promoted Copernicus' work. He was banned by the Church in 1616, and his anti-Jesuit pamphlet was publicly burned. Brożek's co-worker, Stanisław Pudłowski, worked on a system of measurements based on physical phenomena.
Michał Sędziwój (Sendivogius Polonus) was an alchemist famous throughout Europe, who wrote a number of treatises in several languages, beginning with Novum Lumen Chymicum (1604, with over fifty editions and translations in the 17th and 18th centuries). A member of Emperor Rudolph II's circle of scientists and sages, he is believed by some authorities to have been a pioneer chemist and a discoverer of oxygen, long before Lavoisier (Sendivogius' works were studied by leading scientists, including Isaac Newton).
The early Baroque period produced a number of noted poets. Sebastian Grabowiecki wrote metaphysical and mystical religious poetry representing the passive current of Quietism. Another szlachta poet, Samuel Twardowski, participated in military and other historic events; among the genres he pursued was epic poetry. Urban poetry remained quite vital until the middle of the 17th century; the plebeian poets criticized the existing social order and continued to work within the ambiance of the Renaissance style. The creations of John of Kijany contained a hearty dose of social radicalism. The moralist Sebastian Klonowic wrote a symbolic poem, Flis, set among the rafting work of the Vistula river craft. Szymon Szymonowic in his Pastorals portrayed, without embellishment, the hardships of serf life. Maciej Sarbiewski, a Jesuit, was highly appreciated throughout Europe for the Latin poetry he wrote.
The preeminent prose of the period was written by Piotr Skarga, the preacher-orator. In his Sejm Sermons Skarga severely criticized the nobility and the state, while expressing his support for a system based on strong monarchy. The writing of memoirs became highly developed in the 17th century. Peregrination to the Holy Land by Mikołaj Radziwiłł and Beginning and Progress of the Muscovy War by Stanisław Żółkiewski, one of the greatest Polish military commanders, are the best-known examples.
One form of art particularly apt for Baroque purposes was the theater. Various theatrical shows were most often staged in conjunction with religious occasions and moralizing, and commonly utilized folk stylization. School theaters had become common among both the Protestant and Catholic secondary schools. A permanent court theater with an orchestra was established by Władysław IV at the Royal Castle in Warsaw in 1637; the actor troupe, dominated by Italians, performed primarily Italian opera and ballet repertoire.
Music, both sacral and secular, kept developing during the Baroque period. High quality church pipe organs were built in churches from the 17th century; a fine specimen has been preserved in Leżajsk. Sigismund III supported an internationally renowned ensemble of sixty musicians. Working with that orchestra were Adam Jarzębski and his contemporary Marcin Mielczewski, chief composers of the courts of Sigismund III and Władysław IV. Jan Aleksander Gorczyn, a royal secretary, published in 1647 a popular music tutorial for beginners.
Between 1580 and 1600 Jan Zamoyski commissioned the Venetian architect Bernardo Morando to build the city of Zamość. The town and its fortifications were designed to consistently implement the Renaissance and Mannerism aesthetic paradigms.
Mannerism is the name sometimes given to the period in art history during which the late Renaissance coexisted with the early Baroque, in Poland the last quarter of the 16th century and the first quarter of the 17th century. Polish art remained influenced by the Italian centers, increasingly Rome, and increasingly by the art of the Netherlands. As a fusion of imported and local elements, it evolved into an original Polish form of the Baroque.
Baroque art developed to a great extent under the patronage of the Catholic Church, which utilized art to facilitate religious influence, allocating for this purpose the very substantial financial resources at its disposal. The most important art form in this context was architecture, rather austere in its features at first, accompanied in due time by progressively more elaborate and lavish facade and interior design concepts.
Beginning in the 1580s, a number of churches patterned after the Church of the Gesù in Rome had been built. Gothic and other older churches were increasingly being supplemented with Baroque style architectural additions, sculptures, wall paintings and other ornaments, which is conspicuous in many Polish churches today. The Royal Castle in Warsaw, after 1596 the main residence of the monarchs, was enlarged and rebuilt around 1611. The Ujazdów Castle (1620s) of the Polish kings turned out to be architecturally more influential, its design having been followed by a number of Baroque magnate residencies.
The role of Baroque sculpture was usually subordinate, as decorative elements of exteriors and interiors, and on tombstones. A famous exception is the Sigismund's Column of Sigismund III Vasa (1644) in front of Warsaw's Royal Castle.
Realistic religious painting, sometimes in entire series of related works, served its didactic purpose. Nudity and mythological themes were banned, but otherwise fancy collections of Western paintings were in vogue. Sigismund III brought Tommaso Dolabella from Venice. A prolific painter, he was to spend the rest of his life in Kraków and gave rise to a school of Polish painters working under his influence. Danzig (Gdańsk) was also a center for the graphic arts; the painters Herman Han and Bartholomäus Strobel worked there, as did the engravers Willem Hondius and Jeremias Falck.
During the first half of the 17th century Poland was still a leading Central European power in the area of culture. As compared with the previous century, even wider circles of the society participated in cultural activities, but Catholic Counter-Reformation pressure resulted in diminished diversity. Catastrophic wars in the middle of the century greatly weakened the Commonwealth's cultural development and influence in the region.
Sejm and sejmiks
After the Union of Lublin, the Senate of the general sejm of the Commonwealth was augmented by Lithuanian high officials; the position of the lay and ecclesiastical lords, who served for life as members of the Senate, was strengthened, as the already outnumbered middle szlachta high office holders now had proportionally fewer representatives in the upper chamber. The Senate could also be convened separately by the king in its traditional capacity of royal council, apart from any sejm's formal deliberations, and szlachta attempts to limit the upper chamber's role were not successful. After the formal union and the addition of deputies from the Grand Duchy and Royal Prussia, the latter also more fully integrated with the Crown in 1569, there were about 170 regional deputies in the lower chamber (referred to as the Sejm) and 140 senators.
Sejm deputies doing legislative work were generally not able to act as they pleased. Regional szlachta assemblies, the sejmiks, were summoned before sessions of the general sejm; there the local nobility provided their representatives with copious instructions on how to proceed and how to protect the interests of the area involved. Another sejmik was called after the Sejm's conclusion, at which the deputies would report to their constituency on what had been accomplished.
Sejmiks had become an important part of the Commonwealth's parliamentary life, complementing the role of general sejm. They sometimes provided detailed implementations for general proclamations of sejms, or made legislative decisions during periods when the Sejm was not in session, at times communicating directly with the monarch.
There was little significant parliamentary representation for the burgher class, and none for the peasants. The Jewish communities sent representatives to their own Va'ad, or Council of Four Lands. The narrow social base of the Commonwealth's parliamentary system was detrimental to its future development and the future of the Polish-Lithuanian statehood.
From 1573 an "ordinary" general sejm was to be convened every two years, for a period of six weeks. A king could summon an "extraordinary" sejm for two weeks, as necessitated by circumstances; an extraordinary sejm could be prolonged if the parliamentarians assented. After the Union, the Sejm of the Republic deliberated in more centrally located Warsaw, except that Kraków remained the location of coronation sejms. The turn of the 17th century also brought a permanent move of the royal court from Kraków to Warsaw.
The order of sejm proceedings was formalized in the 17th century. The lower chamber would do most of the statute preparation work. The last several days were spent working together with the Senate and the king, when the final versions were agreed upon and decisions made; the finished legislative product had to have the consent of all three legislating estates of the realm: the Sejm, the Senate, and the monarch. The lower chamber's rule of unanimity was not rigorously enforced during the first half of the 17th century.
General sejm was the highest organ of collective and consensus-based state power. The Sejm's supreme court, presided over by the king, decided the most serious of legal cases. During the second half of the 17th century, for a variety of reasons, including abuse of the unanimity rule (liberum veto), the sejm's effectiveness declined, and the void was increasingly filled by sejmiks, where in practice the bulk of the government's work was getting done.
Nobility rule, first royal election
The system of noble democracy became more firmly rooted during the first interregnum, after the death of Sigismund II Augustus, who following the Union of Lublin had wanted to reassert his personal power rather than become an executor of szlachta's will. A lack of agreement concerning the method and timing of the election of his successor was one of the casualties of the situation, and the conflict strengthened the Senate-magnate camp. After the monarch's death in 1572, to protect its common interests, szlachta moved to establish territorial confederations (kapturs) as provincial governments, through which public order was protected and a basic court system provided. The magnates were able to push through their candidate for interrex, the regent holding office until a new king was sworn in, in the person of the primate, Jakub Uchański. The Senate took over the election preparations. The establishment's proposition of universal szlachta participation (rather than election by the Sejm) appeared at that time to be the right idea to most szlachta factions; in reality, during this first as well as subsequent elections, the magnates subordinated and directed the voters, especially the poorer of szlachta.
During the interregnum the szlachta prepared a set of rules and limitations for the future monarch to obey, as a safeguard to ensure that the new king, who was going to be a foreigner, complied with the peculiarities of the Commonwealth's political system and respected the privileges of the nobility. As Henry of Valois was the first to sign the rules, they became known as the Henrician Articles. The articles also specified the wolna elekcja (free election) as the only way for any monarch's successor to assume the office, thus precluding any possibility of hereditary monarchy in the future. The Henrician Articles summarized the accumulated rights of the Polish nobility, including religious freedom guarantees, and introduced further restrictions on the elective king; as if that were not enough, Henry also signed the so-called pacta conventa, through which he accepted additional specific obligations. The newly crowned Henry soon embarked on a course of action intended to free him from all the encumbrances imposed, but the outcome of this power struggle was never to be determined. One year after the election, in June 1574, upon learning of his brother's death, Henry secretly left for France.
Stephen Báthory
In 1575 the nobility commenced a new election process. The magnates tried to force the candidacy of Emperor Maximilian II, and on 12 December Archbishop Uchański even announced his election. This effort was thwarted by the execution movement szlachta party led by Mikołaj Sienicki and Jan Zamoyski; their choice was Stephen Báthory, Prince of Transylvania. Sienicki quickly arranged for a 15 December proclamation of Anna Jagiellon, sister of Sigismund Augustus, as the reigning queen, with Stephen Báthory added as her husband and king jure uxoris. Szlachta's pospolite ruszenie supported the selection with their arms. Batory took over Kraków, where the couple's coronation took place on 1 May 1576.
Stephen Báthory's reign marks the end of szlachta's reform movement. The foreign king was skeptical of the Polish parliamentary system and had little appreciation for what the execution movement activists had been trying to accomplish. Batory's relations with Sienicki soon deteriorated, while other szlachta leaders advanced within the nobility's ranks, becoming senators or being otherwise preoccupied with their own careers. The reformers did manage, in 1578 in Poland and in 1581 in Lithuania, to move the out-of-date appellate court system from the monarch's domain to the Crown and Lithuanian Tribunals, run by the nobility. The cumbersome sejm and sejmik system, the ad hoc confederations, and the lack of efficient mechanisms for the implementation of the laws escaped the reformers' attention or will to persevere. Many thought that the glorified nobility rule had approached perfection.
Jan Zamoyski, one of the most distinguished personalities of the period, became the king's principal adviser and manager. A highly educated and cultivated individual, a talented military chief and an accomplished politician, he often promoted himself as a tribune of his fellow szlachta. In fact, in typical magnate manner, Zamoyski accumulated multiple offices and royal land grants, removing himself far from the reform movement ideals he had professed earlier.
The king himself was a great military leader and far-sighted politician. Of Batory's confrontations with members of the nobility, the famous case involved the Zborowski brothers: Samuel was executed on Zamoyski's orders, and Krzysztof was sentenced to banishment and property confiscation by the sejm court. A Hungarian, like other foreign rulers of Poland, Batory was concerned with the affairs of the country of his origin. Batory failed to enforce the Karnkowski Statutes and therefore was unable to control the foreign trade through Danzig (Gdańsk), which was to have highly negative economic and political consequences for the Republic. In cooperation with his chancellor and later hetman Jan Zamoyski, he was largely successful in the Livonian war. At that time the Commonwealth was able to increase the magnitude of its military effort: the combined armed forces available for a campaign, drawn from several sources, could be up to 60,000 men strong. King Batory initiated the creation of piechota wybraniecka, an important peasant infantry formation.
In 1577 Batory agreed to George Frederick of Brandenburg becoming a custodian for the mentally ill Albert Frederick, Duke of Prussia, which brought the two German polities closer together, to the detriment of the Commonwealth's long-term interests.
War with Russia over Livonia
King Sigismund Augustus' Dominium Maris Baltici program, aimed at securing Poland's access to and control over the portion of the Baltic region and ports in which the country had vital interests to protect, led to the Commonwealth's participation in the Livonian conflict, which also became another stage in the series of Lithuania's and Poland's confrontations with Russia. In 1563 Ivan IV took Polotsk. After the Stettin peace of 1570 (which involved several powers, including Sweden and Denmark) the Commonwealth remained in control of the main part of Livonia, including Riga and Pernau. In 1577 Ivan undertook a great expedition, taking over, for himself or his vassal Magnus, Duke of Holstein, most of Livonia except for the coastal areas of Riga and Reval. A successful Polish-Lithuanian counter-offensive became possible once Batory was able to secure the necessary funding from the nobility.
The Polish forces recovered Dünaburg and most of middle Livonia. The King and Zamoyski then opted for a direct attack on the inland Russian territory needed to keep Russian communication lines to Livonia open and functioning. Polotsk was retaken in 1579 and the Velikiye Luki fortress fell in 1580. The takeover of Pskov was attempted in 1581, but Ivan Petrovich Shuisky was able to defend the city despite a months-long siege. An armistice was arranged at Jam Zapolski in 1582 by the papal legate Antonio Possevino. The Russians evacuated all the Livonian castles they had captured, gave up the Polotsk area and left Velizh in Lithuanian hands. The Swedish forces, which took over Narva and most of Estonia, contributed to the victory. The Commonwealth ended up in possession of the continuous Baltic coast from Puck to Pernau.
Sigismund III Vasa's reign
Several candidates for the Commonwealth crown were considered after the death of Stephen Báthory, including Archduke Maximilian of Austria. Anna Jagiellon proposed and pushed for the election of her nephew Sigismund Vasa, son of John III, King of Sweden, and Catherine Jagellon, and the Swedish heir apparent. The Zamoyski faction supported Sigismund, while the faction led by the Zborowski family wanted Maximilian; two separate elections took place and a civil war resulted. The Habsburg army entered Poland and attacked Kraków, but was repulsed there and then, while retreating in Silesia, crushed by the forces organized by Jan Zamoyski at the Battle of Byczyna (1588), where Maximilian was taken prisoner.
In the meantime Sigismund arrived and was crowned in Kraków, which initiated his long reign in the Commonwealth (1587–1632) as Zygmunt III Waza. The prospect of a personal union with Sweden raised political and economic hopes among the Polish and Lithuanian ruling circles, including hopes of favorable Baltic trade conditions and a common front against Russia's expansion. Concerning the latter, however, the control of Estonia soon became a bone of contention. Sigismund's ultra-Catholicism appeared threatening to the Swedish Protestant establishment and contributed to his dethronement in Sweden in 1599.
Inclined to form an alliance with the Habsburgs (and even to give up the Polish crown to pursue his ambitions in Sweden), Sigismund conducted secret negotiations with them and married Archduchess Anna. Accused by Zamoyski of breaking his covenants, Sigismund III was humiliated during the sejm of 1592, which deepened his resentment of the szlachta. Sigismund was bent on strengthening the power of the monarchy and on the Counter-Reformational promotion of the Catholic Church (Piotr Skarga was among his supporters). Indifferent to the increasingly common breaches of the Warsaw Confederation's religious protections and to instances of violence against the Protestants, the King was opposed by religious minorities.
The years 1605–1607 brought a fruitless confrontation between King Sigismund and his supporters on one side and the coalition of opposition nobility on the other. During the sejm of 1605 the royal court proposed a fundamental reform of the body itself: the adoption of majority rule instead of the traditional practice of unanimous acclamation by all deputies present. Jan Zamoyski in his last public address reduced himself to a defense of szlachta prerogatives, thus setting the stage for the demagoguery that was to dominate the Commonwealth's political culture for many decades.
For the sejm of 1606 the royal faction, hoping to take advantage of the glorious Battle of Kircholm victory and other successes, submitted a more comprehensive constructive reform program. Instead the sejm had become preoccupied with the dissident postulate of prosecuting instigators of religious disturbances directed against non-Catholics; advised by Skarga, the King refused his assent to the proposed statute.
The nobility opposition, suspecting an attempt against their liberties, called for a rokosz, or an armed confederation. Tens of thousands of disaffected szlachta, led by the ultra-Catholic Mikołaj Zebrzydowski and Calvinist Janusz Radziwiłł, congregated in August near Sandomierz, giving rise to the so-called Zebrzydowski Rebellion.
The Sandomierz articles produced by the rebels were concerned mostly with placing further limitations on the monarch's power. Threatened by royal forces under Stanisław Żółkiewski, the confederates entered into an agreement with Sigismund, but then backed out of it and demanded the King's deposition. The ensuing civil war was resolved at the Battle of Guzów, where the szlachta was defeated in 1607. Afterwards, however, magnate leaders of the pro-King faction made sure that Sigismund's position would remain precarious, leaving arbitration powers within the Senate's competence. Whatever was left of the execution movement was thwarted together with the obstructionist szlachta elements, and a compromise solution to the crisis of authority was arrived at. But the victorious lords of the council had at their disposal no effective political machinery with which to advance the well-being of the Commonwealth, still in its Golden Age (or, as some prefer for this period, Silver Age), much further.
In 1611 John Sigismund, Elector of Brandenburg was allowed by the Commonwealth sejm to inherit the Duchy of Prussia fief, after the death of Albert Frederick, the last duke of the Prussian Hohenzollern line. The Brandenburg Hohenzollern branch led the Duchy from 1618.
The reforms of the execution movement had clearly established the Sejm as the central and dominant organ of state power. But in reality this situation did not last very long, as various destructive decentralizing tendencies, steps taken by the szlachta and the kings alike, progressively undermined and eroded the functionality and primacy of the central legislative organ. The resulting void was filled during the late 16th and 17th centuries by the increasingly active and assertive territorial sejmiks, which provided a more accessible and direct forum for szlachta activists to promote their narrowly conceived local interests. Sejmiks established effective controls, in practice limiting the Sejm's authority; they themselves took on an ever broader range of state matters and local issues.
In addition to the role of the over 70 sejmiks in destabilizing the central authority, during the same period the often unpaid army began establishing its own "confederations", or rebellions. By plunder and terror the soldiers attempted to recover their compensation and pursue other, sometimes political, aims.
Some reforms were being pursued by the more enlightened szlachta, who wanted to expand the role of the Sejm at the monarch's and magnate faction's expense, and by the elected kings. Sigismund III during the later part of his rule constructively cooperated with the Sejm, making sure that between 1616 and 1632 each session of the body produced the badly needed statutes. The increased efforts in the areas of taxation and maintenance of the military forces made possible the positive outcomes of some of the armed conflicts that took place during Sigismund's reign.
Cossacks and Cossack rebellions
In the mid-16th century there were not yet many Cossacks in the south-eastern borderlands of Lithuania and Poland, but the first companies of Cossack light cavalry were incorporated into the Polish armed forces already around that time. During the reign of Sigismund III Vasa, the Cossack problem began to play its role as Rzeczpospolita's preeminent internal challenge of the 17th century.
The transfer of Ukraine from the Grand Duchy of Lithuania to the Polish Crown and the Union of Lublin, all completed by 1569, gave rise to the modern colonization of Ukraine, implemented under the Polish rule.
Conscious and planned colonization of the fertile but underdeveloped region was pioneered in the 1580s and 1590s by the Ruthenian dukes of Volhynia. Of the Poles, only Jan Zamoyski, who penetrated the Bracław area, was economically active there by the end of the 16th century. There and in the Kiev area Polish fortunes also began to develop, often through intermarriage with Ruthenian clans. In 1630, the great Ukrainian latifundia were still dominated by Ruthenian families, such as the Ostrogski, Zbaraski and Zasławski. At the outset of the great civil war of 1648, the Polish settlers comprised barely 10% of the middle and petty nobility, for example in the well-researched Bracław Voivodeship and Kiev Voivodeship. The early Cossack rebellions were, therefore, instances of social uprising rather than national anti-Polish movements. As class warfare they were ruthlessly stamped out by the state, which would sometimes take their leaders to Warsaw for execution.
Cossacks were an East Slavic people of the Dnieper River area, at first semi-nomadic and later also settled, who practiced brigandage and plunder and, renowned for their fighting prowess, early in their history assumed a military organization. Many of them were, or were descended from, runaway peasants from the eastern and other areas of the Commonwealth or from Russia; other significant elements were townspeople and even nobility, who came from the region or migrated into Ukraine. Cossacks considered themselves free and independent of any bondage and followed their own elected leaders, who originated from the more affluent strata of their society. There were tens of thousands of Cossacks already early in the 17th century. They frequently clashed with the neighboring Turks and Tatars and raided their Black Sea coastal settlements. Such raids, carried out by formal subjects of the Polish king, were intolerable from the point of view of the Commonwealth's foreign relations, because they violated peace agreements or interfered with the state's current policy toward the Ottoman Empire.
During this earlier period of the Polish–Lithuanian Commonwealth, a separate Ukrainian national consciousness was being formed, influenced in part by the context and heroes of the Cossack uprisings. The legacy of Kievan Rus' was recognized, as was the heritage of the East Slavic Ruthenian language. Cossacks felt themselves members of the "Rus' Orthodox nation" (the Uniate Church was practically eliminated in the Dnieper region in 1633). But seeing themselves also as members of the (Polish) "Republic-Fatherland", they dealt with sejms and kings as its subjects. Cossacks and the Ruthenian nobility, until recently subjects of the Grand Duchy of Lithuania, were not formally or otherwise connected to the Tsardom of Russia.
Many Cossacks were being hired to participate in wars waged by the Commonwealth. This status resulted in privileges and often constituted a form of social upward mobility; the Cossacks resented the periodic reductions in their enrollment. Cossack rebellions or uprisings typically assumed the form of huge plebeian social movements.
The Ottoman Empire demanded a total liquidation of the Cossack power. The Commonwealth, however, needed the Cossacks in the south-east, where they provided an effective buffer against Crimean Tatar incursions. The other way to quell the Cossack unrest would have been to grant nobility status to a substantial portion of their population and thus assimilate them into the Commonwealth's power structure, which was what the Cossacks aspired to. This solution was rejected by the magnates and szlachta for political, economic and cultural reasons while there was still time for reform, before disasters struck. The Polish-Lithuanian establishment instead shifted unsteadily between compromising with the Cossacks, by allowing a limited and varying number of them, the so-called Cossack register (500 in 1582, 8,000 in the 1630s), to serve with the Commonwealth army (the rest were to be converted into serfdom, to help the magnates in colonizing the Dnieper area), and brutally using military force in attempts to subdue them.
Oppressive efforts to subjugate and economically exploit the Cossack territories and population in the Zaporizhia region, often led by Poles, including Crown tenants or their Jewish plenipotentiaries, by Ruthenian nobles of the Commonwealth, and even by upper-rank Cossack officers, resulted in a series of Cossack uprisings, of which the early ones could have served as a warning for szlachta legislators. While Ukraine was undergoing substantial economic development, Cossacks and peasants were by and large not among the beneficiaries of the process.
The Kosiński Uprising, led by Krzysztof Kosiński in 1591, was bloodily suppressed. New fighting broke out as early as 1594, when the Nalyvaiko Uprising engulfed large portions of Ukraine and Belarus. Hetman Stanisław Żółkiewski defeated the Cossack units in 1596 and Severyn Nalyvaiko was executed. A temporary pacification of relations followed in the early 17th century, when the many wars fought by the Commonwealth necessitated greater involvement of the registered Cossacks. But the Union of Brest produced new tensions, as the Cossacks had become dedicated adherents and defenders of Eastern Orthodoxy.
The Time of Troubles in Russia resulted in peasant rebellions, such as the one led by Ivan Bolotnikov, which also contributed to peasant unrest in the Commonwealth and to further Cossack insurgency there.
The uprising of Marek Zhmaylo in 1625 was confronted by Stanisław Koniecpolski and concluded with Mykhailo Doroshenko signing the Treaty of Kurukove. More fighting soon erupted and culminated in the "Taras night" of 1630, when the Cossack rebels under Taras Fedorovych turned against army units and noble estates. The Fedorovych Uprising was brought under control by Hetman Koniecpolski. These events were followed by an increase in the Cossack registry (Treaty of Pereyaslav), but then by a rejection of the Cossack elders' demands at the convocation sejm of 1632. The Cossacks wanted to participate in free elections as members of the Commonwealth and to have the religious rights of the "disuniate" Eastern Christians restored. The 1635 sejm voted instead for further restrictions and authorized the construction of the Kodak Fortress on the Dnieper, to facilitate more effective control over the Cossack territories. Another round of fighting, the Pavluk Uprising, followed in 1637–1638; it was defeated and its leader Pavel Mikhnovych executed. Upon new anti-Cossack limitations and sejm statutes imposing serfdom on most Cossacks, the Cossacks rose again in 1638 under Jakiv Ostryanin and Dmytro Hunia. The uprising was cruelly suppressed, and the existing Cossack landholdings were taken over by the magnates.
The Commonwealth's struggles with the Cossacks were noticed at Moscow's Kremlin, which from the late 1620s began regarding the Cossacks as a potent source of fundamental instability in its Polish-Lithuanian rival and neighbor. Russian efforts to destabilize the Polish Kingdom through the Cossacks in the 1630s were not yet successful, even though the Cossack elders themselves often raised the possibility of a union with the Tsardom in order to pressure Poland's ruling elites. The borderlands with Russia became a place of refuge for Cossacks persecuted after their failed uprisings.
The harsh measures restored relative calm for a decade, until 1648. Seen by the establishment as a "golden peace", for the Cossacks and peasants the period brought the worst oppression yet. During that time the private dominions of Ukrainian potentates, such as the Kalinowski, Daniłowicz and Wiśniowiecki families, rapidly expanded, and the folwark-serfdom economy, only then being introduced in Ukraine (much later than in other parts of the Polish Crown), caused unprecedented levels of exploitation. The Cossack question, perceived as a weak spot of the Commonwealth, was increasingly becoming an issue in international politics.
Władysław IV
Władysław IV Vasa, son of Sigismund III, ruled the Commonwealth in 1632–1648. Born and raised in Poland, prepared for the office from his early years, popular, educated and free of his father's religious prejudices, he seemed a promising chief executive. Like his father, however, Władysław harbored the lifelong ambition of attaining the Swedish throne by using his royal status and power in Poland and Lithuania, which, to serve this purpose, he attempted to strengthen. He ruled with the help of several prominent magnates, among them Jerzy Ossoliński, Chancellor of the Crown, Hetman Stanisław Koniecpolski, and Jakub Sobieski, the middle szlachta leader. Władysław IV was unable to attract a wider szlachta following, and many of his plans foundered for lack of support in the increasingly ineffectual sejm. Because of his tolerance for non-Catholics, Władysław was also opposed by the Catholic clergy and the papacy.
Toward the last years of his reign Władysław IV sought to enhance his position and assure his son's succession by waging war on the Ottoman Empire, for which he prepared despite the lack of support from the nobility. To secure this end the King worked on forming an alliance with the Cossacks, whom he encouraged to improve their military readiness and intended to use against the Turks, moving further in this direction of cooperation than his predecessors. The war never took place, and the King had to explain his offensive war designs at the "inquisition" sejm of 1646. Władysław's son Zygmunt Kazimierz died in 1647, and the King himself, weakened, resigned and disappointed, died in 1648.
Seeking preponderance in Eastern Europe
The turn of the 16th and 17th centuries brought changes that, for the time being, weakened the Commonwealth's powerful neighbors (the Tsardom of Russia, the Austrian Habsburg Monarchy and the Ottoman Empire). The resulting opportunity for the Polish-Lithuanian state to improve its position depended on its ability to overcome internal distractions, such as the isolationist and pacifist tendencies that prevailed among the szlachta ruling class, or the rivalry between nobility leaders and elected kings, often intent on circumventing restrictions on their authority such as the Henrician Articles.
The nearly continuous wars of the first three decades of the new century resulted in modernization, if not (because of treasury limitations) enlargement, of the Commonwealth's army. The total military forces available ranged from a few thousand at the Battle of Kircholm to the over fifty thousand, plus the pospolite ruszenie, mobilized for the Khotyn (Chocim) campaign of 1621. The remarkable development of artillery during the first half of the 17th century led to the publication in Amsterdam in 1650 of Artis Magnae Artilleriae pars prima by Kazimierz Siemienowicz, a pioneer also in the science of rocketry. Despite the superior quality of the Commonwealth's heavy (hussar) and light (Cossack) cavalry, infantry (peasant, mercenary and Cossack formations) and contingents of foreign troops made up an increasing proportion of the army. During the reigns of the first two Vasas a war fleet was developed and fought successful naval battles (in 1609 against Sweden). As usual, fiscal difficulties impaired the effectiveness of the military and the treasury's ability to pay the soldiers.
As a continuation of the earlier plans for an anti-Turkish offensive, which had not materialized because of the death of Stefan Batory, Jan Zamoyski intervened in Moldavia in 1595. With the backing of the Commonwealth army, Ieremia Movilă assumed the hospodar's throne as the Commonwealth's vassal. Zamoyski's army repelled the subsequent assault by Ottoman forces at Ţuţora. The next confrontation in the area took place in 1600, when Zamoyski and Stanisław Żółkiewski acted against Michael the Brave, hospodar of Wallachia and Transylvania. First Ieremia Movilă, who in the meantime had been removed by Michael in Moldavia, was reimposed, and then Michael was defeated in Wallachia at the Battle of Bucov. Ieremia's brother Simion Movilă became the new hospodar there, and for a brief period the entire region up to the Danube was a Commonwealth dependency. Turkey soon reasserted its role, in 1601 in Wallachia and in 1606 in Transylvania. Zamoyski's policies and actions, which constituted the earlier stage of the Moldavian magnate wars, prolonged Poland's influence in Moldavia and interfered effectively with the simultaneous Habsburg plans and ambitions in this part of Europe. Further military involvement at the southern frontiers ceased to be feasible, as the forces were needed more urgently in the north.
War with Sweden
Sigismund III's crowning in Sweden took place in 1594 amid tensions and instability caused by religious controversies. When Sigismund returned to Poland, his uncle Charles, the regent, took the lead of the anti-Sigismund opposition in Sweden. In 1598 Sigismund attempted to resolve the matter militarily, but the expedition to the country of his origin was defeated at the Battle of Linköping; Sigismund was taken prisoner and had to agree to the harsh conditions imposed. After his return to Poland, the Riksdag of the Estates deposed him in Sweden in 1599, and Charles led the Swedish forces into Estonia. In 1600 Sigismund proclaimed the incorporation of Estonia into the Commonwealth, which was tantamount to a declaration of war on Sweden, at the height of the Rzeczpospolita's involvement in the Moldavian region.
Jürgen von Farensbach, given the command of the Commonwealth forces, was overpowered by the much larger army brought to the area by Charles, whose quick offensive resulted in the 1600 take-over of most of Livonia up to the Daugava River, except for Riga. The Swedes were welcomed by much of the local population, by that time increasingly dissatisfied with Polish-Lithuanian rule. In 1601 Krzysztof Radziwiłł prevailed at the Battle of Kokenhausen, but the Swedish advances were reversed, up to (but not including) Reval, only after Jan Zamoyski brought in a more substantial force. Much of this army, having gone unpaid, returned to Poland. The clearing action was continued by Jan Karol Chodkiewicz, who, with the small contingent of troops left to him, defeated a Swedish incursion at Paide (Biały Kamień) in 1604.
In 1605 Charles, now King Charles IX of Sweden, launched a new offensive, but his efforts were thwarted by Chodkiewicz's victories at Kircholm and elsewhere and by Polish naval successes, and the war continued without a decisive resolution. In the armistice of 1611 the Commonwealth was able to keep the majority of the contested areas, as a variety of internal and foreign difficulties, including the inability to pay the mercenary soldiers and the Union's new involvement in Russia, precluded a comprehensive victory.
Attempts to subordinate Russia
After the deaths of Ivan IV and, in 1598, of his son Feodor, the last tsars of the Rurik dynasty, Russia entered a period of severe dynastic, economic and social crisis and instability. As Boris Godunov encountered resistance from both the peasant masses and the boyar opposition, ideas circulating in the Commonwealth of turning Russia into a subordinated ally, either through a union or through the imposition of a ruler dependent on the Polish-Lithuanian establishment, rapidly came into play.
In 1600 Lew Sapieha led a Commonwealth mission to Moscow to propose a union with the Russian state, patterned after the Polish-Lithuanian Union, with the boyars granted rights comparable to those of the Commonwealth's nobility. A decision on a single monarch was to be postponed until the death of the current king or tsar. Boris Godunov, at that time also engaged in negotiations with Charles of Sweden, was not interested in so close a relationship, and only a twenty-year truce was agreed upon in 1602.
In order to continue their efforts, the magnates took advantage of the earlier death of Tsarevich Dmitry (1591) under mysterious circumstances and of the appearance of False Dmitriy I, a pretender-impostor claiming to be the tsarevich. False Dmitriy was able to secure the cooperation and help of the Wiśniowiecki family and of Jerzy Mniszech, Voivode of Sandomierz, whom he promised vast Russian estates and a marriage with the voivode's daughter Marina. Dmitriy became a Catholic and, leading an army of adventurers raised in the Commonwealth with the tacit support of Sigismund III, entered the Russian state in 1604. After the death of Boris Godunov and the murder of his son Feodor, False Dmitriy I became Tsar of Russia, and remained in that capacity until he was killed during a popular upheaval in 1606, which also eliminated the Polish presence in Moscow.
Russia under the new tsar, Vasili Shuysky, remained unstable. A new false Dmitriy materialized, and Tsaritsa Marina even "recognized" him as her thought-to-be-dead husband. With a new army provided largely by the magnates of the Commonwealth, False Dmitriy II approached Moscow and made futile attempts to take the city. Tsar Vasili IV, seeking help from King Charles IX of Sweden, agreed to territorial concessions in Sweden's favor, and in 1609 the Russo-Swedish anti-Dmitriy and anti-Commonwealth alliance was able to remove the threat from Moscow and strengthen Vasili. The alliance and the Swedish involvement in Russian affairs provoked a direct military intervention on the part of the Polish-Lithuanian Commonwealth, instigated and led by King Sigismund III with the support of the Roman Curia.
The Polish army commenced a siege of Smolensk, and the Russo-Swedish relief expedition was defeated in 1610 by Hetman Żółkiewski at the Battle of Klushino. The victory strengthened the position of the compromise-oriented faction of Russian boyars, which had already been interested in offering the Moscow throne to Władysław Vasa, son of Sigismund III. Fyodor Nikitich Romanov, the Patriarch of Moscow, was one of the leaders of the boyars. Under arrangements negotiated by Żółkiewski, the boyars deposed Tsar Vasili and accepted Władysław in return for peace, no annexation of Russia into the Commonwealth, the Prince's conversion to the Orthodox religion, and privileges for the Russian nobility, including exclusive rights to high offices in the Tsardom. After the agreement was signed and Władysław declared tsar, the Commonwealth forces entered the Kremlin (1610).
Sigismund III subsequently rejected the compromise solution and demanded the tsar's throne for himself, which would mean complete subjugation of Russia, and as such was rejected by the bulk of the Russian society. Sigismund's refusal and demands only intensified the chaos, as the Swedes proposed their own candidate and took over Veliki Novgorod. The result of this situation and of the ruthless Commonwealth occupation in Moscow and elsewhere in Russia was the 1611 popular Russian anti-Polish uprising, heavy fighting in Moscow and a siege of the Polish garrison occupying the Kremlin.[a]
In the meantime, the Commonwealth forces, after a long siege, stormed and took Smolensk in 1611. At the Kremlin the situation of the Poles worsened despite occasional reinforcements, and the massive national and religious uprising was spreading all over Russia. Prince Dmitry Pozharsky and Kuzma Minin effectively led the Russians; a new rescue operation attempted by Hetman Chodkiewicz failed, and the capitulation of the Polish and Lithuanian forces at the Kremlin in 1612 terminated their involvement there. Mikhail Romanov, son of Patriarch Filaret (imprisoned in Poland since his rejection of Sigismund III's demand for the Russian throne), became the new tsar in 1613.
The war effort, debilitated by a rebellious confederation established by the unpaid military, was nevertheless continued. Turkey, threatened by the Polish territorial gains, became involved at the frontiers, and a peace between Russia and Sweden was agreed to in 1617. Fearing the new alliance, the Commonwealth undertook one more major expedition, which took Vyazma and arrived at the walls of Moscow in an attempt to impose the rule of Władysław Vasa once more. The city would not open its gates, and not enough military strength had been brought in to attempt a forced take-over.
Despite the disappointment, the Commonwealth was able to take advantage of Russia's weakness and, through the territorial advances accomplished, to reverse the eastern losses suffered in earlier decades. In the Truce of Deulino of 1619 the Rzeczpospolita was granted the Smolensk, Chernihiv and Novhorod-Siverskyi regions.
The Polish–Lithuanian Commonwealth had attained its greatest geographic extent, but the attempted union with Russia could not be achieved, as the systemic, cultural and religious incompatibilities between the two empires proved insurmountable. The territorial annexations and the ruthlessly conducted wars left a legacy of injustice suffered and a desire for revenge on the part of the Russian ruling classes and people. The huge military effort weakened the Commonwealth, and the painful consequences of the adventurous policies of the Vasa court and its allied magnates were soon to be felt.
The Commonwealth and Silesia during the Thirty Years' War
In 1613 Sigismund III Vasa reached an understanding with Matthias, Holy Roman Emperor, under which both sides agreed to cooperate and mutually provide assistance in suppressing internal rebellions. The pact neutralized the Habsburg Monarchy in regard to the Commonwealth's war with Russia, but resulted in more serious consequences after the Bohemian Revolt gave rise to the Thirty Years' War in 1618.
The Czech events weakened the position of the Habsburgs in Silesia, where there were large concentrations of ethnically Polish inhabitants whose ties and interests at that time placed them within the Protestant camp. Numerous Polish Lutheran parishes, with schools and centers of cultural activity, had been established in the heavily Polish areas around Opole and Cieszyn in eastern Silesia, as well as in numerous cities and towns throughout the region and beyond, including Breslau (Wrocław) and Grünberg (Zielona Góra). The threat posed by a potentially resurgent Habsburg monarchy to the situation of Polish Silesians was keenly felt, and there were voices within King Sigismund's circle, including Stanisław Łubieński and Jerzy Zbaraski, who brought to his attention Poland's historic rights and options in the area. The King, an ardent Catholic, though advised by many not to involve the Commonwealth on the Catholic-Habsburg side, decided in the end to act in its support, but unofficially.
The ten-thousand-strong Lisowczycy mercenary division, a highly effective military force, had just returned from the Moscow campaign and, having become a major nuisance for the szlachta, was available for another assignment abroad; Sigismund sent them south to assist Emperor Ferdinand II. The intervention of Sigismund's court greatly influenced the first phase of the war, helping save the position of the Habsburg Monarchy at a critical moment.
The Lisowczycy entered northern Hungary (now Slovakia) and in 1619 defeated the Transylvanian forces at the Battle of Humenné. Prince Bethlen Gábor of Transylvania, who together with the Czechs had laid siege to Vienna, had to hurry back to his country and make peace with Ferdinand, which seriously compromised the situation of the Czech insurgents, crushed in the course and in the aftermath of the Battle of White Mountain. Afterwards the Lisowczycy fought ruthlessly to suppress the Emperor's opponents in the Glatz (Kłodzko) region and elsewhere in Silesia, as well as in Bohemia and Germany.
After the collapse of the Bohemian Revolt, the residents of Silesia, including the Polish gentry in Upper Silesia, were subjected to severe repressions and Counter-Reformation activities, including forced expulsions of thousands of Silesians, many of whom ended up in Poland. Later during the war years the province was repeatedly ravaged by military campaigns crossing its territory, and at one point a Protestant leader, the Piast Duke John Christian of Brieg, appealed to Władysław IV Vasa to assume supremacy over Silesia. King Władysław, although a tolerant ruler, including in matters of religion, was like his father disinclined to involve the Commonwealth in the Thirty Years' War. He did receive the duchies of Opole and Racibórz from the Emperor as fiefs in 1646, but they were reclaimed by the Empire twenty years later. The Peace of Westphalia allowed the Habsburgs to do as they pleased in Silesia, already completely ruined by the war; the result was intense persecution of Protestants, including the Polish communities of Lower Silesia, who were forced to emigrate or subjected to Germanization.
Conflicts with the Ottoman Empire and Crimean Khanate
Although the Rzeczpospolita did not formally participate in the Thirty Years' War, its alliance with the Habsburg Monarchy contributed to Poland's involvement in new wars with the Ottoman Empire, Sweden and Russia, and thereby gave the Commonwealth significant influence over the course of the Thirty Years' War. The Polish–Lithuanian Commonwealth also had its own intrinsic reasons for continuing its struggles with those powers.
From the 16th century the Commonwealth suffered a series of Tatar invasions. In the same century Cossack raids began descending on Turkish settlements of the Black Sea area and on Tatar lands. In retaliation the Ottoman Empire directed its vassal Tatar forces, based in the Crimean or Budjak areas, against the Commonwealth regions of Podolia and Red Ruthenia. The borderland area to the south-east remained in a state of semi-permanent warfare until the 18th century. Some researchers estimate that altogether more than 3 million people were captured and enslaved during the time of the Crimean Khanate.
The greatest intensity of Cossack raids, reaching as far as Sinop in Turkey, fell in the 1613–1620 period. The Ukrainian magnates, for their part, continued their traditional involvement in Moldavia, where they kept trying to install their relatives (the Movileşti family) on the hospodar's throne (Stefan Potocki in 1607 and 1612, Samuel Korecki and Michał Wiśniowiecki in 1615). The Ottoman commander Iskender Pasha destroyed the magnate forces in Moldavia and in 1617 compelled Stanisław Żółkiewski to consent to the Treaty of Busza at Poland's border, under which the Commonwealth undertook not to become involved in matters concerning Wallachia and Transylvania.
Turkish unease about Poland's influence in Russia, the consequences of the Lisowczycy expedition of 1619 against Transylvania, an Ottoman fief, and the burning of Varna by the Cossacks in 1620 caused the Empire, under the young Sultan Osman II, to declare war on the Commonwealth, with the aim of breaking and conquering the Polish-Lithuanian state.
The actual hostilities, which were to bring about the demise of Stanisław Żółkiewski, were initiated by the old Polish hetman himself. Żółkiewski, with Koniecpolski and a rather small force, entered Moldavia, hoping for military reinforcements from the Moldavian Hospodar Gaspar Graziani and the Cossacks. The aid did not materialize, and the hetmans faced a superior Turkish and Tatar force led by Iskender Pasha. In the aftermath of the lost Battle of Ţuţora (1620) Żółkiewski was killed, Koniecpolski captured and the Commonwealth left defenseless, but disagreements between the Turkish and Tatar commanders prevented the Ottoman army from immediately mounting an effective follow-up.
The Sejm was convened in Warsaw and the royal court was blamed for endangering the country, but high taxes for a sixty-thousand-man army were agreed to, and the number of registered Cossacks was allowed to reach forty thousand. The Commonwealth forces, led by Jan Karol Chodkiewicz, were helped by Petro Konashevych-Sahaidachny and his Cossacks, who rose against the Turks and Tatars and participated in the coming campaign. In practice, about 30,000 regular troops and 25,000 Cossacks faced a much larger Ottoman force under Osman II at Khotyn. Fierce Turkish attacks against the fortified Commonwealth positions lasted throughout September 1621 and were repelled. The exhaustion and depletion of its forces made the Ottoman Empire sign the Treaty of Khotyn, which kept the old territorial status quo from the time of Sigismund II (the Dniester River border between the two combatants), an outcome favorable to the Polish side. After Osman II was killed in a coup, ratification of the treaty was obtained from his successor Mustafa I.
Tatar incursions in response to further Cossack attacks continued as well, in 1623 and 1624 reaching almost as far west as the Vistula, with the attendant plunder and taking of captives. A more effective defense was put together by the freed Koniecpolski and by Stefan Chmielecki, who defeated the Tatars on several occasions between 1624 and 1633, using the quarter army supported by the Cossacks and the general population. More warfare with the Ottomans took place in 1633–1634 and ended with a peace treaty. In 1644 Koniecpolski defeated Tugay Bey's army at Okhmativ and, before his death, planned an invasion of the Crimean Khanate. King Władysław IV's ideas of a grand international crusade against the Ottoman Empire were thwarted by the inquisition sejm in 1646. The state's inability to control the activities of the magnates and the Cossacks contributed to the semi-permanent instability and danger at the Commonwealth's south-eastern frontiers.
Territorial and maritime access losses in the Baltic area
A more acute threat to the Polish-Lithuanian state came from Sweden. The balance of power in the north had shifted in Sweden's favor, as the Baltic neighbor was led by King Gustavus Adolphus, a highly able and aggressive military leader who greatly improved the effectiveness of the Swedish armed forces while also taking advantage of Protestant zealotry. The Commonwealth, exhausted by the wars with Russia and the Ottoman Empire and lacking allies, was poorly prepared to face this new challenge. Continuous diplomatic maneuvering by Sigismund III made the whole situation look to the szlachta like another stage in the King's Swedish dynastic affairs; in reality, Sweden had resolved to take hold of the entire Polish-controlled Baltic coast and thereby profit from control of the Commonwealth's maritime trade, endangering the very basis of its independent existence.
Gustavus Adolphus chose to attack Riga, the Grand Duchy's foremost trade center, in late August 1621, just as the Ottoman army was approaching Khotyn, tying up the Polish forces there. The city, stormed several times, had to surrender a month later. Moving inland to the south, the Swedes next entered Courland. With Riga the Commonwealth lost the most important Baltic seaport in the region and an entry point to northern Livonia, the Daugava River crossing. The 1622 Truce of Mitawa left Poland in possession of Courland and eastern Livonia, but the Swedes were to take over most of Livonia north of the Daugava. The Lithuanian forces were able to keep Dyneburg, but suffered a heavy defeat at the Battle of Wallhof.
The losses severely impacted the trade and customs income of the Grand Duchy of Lithuania. The Crown lands were also affected, as in July 1626 the Swedes took Pillau and forced Duke George William, Elector of Brandenburg and the Commonwealth's vassal in the attacked Ducal Prussia, to assume a stance of neutrality. The Swedish advance resulted in the take-over of the Baltic coastline up to Puck. Danzig (Gdańsk), which remained loyal to the Commonwealth, was subjected to a naval blockade.
The Poles, completely surprised by the Swedish invasion, attempted a counter-offensive in September, but were defeated by Gustavus Adolphus at the Battle of Gniew. The forces required serious modernization. The Sejm passed high taxation for the defense, but collections lagged behind. The situation was partially saved by the City of Danzig, which hurriedly embarked on the construction of modern fortifications, and by Hetman Stanisław Koniecpolski. The accomplished commander of the eastern borderlands fighting quickly mastered maritime affairs and contemporary methods of European warfare. Koniecpolski promoted the necessary enlargement of the naval fleet and modernization of the army, and became a fitting counterbalance to the military abilities of Gustavus Adolphus.
Koniecpolski led a spring 1627 military campaign, trying to keep the Swedish army in the Duchy of Prussia from moving toward Danzig, while also intending to block their reinforcements arriving from the Holy Roman Empire. Moving quickly, the Hetman recovered Puck and then destroyed the forces intended for Gustavus at the Battle of Czarne (Hammerstein). The Swedes themselves were kept by Koniecpolski's forces near Tczew, shielding the approach to Danzig and preventing Gustavus Adolphus from reaching his main objective. At the Battle of Oliva the Polish ships defeated a Swedish naval squadron.
Danzig was saved, but the next year the Swedish army, strengthened in Ducal Prussia, took Brodnica and early in 1629 defeated the Polish units at Górzno. From his position on the Baltic coast, Gustavus Adolphus laid an economic siege against the Commonwealth and ravaged what he had conquered. At this point allied forces under Albrecht von Wallenstein were brought in to help keep the Swedes in check. Forced by the combined Polish-Austrian action, Gustavus had to withdraw from Kwidzyn to Malbork, in the process being defeated and almost taken prisoner by Koniecpolski at the Battle of Trzciana.
But in addition to being militarily exhausted, the Commonwealth was now pressured by several European diplomacies to suspend further military activities, to allow Gustavus Adolphus to intervene in the Holy Roman Empire. The Truce of Altmark left Livonia north of the Daugava and all Prussian and Livonian seaports except Danzig, Puck, Königsberg and Libau in the hands of the Swedes, who were also allowed to charge duty on trade passing through Danzig.
Compromised power
As Władysław IV was assuming the Commonwealth crown, Gustavus Adolphus, who had been working on organizing an anti-Polish coalition comprising Sweden, Russia, Transylvania and Turkey, died. The Russians then undertook an action of their own, attempting to recover the lands lost in the Truce of Deulino.
In the fall of 1632 a well-prepared Russian army took a number of strongholds on the Lithuanian side of the border and commenced a siege of Smolensk. The well-fortified city was able to withstand a general onslaught followed by a ten-month encirclement by the overwhelming force led by Mikhail Shein. A Commonwealth rescue expedition of comparable strength then arrived, under the highly effective military command of Władysław IV. After months of fierce fighting, Shein capitulated in February 1634. The Treaty of Polyanovka confirmed the Deulino territorial arrangements with small adjustments in favor of the Tsardom. Władysław relinquished, in return for monetary compensation, his claims to the Russian throne.
Having secured the eastern front, the King was able to concentrate on the recovery of the Baltic areas lost by his father to Sweden. Władysław IV wanted to take advantage of the Swedish defeat at Nördlingen and fight for both the territories and his Swedish dynastic claims. The Poles were suspicious of his designs and war preparations, and the King was able to proceed only through negotiations, in which his unwillingness to give up the dynastic claim weakened the Commonwealth's position. Under the Treaty of Stuhmsdorf of 1635 the Swedes evacuated Royal Prussia's cities and ports, which meant the return of the Crown's lower Vistula possessions, and stopped collecting customs duties there. Sweden retained most of Livonia, while the Rzeczpospolita kept Courland, which, having assumed the servicing of Lithuania's Baltic trade, entered a period of prosperity.
The position of the Commonwealth with respect to the Duchy of Prussia kept getting weaker, as power in the Duchy was taken over by the Electors of Brandenburg. Under the electors the Duchy became ever more closely linked to Brandenburg, which was harmful to the political interests of the Commonwealth. Sigismund III left the Duchy's administration in the hands of Joachim Frederick, and then of John Sigismund, who in 1611 acquired the right of Hohenzollern succession in the Duchy with the consent of the King and the Sejm. He actually became Duke of Prussia in 1618, after the death of Albert Frederick, and was followed by George William and then Frederick William, who in 1641 in Warsaw paid, for the last time, a Prussian homage to a Polish king. The successive Brandenburg dukes would make nominal concessions to satisfy the Commonwealth's expediencies and justify the granting of privileges, but an irreversible shift in the relationship was taking place.
In 1637 Bogislaw XIV, Duke of Pomerania and the last of the Duchy of Pomerania's Slavic Griffin dynasty, died. Sweden acquired rule over Pomerania, while the Commonwealth was only able to recover its fiefs, the Bytów and Lębork Lands. Władysław IV also sought the Słupsk Land at the peace conference, but it ended up part of Brandenburg, which after the Peace of Westphalia controlled all of Pomerania adjacent to the Commonwealth's border, extending south to where it met the Habsburg lands. Portions of Pomerania were populated by the Slavic Kashubians and Slovincians.
The Thirty Years' War period brought the Commonwealth a mixed legacy, rather more losses than gains, with the Polish-Lithuanian state retaining its status as one of the few great powers of central-eastern Europe. From 1635 the country enjoyed a period of peace, during which internal bickering and progressively dysfunctional legislative processes prevented any substantial reforms from taking place. The Commonwealth was thus unprepared to deal with the grave challenges that materialized in the middle of the century.
See also
- History of Poland during the Jagiellon dynasty
- Polish–Lithuanian Commonwealth
- History of Poland (1569–1795)
- History of the Polish–Lithuanian Commonwealth (1648–1764)
a.^ Contemporary accounts report widespread killing, acts of cruelty and abuse committed by the forces of the Polish–Lithuanian Commonwealth in Russia. Atrocities were commonly practiced by both sides, but the military offensives were undertaken by the Poles, who had to deal with the local civilian population. Aleksander Gosiewski, the first commandant of the Polish garrison at the Kremlin in 1610, vainly tried to curb his subordinates' misbehavior by imposing harsh penalties on them. Hetman Stanisław Żółkiewski wrote of a great slaughter in Moscow, "as on the Day of Judgement", clearly sympathizing with the untold losses and the plight of the extensive, prosperous and affluent Russian capital, burning and laid waste amid enormous bloodshed.
Gosiewski ordered the use of fire to expel the Russian opponents; the fires caused the death of 60,000 people in Moscow. Gosiewski had the deposed Tsar Shuysky and his brothers deported to Poland, and had Patriarch Hermogenes imprisoned after the patriarch (successfully) called for an uprising against the Poles and their supporters.
In this section, we begin with a look at the balance of payments, followed by a discussion of exchange rate determination; in the next section we turn to policy implications.
The balance of payments is used to record the value of the transactions carried out between a country's residents and the rest of the world. The balance of payments is composed of two accounts:
- the current account, which records trade in goods and services (1), and
- the capital account, which records flows of savings and financial capital.
(1) Since the large majority of the current account deals with trade in goods and services, we will discuss only this component of the current account.
The balance of payments sums to zero because of the symmetry of the current and capital accounts: a deficit in one account is matched by a surplus in the other.
Consider the purchase of a Japanese-made VCR by an American. This is considered an export from Japan and an import into the United States. When the U.S. consumer purchases the VCR, he pays in dollars (which increases the value of imports in the current account). However, the producer in Japan receives yen. The exchange of dollars for yen takes place in the foreign exchange markets. The important point is that the Japanese producers do not want dollars; instead they need yen to pay their employees, suppliers, and dividends to shareholders. The dollars return to the U.S. through the capital account.
We end up with a circular flow of currency. In the example given here, the dollars spent on a Japanese VCR will:
- increase the value of imports into the U.S.,
- reduce the value of the current account surplus ($ exports − $ imports > 0) or increase the value of the current account deficit ($ exports − $ imports < 0),
- increase the supply of dollars in foreign exchange markets (we will soon see the effect on the yen/$ exchange rate), and
- eventually return to the U.S. through the capital account.
Consider an example. Assume that initially both the current account and the capital account are in balance; NX = 0. Now you rush out and buy a new $200 Toshiba VCR made in Japan and imported into the United States. The result is a $200 current account deficit (exports = 0, imports = $200). Back in Japan, Toshiba's bank statement shows a $200 credit, but Toshiba needs to withdraw yen to pay its manufacturing costs. As a result, Toshiba's bank uses the foreign exchange market to convert the dollars into yen, so Toshiba's bank credit is actually denominated in yen.
As a result of your purchase, the supply of dollars in the foreign exchange markets has increased by $200. Now let us assume that a Japanese saver wants to buy $200 worth of stock shares of a U.S. company, the fast-growing Sun Microsystems for example. She will deposit $200 worth of yen (2) in her brokerage account. Her investment bank will convert the yen into $200 and execute the purchase of Sun Microsystems shares. Consequently, your $200 has made its way back into the U.S. through the capital account.
(2) If we assume an initial exchange rate of ¥ 100/$1, the Japanese saver deposits ¥ 20,000 in her Tokyo account.
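The double-entry logic of this example can be made explicit in a few lines of code. The sketch below is ours, not part of the text: the variable names are invented, and the figures are simply the chapter's $200 example.

```python
# A minimal sketch of the balance-of-payments accounting in the VCR
# example above. All names and figures are illustrative.

exports = 0            # the U.S. sells nothing abroad in this example
imports = 200          # the $200 Toshiba VCR bought by the U.S. consumer
current_account = exports - imports                  # -200: a deficit

capital_inflow = 200   # the Japanese saver's purchase of U.S. shares
capital_outflow = 0
capital_account = capital_inflow - capital_outflow   # +200: a surplus

# The two accounts offset each other, so the balance of payments is zero.
assert current_account + capital_account == 0
```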
Although this example is simplistic, it shows the basic linkage between the current and capital accounts. In general, a deficit or surplus in one account will show up as the opposite in the other account. Later in this topic we will show the linkage between changes in the capital account in Mexico and swings in its current account. Thus the linkage can work either way: from the current account to the capital account, or from the capital account to the current account.
There are exceptions to the one-to-one tradeoff. For example, the dollar is used throughout the world as an alternative to transactions in the domestic currency. As a result, some dollars may remain in Russia, as goods may be paid for in either rubles or dollars in the markets of Moscow. This is a relatively minor factor which will have no impact on our analysis.
When considering a nation's current account surplus or deficit, we need to consider some determinants of the level of its total imports and exports of goods and services. For now, let us assume that its trade is not influenced by trade barriers such as tariffs and quotas; we will look at these effects later.
There are four primary considerations when examining activity in a nation's current account:
- Changes in economic growth rates and national income will have a significant influence on the amount of goods and services that a country imports. As GDP growth and consumer incomes rise, purchases of goods and services also increase. Part of this increased consumption will be composed of foreign imports. For example, for every dollar that U.S. consumers spend on goods and services, about 10%, or 10 cents, is spent on imported goods. Thus we can expect a positive correlation between economic growth and the level of imports.
We also need to consider the GDP growth of a nation's trading partners. High growth rates abroad will lead to increases in the demand for another country's exports as foreign consumers increase their purchases of goods and services. Therefore, relative economic growth rates are the key variable. If a nation's economy is growing faster than those of its trading partners, then the value of imports will increase faster than exports and net exports will decline.
- Changes in relative prices or inflation rates will determine the comparative prices of imports and exports for a nation. A country that is experiencing higher inflation rates than its trading partners will see the relative prices of exports increase and the relative prices of imports decline. In general:
- With an increase in domestic inflation, the prices of goods that are exported also rise. As foreign consumers pay a higher price for imported goods from the country with higher inflation rates, they are likely to switch to lower-priced substitute goods produced by domestic firms or imported from other countries.
For example, if inflation rates in Japan rise, the prices of Japanese exports throughout the world also increase in tandem (we assume). American consumers now must pay a higher price for goods imported from Japan, such as VCRs, automobiles, and televisions. As the relative price of Japanese imports increases (assume U.S. and all other inflation rates remain steady), U.S. consumers look for substitutes in consumption, such as American-made cars or Korean VCRs.
- The opposite holds true when domestic inflation rates decline relative to the inflation rates of trading partners. When a country's inflation rate falls, the relative prices of its exports decline and the prices of imported goods rise in comparison.
The basic rule is that higher domestic inflation rates relative to those of other trading countries lead to a decrease in net exports: higher inflation increases the relative prices of exports and decreases the relative prices of imports.
Lower domestic inflation rates relative to those of other trading countries lead to an increase in net exports: they decrease the relative prices of exports and increase the relative prices of imports. (A compact way to formalize this rule is sketched just after this list.)
- Changes in tastes will influence net exports. Consumer preferences change for a number of reasons and may affect their purchases of goods. Domestic governments and producers may appeal to patriotism and support of local jobs in attempts to increase consumption of domestically made goods and services relative to imports. However, over time with the increasing globalization of the world economy and methods of production, it has become increasingly difficult to find goods which are truly domestically made. Increasingly, manufacturers import parts and components from all over the world to produce a consumer good.
- Factors that determine comparative advantage such as production costs, technology, and worker skills all are important considerations when considering a country's current account trade balance. For example, increases in technology improve worker productivity and lower production costs. As the firm passes these savings on in the form of lower prices for consumer goods, exports increase as the relative price of the good falls in world markets. A nation that promotes education, worker skills, and technical research should see an expansion of its export markets over time as product quality improves and cost decreases.
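As promised in the inflation item above, the relative-price rule can be formalized compactly. This is a standard textbook construction rather than anything defined in this text, so treat the notation as our assumption:

```latex
% Real exchange rate (standard definition; the notation is ours):
%   e   = nominal exchange rate, measured here in yen per dollar
%   P   = U.S. price level,   P* = Japanese price level
\[
  \varepsilon \;=\; e \times \frac{P}{P^{*}}
\]
% Higher U.S. inflation raises P relative to P*, so the real rate
% \varepsilon rises: U.S. goods become relatively more expensive to
% foreign buyers and U.S. net exports fall, matching the rule above.
```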
The capital account deals with monetary flows into and out of a nation's financial markets. The most important determinant of financial flows are interest rates, which determine the rate of return on savings. In addition to interest rates, we should consider:
- The potential return on direct investment. Foreign firms will consider direct investment in a country the greater the potential return. By direct investment, a foreign firm may make a financial agreement with a domestic firm to provide money for capital expansion and research and development. Or the foreign firm may produce goods and services abroad, bringing in the money to build productive capacity.
- The potential return on financial assets such as real estate and equities will have important effects in the capital market. The combination of a developing country and volatile stock markets can lead to sudden and dramatic changes in capital account currency flows. Later in this topic we will examine events in Mexico during 1994 and 1995 as an example of how hot money flows in the capital account can seriously disrupt a country.
Returning to interest rates, the higher a country's interest rates, the more attractive its financial markets are to both domestic and foreign savers.
As domestic interest rates increase relative to rates in other countries, the relative rate of return in that country's financial markets rises, which attracts savings and financial capital. This leads to an increased inflow of money through the capital account and less money leaving a country in search of higher foreign interest rates, also through the capital account.
Throughout this course we have examined how equilibrium is determined in various markets. We began with the product market, looking at the supply and demand for a good. When supply equaled demand, the market price was decided. As we progressed, we saw how the equilibrium of the supply of labor and labor demand set the wage rate and how equilibrium in the market for loanable funds (the capital market) determined the rate of interest. Next we examined macroeconomic markets where the interaction of aggregate demand and aggregate supply changed macroeconomic prices (inflation) and output (GDP). Using the same basic analysis of supply and demand, we will now see how exchange rates are determined.
Exchange rates give us the price of one currency in relation to another. As with any good, the relative price of two currencies is determined by the supply and demand of the currencies in exchange rate markets. We can use basic fundamentals to explain how a domestic currency's price changes in relation to another. For a floating currency, its price in relation to another currency is determined by conditions of supply and demand for the currency.
Before we begin our analysis of floating exchange rates, note that a currency's value can also be determined in two other ways: it may be fixed (pegged) by the government to another currency, or it may float within limits under a managed ("dirty") float.
Figure 12-1 shows the demand and supply of dollars in foreign exchange markets. We label the horizontal axis with the quantity of dollars in the foreign exchange markets. We will measure the value of the dollar in relation to the Japanese yen, so we label the vertical axis as the yen/$ ratio, or the price of the dollar in relation to the yen. The dollar's value is determined by the equilibrium of the demand for the dollar in foreign exchange markets with the supply of dollars in the same markets. In this case one dollar is worth 100 yen (¥ 100/$1). Or equivalently, it takes 100 yen to buy one dollar.
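To make the figure concrete, here is a small numerical sketch. The linear demand and supply schedules are our own illustrative assumptions, with slopes and intercepts chosen so the market clears at the figure's ¥ 100/$1:

```python
# Hypothetical linear demand and supply curves for dollars in the
# foreign exchange market. Quantities are in billions of dollars;
# the price is measured in yen per dollar. All coefficients are
# assumptions chosen to reproduce the figure's equilibrium.

def dollar_demand(yen_per_usd: float) -> float:
    return 500 - 3 * yen_per_usd      # demand slopes downward

def dollar_supply(yen_per_usd: float) -> float:
    return 100 + 1 * yen_per_usd      # supply slopes upward

# Equilibrium: 500 - 3p = 100 + p  ->  4p = 400  ->  p = 100
p_star = 400 / 4
print(p_star)                                        # 100.0 yen per dollar
print(dollar_demand(p_star), dollar_supply(p_star))  # 200.0 200.0
```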
Now that we see how the demand and supply of dollars in foreign exchange markets determine the dollar's value relative to other currencies, let us consider changes in that value. We will first examine how current account conditions impact the U.S. dollar, and then we will examine the capital account.
First, let's examine how a change in economic growth affects the current account. Throughout this section it will be important to emphasize the ceteris paribus condition, or holding everything else constant. In this case let us assume that a tax cut increases U.S. GDP growth. Importantly, we hold everything else constant such as inflation rates and the rate of Japanese GDP growth.
Accelerating economic growth in the U.S. increases incomes and consumption. As U.S. consumers increase their consumption, part of this will include additional spending on imports. Over 10 cents out of every dollar spent by U.S. consumers is on imported goods. As economic growth increases in the U.S., so does spending on imports, reducing the value of net exports.
Remember what happens when a U.S. consumer buys an import. In Baltimore, the consumer pays for a Japanese good in dollars. The dollars make their way over to Japan through the foreign exchange markets where they are converted to yen. The result is to increase the supply of dollars in foreign exchange markets, which we will show below, and also increase the demand for yen (which we will not show).
As U.S. consumers increase their consumption of imports, the supply of dollars in foreign exchange markets rises. Figure 12-2 shows a rightward shift in the supply of dollars from S$0 to S$1 as U.S. consumers increase their purchases of imports. As a result, the price or value of the dollar falls from P$0 to P$1 in relation to the yen. This is also known as a depreciation of the dollar.
This also means that the yen has appreciated, which could also be shown by looking at the demand for yen using the demand and supply for yen and the $/yen ratio.
The depreciation of the dollar could be expressed numerically. For example, the original yen/dollar ratio was ¥ 100/$1, and after the dollar depreciates let us say the new yen/dollar ratio is ¥ 90/$1. Originally, one dollar would buy the equivalent of ¥ 100 worth of Japanese goods and services. At the new value, the purchasing power of one dollar has fallen to ¥ 90 worth of Japanese goods and services.
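Continuing the illustrative curves from the earlier sketch, a rightward supply shift reproduces this depreciation numerically. The shifted intercept below is an assumption chosen so that the new equilibrium lands exactly at ¥ 90/$1:

```python
# Same hypothetical demand curve as in the earlier sketch; the supply
# of dollars shifts right by an assumed 40 billion at every price.

def dollar_demand(yen_per_usd: float) -> float:
    return 500 - 3 * yen_per_usd

def dollar_supply_shifted(yen_per_usd: float) -> float:
    return 140 + 1 * yen_per_usd

# New equilibrium: 500 - 3p = 140 + p  ->  4p = 360  ->  p = 90
p_new = (500 - 140) / 4
print(p_new)                                          # 90.0 yen per dollar
print(dollar_demand(p_new), dollar_supply_shifted(p_new))  # 230.0 230.0

# The dollar has depreciated by 10% against the yen ...
print((100 - 90) / 100)                               # 0.1
# ... while the yen has appreciated by roughly 11.1% against the dollar.
print(round((1 / 90 - 1 / 100) / (1 / 100), 3))       # 0.111
```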
Now consider the capital account. Let us construct a scenario where the U.S. Federal Reserve uses restrictive monetary policy to raise U.S. interest rates. Importantly, we hold all other economic variables constant, such as foreign interest rates.
In the previous example, changes in the level of imports affected the amount of dollars in the foreign exchange market through the current account, which measures trade in goods and services. Movements in interest rates impact the number of dollars in foreign exchange markets through the capital account. The capital account measures currency flows of savings and financial capital (or the supply of loanable funds). In the current account, dollars end up in the foreign exchange markets as a result of a purchase or sale of a tangible good or service. Money moves in the capital account seeking more favorable rates of return (e.g., higher interest return) or greater stability. No transactions of goods and services occur in this case; only the movement of money. Money that moves either through the capital or the current account still ends up in the same foreign exchange market; it just takes a different route (via accounting definitions) getting there.
With an increase in U.S. interest rates (holding all foreign interest rates constant), money from foreign savers enters the U.S. seeking a higher relative return. Consider the Japanese saver in Yokosuka making a purchase of U.S. Treasury Bills because he seeks a higher rate of return than he can earn domestically. The saver deposits his money with his brokerage in Japan and the broker takes the yen deposit and exchanges the yen for dollars in the foreign exchange market. The dollars are then used to buy a U.S. T-bill in America. The Japanese saver becomes the owner of the U.S. T-bill. Importantly, this transaction has increased the demand for dollars in the foreign exchange market. The demand for dollars increases as yen are exchanged for dollars in the foreign exchange market in order to make the T-bill purchase.
Figure 12-3 shows the impact of higher U.S. interest rates. As foreign savers send their money over to the U.S. to purchase American financial assets, the demand for dollars increases from D$0 to D$1. This raises the price or value of the dollar relative to the yen from P$0 to P$1. This is known as an appreciation of the dollar.
Subsequent Effects of a Currency Revaluation
We conclude this section with a table that shows how changes in exchange rates affect the relative prices of goods. Assume the initial yen/dollar exchange rate is ¥ 100/$1 and the dollar depreciates to ¥ 90/$1. We then look at the impact on a U.S.-made mountain bicycle and a Japanese-made television. The bike's price to U.S. consumers remains fixed at $500, and the television's price to Japanese consumers remains fixed at ¥ 36,000, regardless of the exchange rate. As the table below shows, at the initial exchange rate of ¥ 100/$1, Japanese consumers pay ¥ 50,000 for the imported U.S.-made bike, and U.S. consumers pay $360 for the imported TV from Japan. For simplicity, assume the price paid for an import simply reflects the producer's home-market price converted at the exchange rate.
Comparative Prices of Goods in Japan and the United States (¥ 36,000 Japanese-made TV; $500 U.S.-made bike)

| Exchange Rate | TV: Price in Japan | TV: Price in U.S. | Bike: Price in U.S. | Bike: Price in Japan |
| ¥ 100 = $1 | ¥ 36,000 | $360 | $500 | ¥ 50,000 |
| ¥ 90 = $1 | ¥ 36,000 | $400 | $500 | ¥ 45,000 |
Now consider the change in the relative prices that consumers pay for imports when the dollar depreciates to ¥ 90/$1. While the price of the bike does not change for U.S. consumers, the increased purchasing power of the yen reduces the price that Japanese consumers pay for the imported bike to ¥ 45,000. Likewise, the price of a Japanese-made television purchased by U.S. consumers rises to $400. From the table we can see that a depreciation of a country's currency (the dollar in this example) increases the price of its imports and lowers the price of its exports to foreign buyers. An appreciation of a nation's currency (e.g., the yen) increases the relative prices of its exports while making imports cheaper for domestic consumers.
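The table's arithmetic can be reproduced in a few lines. This is an illustrative sketch only (not from the original text); as in the table, prices are assumed fixed in each producer's home currency, and only the exchange rate varies.

    # Reproducing the comparative-price table above.
    tv_price_yen = 36_000    # Japanese-made TV, priced in yen
    bike_price_usd = 500     # U.S.-made bike, priced in dollars

    for yen_per_dollar in (100, 90):
        tv_in_us = tv_price_yen / yen_per_dollar         # dollars a U.S. buyer pays
        bike_in_japan = bike_price_usd * yen_per_dollar  # yen a Japanese buyer pays
        print(f"At ¥{yen_per_dollar}/$1: TV = ${tv_in_us:.0f} in the U.S., "
              f"bike = ¥{bike_in_japan:,} in Japan")

Running the loop prints the table's two rows: $360 and ¥ 50,000 at ¥ 100/$1, and $400 and ¥ 45,000 at ¥ 90/$1.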
Extending this concept to net exports, we see that a depreciating currency will eventually increase exports and decrease imports, improving net exports. In contrast, an appreciating currency reduces exports and increases imports, reducing net exports. Of course, we are isolating the impact of changes in the exchange rate alone; many other factors could work in opposition or in complement to the exchange-rate effect. For a more realistic appraisal of overall trade, refer to the earlier part of this section on the factors determining the current and capital accounts. However, we can conclude that changes in exchange rates do alter the relative prices of imports and exports.
One of the strengths of the US economic system (and of many other countries as well) is the independence of the Federal Reserve (central bank) from the political process (executive and legislative). While the Chair of the Federal Reserve and several members of the Federal Open Market Committee are political appointees, once in place they can conduct monetary policy independent of political pressure from the White House and Congress. For the most part, Presidents and Congress have been wise in making Federal Reserve appointments based on the qualifications of the appointee and not on a litmus test of potential obedience to political causes.
A central bank such as the Federal Reserve performs the critical role of issuing and controlling the domestic money supply. If the central bank lacks independence from a nation's political rulers, inflation is often the result. To gain the favor of voters, politicians like to spend and distribute money. Since taxes are unpopular, the easiest source of money to spend is to order the central bank simply to print it for the government to distribute. The result is an ever-increasing amount of money chasing a given amount of goods, and inflation quickly takes off.
In an attempt to break the close connection between the central bank and the politicians, a few countries have taken extreme measures to control the money supply and thus the inflation rate. Perhaps the strongest remedy was recently undertaken by Ecuador. Ecuador abandoned its domestic currency, the sucre, in favor of making the US dollar the national currency. This is a process known as dollarization. To be clear, Ecuador now uses the same dollars that Americans use in the United States.
Here are some of the considerations of the dollarization in Ecuador:
· The government of Ecuador cannot print dollars - this would be counterfeiting.
· The major sources of dollars are exports and tourism.
· The government of Ecuador gives up a tremendous amount of power in monetary policy. Unless the government collects ample dollar reserves, it will have trouble increasing the supply of money to lower domestic interest rates.
· For the purposes of foreign trade, Ecuador is now tied to the value of the dollar. If the dollar appreciates against other currencies such as the euro or yen, the prices of exports from Ecuador will rise for foreign consumers, just as they do for exports from the United States.
· For the reasons discussed above, the money supply of Ecuador is very limited and the government has no ability to print money to finance its activities. If the government runs a budget deficit, it must borrow the money in financial markets by issuing debt and/or raise taxes to increase revenues. As a result, inflation due to excessive monetary stimulus is no longer a problem.
Is dollarization a long-run solution for Ecuador? Certainly giving up the domestic currency and adopting a foreign one is an extreme measure, both economically and socially.
A step short of dollarization is to use what is known as a currency board. A country that implements a currency board still uses the domestic currency but is required to hold a major foreign currency at a one-to-one ratio. For example, in the 1990s Argentina started a currency board to limit the supply of pesos and control inflation - the source of the inflation was the same as with Ecuador, too many pesos printed by the government. Argentina's currency board required the government of Argentina to hold a dollar in reserve for every peso in circulation. The only way for the government to increase the number of pesos was to acquire additional dollars - primarily from exports and tourism. Furthermore, the currency board required Argentina to fix the exchange rate of the peso at a rate of one peso per dollar.
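The board's one-to-one reserve rule can be stated as a simple constraint. The sketch below is hypothetical (the function name and figures are invented for illustration): pesos in circulation may never exceed the dollars held in reserve.

    # Hypothetical illustration of the 1:1 currency-board constraint.
    def pesos_issuable(dollar_reserves: float, pesos_in_circulation: float) -> float:
        """New pesos may be issued only up to the dollars held in reserve."""
        return max(0.0, dollar_reserves - pesos_in_circulation)

    # The only way to expand the money supply is to acquire more dollars.
    print(pesos_issuable(dollar_reserves=10_000.0, pesos_in_circulation=9_200.0))  # 800.0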
Argentina's currency board created some severe problems and was abandoned in 2002. A major source of Argentina's trouble was the appreciation of the dollar, and thus of the peso that was fixed to it at a one-to-one ratio. As the dollar, and thus the peso, appreciated in the 1990s, Argentina's export prices increased. Argentina is very dependent on agricultural exports (such as beef) to neighboring countries. As the price of imports from Argentina increased for buyers in Brazil, cheaper substitutes from other countries were found (e.g., beef from Peru), further damaging Argentina's economy, which had been suffering from a recession that started in 1998.
While a solution to Argentina's foreign exchange problems appeared to be obvious - allow the peso to devalue against the dollar - in reality things were more complicated. Once the currency board was established, the government of Argentina lacked the ability to print pesos to accommodate continued deficit spending - it turned to foreign debt markets instead. Argentina's government issued tremendous amounts of government debt, mostly sold to foreign lenders who liked the fact that payments were made in dollars. When Argentina's government sold debt, the interest payments and eventual reimbursement would be made in US dollars. With a currency board and a one-to-one exchange rate, it did not really matter to the government if it paid in dollars or pesos.
By early 2002, Argentina accounted for one-fourth (25%) of all debt sold by emerging country governments. Argentina's businesses were also large borrowers in foreign markets, typically making their debt payments in dollars as well. Consider the consequence of a peso devaluation, to 3 pesos to a dollar for example. Before the devaluation, the government of Argentina or a business would have to raise one peso in revenues (through taxes for the government, sales for the firm) to convert to a dollar in order to make a dollar in debt payments. After the devaluation of the peso to 3 pesos per dollar, the government or business would have to raise 3 pesos domestically to convert into a dollar to make a dollar in debt payments. The price of debt servicing would have tripled. In a practical sense, with the peso devaluation, the government would have to impose a significant and unpopular tax increase, and businesses would have to raise prices to collect the additional pesos to service the outstanding debt.
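A short sketch of this debt-servicing arithmetic (the figures are the assumed ones from the example above):

    # Pesos that must be raised per dollar of debt service, before and after
    # a hypothetical devaluation from 1 peso/$ to 3 pesos/$.
    for pesos_per_dollar in (1, 3):
        pesos_needed = 1.0 * pesos_per_dollar  # per $1 of interest owed
        print(f"At {pesos_per_dollar} peso(s) per dollar: raise "
              f"{pesos_needed:.0f} peso(s) per dollar of debt service")
    # The cost of servicing dollar-denominated debt triples after the devaluation.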
In 2002, as its financial situation deteriorated and foreign lenders stopped purchasing Argentine debt, Argentina abandoned the currency board and allowed the peso to float, whereupon it devalued sharply. Unable to raise the additional pesos to make dollar-denominated interest payments, the government of Argentina defaulted on its foreign debt; in effect, the government declared bankruptcy.
In 1999, many European countries gave up their sovereign currencies to form the European Monetary Union (EMU) and create a joint currency known as the euro; euro notes and coins entered general circulation in 2002. Members of the currency union include economic powers like Germany and smaller nations including Ireland and Portugal. The monetary union is directed by the European Central Bank (ECB), which acts like any central bank: it issues the money, controls the money supply and interest rates, and regulates banks, among other activities. The ECB is headquartered in Frankfurt, Germany.
In practice, EMU countries are undertaking an economic integration. Rather than dealing with multiple currencies and exchange rates, there is now a single currency that all member nations use for transactions and a single exchange rate with other major currencies. Furthermore, monetary policy is focused on overall economic conditions in the euro zone, not on one country. Fiscal policy, although remaining under national control, is restricted for each member country by the stability criteria. As a condition of membership, the stability criteria require that a country's fiscal budget deficit not exceed 3% of its GDP. As a result, countries are limited in the fiscal stimulus they can give the domestic economy through government spending and tax policies.
An additional feature of the EMU is the establishment of more open borders between member countries. The movement of goods and workers between nations is much freer, and tariffs are abolished between member countries. Aside from easing the transport of goods, this gives workers the flexibility to move to a country that offers better employment and wage opportunities when necessary.
The economic benefits of the EMU appear substantial, and member countries have a good incentive to participate. While the EMU makes huge strides in economically integrating its members, each retains its political sovereignty; domestic politicians have, however, lost much of their economic power to the EMU authority and the ECB. Major tests of the EMU's longevity will come when there are asymmetric shocks. For an example of an asymmetric shock, assume that Germany and most of the other EMU countries face inflationary pressures while France has a weak domestic economy and a rising unemployment rate. Since the overall economic condition of the EMU is one of rising inflation risk, the prudent policy for the ECB is to raise interest rates to slow economic growth across the EMU and dampen inflationary pressures. However, this will only push France's already troubled unemployment rate higher.
A good analogy is found in the United States, where it is common for a state or a region to face economic circumstances different from the country as a whole. For example, in the 1980s when oil prices collapsed, oil-producing states like Oklahoma and Texas were hurt, while much of the rest of America was helped by lower gasoline prices. The term "rust belt" described some of the Midwestern states that produced much of the nation's automobiles and steel; these states were devastated by the 1982 recession, while other parts of the country felt only minimal impacts. Another example is the "peace dividend" realized in the early 1990s when the Soviet Union disintegrated. California, a state very dependent on the defense industry, experienced a local recession as the US government slashed defense expenditures after the Gulf War. The Federal Reserve did not respond by lowering interest rates to help California; in fact, the Fed raised interest rates through much of 1994 in response to overall macroeconomic conditions in the United States. With diminished job opportunities in the local economy, there was an exodus of residents to states like Colorado, where the local economy was booming and jobs were plentiful.
Returning to our example in Europe, if France is suffering from a "local" recession while the ECB is raising interest rates to reduce inflationary pressures in the overall European macroeconomy, French workers may have to consider migrating to another European country where job opportunities are better. Workers who move from California to Colorado need to adjust to the lack of ocean access and some other minor changes. In contrast, a French citizen who migrates to Spain must learn a new language and adapt to a very different culture.
Another consideration is income transfers. In the United States, in Europe, and in many other countries, unemployed workers receive assistance from the government until they can find a new job. This represents an income transfer from those who are currently working and paying taxes to the unemployed who receive benefits. The problem could become acute in the EMU if a country suffers from chronically high unemployment: such a country would be a net recipient of income transfers from other EMU countries that have managed their economies better. There could be some resentment of this between countries with a long history of grievances.
Historically, no currency union between different countries has ever succeeded in the long run. At any time, a member nation can vote to drop out of the EMU and there are many popular politicians in Europe who include quitting the EMU as part of their platform. The major test of the euro will come when economic times become very difficult for some of the member nations.
Copyright © 2002, Jay Kaplan
All rights reserved | http://www.colorado.edu/Economics/courses/econ2020/section12/section12-main.html | 13 |
21 |
Coal mining is a term that encompasses the various methods used to extract the carbon-containing rock called coal from the ground. Coal tends to occur in seams, lateral layers under the earth that may vary in thickness from one or two feet to dozens of feet.
Coal mining methods
Mining of coal seams is achieved in several different ways. "Strip" mines scrape coal from the earth's surface; they may be large open pits or, on a mountain, ribbons chewed away around the perimeter at each level where a seam of coal exists. So-called "drift" mines angle horizontally into a mountainside and may be very shallow (i.e., not tall enough for a person to stand up in). "Shaft" mines, also called deep mines, reach down vertically and open into person-sized horizontal tunnels that may extend for miles.
Strip mines remove any topsoil with bulldozers to get at coal near the earth's surface. Coal is excavated from the ground in what become large pits, or else ribbons of stripped land stretch around a mountain. After strip mining has exhausted the available surface coal, the mining company often abandons the site with no restoration, leading to severe erosion problems (with resultant flooding or pollution) and to an unsightly landscape that cannot support plant growth for lack of topsoil. Drift mines may be used after strip mining has used up surface coal. Because drift mines tend to be shallow, special equipment may be required to work them, for example short vehicles on which workers lie flat while moving inside the tunnel. Drift mining is common in the extreme southwest corner of Virginia. Deep mines are similar to those for any other mineral deposit found deep enough in the earth that the cost of removing the overburden is prohibitive. Shafts are dug, and seams of coal are excavated and transported to the surface.
Deep and drift mining safety
Early mining methods led to very unsafe mines with irregularly spaced supporting pillars, which often were not represented on maps at all, or were represented inaccurately. Modern mines have regular pillars at safe intervals of known thickness.
Even using the best known methods, underground coal mining is hazardous work. In addition to the hazard of simple cave-ins, miners must contend with tunnel flooding, accumulation of "bad air" (gases lacking enough oxygen), accumulation of explosive gases resulting in fires and/or cave-ins, and many other unexpected problems. Bad air or water can flood a tunnel without warning if a pocket of non-oxygenated gas or water is breached while removing coal from a seam.
Following known best practices can reduce the likelihood of extensive loss of life in catastrophic mining accidents. The poor safety records of some mine owners led to the formation of labor unions around the world, and today there remains a high degree of solidarity among mine workers. Mining deaths that arguably could have been prevented with appropriate safety equipment, training, and procedures still occur periodically.
History of coal mining
Coal has been used for centuries in small-scale furnaces. Around 1800 it became the main energy source for the Industrial Revolution, with the expanding railway systems of many countries among its prime users. Britain developed the main techniques of underground mining from the late 18th century onward, with further advances continuing through the 19th and early 20th centuries.
By 1900 the United States and Britain were the chief producers, followed by Germany.
However, oil became an alternative fuel after 1920 (as did natural gas after 1980). By the mid-20th century, coal had for the most part been replaced in domestic, industrial, and transportation use by oil, natural gas, or electricity produced from oil, gas, nuclear, or water power.
Since 1890 coal has also been a political and social issue. Coal miners' labor unions became powerful in many countries in the 20th century. Often, the miners were leaders of left or Socialist movements (as in Britain, Germany, Poland, Japan, Canada and the U.S.). Since 1970, environmental issues have been paramount, including the health of miners, destruction of the landscape from strip mines and mountaintop removal mining, air pollution, and contribution to global warming. Coal remains the cheapest energy source, by a margin of roughly 50%, and in many economies (such as the U.S.) it is the primary fuel used in electricity generation.
Coal was first used as a fuel in various parts of the world during the Bronze Age, 2000-1000 BCE. The Chinese began to use coal for heating and smelting in the Warring States Period (475-221 BCE). They are credited with organizing production and consumption to the extent that by the year 1000 CE this activity could be called an industry. China remained the world's largest producer and consumer of coal until the 18th century. Roman historians describe coal as a heating source in Britannia.
The earliest uses of coal in the Americas were by the Aztecs, who used coal not only for heat but also for ornaments. Coal deposits near the surface were extracted by colonists in Virginia and Pennsylvania in the 18th century. Early coal extraction was small-scale, the coal lying either on the surface or very close to it. Typical methods of extraction included drift mining and bell pits. In Britain, some of the earliest drift mines (in the Forest of Dean) date from the medieval period.
Small-scale shaft mining and drift mining were the most common forms used prior to the mechanization of the twentieth century. Extraction took the form of a "bell pit", working outward from a central shaft, or of a technique called "room and pillar", in which "rooms" of coal were extracted and pillars left to support the roof. Both of these techniques, however, left a considerable amount of usable coal behind.
The Industrial Revolution
From its origins in Britain after 1750, the world-wide industrial revolution has been dependent upon the availability of coal to power steam engines and industrial equipment of all kinds. International trade expanded exponentially when coal-fed steam engines were built for the railways and steamships in the 1810-1840 era. Coal was cheaper and much more efficient than wood in most steam engines. As central and northern England contains an abundance of coal, many mines were situated in these areas. The small-scale techniques were unsuited to the increasing demand, with extraction moving away from surface extraction to deep shaft mining as the Industrial Revolution progressed.
The large-scale exploitation of coal was an important moving force behind the Industrial Revolution. Coal was used in making iron and steel. It was also used to power the early railroad locomotives and steamboats, driven by coal-burning steam engines, which made possible the transport of very large quantities of raw materials and manufactured goods. Coal-burning steam engines also powered many types of factory machinery.
The largest economic impacts of exploiting coal during the Industrial Revolution were experienced in Wales and the Midlands of England, and in the Rhine and Ruhr river areas of Germany. The early railroads also played a major role in the westward expansion of the United States during the 19th century.
Deep shaft mining in Britain started in the late 18th century, although rapid expansion occurred throughout the 19th and early 20th centuries. The location of the coalfields helped to make the prosperity of Lancashire, of Yorkshire, and of South Wales; the Yorkshire pits which supplied Sheffield were only about 300 feet deep. Northumberland and Durham were the leading coal producers and they were the sites of the first deep pits. In much of Britain coal was worked from drifts, or scraped off when it outcropped. Small groups of part-time miners used shovels and primitive equipment. Before 1800 a great deal of coal was left in places as support pillars. As a result, in the deep Tyneside pits (300 to 1,000 ft. deep) only about 40 percent of the coal could be extracted. The use of wood props to support the roof was an innovation first introduced about 1800. The critical factor was circulation of air and control of explosive gases. At first fires were burned to create air currents.
Coal was so abundant in Britain that the supply could be stepped up to meet the rapidly rising demand. About 1770-1780 the annual output of coal in Britain was some 6.3 million tons (about a week and a half's output in the twentieth century). After 1790 output soared, reaching 16 million tons by 1815. The miners, less menaced by imported labor or machines than were the cotton operatives, had begun to form unions and fight to control wages and working conditions against the coal owners and royalty-lessees.
Technological development throughout the 19th and 20th century helped both to improve the safety of colliers and the productive capacity of collieries they worked in. Scott examines the importance of path dependence effects in impeding the diffusion of high throughput mechanized mining systems in the British coal industry. Evidence shows that the industry had become "locked in" to low throughput underground haulage technology because of institutional interrelatedness between Britain's traditional practice of extensive in-seam mining and its unique system of fragmented, privately owned mineral royalties. Fragmented royalties prevented the concentration of workings and introduction of high throughput main haulage systems that underpinned the rapid productivity growth of European producers. Meanwhile, technical interrelatedness between the haulage systems taking coal to the pit shaft and operations further "upstream" created bottlenecks that both slowed the overall rate of mechanization and limited the productivity gains from the mechanization that did occur.
In the late 20th century, improved integration of coal extraction with bulk industries such as electrical generation helped coal maintain its position despite the emergence of alternative energies supplies such as oil, natural gas, and, from the late 1950s, nuclear power used for electricity. More recently coal has faced competition from renewable energy sources and biofuels.
However, the 1980s and 1990s saw much change in the coal industry within the UK, with the industry contracting, in some areas quite drastically. Many pits were 'uneconomic' to work at current wage rates compared to 'cheap' North Sea oil and gas, and in comparison to subsidy levels in Europe. The Miners' Strike of 1984 and subsequent strikes helped shrink the industry. The National Coal Board (by then British Coal) was privatized by selling off a large number of pits to private concerns through the mid-1990s.
Pretty profiles Welsh mining labor leader David Morgan (1840-1900) and explores the dynamics of labor relations in the Welsh coal mining industry during 1851-1901. First elected to the executive committee of the Amalgamated Association of Miners (AAM) in 1872, Morgan was active in numerous strikes for better wages, working conditions, and hours. His role in the arbitration of an 1875 labor dispute won him respect both from his fellow miners and from mine owners. With the demise of the AAM that same year, Morgan turned his attention to working-class politics. Although often at odds with fellow miners and unions, Morgan maintained his popularity in the community, taking a seat on the Aberdare school board in 1883. At the same time Morgan acted as the miners' agent for the Aberdare, Merthyr, and Dowlais Miners' Associations. During the 1890s, Morgan attempted to forge a peace between rival trade unions, but continuing animosity with mine owners led to his arrest and eventually his loss of influence over Welsh miners.
Coal became a political issue as the miners formed the Miners' Federation of Great Britain in 1889 (renamed the National Union of Mineworkers in 1945), which claimed 600,000 members in 1908. Much of the 'old left' of British politics can trace its origins to coal-mining areas. The failed General Strike of 1926 was led by the miners. The Labour government in 1947 nationalized coal under the National Coal Board, giving miners influence over the mines via their control of the Labour Party and the government.
Health and safety
McIvor and Johnston (2005) analyze the gap between knowledge of occupational lung diseases and actions by industry officials and workers to limit exposure to life-threatening conditions in British coal mining and asbestos work during the 20th century. Although respiratory diseases had been medically documented in the 19th century, it was not until the 1910s that research institutions such as the Industrial Fatigue and Industrial Health research boards were formed to offer recommendations for occupational health. Workplace regulation of some asbestos work began in 1931. Legislation providing disability payments for coal miners was not enacted until 1943. Many miners and asbestos workers were reluctant to use safety gear, as an assertion of manliness, despite increasingly recognized health concerns and regulatory efforts throughout the mid-20th century to improve workplace conditions. When legislation was first introduced to protect workers from silicosis (1919), coal mining was not a serious target of the regulation. By the late 1920s, scientific investigation had determined that coal mining could create a silicosis threat. The South Wales Miners' Federation fought individual compensation cases in the 1920s and in the 1930s lobbied civil servants and legislators. By the late 1930s, the Medical Research Council recognized the increased claims of coal miners, and its 1945 report provided a significant reappraisal of respiratory diseases among coal miners and colliery workers. Bufton and Melling reconsider the class-based argument versus the perspective that bureaucrats and policymakers were making conscientious attempts to achieve consensus solutions to the complex issue of respiratory health problems. During the interwar years, the problems of individual miners became part of a broader political and intellectual struggle over responsibility for workplace health.
In spite of significant improvements in occupational health and safety standards in Scottish coalfields during the 20th century, the underground working environment remained dangerous until the coal industry's ultimate demise. A gap between state and employer action to improve safety and health in the pits largely explains this dangerous continuity. The coal industry's nationalization in 1947, which brought Great Britain's small privately owned pits under the new National Coal Board's (NCB) control, ushered in concern for miners' health and safety. Despite the NCB's efforts, however, safety underground had improved little by 1962 due to cost cutting and an intense productivity drive in the industry, resulting in a greater occurrence of pneumoconiosis, a disease caused by inhaling coal dust. In 1947 the NCB began implementing dust suppression measures, such as fitting coal-cutting machinery with water sprays and issuing masks. While the mines inspectors' reports note the miners' reluctance to wear the masks, the oral evidence suggests that the miners rejected them because they were just a piece of gauze. The oral testimony is especially enlightening when it comes to ascertaining the personal impact of coal miners' lung diseases and disability, which included an erosion of their sense of masculinity.
Anthracite (or "hard" coal), clean and smokeless, became the preferred fuel in cities, replacing wood by about 1850. Anthracite from the Northeastern Pennsylvania coal region (and later from West Virginia) was typically used for household uses because it is of high quality, with few impurities, and stoves and furnaces were designed for it. The rich Pennsylvania anthracite fields were close to the eastern cities, and a few major railroads like the Reading Railroad controlled the anthracite fields. By 1840, hard coal output had passed the million short ton mark, and then quadrupled by 1850.
Bituminous (or "soft coal") mining came later. In the mid-century, Pittsburgh was the principal market. After 1850, soft coal, which is cheaper but dirtier, came into demand for railway locomotives and stationary steam engines, and was used to make coke for steel after 1870. Total coal output soared until 1918; before 1890, it doubled every ten years, going from 8.4 million short tons in 1850 to 40 million in 1870, 270 million in 1900, and peaking at 680 million short tons in 1918. New soft coal fields opened in Ohio, Indiana and Illinois, as well as West Virginia, Kentucky and Alabama. The Great Depression of the 1930s lowered the demand to 360 million short tons in 1932.
The United Mine Workers (UMW), formed in the 1880s in the Midwest, was successful in its strike against bituminous mines in the Midwest in 1900. However, the union's strike against the anthracite mines of Pennsylvania turned into a national political crisis in 1902. President Theodore Roosevelt brokered a compromise solution that kept the flow of coal going, and won higher wages and shorter hours for the miners, but did not include recognition of the union as a bargaining agent.
Under the leadership of John L. Lewis the UMW became the dominant force in the coal fields in the 1930s and 1940s, producing high wages and benefits. Repeated strikes caused the public to switch away from anthracite for home heating after 1945, and that sector collapsed.
In 1914 at the peak there were 180,000 anthracite miners; by 1970 only 6,000 remained. At the same time steam engines were phased out in railways and factories, and bituminous was used primarily for the generation of electricity. Employment in bituminous peaked at 705,000 men in 1923, falling to 140,000 by 1970 and 70,000 in 2003. Environmental restrictions on high-sulfur coal, and the rise of very large-scale strip mining in the west (especially the Powder River fields in Wyoming and adjacent states), caused the sharp decline in underground mining after 1970. UMW membership among active miners fell from 160,000 in 1980 to only 16,000 in 2005, as non-union miners predominated. The American share of world coal production remained steady at about 20% from 1980 to 2005.
Canada had a small coal industry concentrated at Cape Breton in Nova Scotia. At its peak in 1949, 25,000 miners dug 17 million metric tons of coal. The miners, who lived in company towns, were politically active in left-wing politics. All the mines were closed by 2001. The United States always supplied the coal for the industrial regions of Ontario. By 2000 about 19% of Canada's energy was supplied by coal, chiefly imported from the U.S.
Germany: The Ruhr Basin
The first important mines appeared in the 1750s. In 1782 the Krupp family began operations near Essen. After 1815, entrepreneurs in the Ruhr Area, which had then become part of Prussia, took advantage of the tariff zone (Zollverein) to open new mines and associated iron smelters. New railroads were built by British engineers around 1850. Numerous small industrial centers sprang up, focused on ironworks using local coal. The iron and steel works typically bought mines and erected coking ovens to supply their own requirements in coke and gas. These integrated coal-iron firms ("Huettenzechen") became numerous after 1854; after 1900 they became mixed firms called "Konzern".
The average output of a mine in the Ruhr Area in 1850 was about 8,500 short tons and it employed about 64 miners. By 1900, the average mine's output had risen to 280,000 short tons and the employment to about 1,400. Total Ruhr coal output rose from 2.0 million short tons in 1850 to 22 million in 1880, 60 million in 1900, and 114 million in 1913, on the verge of war. In 1932 output was down to 73 million short tons, growing to 130 million in 1940. Output peaked in 1957 (at 123 million), declining to 78 million short tons in 1974.
By 1830, when iron and later steel became important, the Belgian coal industry had long been established and used steam engines for pumping. The Belgian coalfield lay near the navigable River Meuse, so coal was shipped downstream to the ports and cities of the Rhine-Meuse delta. The opening of the Saint-Quentin canal allowed coal to go by barge to Paris. The Belgian coalfield outcrops over most of its area, and the highly folded nature of the seams meant that surface occurrences of coal were very abundant. Deep mines were not required at first, so there were a large number of small operations. There was a complex legal system for concessions; often multiple layers had different owners. Entrepreneurs went deeper and deeper (thanks to good pumping systems). In 1790, the maximum depth of mines was 220 meters. By 1856, the average depth in the area west of Mons was 361 meters, and by 1866, 437 meters; some pits had reached down 700 to 900 meters, and one was 1,065 meters deep, probably the deepest coal mine in Europe at the time. Gas explosions were a serious problem, and Belgium had high fatality rates. By the late 19th century the seams were becoming exhausted, and the steel industry was importing some coal from the Ruhr.
Post World War II Europe
After World War II much of Europe's coal mining passed into effective government control, with the British coal mines being nationalized under the National Coal Board. The plan to nationalize the coal mines had been accepted in principle by owners and miners alike before the elections of 1945. The owners were paid £165,000,000. The government set up the National Coal Board to manage the coal mines and loaned it £150,000,000 to modernize the system. The general condition of the coal industry had been unsatisfactory for many years, with poor productivity. In 1945 there were 28% more workers in the coal mines than in 1890, but the annual output was only 8% greater. Young people avoided the pits; between 1931 and 1945 the percentage of miners more than 40 years old rose from 35% to 43%, and 24,000 were over 65. The number of surface workers decreased between 1938 and 1945 by only 3,200, but in the same period the number of underground workers declined by 69,600, substantially altering the balance of labor in the mines. That accidents, breakdowns, and repairs in the mines were nearly twice as costly in terms of production in 1945 as they had been in 1939 was probably a by-product of the war. Output in 1946 averaged 3,300,000 tons weekly. By summer 1946 it was clear that the country faced a coal shortage for the coming winter, with stockpiles 5 million tons too low. Nationalization exposed both a lack of preparation for public ownership and a failure to stabilize the industry in advance of the change. Also lacking were any significant incentives to maintain or increase coal production to meet demand.
In Eastern Europe, the Communist governments nationalized all the mines after 1945.
Co-operation on coal trading was the impetus for forming the European Coal and Steel Community (ECSC) in 1951. Integrationists like French foreign minister Robert Schuman realized that coordinating the coal and steel markets of Germany, France, Italy and the Low Countries could lead to "spillover" into other policy areas; the ECSC indeed evolved, via the EEC, into the European Union.
Coal produces over 80% of China's energy; 2.3 billion metric tons of coal were mined in 2007. Despite the health risks posed by severe air pollution in cities (see Beijing) and international pressure to reduce greenhouse emissions, China’s coal consumption is projected to increase in line with its rapid economic growth. Most of the coal is mined in the western provinces of Shaanxi and Shanxi and the northwestern region of Inner Mongolia. However most coal customers are located in the industrialized southeastern and central coastal provinces, so coal must be hauled long distances on China’s vast but overextended rail network. More than 40% of rail capacity is devoted to moving coal, and the country has been investing heavily in new lines and cargo-handling facilities in an attempt to keep up with demand. Despite these efforts, China has suffered persistent power shortages in industrial centers for more than five years as electricity output failed to meet demand from a booming economy. Demand for electricity increased 14% in 2007.
Mining has always been dangerous, because of explosions, cave-ins, methane gas and the difficulties of timely underground rescue. The worst single disaster in British coal mining history was at Senghenydd in South Wales. On the morning of 14 October 1913 an explosion and subsequent fire killed 436 men and boys. Only 72 bodies were recovered. The Monongah Mine disaster in West Virginia (December 6, 1907) was the worst mining disaster in American history. The explosion was caused by the ignition of methane (also called "firedamp"), which in turn ignited the coal dust. In all, an estimated 362 men and boys died in the underground explosion. (The actual number killed is unknown, since records of those entering the mine were not maintained carefully.)
- ↑ Children in coal-mining areas sometimes can't resist entering old mines, which can be especially dangerous.
- ↑ 2.0 2.1 2.2 2.3 Barbara Freese (2003). Coal: A Human History, 1st Edition. Perseus Publishing. ISBN 0-7382-0400-5.
- ↑ Geoff Eley (2002). Forging Democracy: The History of the Left in Europe, 1850-2000. Oxford University Press. ISBN 0-19-503784-7.
- ↑ Frederick Meyers (1961). European Coal Mining Unions: Structure and Function. University of California. p. 86
- ↑ Nimura Kazuo, Andrew Gordon and Terry Boardman (1998). The Ashio Riot: A Social History of Mining in Japan. Duke University Press. ISBN 0-8223-2018-5. p. 48
- ↑ Hajo Holborn (1959). History of Modern Germany. A.A. Knopf. p. 521
- ↑ David Frank (1999). J. B. McLachlan: A Biography: The Story of a Legendary Labour Leader and the Cape Breton Coal Miner. James Lorimer and Company. ISBN 1-55028-677-3. p. 69
- ↑ David Montgomery (1991). The Fall of the House of Labor: The Workplace, the State, and American Labor Activism, 1865-1925. Cambridge University Press. ISBN 0-521-22579-5. p. 343
- ↑ Elspeth Thomson (2003). The Chinese Coal Industry: An Economic History, 1st Edition. RoutledgeCurzon. ISBN 0-7007-1727-7. p. 8
- ↑ Michael Flinn, John Hatcher, David Stoker, Roy Church, Barry Supple and William Ashworth (1984). The History of the British Coal Industry: 1780-1830, the Industrial Revolution (vol.2). Oxford University Press. ISBN 0-19-828283-4.
- ↑ J. Steven Watson (1960). The Reign of George III (vol.12). Clarendon Press. ISBN 0-10-821713-7. p. 516
- ↑ Peter Scott (January 2006). "Path Dependence, Fragmented Property Rights and the Slow Diffusion of High Throughput Technologies in Inter-war British Coal Mining". Business History 48 (1): 20-42. ISSN 1743-7938.
- ↑ 13.0 13.1 Ben Fine (1990). The Coal Question: Political Economy and Industrial Change from the Nineteenth Century to the Present Day. Routledge. ISBN 0-415-04384-0.
- ↑ Catherine Mitchell and Bridget Woodman (2004). The Burning Question: Is the UK on Course for a Low Carbon Economy. Institute for Public Policy Research. ISBN 1-86030-255-6.
- ↑ David A. Pretty (2001). "David Morgan ("Dai O'r Nant"), Miners' Agent: a Portrait of Leadership in the South Wales Coalfield". Welsh History Review 20 (3): 495-531. ISSN 0043-2431.
- ↑ Arthur McIvor and Ronald Johnston (2005). "Medical Knowledge and the Worker: Occupational Lung Diseases in the United Kingdom, 1920-1975". Labor 2 (4): 63-86. ISSN 1547-6715.
- ↑ Mark W. Bufton and Joseph Melling (2005). "'A Mere Matter of Rock': Organized Labour, Scientific Evidence and British Government Schemes for Compensation of Silicosis and Pneumoconiosis among Coalminers, 1926-1940". Medical History 49 (2): 155-178. ISSN 0025-7273.
- ↑ Arthur McIvor and Ronald Johnston (2002). "Voices from the Pits: Health and Safety in Scottish Coal Mining since 1945". Scottish Economic and Social History 22 (2): 111-133. ISSN 0269-5030. Based on interviews conducted with twenty miners who labored in the Lanarkshire, Ayrshire, and Fifeshire coalfields in the years 1945-80, plus documentary evidence
- ↑ Frederick M. Binder (1974). Coal Age Empire: Pennsylvania Coal and its Utilization to 1860. Pennsylvania Historical and Museum Commission. ISBN 0-911124-75-6.
- ↑ Sam H. Schurr and Bruce C. Netschert (1960). Energy in the American Economy, 1850-1975: An Economic Study of Its History and Prospects. Resources for the Future and Johns Hopkins Press, pp. 60-62.
- ↑ Melvyn Dubofsky and Warren Van Tine (1986). John L. Lewis: A Biography, Abridged Edition. University of Illinois Press. ISBN 0-252-01349-2.
- ↑ Norman J. G. Pounds (1985). An Historical Geography of Europe, 1800-1914. Cambridge University Press. ISBN 0-521-26574-6.
- ↑ Norman J. G. Pounds and William N. Parker (1957). Coal and Steel in Western Europe; the Influence of Resources and Techniques on Production. Indiana University Press. Full text online
- ↑ Mark Tookey (2001). "Three's a Crowd? Government, Owners, and Workers During the Nationalization of the British Coalmining Industry 1945-47". Twentieth Century British History 12 (4): 486-510. ISSN 0955-2359.
- ↑ J. Gillingham (2005). Coal, steel, and the rebirth of Europe, 1945-1955: The Germans and French from Ruhr Conflict to Economic Community. Cambridge University Press. ISBN 0-521-52430-X.
- ↑ Derek W. Urwin (1991). The Community of Europe: A History of European Integration since 1945. Longman. ISBN 0-582-04531-2. | http://en.citizendium.org/wiki/Coal_mining | 13 |
19 | Physics 4060, Acoustics Laboratory
Musical sound is characterized by pitch, loudness, and quality. Pitch corresponds to frequency. The loudness of a given sound is determined by its intensity (power per unit area) weighted by the sensitivity of the ear at that frequency. Quality, or timbre, refers to those sound characteristics which allow a person to distinguish sounds that have identical pitch and loudness. These distinguishing characteristics include harmonic content, vibrato, and attack/decay transients.
For sustained tones, the harmonic content is the dominant characteristic, and the determination of harmonic content is the point of this laboratory. To be more specific, "harmonic content" refers to the sound spectrum: the frequencies present in the sound and their relative intensities, expressed as a fraction or percent of the intensity of the fundamental component. The fundamental is defined as the lowest resonant frequency of the vibrating object which produces the sound, and the terms overtone and harmonic are used to describe other, higher frequencies present in the sound. Overtone is the more general term, referring to any frequency component above the fundamental, but most overtones in musical sound are harmonics (integer multiples of the fundamental). Hence, determination of the sound spectrum usually involves identifying the harmonics and their relative intensities.
The first step will be to produce a tape recording of sounds for analysis. Then the sounds will be analyzed by two different methods: (1) A/D conversion with a Universal Laboratory Interface and the use of the Sound software accompanying it, and (2) A/D conversion and discrete Fourier analysis using the MacScope A/D converter. As a final step, you will be given a pre-recorded sound which is supposed to contain only odd harmonics (characteristic of a cylinder with one end closed). Your task is to analyze that sound, resynthesize it using a Fourier synthesizer, and compare the sound of your synthesized version with the original.
Each person will be responsible for recording three sounds. You may bring a musical instrument if you wish, in which case you should record three different notes on the instrument at approximately the same dBA level. The sounds should be sustained as long as possible to facilitate analysis. Make note of the instrument used and the pitch if known (i.e., concert pitch by frequency or letter name). Alternatively you should use voiced vowel sounds. Select well-defined vowel sounds and produce three sounds at about the same pitch and dBA level. It is helpful to think of a word and then sustain the vowel sound in that word: for example, "AH" as in HOD, "EE" as in HEED, and "OO" as in WHO'D.
II. Sound Analysis with the Universal Laboratory Interface
In this section of the experiment, you will make use of a sound capture routine with a microphone and the ULI. The steps required are:
1. Run the Sound program for the ULI on one of the laboratory computers. Prepare it to capture a sound signal as follows:
a. Turn off the repeat mode: Use the mouse to position the arrow over Collect at the top of the screen. A menu should appear. Holding the mouse button down, move the pointer down to Repeat Mode and if there is a check mark beside Repeat Mode, release the mouse button there to take off the check. If there is no check, then Repeat Mode has already been turned off, so you should move the pointer back to the top of the screen before releasing the mouse button.
b. Choose "One Graph" from the Display menu by the same kind of procedure used above, i.e., display the menu and move the pointer over "One Graph" and release the mouse button.
2. Put the microphone of the ULI within a few centimeters of one of the loudspeakers of the sound console so that the sound produced from your tape will be strong compared to room background noise.
3. Play your tape and get a satisfactory sound level from the speaker. With your first sound playing, click start on the computer screen to capture a sound sample. A single sample will be collected at that time, but it will be a few seconds before it is displayed. Scale the waveform as necessary to see whether it is satisfactory and repeat if necessary.
4. Save the sound sample on your disc with a name which will identify it clearly - you will have to find it for the analysis. To save it, choose "Save Experiment as.." from the File menu. You will be prompted for a name, but after typing the name you must make sure the name of your data disk appears as the destination where the information will be saved. If you click the Desktop button, you should see your disk name on the list. Double-click the name to open your disk. If you can't find it, ask for assistance. Then click on Save.
5. Repeat the capture and disc save operations for your other two sounds. You may now take your disc and complete your analysis on any of the laboratory computers by using the Sound program on any of them.
6. Experiment with the display of your waveform until you have a clear display of at least one cycle of the waveform (preferably about four cycles), scaled so that it nearly fills the display vertically as well as horizontally.
7. With your waveform display on the screen, choose Analyze A from the Data menu to display a vertical line on the screen. Using the mouse, position that line on the first distinct peak of the waveform and note the time. Then move it to the right to cover as many full periods of the wave as possible and note the corresponding peak time. Calculate the period and then the frequency of the sound. (A worked numerical sketch of this calculation appears after step 12 below.) When you have finished this analysis, click the "Done" button on the screen to take you back to the standard display.
8. Click on the display of the wave with the mouse to select it and then choose Copy from the Edit menu to copy this display onto the "clipboard" of the computer. Find and open the wordprocessor (Word) by choosing its icon under the Apple menu (far left, top of screen). Then choose Paste from its Edit menu to paste in your waveform display. Then save this as a wordprocessor document. Note: make sure you don't use the same title for your report as for any of your data files - it will overwrite it. Be sure to save it on your laboratory disk (i.e., make sure the name of your data disk appears before clicking Save). You will be adding to this file to compose your laboratory report.
9. Type a description of the sound in your wordprocessor file along with the waveform you have saved there, including your measured dBA level, the time data you recorded for determining the period, and the calculated frequency of the sound above the display. Choose Save from the File menu each time you make changes to keep your report up-to-date.
10. Repeat this process for your other two sounds.
11. Produce frequency analysis plots of each of your three signals by the following procedure:
a. Display the wave in the graph window. Click with the mouse at one peak of the waveform and hold the mouse button down while you drag diagonally across the graph to draw a dashed rectangle which encloses exactly one period of the wave. The display will change to display the selected range. You may refine the process to fit more accurately to one period if necessary.
b. With two quick clicks of the mouse on the graph (called "double clicking"), you will display the window for choosing graph scales. Place the mouse pointer over the small rectangular window to the right of "Type of Graph" and you will receive an option to choose Data or FFT. Move the pointer to FFT and release the mouse button. Click OK at the bottom of the window, and a plot of amplitude versus frequency will be displayed. This is your sound analysis. Click on it with the mouse and choose Copy from the Edit menu to copy it to the clipboard. Open your wordprocessor document and paste this analysis in below your waveform display of this same sound. This may be done by clicking the mouse below the waveform to establish an entry point and then choosing Paste from the Edit menu.
12. Compare your calculated frequencies in Hz with the Equal Tempered frequency chart and label the frequencies in your report with the nearest equal tempered note. The indented numbers are the sharps and flats.
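As referenced in step 7, here is a minimal numerical sketch of the period-to-frequency calculation, written in Python for illustration (it is not part of the lab software); the cursor times are hypothetical, so substitute your own readings.

    # Frequency from cursor readings on the waveform display (times made up).
    t_first_peak = 0.0102  # s, cursor on the first distinct peak
    t_last_peak = 0.0284   # s, cursor after spanning several full periods
    n_periods = 4          # full periods between the two cursor positions

    period = (t_last_peak - t_first_peak) / n_periods
    frequency = 1.0 / period
    print(f"T = {period * 1000:.2f} ms, f = {frequency:.1f} Hz")  # about 220 Hz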
III. Sound Analysis With the MacScope Interface
The MacScope interface offers a higher quality digitization of sound signals than the Universal Laboratory Interface used in the first part of this analysis laboratory series. The MacScope is to be used to digitize a sample from each of your three recorded sounds and to analyze the sound. The results are to be added to the wordprocessor file on your disk to complete your sound analysis report.
1. Play your tape and adjust the amplitude with the blue monitor control so that you get a waveform about 2 volts peak-to-peak (about 4 divisions peak-to-peak with the VOLTS/DIV set at 0.5).
2. Replay the tape and when you see the waveform, use the mouse to click the STOP/GO button on the MacScope screen. There is a delay before data appears on the screen - once it appears on the screen, click the STOP/GO button again to stop collection. An asterisk will appear (Stopped*) which shows you that data transfer to the computer is still going on. Wait until it disappears to proceed.
3. SAVE THE FILE IMMEDIATELY! (The software is very crash-prone, and saving the file will allow you to reconstruct it.) To save, choose "Save MacScope File..." from the File menu, give it a title, click the Desktop button and direct the Save to your disk. Use a name which you will recognize and which does not duplicate anything on your disk.
4. Use the slide bars and controls on the display to adjust it to almost full height of the screen and with at least two full periods showing in the window. The width of the waveform on the display is adjusted with the T slider at the top of the screen. The height is adjusted with the V slider to the right.
5. Use the mouse to carefully enclose precisely one period in a rectangle, observing the data box which is displayed on the screen so that you can write down the frequency of the wave for your report. That portion of the curve will be highlighted. Then choose Fourier Analysis from the Analyze menu. The harmonic amplitudes will be displayed in a bar graph below the waveform. (A numerical sketch of what this analysis computes appears after the example display below.) Click on the Expand button to expand the display for the widest bars in the bar graph. You may need to use the V slider to readjust the waveform so that it will all fit in the window.
6. The display of the waveform and its analysis is to be made a part of your report. Move the mouse cursor off the display so that it doesn't show and then save an image of the screen by simultaneously holding down the Command and Shift keys and press 3 while those keys are still down. This will store a graphic image of the screen under the title screen0, screen1, or picture0, picture1, etc.
7. Before closing this data file, write down the following data to include in your report:
a. The frequency you obtained from step 5 above. If you did not note it then, you can now redraw the rectangle over the same period and record that frequency.
b. Click on the FFT Data button to display the table of harmonic amplitudes. Write down just the harmonic number and the amplitude in the column next to it for all the harmonics which show amplitudes of .02 or more. Include a table of these amplitudes in your report with the graphical display saved above. (Alternatively, you can just take a picture of the data table and patch it into your report.)
8. Close this data file and repeat the process for your other two sounds on the tape. To get back to the screen for collecting data as in Step 2, click the Scope button at the top of the display.
9. You must now transfer the graphic screen image files titled screen_ or picture_ onto your data disk. By default, they are stored on the hard disk of the machine. Find them and copy them to your disk by dragging their icons to your disk icon on the side of the screen.
10. Open your lab report file with the wordprocessor. Also open the graphic program DeskPaint by selecting it from the Apple menu (far left). Use DeskPaint to open the first of your Screen_ or Picture_ files. Use the dashed rectangle tool to draw a rectangle around just the waveform display and the Fourier analysis below it. Choose Copy from the Edit menu to copy this image. Then choose the wordprocessor from the extreme right hand menu to bring up your report and paste this graphic into your report. Add the frequency and harmonic amplitude data by typing it in below the display. Repeat for the other two screen files so that you have all three waveforms and the accompanying data in your report.
Below is an example of the type of display that should be a part of your report:
Waveform of voice, vowel "AH", 215 Hz
IV. Fourier Analysis and Synthesis
You will be using a simple Fourier synthesizer which is able to produce up to nine harmonics with variable phase. A recorded sound from a short closed organ pipe will be provided as a test case for Fourier synthesis.
First analyze the sound of the organ pipe using the MacScope A/D converter. Using the analysis of one period of the sound, extract amplitudes and phases for the harmonics present in the sound. Although we don't have a precise way to control phase, you are to match the amplitudes of the harmonics with the Fourier synthesizer and make some attempt to adjust the phase so that the waveform looks like that of the tape of the pipe sound. Listen to the two sounds and comment on their similarity or differences.
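For reference, the synthesis itself is just a sum of sinusoids. A minimal Python sketch of what a nine-harmonic synthesizer does (our own illustration; the amplitude values below are hypothetical, chosen only because a closed pipe emphasizes odd harmonics):

import numpy as np

def synthesize(f0, amplitudes, phases, sample_rate=11025, duration=1.0):
    # Sum harmonics n = 1..len(amplitudes) of the fundamental f0 (Hz).
    t = np.arange(0, duration, 1.0 / sample_rate)
    wave = np.zeros_like(t)
    for n, (a, p) in enumerate(zip(amplitudes, phases), start=1):
        wave += a * np.sin(2 * np.pi * n * f0 * t + p)
    return wave

amps = [1.0, 0.0, 0.33, 0.0, 0.20, 0.0, 0.14, 0.0, 0.11]  # hypothetical
phases = [0.0] * 9
wave = synthesize(220.0, amps, phases)

Adjusting the phase values changes the shape of the summed waveform without changing which harmonics are present, which is why the synthesized tone can sound like the pipe even when the two waveforms do not match exactly.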
1. What is Nonverbal Behaviour?
What is nonverbal behaviour and what does study of nonverbal behaviour
include? Nonverbal behaviour refers to communicative human acts distinct
from speech. Since nonverbal behaviour includes every communicative
human act other than speech (spoken or written), it naturally covers
a wide variety and range of phenomena such as facial and eye expressions,
hand and arm gestures, postures, positions, use of space between individuals
and objects, and various movements of the body, legs, and feet.
Since nonverbal behaviour is considered distinct from speech, it also
includes silence as well as dropping of elements from speech and/or
the missing elements in speech utterances. There is a general consensus
that, although nonverbal behaviour means acts other than speech, in
a broader sense nonverbal behaviour also includes a variety of subtle
aspects of speech variously called paralinguistic or vocal phenomena.
These phenomena include intensity range, speech errors, pauses, speech rate, and speech duration. These features are of a nature that
somewhat eludes explicit description when used in communicative contexts.
In other words, these features are employed for implied meanings and
are not explicitly describable and/or stated through/as linguistic units.
Also included in discussions of nonverbal behaviour are other complex
communication phenomena, such as sarcasm.
Thus, even though as a working definition nonverbal behaviour is conceived
to be everything other than speech, the boundary between verbal and
nonverbal is always blurred and there are certain aspects of speech,
which fall within the domains of nonverbal behaviour. In view of this,
it is not surprising to find that researchers have differed among themselves as regards the definition and scope of the study of nonverbal behaviour.
2. Relationship between Verbal and Nonverbal Communication
There are several ways in which nonverbal behaviour is clearly seen to be related to verbal behaviour. This relationship is one of dependence
and also of independence. There are nonverbal communicative acts that
are easily and accurately translated into words. Several gestures clearly
illustrate this relationship. For example, the gesture of folded hands
for namaste, the gesture of a handshake, a smile, a frown, etc., are
generally translatable into words.
There is also a class of nonverbal acts that are very much a part of speech and serve the function of emphasis. Examples are head and hand movements that occur more frequently with words and phrases of emphasis.
Sometimes we draw the outline of objects or processes in the air to
make our point clear to the addressee. Feelings may be described through
nonverbal acts. There are acts which draw pictures of the referents
tracing the contour of an object or person referred to verbally. Yet
another class of acts is employed for displaying the affects (feelings).
Another class refers to acts that help to initiate and terminate the
speech of participants in a social situation. These regulators might
suggest to a speaker that he keep talking, that he clarify, or that
he hurry up and finish (Ekman and Friesen, 1969).
There are at least six ways in which the relationship between verbal
and nonverbal communication can be characterized. These are as follows:
- The relationship between verbal and nonverbal communication is one of the latter playing a supplementary role to the former. The nonverbal acts that are supplementary to verbal acts may precede or follow or
be simultaneous with the verbal acts. For example, in many verbal
acts one notices an accompaniment of one or more nonverbal acts, such
as gestures, facial expressions, and movement towards or away from
the addressee, to illumine the meaning of the former. While for many
verbal acts such an accompaniment may only be considered redundant,
for several others, such an accompaniment does, indeed, illumine the
meaning of the former, adding explicitness, clarity, emphasis, and discrimination.
- The relationship between verbal and nonverbal communication is also
one of the former playing a supplementary role to the latter. In many
verbal acts, both in children and adults, in normals with all the
linguistic organs intact, and normals with some handicap to the linguistic
organs, as well as in abnormal individuals, nonverbal acts may take
precedence over the verbal acts in several ways. In the normals with
all the linguistic organs intact, occasions demand the use of nonverbal
acts such as pantomime and gestures for aesthetic purposes, and for
purposes of coded (secret) communication. Indulgence in nonverbal
acts as primary medium is also necessitated by the distance that separates
the parties which can, however, retain visual contact while engaging
themselves in communication.
- The relationship between verbal and nonverbal communication could
be one of correspondence as well. That is, there are several nonverbal
acts that can be accurately translated into words in the language
of a culture in which such nonverbal acts are performed. A handshake,
shaking a fist at someone, a smile, a frown, etc., are all nonverbal
acts translatable into verbal medium in a particular language. The
functions of these nonverbal acts, context to context, are also translatable.
Furthermore, such correspondences are also codified in aesthetic nonverbal
acts, such as dance, sculpture and other arts. The correspondence
is sometimes translatable into words, sometimes into phrases and sentences,
and several times translatable into compressed episodes involving
lengthy language discourses. But the correspondence is there all the
same and the import of this correspondence is shared between individuals
within a community. There is also yet another correspondence of nonverbal
acts in the sense that similar nonverbal acts could mean different
things in different cultures.
- Yet another relationship between a verbal act and a nonverbal act
is one of dependence. A verbal act may depend for its correct interpretation
entirely on a nonverbal act. Likewise a nonverbal act may depend for
its correct interpretation entirely on a verbal act. In extreme circumstances,
the former is caused because of deliberate distortion of the verbal
act, or because of the difficulty in listening clearly to the verbal
act, or because of the difficulty in reading with clarity what is
intended to be read in the written verbal message. Deliberate distortion
is not found only in contrived acts such as poetry or drama. It is
done in day-to-day language itself. Distortion and opacity of the
verbal message are also required in certain sociocultural contexts
wherein it is demanded that verbal acts be suppressed and made dependent
on nonverbal acts. The dominant nonverbal act also depends on verbal
acts for clarity. This dependence, like the former, could be contrived.
It also occurs in daily life.
- Verbal and nonverbal acts can be independent of one another. Something
is communicated through a verbal act. The continued manifestation
of this communicative act may be in the form of nonverbal acts. That
is, in a single communicative act, part of the message may be in verbal
form and the rest in nonverbal, in an alternating way. Each part is
independent of the other. This is contrived in poetry and drama. It
is also found in every day life. An extreme form of this independence
is the gulf that we notice between what one says and what one does.
Also, prevarication both in word and deed derives its strength, among other things, from this feature.
- Another relationship between verbal and nonverbal acts is one of
non-relevance. This is most commonly found in normal adult speech
and its accompanying gestures, which are produced simply without any
communicative intent. We move our hands, snap our fingers, and move
our bodies while speaking, with these gestures having no relevance
to the speech we make. When this non-relevance between verbal and
nonverbal acts found in normals is shifted to non-relevance or irrelevance
within the single domain, within speech itself or within nonverbal
act itself (during which coherence in speech or act is lost), we start
considering the individual abnormal in some way. That is, non-relevance
across the verbal and nonverbal media is normal, but non-relevance
within a single medium is abnormal. The non-relevance is idiosyncratic
and could be imitational as well. In the normals the excessive non-relevance
of nonverbal acts accompanying speech comes to hamper the understanding
of the verbal acts.
(1973) has suggested the following functions for nonverbal communication:
- Nonverbal signs define, condition, and constrain the system; for
example, time, place and arrangement may provide cues for the participants
as to who is in the system, what the pattern of interaction will be,
and what is appropriate and inappropriate communication content.
- Nonverbal signs help regulate the system, cueing hierarchy and priority among communicators, signalling the flow of interaction, and providing meta-communication.
- Nonverbal signs communicate content, sometimes more
efficiently than linguistic signs but usually in complementary redundancy
to the verbal flow.
Ekman and Friesen (1969) specify five general functions for nonverbal
behaviour, namely, repetition, contradiction, complementation, accent
and regulation. In repetition there is both verbal and nonverbal expression
made simultaneously, where one will do. In contradiction, the verbal
and nonverbal behaviours contradict one another as in the case of a
verbal praise in a sarcastic tone. In accent, spoken words are emphasized
through nonverbal acts. Through the use of eye contact, gestures and
others, nonverbal behaviour is employed to regulate human interaction.
Based on the above brief discussion, we find that the relationship
between verbal and nonverbal behaviours can be considered as follows:
- The relationship between verbal and nonverbal communication is one
of the latter, playing a supplementary role to the former.
- The relationship between verbal and nonverbal communication could
be one of the former playing a supplementary role to the latter.
- The relationship between verbal and nonverbal communication could
be one of correspondence.
- The relationship between verbal and nonverbal communication could
be one of mutual dependence.
- The relationship between the two could also be one of independence
from one another.
- The relationship between the two could be one of non-relevance as well.
- The relationship between verbal and nonverbal communication could
be one of one repeating the message of the other.
- The relationship between verbal and nonverbal communication could
be one act contradicting the other.
- The relationship between verbal and nonverbal communication could
also be one of mutual emphasis.
- Finally, the relationship between the two could also be one of mutual complementation.
While the study of verbal behaviour and non-verbal behaviour
has been done independently in several disciplines, the relationship
between the two has not received the attention it deserves. Human communication
is a wholesome fusion of both verbal and nonverbal acts. This fusion
appears to have both physiological (genetic) as well as sociocultural
consequences. The fusion of verbal and nonverbal behaviours in a communicative act, and the manner in which this fusion has taken place in humans, marks the human species as distinct from other species. Also, societies and cultures are distinguished from one another by the style and exploitation of this fusion of verbal and non-verbal acts for varying contexts, pursuits and purposes. Moreover, various cognitive disorders, including the language disorders found in humans, can be seen as differences in the degree and manner of fusing verbal and nonverbal behaviours.
That the verbal and nonverbal behaviours are
closely related is well recognized by all. Socialization processes in
every society insist upon mastery and exploitation of this relationship
in both children and adults in their communication modes. For example,
what postures, voice modulations, facial expressions, gestures, etc., one should or should not employ in a particular context for a particular pursuit and purpose are all predetermined in cultures. Deviations from the well-set norm are allowed for certain effects only. Deviations are
also classified into several abnormal varieties. In essence, what makes
communication essentially human is the intrinsic binding within all
such communication between verbal and non-verbal facets.
Nonverbal behaviours reflect very basic social orientations that are
correlates of major categories in the cognition of social environments
(Piaget, 1960). In other words, the nonverbal behaviours pursued in
a society reveal the orientations towards interactions between persons
that individual members of that society consider as basic. There are
also common cognitive and behavioural dimensions for both animal and
human social systems.
Hence, some have claimed that primates, in particular, can provide
complementary information about certain aspects of affect and attitude
communication in humans (Sommer, 1967). That is, the observation of
animal social interactions can complement the study of individuals of
a single culture and provide corroboration for identified dimensions
of social interaction. Furthermore, it has been suggested by many that
nonverbal behaviour is also produced by the same underlying processes
employed in the production of linguistic utterance and that it shares
some of the structural properties of the speech it accompanies.
3. Research Strategies
Research strategies employed in the study of nonverbal behaviour can
be grouped as those following or falling within linguistic methodologies,
methodologies of anthropological investigations and methodologies of
psychological investigations. Note, however, that within each of these
major pursuits there are several variations based on the approaches
and aims of schools within these disciplines. Also note that there are
mutual influences found among these strategies. Some of the strategies
are not followed widely and some have become strategies rather clearly
identified with individual scholars.
3. 1. Linguistically-oriented Studies of Nonverbal Behaviour
Modern linguistics, both Indian and Western, does not include study
of nonverbal behaviour as part of grammar. There are elements of nonverbal
behaviour, or rather elements shared by both verbal and nonverbal behaviour,
such as implied meanings (presupposition, illocutionary acts whose implications
could be brought out by paraphrase etc.) that are sought to be treated
within grammar in modern times. However, these attempts have become
characteristics of certain offbeat grammatical studies, rather than
the core or integral part of grammatical approaches and general practice.
In contrast, traditional Indian studies of language always included
study of nonverbal behaviour as an integral part of grammar (See below
3.5 for a brief descriptive statement and summary).
Bloomfield (1933) distinguished between the act of speech and other
occurrences, which he called practical events. Any incident consisted
of three parts, in order of time: practical events preceding the act
of speech, speech itself, and practical events following the act of
speech. While there is, thus, recognition of occurrence of both speech
and non-speech acts in a communicative act, linguists generally focus
upon speech rather than on the practical events preceding, accompanying
and following acts of speech. In general, linguists ignore the nonverbal
concomitants of verbal act.
Linguistically oriented studies of nonverbal behaviour are indeed very
few, and those few studies also generally aim at adequacy of language
description by way of describing such nonverbal behaviours that impinge
on verbal behaviour and/or exploit verbal-like elements in the nonverbal
act. Moreover, the linguistically oriented studies of nonverbal behaviour
extend the method of description and transcription of linguistic elements
to a description and transcription of nonverbal behaviour. A clear case
of linguistically oriented description of nonverbal behaviour is that
of Trager (1958). Another study is that of West (1963), who seeks to
identify sign language units corresponding to linguistic units, such
as words, clauses, phrases and sentences.
Trager recognizes that communication is more than language. Although
linguistics aims at the description of language as a system of communication,
linguists limit themselves to examination of such parts of linguistic
structures as they could define and examine objectively. In view of
this self-imposed restriction, communication systems other than language
remain outside their purview of research. Trager finds this an unsatisfactory
approach to the study of language and seeks to devise ways and means
to describe systems adjunct to language.
Trager calls the study of language and its attendant phenomena macrolinguistics, and divides it into prelinguistics, microlinguistics, and metalinguistics. Prelinguistics is said to include physical and biological events. The statement of the relationship between language and any of the other cultural systems constitutes metalinguistics, while microlinguistics is linguistics proper.
Communication, according to Trager (1958), is divided into language,
vocalizations and kinesics. Language employs certain noises made by
organs of speech. It combines these noises into recurrent sequences
and arranges these sequences in systematic distributions in relation
to each other and in reference to the external world. Vocalizations do not
have the structure of language and consist of variegated noises. Vocalizations
also include modifications of language and other noises. In general,
vocalizations may be seen as consisting of paralanguage, voice set and
voice qualities. Variegated noises other than language ones, and modified
language and other noises together are called paralanguage. Voice set
involves the physiological and physical peculiarities of noises. With
the help of these peculiarities we identify individuals as members of
a societal group. We identify them as belonging to a certain sex, age, state of health, body build, rhythm state, and position in a group, mood, bodily condition and location.
Many other identifications are also made. Voice qualities consist of
matters such as intonation. These are recognizable as forming part of
actual speech events and are identified in what is said and heard. Trager
lists the following as voice qualities identified so far: pitch range,
vocal lip control, glottis control, pitch control, articulation control,
rhythm control, resonance and tempo.
The voice set and voice qualities are overall or background characteristics
of the voice, whereas the vocalizations are identifiable noises. All
these are different from language sounds proper. Trager identifies three
kinds of vocalizations constituting paralanguage. These are: vocal characterisers,
vocal qualifiers and vocal segregates. The vocal characterisers are:
laughing, crying, giggling, snickering, whimpering, sobbing, yelling
and whispering, moaning, groaning, whining, breaking, belching and yawning.
The vocal qualifiers are those of intensity, pitch height, and extent.
Vocal segregates are items, such as uh-uh, uh-huh and uh, sh! These
are sounds that do not fit into phonological and/or word frames in sequences
in a language.
Trager has viewed study of paralanguage as contributing directly to
an understanding of kinesics (study of movement, posture and position
individuals assume in their interaction). It may be that in their overall structure these two fields of human behaviour are largely analogous to each other. For all the variables identified, Trager provides symbols
for transcription. The scope of description of the nonverbal behaviour
is limited to descriptions of sound features and their functions in
manifest behaviour. Thus, even in Trager's efforts, while the importance
of nonverbal behaviour for a total description of communication process
is recognized, its accommodation in the discipline of linguistics is
only towards an illumination and adequate coverage of linguistic behaviour.
Also, the method of description of nonverbal behaviour is always an
extension of the methods of study of linguistic behaviours. Attempts
are also made in this process of extension to posit corresponding levels
of linguistic and nonverbal behaviour.
3. 2. Anthropologically-oriented Studies of Nonverbal Behaviour
The anthropologically oriented studies of nonverbal behaviour have
a long history. The sign languages of the aboriginals, the communicative
processes carried on through (non-sign language) gestures, postures,
and exchange of goods and rituals, etc., have been discussed in anthropological
Nineteenth century American anthropologists showed a lot of interest
in the aboriginal sign languages of the Americas. They recognized that
the conventional gesture codes employed by Native Americans (Red Indians)
are independent communication systems, which have the range and flexibility
found in speech. This recognition still continues, as we find in the works of Kroeber, who characterizes sign language communication as follows: 'What makes it an effective system of communication is that
it did not remain on a level of naturalness, spontaneity, and full transparency,
but made artificial commitments, arbitrary choices between potential
expressions and meanings'.
The late 19th century work by Colonel Garrick Mallery, who made a
collection and study of North American Plains sign language gestures
and made a comparison of the same with other codes such as gestures
and sign languages of the deaf, gave an impetus to modern interests
in nonverbal communication processes in the West. This interest and
study influenced anthropological studies in the beginning.
At one time nonverbal behaviour within anthropological studies focused
only on gestures. Later, other aspects of nonverbal behaviour were also
studied. And very soon, in modern anthropology, culture itself began
to be viewed as communication. Yet the study of nonverbal communication,
in the sense of communication as it is effected through behaviour whose
communicative significance cannot be achieved in any other way, is only
a recent introduction to anthropology and has yet to establish itself
fully in anthropology.
However, even today the communication processes in the sense of oral and nonverbal interaction have not attracted much attention in anthropological
studies. To quote Codere (1966) 'the subjects of gestures, medicine,
or games are rarely considered in any single volume ethnography and
are even more rarely given any extended treatment ... Once the major
ethnographic topics of social organization, economic organization and
religion are dealt with, the task is not done if it is defined as giving
any sense or indication of the richness and complexity of the culture
concerned. Yet why do such topics as technology, the yearly round, and
the life cycle have a secure conventional place as secondary topics;
such topics as humour and the three mentioned here, no place at all;
and such topics as the arts only, an occasional one?'
In the evolution of studies on nonverbal behaviour as a comprehensive
and perhaps an independent discipline, anthropology has played a crucial
role. Hall's study of proxemics (Hall 1959, 1969 and 1977) has revolutionized ideas, assumptions
and identification of domains of nonverbal behaviour studies. And Hall's
contributions come from anthropological bases.
If the study of aboriginals' signs is considered the precursor of modern
anthropologically oriented studies of nonverbal behaviour, Hall's contributions
have led the anthropologically oriented nonverbal behaviour studies
to explore areas such as proxemics that have become since then bases
of ideas and assumptions as well as subject matter of experimental investigations
on nonverbal behaviour. Likewise, Birdwhistell's works present a formal
tool for a description and understanding of nonverbal communication.
Birdwhistell's research strategy (Birdwhistell, 1970) is a clear and illustrative example of the influence of linguistics on the study of
nonverbal behaviour. Influenced by developments in American structural
linguistics, Birdwhistell makes a very significant contribution, adopting
and effectively modifying underlying concepts, methods, and tools of
transcription and description of units of language, as propounded and
practised in neo-Bloomfieldian structural linguistics. According to
Birdwhistell, our communication system is not something we invented
but rather something that we internalised in the process of becoming
man. Also, research on communication as a systematic and structured
organization could not be initiated until we have some idea about the
organization of society itself. Birdwhistell contends that communication
is multi-channel. It includes both language and paralanguage; it also
includes gesture and kinesics. There is the interdependence of visible
and audible behaviour in the flow of conversation. Meaning includes
both the contents of words and other measures.
Also, not all shifts of the human body are of equal importance or significance
to the human communicational system. 'As the organs involved in breathing
and swallowing are also involved in vocalic communicative behaviour,
so also is the activity of the skin, musculature, and skeleton involved
in communicative behaviour.
Which particular behaviours are of patterned communicative value, and
thus abstractable without falsification, can be determined only by the
systematic investigation of the behaviour in the communicational context'
(Birdwhistell, 1970). So, what Birdwhistell seeks is not idiosyncratic
nonverbal behaviour, but patterned behaviour within individuals and
across individuals and a systematic study of the same.
Birdwhistell believes that the investigation of human communication
by means of linguistic and kinesic techniques is desirable and relevant.
Body motion is a learned form of communication, which is patterned within
a culture and which can be broken down into an ordered system of isolable
elements, just as language.
Hence, Birdwhistell pursues the research for communication units based
upon linguistic and kinesic analysis. The dependency of Birdwhistell's
analysis of body motion on structural linguistics is seen throughout
his work. He also finds that such a dependency is not without handicap:
Techniques and theories developed over the last 2000 years
of linguistic research are now and may in the future remain quite relevant
for kinesic research and are absolutely necessary to communicational
research. However, these techniques are not all immediately and without
adaptation transferable to kinesic research. For example, the informant
technique, so basic to research on spoken language, is difficult to
control in the investigation of kinesic material.
The influence of linguistics in Birdwhistell's study of kinesic behaviour
is clearly seen in his coinage of technical terms for the description
of kinesic behaviour, identification of units of kinesic behaviour,
correspondence of units between kinesic and linguistic behaviour, method
of identification of units, description of units, transcription of units
and building up of smaller units into components of larger units. In
all these, we find Birdwhistell adopting terms from linguistics. The parallel between linguistic behaviour and kinesic behaviour is rather too manifestly drawn.
This does not mean, however, that Birdwhistell has simply transferred
linguistics to the analysis of nonverbal behaviour or that he has nothing
new to offer by way of analysis of nonverbal behaviour. Birdwhistell's
contribution lies not only in showing the applicability of linguistic
analytical tools and methods to kinesic behaviour, but also in providing
an in-depth study of kinesic behaviour itself in several cultures. This parallel is taken up again in our chapter on proxemics.
Another significant anthropologically oriented study of nonverbal behaviour
is that of E. T. Hall (1959, 1969 and 1977). While Birdwhistell focuses
his attention on the description of kinesic behaviour in formulaic expressions,
involving a number of derived technical terms, Hall looks at nonverbal
behaviour from a descriptive, ethnographic angle without many technical terms or formulaic expressions.
Hall's approach to study of nonverbal behaviour is decidedly anthropological
and very much ethnographic and crosscultural as well as meant to be
a guide for a better world of understanding, tolerance and insightful
utilization of human resources; it is also linguistically influenced
at least in its origins. There is not much of an influence of linguistic
terms but there is a sharing of concepts from structural linguistics.
However, Hall's work is more an anthropologist's study of nonverbal
behaviour. His transcription system does not draw from linguistics as much as Birdwhistell's system does. Also, Hall's
work is more a comparative ethnographic study of nonverbal behaviour
whereas Birdwhistell's approach generally restricts itself to the description
of nonverbal behaviour, in particular, the kinesic behaviour, of a group
without resorting to any comparison of the same with others.
E.T. Hall considers that culture is bio-basic; it is rooted in biological
activities. There is an unbroken continuity between the very distant
past and the present in the sense that although man is a culture-producing
animal at present, there were times when there was no man and no culture.
This infra-culture became elaborated by man into culture. Hall argues
that by going back to infra-culture we could demonstrate the complex
biological bases upon which human behaviour has been built at different
times in the history of evolution. Infra-culture is behaviour on lower
organizational levels that underlie culture.
Hall suggests (along with his colleague, the linguist Trager) that the numbers of infra-cultural bases are indeed few and bear little or no apparent relationship to each other on the surface. These are called Primary Message Systems. There are ten systems:
- Interaction
- Association
- Subsistence
- Bisexuality
- Territoriality
- Temporality
- Learning
- Play
- Defense
- Exploitation (use of materials)
Note that only the first, the primary message system of
interaction, involves language. All other systems are nonlinguistic
forms of communication. Hall finds that language is the most technical
of the message systems. It is to be used as a model for the analysis
of others. In other words, Hall implies that the analysis of other forms
of communication may follow the procedures of analysis of language.
He also emphasizes that in addition to language there are other ways
in which man communicates that either reinforce or deny what he has
said with words. Nonverbal behaviour is an integral part of culture
and it includes not only acts but also material objects having the potential to communicate.
Patterns are implicit cultural rules by which sets
are arranged to give meaning. For example, most people take horses as
a single set whereas a trainer of horses examines a number of sets such
as height, weight, length of barrel, thickness of chest, depth of chest,
configuration of the neck and head, stance, coat conditions, hoofs and
gait. Laymen see these as isolates but the trainers of horses see them
as sets leading on to patterns. Order, selection and congruence characterize
the system of communication.
Hall's major investigations center around
man's use of space. Every living thing has a physical boundary that separates
it from its external environment. That space communicates is well recognized
in all societies. Hall studies space as an informal cultural system
in all its details. Formal patterning of space has varying degrees of
importance and complexity. Use of space is closely linked with status
as well. Hall investigates the use of space by humans in relation to
distance regulation in animals, crowding and social behaviour in animals,
distance receptors such as eyes, ears and nose, immediate receptors
such as skin and muscles, visual space, and the use of space in cross-cultural contexts.
Hall's investigations also exploit literary works and other arts for an understanding of the use of space by individuals, social groups
and different language communities. Hall presents his work on use of
space for a better understanding of different peoples and their cultures,
and for a better world of living and understanding. He finds that literally
thousands of our experiences teach us unconsciously that space communicates.
A painstaking and laborious process awaits one who wishes to uncover
the specific cues. The child who is learning the language cannot distinguish
one space category from another by listening to others talk (examples
are, He found a place in her heart, He has a place in the mountains,
I am tired of this place, and so on).
In spite of this, children are able to distinguish between various space terms from the very few cues provided by others. Space as an informal cultural system is different from space as it is technically elaborated by classroom geography and mathematics. Hall seeks to identify what space is in various
cultures, how it is interwoven with individual and social behaviour,
how space comes to communicate various values and how its use becomes
the diagnostic marker of various individual and social values. Hall
is the one who systematized the study of space in human interactions
and brought out various crucial facts underlying use of space. All this
he does taking an interdisciplinary attitude, but all the same the approach
is anthropologically oriented.
It is seen from the study of literature
on nonverbal behaviour that the modern growth of explicit studies of nonverbal behaviour in communicative interactions,
especially in the United States, indeed, is closely linked with the
contributions of Trager, Birdwhistell and Hall. Trager's contributions remained an island, and continue to be so even now within linguistics, which, while giving a spurt to investigations of language-related disciplines, has somehow continued to treat nonverbal behaviour studies as a peripheral concern.
A remarkable fact is that in spite of the very many attractions
within his own paradigm, calling him to go beyond language variables
and to attack variables that impinge on nonverbal behaviour, the linguist
in Trager has not strayed beyond what is strictly and formally linguistic
(according to Trager) and relevant to an understanding of nonverbal
behaviour. Birdwhistell's investigations continue but not with many
adherents, and yet his investigations have a distinct bearing on studies
of nonverbal behaviour.
Hall's work is largely absorbed in the current
experimental investigations of nonverbal behaviour although it is generally
restricted only to some aspects of nonverbal behaviour. Hall's work,
unlike those of many other authors, has also caught the imagination
of popular science writers leading on to both insightful and not so
insightful investigations of nonverbal behaviour, and to speculations.
All said and done, the anthropologically-oriented approach to the study of nonverbal behaviour is a continuing and positive aspect of nonverbal behaviour studies and enriches the experimental investigation by providing
possible and insightful variables for research and for cross cultural
validation of experimental findings.
3. 3. Psychologically-oriented Approaches to the Study of Nonverbal Behaviour
The psychologically oriented
approaches to the study of nonverbal behaviour are many and they currently
dominate the nonverbal communication research scene. Some psychologically
oriented studies focus upon the association of psychological states
with nonverbal behaviours. The nonverbal behaviours are taken to be
indicative of underlying psychological states.
In these studies description
of nonverbal behaviour is linked with the description of psychological
states of the individuals emitting nonverbal behaviour. In another approach,
the studies focus upon observers. The observers are asked to interpret
the given nonverbal behaviour in terms of psychological states. These
are studies that involve decoding of nonverbal behaviours presented to them.
In encoding studies, different situations, to which corresponding
attitudes are explicitly ascribable and clearly linked and elicited,
are identified, subjects are placed in these situations and their responses
measured. These studies are generally of a role-playing type. There
is also another approach in which various choices of nonverbal behaviours
are presented to subjects. They are asked to indicate their preference
among the given nonverbal behaviours for specific social situations.
That is, subjects are asked to choose among forms or combinations of
behaviour to communicate various attitudes. Evaluating these approaches,
Mehrabian (1972) suggests that whereas encoding methods are appropriate
in the beginning stages of communication research, the last mentioned
above, which he calls the encoding-decoding method, is appropriate for
highly developed phases of nonverbal behaviour research. Psychologically oriented approaches have led to a wider coverage of a variety of nonverbal
behaviours. Currently studies of all forms of nonverbal behaviour, such
as crowding, space utilization, visual behaviour, facial expressions,
abnormal nonverbal behaviour are generally initiated and enriched by
the emergence of psychologically-oriented researches. These studies can be traced back to the beginning of modern psychological investigations.
After all, retrieval of meanings of human behaviour, and interpretation
of human behaviour has been the major purpose of psychology. The specific
communicative means of behaviour have always been subject matter of
investigation along with the behaviour itself.
A salient feature of
psychologically oriented studies of nonverbal behaviour is the exploitation
of statistical measures which are generally not resorted to (or even
avoided) in the linguistically and anthropologically oriented studies.
Also, in contrast to linguistically and anthropologically oriented studies,
psychologically oriented studies of nonverbal behaviour are mainly experimental. These studies are generally based on individual
psychological factors, rather than on social factors, although the social
function is not lost sight of.
The psychologically-oriented studies of nonverbal behaviour are typically
articles in research journals based on controlled experiments focusing
on limited variables. Validation or rejection of hypotheses, description
and explanation of processes involved and an attempt at bringing out
a hierarchy of events and variables involved and the hidden processes
through an understanding of manifest processes become the focus of these
psychologically-oriented studies of nonverbal behaviour. All aspects
of nonverbal behaviour are sought to be dealt with under experimental conditions. Accordingly, a lot of energy is expended not on identifying
facets and aspects of nonverbal behaviour per se, but on means to bring
out the observed nonverbal behaviour variables in a form suitable for
controlled experiments. The significance of these variables is hypothesized
beforehand and their validity proved or disproved in the experiments.
In the process, however, several new meanings hitherto hidden are identified
and a pattern as well as a hierarchy is established.
Theories of psychology, particularly of learning, naturally influence the psychologically
oriented studies of nonverbal behaviour. The psychologically oriented
studies of nonverbal behaviour, in a manner of speaking, have become
the central part of all nonverbal behaviour studies. These studies are
more in number, cover most of the aspects of nonverbal behaviour, attract
more investigators and students, and accommodate findings on nonverbal
behaviour worked out in other fields, such as linguistics and anthropology.
Since most of the psychologically oriented studies are
independent articles, the overall assumptions of psychologically oriented
nonverbal studies are not generally explicitly stated. Mehrabian (1972)
suggests that any attempt at a comprehensive description of findings
in the study of nonverbal communication has to include the large numbers
of behavioural cues that are studied (e.g., eye contact, distance, leg,
and foot movements, facial expressions, voice qualities).
The description should also account for the relationships among these cues,
the relationships between these and the feelings, attitudes, and personalities
of the communicators, and the qualities of the situations in which the
communications occur. Note that this scheme is carried out with well-designed
tools of questionnaires administered orally or visually under appropriate
situations for both control and experimental groups. Also, appropriate statistical measures are applied to data thus obtained to prove or disprove hypotheses.
3. 4. Semiotically-oriented Studies of Nonverbal Behaviour
Where psychologically-oriented studies of nonverbal behaviour
restrict themselves to empirical methods and findings, subjecting them
to statistical measures and arriving at theoretical models that are
generally found in psychology proper, semiotics draws facts from different
disciplines and views them from the point of view of sign theory or semiotics.
There is no experiment conducted as a matter of routine, or
as a norm in semiotic investigations. Observation, and reasoning out
the inter-relationships between observed facts, identification of patterns,
validation of facts based on patterns worked out, and identification
of/or bringing out manifestly the covert processes through proposals
as regards patterns and dynamic processes dominate semiotic investigations.
There is, indeed, no model building in semiotic investigations in the
sense of forming schools and restricting pursuits within the assumptions
and postulates of the school. However, there is a body of knowledge
contributed by different scholars as regards the nature, function and
componential features of signs and their inter-relationships. There
are also procedures, generally not stated explicitly but found practiced
in most of the semiotic investigations.
The semiotically-oriented studies
of nonverbal behaviour view it as constituting semiotic systems involving
various types of signs. Investigations may be carried out based on models
of experimental psychology by individual authors. They may, however,
build their theory and explanations in a semiotic fashion, taking the
sign values of facts as crucial. The semiotic analysis of nonverbal
behaviour is mainly the interpretation and explanation of data collected
through other means.
This interpretation (and explanation), however, leads
on to newer insights and identification of hitherto unknown facts. This
is, indeed, one of the major strengths and achievements of the semiotic
method. The semiotically-oriented studies of nonverbal behaviour, generally
speaking, compare and contrast the verbal with the nonverbal behaviours.
This comparison and contrast takes on the presentation of features involved
in a binary opposition. It is also shown as to how the features balance
themselves in a communicative act. In this analysis, hidden processes
and new information and variables are also revealed and added on.
A sign is everything which can be taken as significantly substituting
for something else. This something else does not necessarily have to exist
or actually be somewhere at the moment in which a sign stands in for it.
Saussure (1915) implicitly regarded sign as a communicative device, used
between two human beings intentionally aiming to communicate or to express
something. Not all signs are, however, communicative signs. For example,
black clouds are a sign of rain. Although they represent a meaning to
us, we do not communicate with the black clouds, and the clouds do not
respond to us. As Cherry (1980) points out, any artifact may possibly be a sign (a scratch on a stone, a printed mark, a sound, anything), but its signhood arises solely from the observer's assumption that it is a sign. Note that nonverbal behaviour does fall within the system of signs directly
and immediately, because nonverbal behaviours are acts of communication.
Peirce (1931, 1935) finds sign as something which stands to somebody
for something in some respects or capacity. Morris (1938) suggests that
something is a sign only because it is interpreted as a sign of something
by some interpreter. Eco (1977) defines sign as everything that, on
the grounds of previously established social convention, can be taken
as something standing for something else. It has also been defined as
a proposition constituted by a valid and revealing connection to its
consequent, when this association is culturally recognized and systematically codified. Half a dozen possible relationships are empirically found to
prevail between the signifier and the signified. Signifier is the sound
or visual image of a sign. Signified is the concept aspect of a sign.
Both the signified and the signifier are dialectically united in the
sign. The six species of the sign are as follows (Sebeok, 1976):
- Signal: When a sign token mechanically (naturally) or conventionally
triggers some reaction on the part of a receiver, it is said to function
as a signal. Examples of signals are the exclamation "go!" or alternatively
the discharge of a pistol to start a foot race.
- Symptom: A symptom
is a compulsive, automatic, non-arbitrary sign, with a natural link
between it and what it signifies. For example, bodily symptoms indicate
the underlying disease.
- Icon: A sign is said to be iconic when there
is a topological similarity between it and what it signifies. Examples
are pictures, diagrams, etc.
- Index: A sign is said to be indexic
in so far as it is contiguous with what it signifies. Indexes give physical
indication. Examples are compass needles, weather vanes, footprints
and droppings of animals, etc.
- Symbol: A sign is said to be a symbol
when it does not have similarity or contiguity with what it signifies, but a conventional link between them is established. Examples are badges, emblems, etc.
- Name: A sign which has an extensional class for its designatum
is called a name. In accordance with its definition, individuals denoted
by a proper name such as Veronica have no common property attributed to them
save the fact that they all answer to Veronica.
Note that of the six
types of signs listed above, signal, symptom, icon and index fall within
nonverbal domain fairly comprehensively and fully. There are elements
of symbol as well in nonverbal communication, but these are of a limited
quality and quantity.
The sign name is perhaps nonexistent in nonverbal
communication and its nonexistence is probably a distinguishing mark
of nonverbal communication.
There are also scholars who consider all
the six types of signs occurring in nonverbal communication.
Semiotic approaches to the study of nonverbal communication focus more on the
dialectics within nonverbal behaviour, on how patterns are formed, and
on how the inter-relationships between verbal and nonverbal communication
balance themselves in communicative contexts. Coupled with the experimental
investigations and findings of psychologically oriented studies of nonverbal
communication, the semiotic approaches to the study of nonverbal communication,
indeed, dominate the current assumptions and procedures in studies on nonverbal communication.
3. 5. Indian Studies of Nonverbal Behaviour
Traditional studies of nonverbal behaviour by Indian scholars link the
nonverbal behaviour of every day life with those of performing and
other aesthetic arts and see these behaviours in terms of their exploitation
and function in these arts. In other words, nonverbal behaviours are
seen as something which occur in nature, in normal communication and
as something not fully at the conscious level. These unconscious acts
are studied to reveal their communicative nature and to bring out their
functions and patterns.
In the process of study, the roots of nonverbal
behaviour in language, social acts and biology are emphasized. While every act of nonverbal behaviour has its basis in language, society and biology, their exploitation, use, and the manner
of their use is based on the psychological need and state of the individual.
The ultimate goal of the study of nonverbal behaviour is their exploitation
for effective communication in aesthetic arts, for enhancing the aesthetic
value of the communication resorted to.
It is then seen as an effective
tool for aesthetic communication, providing a variety of techniques
and a variety of acts. Because the study of nonverbal behaviour is tied
to performance, their physical manifestation in the body and the intent
of these manifestations to represent underlying psychological needs
and states were emphasized.
Since, in the view of Indian scholars, there
is a unity of purpose between poetry and drama, indeed, between all
arts, physical manifestation of nonverbal behaviour as representations
of underlying psychological needs and states is included in every art,
in poetry through appropriate description and metaphor using language,
in sculpture through direct, indirect and oblique representation of
nonverbal acts as physical manifestations, and in dance combining both
poetry and sculpture adding to the combination the dimension of movement.
A chief characteristic of Indian studies of nonverbal behaviour
is the inclusion of the same in grammar. For example, Indian traditional
grammars include not only the description of intonation patterns and
their functions within their scope but also other paralanguage features
meant for sarcasm, doubt, emphasis, contradiction and specific identities. This is sought to be achieved in two ways: one, by a direct
description and analysis of utterances in terms of their functions in
communicative contexts just as in linguistic descriptions which present
how segmental sounds and sentence intonations get elliptical in the
speech of certain professional groups. Secondly, by identifying linguistic
mechanisms that carry these nonverbal acts, as in the case of prolonging
the pronunciation of consonants for certain effects.
Also, Indian traditional
grammars have developed so as to include separate chapters on nonverbal
behaviours, and their import for poetry and other aesthetic arts. The
incorporation here with linguistic facts is sometimes peripheral, at
times not relevant, but many a time highly relevant for effective communication,
choice of diction and standard speech.
Thus, by incorporating chapters
on nonverbal manifestations, the grammars focus on the performative
factors of speech as well, apart from forming a bridge between language
of every day discourse and the language of poetry and aesthetic arts.
Then, by the mere inclusion of study of nonverbal acts, the overall
goal of grammar and its learning is changed. History has not, however,
seen to it that what began originally as a descriptive-cum-prescriptive
approach to account for the then prevailing practices grew wide and
dynamic enough to be alive to the changes in practices or to further
develop the system of research applicable to matter other than texts.
In the Sanskrit school of grammar, nonverbal behaviour is prominently
discussed within rasa theory. The theory of rasa is intimately connected
with the theory of dhvani. It forms the most important aesthetic foundation
of Sanskrit poetics. It first appears in the dramatic theory of Bharata;
originally in connection with drama (explicit nonverbal behaviour),
then as one of the essential factors of poetic theory (description of
the nonverbal as suggestive of the underlying intent). While the theory
of rasa itself is older than Bharata (500 B.C.?), the general conditions
of the theory as fixed by Bharata continue to be accepted as the basis.
Elevation of nonverbal communication to aesthetic status and the exploitation
of modes of nonverbal communication for aesthetic purposes is clearly
seen in the concept of abhinaya in treatises on drama and dance, in
essence on theatrical performance. Abhinaya, according to Bharata Muni
(Natyasastra Chapter IV: verse 23, translation as found in Ghosh, 1967)
has four kinds of histrionic representation, or shall we say that communication
is carried on through four kinds of means in dance and drama.
These are angika, which deals with bodily movements in their subtle intricacies,
vacika which refers to vocal delivery, aharya is communication via costume
and make up and sattvika is communication through the accurate representation
of the mental and emotional feelings. All these are physical manifestations.
The angikabhinaya, which is the visible form of communication through
bodily gestures and facial expressions, is certainly primary nonverbal
communication mode; there is an insistence on the need for gestures
and facial expressions to be in consonance with one another.
Communication through perceptual factors such as costume and make up, and the physical
manifestation of mental states and emotional feelings are also emphasized
for a successful performance. The role of vocal delivery is not minimized
either in the process of communication. The practice of representation
in a dramatic performance is two fold: realistic (Natural, popular)
lokadharmi and conventional (theatrical innovation, and used conventionally)
natyadharmi (Natyasastra, Chapter VI, verse 24, as found in the translation
of Ghosh, 1967).
In other words, the communication in aesthetic arts
is carried on both by natural (realistic) and conventional signs. Of
all the modes of nonverbal communication, gestures and implied meanings
in oral delivery have been given pointed attention in the elucidation
and exploitation of nonverbal communication for aesthetic arts.
As regards implied meanings, we may make a brief statement here on the
role of suggestion as treated in the Dhvani School of Sanskrit scholars;
properly, this belongs under the nonverbal characteristics of language
use and silence.
In the course of our discussions on the scope and definition
of nonverbal behaviour, we suggested that implied meanings, conveyed through an
absence of linguistic units, are a form of nonverbal expression. In
the Dhvani School of poetics, it is suggestion/implied meaning that
is considered the essential characteristic of good poetry.
The School, in its analysis of the essentials of poetry, finds that the
contents of a good poem may be generally distinguished into two parts.
One part is that which is expressed and thus it includes what is given
in words; the other part is the content that is not expressed, but must
be added to it by the imagination of the reader or the listener. The
unexpressed or suggested part, which is distinctly linked with
the expressed and which is developed by a peculiar process of suggestion,
is taken to be the soul or essence of poetry.
The suggestive part is something
different from the merely metaphorical. The metaphorical or the allegoric,
however veiled it may be, is still in a sense expressed and must be
taken as such; but the suggestive is always unexpressed and is therefore
a source of greater charm through its capacity for concealment; for
this concealment, in which the essence of art consists, is in reality
no concealment at all. The unexpressed in most cases is a mood or feeling
(rasa) which is directly inexpressible. The Dhvani School took up the
moods and feelings as an element of the unexpressed and harmonized the
idea of rasa with dhvani.
It is suggested that poetry is not the mere
clothing of agreeable ideas in agreeable language. In poetry, the feelings
and moods also play an important part. The poet awakens in us, through
the power of suggestion inherent in words or ideas, the feelings and
moods. Rasa is brought into consciousness by the power of suggestion
inherent in words and their sense. Thus, nonverbal communication in
aesthetic arts is viewed in Indian treatises as the spectacular presence
of physical manifestation and the suggestive absence of vocal elements.
In the Dravidian school of grammar (the Tolkappiyam of the pre-Christian era,
300 B.C.?) also, the description and study of nonverbal behaviour is an
integral part of grammar, poetry and drama. Nonverbal communication
is seen as anchored to physical (and physiological) manifestations.
The term used to refer to the nonverbal itself clearly reveals that
the idea of nonverbal communication is grounded in physical and physiological
manifestations. MeyppaaTu (mey meaning 'body' and paaTu meaning 'the acts'
based on body or expressed through bodily acts) is the term used to
refer to those manifestations which appear on the body of an individual
as a sign of what goes on inside the mind. Those manifestations for
whose understanding there need be no deliberation and whose occurrence
is revealed (in poetry and drama) in a natural manner through the bodily
acts form the scope of the study of nonverbal behaviour. The Tolkappiyam
presents eight types of meyppaaTu. All of these are grounded in bodily
manifestations. Each one of these eight manifestations is related to
four moods or feelings. These moods or feelings may be either causative
or consequential. In other words, the eight major manifestations are
related to 32 different types of moods/feelings (eight manifestations,
each tied to four moods); the latter could be either the causative
mechanisms or the consequential results. Commentators have differed among
themselves as to the content of the 32 items, but not on the essentiality
of bodily acts for nonverbal communication, it being the natural, external
manifestation of internal states, retrievable and comprehensible without
deliberation. It is also considered an essential
component of poetry. The grammar prescribes that poets are not to
refer to the feelings as such, as experienced by individuals, but only
to their external manifestations on the body.
By reference to the bodily
manifestations, and with the help of such references, the reader retrieves
the causative and consequential contexts of the poem, its intent and
so on. Because of this device, suggestion reigns supreme in poetry.
The injunction that the poet is not to refer directly to the feelings
of characters but only to bodily manifestations, while recognizing the
communicative function of bodily manifestations, aims at making a poem
more suggestive and open to varied interpretations and enjoyment. The
nonverbal mode is considered a tool to express the internal states.
The scheme also includes certain verbal acts as part of the nonverbal.
We see that even speeches by the heroine and others have been included
as forming part of the (nonverbal) group. If the speeches are mere expressions
of inner thoughts, they are speeches; but if they are emotional outbursts
of inner commotion and feeling, they are certainly meyppaaTu. If we closely
scrutinize the list of meyppaaTus in the Tolkappiyam, we will see that only
such emotional expressions have been listed under meyppaaTu (Sundaramurthy).
Suggestive power brings under the rubric of the nonverbal whatever
has been left out, not said, in the verbal act but is communicated precisely
because of its being left unsaid. Another dimension
is that the nonverbal also includes the verbal if the latter
is an outburst of emotion.
Note that these viewpoints are also currently
held in modern studies of nonverbal behaviour (See Mehrabian, 1972).
Also note that in traditional Indian treatises the nonverbal exploits
both aural and visual media. The same classification of the nonverbal
we find in the traditional Indian grammars is also found in several
modern studies of nonverbal behaviour.
3. 6. Literature and Text-oriented Studies of Nonverbal Behaviour
Creative artists provide insights into the human mind and into
individual and social thought and behaviour.
Both intuitive observation and empirical experimentation on nonverbal
behaviour benefit greatly from absorbing what creative artists have
to say on various facets of nonverbal communication and what they have
identified and exploited as regards nonverbal behaviour and communication
in their works.
Creative artists are similar to those investigators who
prefer to rely mainly on their own intuitive analysis, but with one difference.
Investigators may tend to look at an object and/or a phenomenon
with their own set of rules, ideas and concepts, whereas creative
artists may look at the same object and/or phenomenon from many different
angles, or rather get into the soul and body of their characters, in order to provide readers with a comprehensive or suggestive picture.
Note, however, that such
a picture may at times be quite far from reality.
In literature, the nonverbal
behaviour modes depicted by authors may illumine the content or may themselves
be the content of the literary work. Texts provide records of nonverbal
communication of the past as well as of the present. They may be
codified ritual texts, didactic works, religious discourses,
or literary or folk episodes handed down from generation to generation.
These provide a clue to the belief systems of societies and present
the worldview of the society whose behaviour they regulate or once regulated.
Textual analysis gives us rare as well as frequent practices and indicates
the significance of nonverbal communication across several social and
spatio-temporal levels. The past is linked with the present in textual
analysis; the present is more clearly revealed through the past and its understanding.
Textual analysis requires several tools: semantic analysis, morphological
and syntactic description, correct identification and interpretation
of the act described in the text, and the establishment of linkages between
items across texts.
Assessment of the correctness of an interpretation requires
several measures, such as identification of word roots, morphological
patterns, syntactic comparison and the establishment of patterns. The most
important function of the analysis of nonverbal behaviour as found in texts
is the understanding of the current behaviour that is narrated. Such
analysis opens up a mine of information. In literary texts, such as
novels, the story is carried forward and established by what the characters say
(linguistic behaviour) and by descriptions of the nonverbal acts indulged
in by the characters. Punctuation marks are only one device, which
gives focus to some paralinguistic features. Other nonverbal communicative
acts are revealed in terms of proxemic behaviour, expressions via eye
and face, kinesics, use of implied meanings and so on.
A large part
of the author's narrative, without anyone being aware of it, is aimed
at the description of the nonverbal communicative acts of the characters.
Thus, because of the infinite possibilities of human stories and acts,
and because of the insightful observations and artistry of authors,
literary texts also become a mine of information for those who propose
to study nonverbal communicative acts.
The paralinguistic characteristics
are conveyed by authors in two ways. One is through the use of punctuation
marks, both conventional ones and ones specifically created by the
authors themselves. Punctuation marks are of a limited quantity;
not many have really been added to the available set, and in Indian
languages they were largely adaptations from European languages. Repetition
of a punctuation mark, reversal of its placement (in contrast to normal
practice), omission of a punctuation mark where it would generally be
expected, peculiar devices either specially defined
or brought from a stock of symbols used elsewhere for other purposes
but now pressed into service as punctuation marks, and tinkering with
spelling are some of the initiatives one notices in this area.
Another device resorted to, to give an aura of the paralinguistic characteristics,
is their description: sometimes through metaphorical transfer, sometimes
through foregrounding processes (foregrounding refers to a stimulus
which is not culturally expected in a social situation; when something
is foregrounded, it provokes special attention; foregrounding
is generally an intentional distortion of the linguistic), and many a time
by impregnating an ordinary word with potent meanings. Poyatos
suggests that it is the depiction of the linguistic-paralinguistic-kinesic
structure of the people involved in the story that conveys a feeling
of authenticity and becomes a vehicle to transfer what the author has
created to the mind of the reader.
Nonverbal communication, in the hands
of authors, performs six functions, according to Poyatos. Nonverbal
communication brings about physical realism, distorting realism, individualizing
realism, psychological realism, interactive realism and documentary
realism in literary texts. Physical realism conveys the sensorial perception
of people's behaviour. Physical realism is differentiated from psychological
realism. In psychological realism, the narration of the author delves
into the subtle inner reactions, which may be both body-based and purely mind-based.
In distorting realism, the literary, or artistic, expressionistic rendering
of physio-psychological reality is "meant to ridicule, to offer a caricature
of reality, or, truly to show what the eyes cannot see."
Individualizing realism is shown in "the conscious effort to differentiate the characters
as to their physical and psychological characteristics, by means of
their verbal repertoires and, in the best case, by their nonverbal ones
as well." Poyatos sees interactive realism employed by authors as "a
thoughtful depiction of the mechanism of conversation mainly in face
to face encounters."
Documentary realism is historical realism and
is a consequence of physical realism as regards depiction of nonverbal
behaviour. Ritualistic and etiquette behaviours, occupational activities,
general task-performing activities, and activities conditioned by clothes,
hairdo, furniture, etc., are part of this realism.
Poyatos also identifies
four ways by which authors usually transmit nonverbal behaviours
in the narrative text. One way is by describing the behaviour and explaining
its meaning. This is plain and has been exploited for a long time.
Although this method is plain, it in no way diminishes the storytelling
so long as the artistry and content of the story are superb and associated
with some greatly influential thoughts.
Also note that this plain way
of presenting nonverbal behaviours may be dictated by the current practices
in storytelling and could also be a stylistic marker of individual
authors. Another process of transmitting nonverbal behaviour is by describing
the behaviour without explaining its meaning.
This is generally meant
for a contemporary audience familiar with the meanings of the nonverbal
behaviour described. Note also that in contemporary contexts an obtuse
nonverbal behaviour, described but with its meaning left unexplained,
becomes a technique of narration, leaving more to the personal abilities
and sensitivities of readers to retrieve the meanings.
A third way is
by explaining the meaning without describing the nonverbal behaviour.
This meaning may or may not be fully understood by the reader in the
same manner it is meant by the author.
Another method of presenting
nonverbal behaviour in the narrative text is "by providing a verbal
expression always concurrent with the nonverbal one, which is important,
but not referred to at all."
Poyatos also finds that the nonverbal repertoires
of the characters play four definite and important functions in narrative
technique. These are initial definition of the character, progressive
definition, subsequent identification and recurrent identification of
the character. Initial definition of the character is done by means of
one or more idiosyncratic linguistic, paralinguistic and/or kinesic
features. These features include use of verbal expletives, personal
choice of words, a particular tone of voice in certain situations, a
gesture, a socially but individually conditioned way of greeting others,
other manners and mannerisms, a typical posture which we can identify
as a recurrent behaviour, etc.
Progressive definition of characters
through nonverbal behaviour is by means of adding gradually new features
as the story proceeds. "A feature adds to another feature previously
observed, complements it, builds up the physical as well as the psychological
or cultural portrait, and assists the reader in the progressive total
appreciation of the narration."
Thus, in a narrative text, the depiction
of nonverbal behaviour has several functions to perform: it carries the
burden of the story; it complements what the characters say; and without
such complementation a comprehensive locale and content cannot be
built for the story to proceed further and be comprehended by the readers.
The depiction of nonverbal behaviour also provides various types of
realism to the story, while placing at the same time various means
at the disposal of the author: various processes to define the characters
and to retain and recall such definitions to meet the demands of the
story as well as the artistry.
Both textual analysis and the analysis
of literary works provide us with insightful identification of the types,
functions and defining characteristics of nonverbal communicative acts.
Empirically oriented experimental investigations of nonverbal communicative
acts can draw from this mine of information so as to fashion the acts
for controlled experimental studies.
References

Birdwhistell, R. L. 1970. Kinesics and Context: Essays on Body Motion Communication. University of Pennsylvania Press, Philadelphia.
Bloomfield, L. 1933. Language. Indian Reprint: Motilal Banarsidass, Delhi. 1963.
Cherry, C. 1980. The Communication Explosion. In Foster, M. L. and Brandes, S. H. (Eds.) Symbol as Sense: New Approaches to the Analysis of Meaning. Academic Press, New York.
Codere, Helen. Ed. 1966. Kwakiutl Ethnography, by Franz Boas. The University of Chicago Press, Chicago.
Eco, U. 1977. A Theory of Semiotics. Macmillan, London.
Ekman, P. and Friesen, W. V. 1969. The Repertoire of Nonverbal Behavior: Categories, Origins, Usage and Coding. Semiotica 1: 49-97.
Ghosh, M. 1967. (Translator). The Natyasastra of Bharata Muni. Vols. I and II. Manisha Granthalaya Private Ltd., Calcutta.
Hall, E. T. 1959. The Silent Language. Doubleday & Company, Inc., Garden City, NY.
Hall, E. T. 1963. A System for the Notation of Proxemic Behavior. American Anthropologist. 65, 1003-1026.
Hall, E. T. 1969. The Hidden Dimension. Doubleday & Company, Inc., Garden City, NY.
Hall, E. T. 1977. Beyond Culture. Doubleday & Company, Inc., Garden City, NY.
Harrison, R. P. 1973. Nonverbal Communication. In I. de Sola Pool, W. Schramm, N. Maccoby, F. Frey, E. Parker, and J. L. Fein (Eds.). Handbook of Communication. Rand McNally, Chicago.
Mehrabian, A. 1972. Nonverbal Communication. Aldine Publishing Company, Chicago.
Morris, C. W. 1938. Foundations of the Theory of Signs. Vol. I, No. 2. University of Chicago Press, Chicago.
Peirce, C. S. 1931 - 1935. The Collected Papers of Charles Sanders Peirce, Vols. I to IV. Harvard University Press, Cambridge, Mass.
Piaget, J. 1960. The Psychology of Intelligence. Littlefield, Adams, Paterson, NJ.
Poyatos, F. 1977. Forms and Functions of Nonverbal Communication in the Novel: A New Perspective of the Author-Character-Reader Relationship. Semiotica 13: 199-227.
de Saussure, F. 1915. Course in General Linguistics. Tr. by Wade Baskin. Philosophical Library, New York.
Sebeok, T. A. 1976. Contributions to the Theory of Signs. Peter de Ridder Press, Lisse.
Sommer, R. 1967. Small Group Ecology. Psychological Bulletin 67: 145-51.
Trager, G. L. 1958. Paralanguage: A First Approximation. Studies in Linguistics 13: 1-12.
West, La Mont. 1963. Aboriginal Sign Language: A Statement. In Stanner, W. E. H. and Sheils, H. Eds. Australian Aboriginal Studies. Oxford University Press, London. | http://www.languageinindia.com/sep2003/nonverbalbehavior.html | 13 |
22 | The Urban Frontier
Individual pioneers who braved the mountains and plains of northeast Colorado during the late nineteenth century were important to the land's development, but equally crucial were the townspeople of the time. They faced many of the same hardships, along with some problems peculiar to urban environments. Towns served as centers of political and commercial activity. Settlements came to exist for various reasons. Some grew because of boosters, while others were founded as part of larger projects such as agricultural colonies. Some places thrived due to locations near mining areas or on transportation corridors. All these "cities" rose and fell depending upon whether they were centers of commercial activity. Trade volume varied, but each town that survived did so because it developed a hinterland exchange network. Mining towns were among the first to use such connections. Shopkeepers who moved to these communities realized the transitory nature of mining and how their livelihoods could be wiped out overnight if the lodes played out. Businessmen sought to establish a wider economic base. One way was to build on existing trade relationships and broaden them through connections in other camps. Merchants also supported projects to improve transportation and communication. As the mining frontier spread outside northeastern Colorado, mercantilists in the older towns worked to establish themselves as outfitters for new mining districts. These efforts often paid handsomely, but only if local mineral business continued to prosper. If the mines closed for lack of ore or other reasons, nearby "cities" melted into ghost towns. The process could only be reversed if new strikes were made or replacement industries like ranching or lumbering could be established. Throughout their lives, these mountain communities were dependent upon cities along the front range to funnel goods and services into the mining areas. Four northeastern Colorado towns, in particular, developed this role by the late nineteenth century. They were Fort Collins, Boulder, Denver and Colorado Springs. Of these, Denver and Boulder began as supply centers. The others grew thanks to their locations near the mountains or their good transportation connections. All four cities experienced changes identified with major metropolitan areas of the period. They were not only centers of commerce but also of politics, communications, culture and finance. Their place as financial centers, especially Denver's, became significant after 1870 when Colorado's industries attracted investors from around the world. The city's banking houses acted as funnels for new dollars as they came into the state. The same institutions served as information disseminators and boosters of local enterprise.
These four cities realized that if they were to fulfill their roles they had to encourage farming on the plains and attract farmers to urban markets. Initially, agricultural exchanges were simple affairs that allowed farmers to deal on a personal basis with merchants. After purchasing crops from growers, middlemen sold and shipped goods to mountain consumers. As transportation became readily available, this pattern changed. First came specialization of labor, which led to the creation of facilities like grain exchanges and stockyards. This trend was furthered after railroads reached northeastern Colorado, which was then connected to national markets.
The major front range towns soon became dependent upon communities out on the plains as a connection with producers. This was due, in large part, to rapid growth in the area. As more farmers and ranchers settled, it became impractical for each one to deal on an individual basis with brokers and dealers at trade centers. Instead, bargains were struck with buyers in smaller towns who then shipped to larger towns. Farmers also had needs like supplies and clothing, and communities sprang up across northeastern Colorado to meet these demands. Some places grew into medium-sized cities that became quite permanent. Other villages, founded during the dryland farming boom of the late 1880s, turned into ghost towns by the late 1890s, when drought and low prices ended local expansion of agriculture.
All the cities in northeastern Colorado served functions beyond the simply economic. They became centers of activity for those living near them. One facet was to serve as a focal point for a variety of social events, ranging from square dances to weddings to horse races. Celebration of national holidays like the Fourth of July and local festivals like Founder's Day all drew local crowds. When families went to town for these holidays, not only did they participate in the "jubilee" but they also shopped and did other business. During the late nineteenth century most transactions occurred because of personal contacts, not advertising. Merchants therefore found it important to maintain such relationships, and to do so they often co-sponsored local celebrations. Smaller towns scattered around the region served as social centers too. They offered retreats like saloons and pool halls where the men gathered to relax, chat and exchange ideas. Women usually had no equivalent "clubs". However, the men's organizations were strong supporters of ladies' church groups and choirs. Churches were normally located in towns, as were schools, especially those for the higher grades. Religious and educational events drew residents to urban areas from their farms and ranches. Larger towns could often support opera houses and theaters to entertain residents. In smaller villages those seeking this diversion either traveled to bigger communities or depended on troupes of traveling actors, orators or musicians. Such arrivals were sometimes "the" social event of the year.
Another function northeastern Colorado cities performed was to act as spokesmen for themselves. Businessmen founded Chambers of Commerce, Boards of Trade, Boards of Immigration, and town companies to encourage commercial activity and to attract new settlers to Colorado. These organizations assisted other "boosters", like the state or railroad companies, that also worked to entice farmers and ranchers to the land. Occasionally, they set up locator services and arranged credit through local banks to help new arrivals. The boards, at times, also found temporary housing. During times of famine and in dry years they operated as a local welfare service, soliciting aid and distributing contributions in cooperation with church groups and charitable institutions. Despite periodic bad times, promoters always painted a picture of unlimited opportunity and easy wealth for those bold enough to make the move to Colorado. Each proclaimed its town to be the "Athens of the West", the "new Denver" or the "Center of Western Civilization".
The towns of northeast Colorado supported another booster that, in many ways, was almost as unique to the American West as cowboys: the frontier editor. Newspapers of the time not only related the events of the day but also served as promoters of the local economy. William N. Byers' Rocky Mountain News set the tone for these publications when it printed its first issue in April 1859. Byers proclaimed Colorado as the home of all things good and a land of unbounded wealth. This style was copied by local newsmen across the region well into the twentieth century. When editors penned their lines, they hoped that papers elsewhere in the country would pick up stories about their region. A spirit of optimism permeated most of the area's journals. In addition to boosting their particular towns, the scribes also used their editorial pulpits to preach to residents on the virtues of various civic undertakings, from a new railroad to closing saloons on Sundays. The local paper was a point of great civic pride for citizens in their communities.
All this booster activity was aimed at one goal. Each town sought to grow and duplicate earlier eastern lifestyles. During the late nineteenth century no one dared question the desirability of growth. A city that did not increase in size was considered "dead" and awaiting burial. To encourage expansion, as well as to ease the shock of relocation, builders attempted to recreate midwestern agricultural towns wherever they could. That is why today's physical remains in Colorado's small plains towns greatly resemble their counterparts throughout the United States. The brick schoolhouse and bank, the white frame church, the railroad hotel and the false-fronted main street could easily be moved from northeast Colorado to Iowa or Illinois and not look at all out of place. Often the fight for growth slipped from boosting settlement into criticism of neighboring towns. Usually, these rivalries involved the location of a railroad or highway, businesses, the county seat, or some state facility like a college. When plums like county offices were available, each town not only put its best foot forward but did what it could to cast doubt on its neighbor's worthiness for such an honor by questioning its moral climate, vitality or progress. Battles like these raged across northeast Colorado throughout the late nineteenth century, often becoming quite heated, with threats of violence exchanged. Competition continued into the twentieth century as the "booster spirit" remained strong. Promoters and salesmen began their work as soon as cities were founded. There were nearly as many reasons for towns to spring up as there were reasons why people moved to northeastern Colorado. The first settlements developed at the sites of gold and silver strikes. Agricultural cities were founded for different reasons, such as colonies. A few places like Evans or Green City started as speculative ventures for the enrichment of their founders through the sale of land and lots. In this same manner, some towns were set up by town companies that gambled on future growth in the area. Denver and Boulder were in this category. Yuma and Wray were examples of towns that developed because people in the area needed a place to trade. Both Fort Collins and Fort Morgan got their starts when settlers took up land near military posts to exploit the local markets the Army offered. The beef trade of the later nineteenth century led to the development of "service centers" for cattlemen and cowboys. Karval was an outstanding example of such a community. However, northeast Colorado never had cities that survived being founded as "cattle towns"; rather, they were like Karval in that they evolved into this role from different beginnings, or they grew up near the headquarters of major ranches.
The spread of railroads, and other transportation, across northeast Colorado served as the impetus for some cities to begin life. Colorado Springs, for example, was founded as a subsidiary venture of the D&RG railway. Other places grew because of the locations of rail service facilities. Junctions, division points, engine houses and coaling stations all tended to increase a town's potential for expansion. One example of a new town at an important rail facility was Julesburg. It was built where the Union Pacific's cut-off for Denver joined the main transcontinental route. In fact, it was first called Denver Junction. Two towns that also found their growth stimulated by railroads were Sterling and Limon. Rail companies were well aware of the effect their choice of facilities had on the area and did what they could to encourage good will between themselves and local residents.
The cumulative effect of town building and promotion was rapidly felt in northeast Colorado. Between 1870 and 1875 towns doubled their population as the region became urbanized. These growth trends continued throughout the late nineteenth century and into the twentieth. The presence of towns had major impacts on the region's history during this period of intense use. Life in northeastern Colorado cities and villages during the years 1860 to 1890 was similar to that in most such areas of the United States, while at the same time having a certain commonality with the rest of Colorado. Townspeople needed certain services that could best be supplied by local government or through volunteerism. Most important were police and fire protection. Nearly every community in this area went through a lawless period. For some it was short, as local administration quickly established itself. Elsewhere, residents were not so lucky and had to live in fear of crime. Many factors contributed to lawlessness or the lack thereof. The "boom" atmosphere and get-rich-quick mentality of mining camps was quite different from the more sedate Greeley or Fort Collins. This came not only from differences in organization but also from the settlers themselves. Miners were somewhat more accustomed to violence and had little regard for the future of their towns, while farmers sought to create utopia in Colorado. When severe infractions occurred, miners took care of matters directly in their "miners' courts", having little desire to pay for jailhouses. When crime could not be controlled by these rump courts, "vigilance committees" formed and took more permanent steps in problem solving. After the boom years passed in the high country, a territorial government was formed (1861) and the problem of law enforcement was turned over to counties or towns. In farm villages, generally founded after territorial (or state) status was conferred on Colorado, residents tried to avoid lawlessness by appointing a town marshal and setting up a local court system as part of the charter process. If taxes could not be raised to pay salaries for law enforcement agents, volunteers served as watchmen. All Colorado towns passed through these stages as they developed. However, it was only the larger cities, like Denver, that turned to a paid police force to replace marshals and to control crime.
Protection from violence was only one service citizens sought from local government. Second on their list of needs was fire suppression. Most buildings were made of wood, or contained considerable lumber, which dried to tinder in the arid west. The slightest spark could ignite a fire capable of turning an entire town into smoldering ashes. To combat these events, the citizenry turned out and formed bucket brigades to stop the flames. This volunteer spirit was organized into fire companies. These groups not only raised money to buy equipment and build houses to store it, but also proved to be social outlets for the members. Firehouses turned into club rooms and the companies became fraternities of sorts. Some of the larger towns were served by more than one such company, and competition developed. Each sought to beat the other to a fire, to be the most efficient at extinguishing it, and to gain "glory" for the company. Contests were held between volunteer firemen during town festivals. However, at times rivalry was carried into action at fires when fist fights broke out between companies. More than one resident recalled watching his or her home burn to the ground while firemen ignored the flames and wrestled with each other instead. To control freewheeling fire companies, communities put local government in charge of volunteer fire protection. As with police services, only a few major cities could afford full-time paid fire departments.
One area that nearly every northeastern Colorado community left to private enterprise, between 1860 and 1900, was public utilities. Providing water or gas proved lucrative in the larger communities. Colonel James Archer and his Denver Gas Company and Denver Water Company were prototypes for others in Colorado. He originally came to the area with the Kansas Pacific railroad during the late 1860s. Once he saw the Queen City, Archer was convinced that this town would become the business and political center of Colorado, and possibly the Rocky Mountain West. Archer likewise felt this city was an excellent place to promote water and gas operations and to supply these necessities to the population. With help from local boosters, especially real estate magnate Walter S. Cheesman, Archer was able to get his companies started. By the early 1870s, less than fifteen years after it was founded, Colorado's capital city had running water and gas for illumination. In places large enough to support similar systems, they were built, either by private individuals or by stock companies. In only a few cases did local government take care of securing these services. For those places too small for such luxuries, people used private wells for water and kerosene or coal oil for lighting.
The sense of civic accomplishment was further enhanced if a town was able to support its own urban transit system. Again, Denver took the lead in this development. During the seventies the most reliable forms of mass transit were horsecars and cable cars. Horsecars were small wagons pulled by animal power along pathways, whereas cable cars moved by long cables set between the rails. The cables were in continuous motion, and the vehicles hooked on to them and then released when stops were made. Both types of cars ran along rails laid in the streets. Neither method was particularly efficient and both were limited in the distances they could cover. Therefore they were not well suited to larger, dispersed cities. However, by the 1890s horsecars were popular in smaller towns where only short hauls were involved. The 1880s saw improvements in intraurban transportation technology when the use of electrical power was introduced. Eastern cities were first to take advantage of the changes. But, once systems were proven, Denver developers William G. Evans, David H. Moffat and Walter Cheesman, who owned the cable car company, investigated the possibilities of electrifying their operations. They founded the Denver Tramway Company to serve the city. The tramway served Denver with trolley cars powered from overhead wires by 1881. During the 1890s, and into the 1900s, Fort Collins, Colorado Springs and others followed the capital city's example and converted their horsecars to electric streetcars. These towns proclaimed themselves as thoroughly modern as any city back East.
Support of public education was yet another way residents demonstrated their commitment to civic betterment. Debates on how best to provide "learning" for area children began as soon as the gold rush was well established. In 1859 Professor Owen J. Goldrick, teacher and journalist, moved to Denver, and by the end of that summer he had found enough students to open a subscription school. In this institution the teacher's salary and expenses were paid by fees from parents of the students, not by taxes. Goldrick's example was duplicated in other early settlements, especially mining camps that did not have strong local governments during the early years. As more settlers filled northeastern Colorado after the Civil War, consideration was given to the creation of further public schools. In areas where they were set up, educational facilities came under the control of elected boards of education, organized by city or rural area. This region's early residents placed a high value on education, and by the 1890s nearly every child in northeast Colorado had at least a primary education available. Brick, stone and woodframe schoolhouses were present in nearly every community and dotted many rural places.
The high literacy rate of residents and a need for information led to the creation of libraries throughout northeast Colorado. The first appeared in the early 1860s as reading clubs and subscription facilities where those who wished to borrow books either paid membership fees or contributed volumes to the organization. Private libraries became popular by the 1890s. The benefits of a well informed public were not lost on town councils and other governmental groups. In towns where population and tax revenues warranted, a movement took place to create publicly supported libraries. The trend developed slowly, and not until the early twentieth century did public libraries become a major form of book circulation in northeast Colorado. Another way that northeastern Coloradans kept in touch with current events was by attending lectures put on by travelling orators. Topics ranged from debates on social problems to literary figures offering interpretive readings. Such cultural shows were very popular with residents during the late nineteenth century. These road shows offered information and social exchange not otherwise available. Eventually, the national popularity of these events led to the creation of a nationwide Chautauqua Society and the subsequent development of a Chautauqua lecture circuit throughout northeast Colorado. These organizations were outgrowths of a summer adult education program started in 1874 at Chautauqua, New York.
As an outgrowth of their early evolution, at least a few of the region's cities developed "unique" images as seen by outsiders. Once boosters realized this asset, they did what they could to enhance such perceptions. One place with quite individualistic characteristics was Colorado Springs. The town was founded by William J. Palmer as a perfect place for America's genteel and European tourist society to have their own hideaway. Because of the close ties that many of Palmer's associates maintained with England, the new community took on a marked resemblance to a British country town during the 1870s. Outsiders soon referred to Colorado Springs as "Little London". During the late 1870s the city lost some of its "Englishness", yet it remained "genteel". Residents actively supported the performing arts and other aspects of culture. For more than one travelling troupe, Colorado Springs was their only Colorado stop, and shows that were hits elsewhere in the state often closed in the Springs because of poor quality. The Springs became a favorite summer resort for eastern society. To further its elitist image, town fathers discouraged industrial growth that could harm the city's pristine appearance. Instead, a renewed Colorado City became an industrial suburb of Colorado Springs, with smelters, rail yards and factories located there.
If Colorado Springs was the region's cultural center, Denver, seventy miles to the north, evolved into the most cosmopolitan of northeast Colorado's cities by 1890. Because civic leaders placed heavy emphasis on growth and made no attempt to restrict most types of business, the community developed a heterogeneous population. In its attempt to become the commercial center of the Rocky Mountain West, Denver was home to some of the West's major stockyards and grain depots. Railroads, smelters and heavy industry were all welcomed with open arms. So too were banking houses and businesses that facilitated economic dealings, like fine hotels. Due to intense industrial activity, nearly every ethnic group was represented in the population. Entertainment and dining facilities of all ethnic types could be found in the Queen City. These factors, the leadership role the community played in Colorado, and the development of public utilities and services gave the city a good reputation known across the United States and western Europe. Two other cities in northeastern Colorado also gained attention that they used to promote themselves between 1870 and 1890. They were Greeley and Fort Collins, both best known as centers of farm activity. Their image was due to two factors. First were the early successes settlers in those two places enjoyed. Prosperity made for good advertising. Secondly, town fathers in each community sought to enhance their public images by supporting farmers' markets, stockyards and other ancillary facilities. In the case of Fort Collins, this extended to winning the state agricultural college. Fort Collins and Greeley were leaders in agri-business.
Each northeastern Colorado city sought growth and then, when it occurred, was forced to deal with attendant problems. One of the first was sanitation. Typical practices of the era left refuse disposal to individuals. Human waste was disposed of in privies, while roadsides and alleys were the usual receptacles for general refuse. This led to dirty, germ-infested gutters. While towns remained small, occasional clean-ups of the curbs controlled the problem, but as the cities grew they found such methods impractical. This matter led to calls for sewers as a way to remove waste. Many communities eventually built systems. Growth led to increased crime in urban areas. To prevent robberies and acts of violence, local government either increased police protection or organized citizen self-help groups such as neighborhood patrols. Certain parts of some cities were turned over to thugs and hoodlums, and instead of trying to solve crimes, police exerted more of an effort to contain the problem. This philosophy was also applied by peace keepers when facing problems such as prostitution or saloons. Accepting that those vices would continue despite efforts to eliminate them, the law enforcement officials allowed certain parts of their towns to be used for these activities, usually on an informal basis. Because each of the towns had ordinances to control vice, policemen often extorted payments from operators. In Boulder, the town council could brag that all brothels were closed within the city limits; however, they flourished beyond the border. In spite of periodic reform drives, little happened to permanently close the bars and bordellos until early in the twentieth century.
Despite these problems, early northeastern Colorado cities found themselves popular with vacationers from around the nation and Europe. The primary reason was Colorado's environment. Its scenery attracted visitors, as did the chance to see the "Wild West". Also, the dry climate was reportedly good for health and vitality, especially for those with respiratory ailments. People hoping to find cures flocked to the area, and they were especially attracted to Colorado Springs because of its genteel atmosphere and the Manitou Hot Springs, allegedly medicinal waters that would cure all manner of ailments from venereal disease to cancer. While the waters and climate were not really capable of miracles, the Colorado Springs area did prove life-giving for some, and it became home of the "one-lung army" because of the large number of its residents who suffered from tuberculosis. To serve those who came west to recover, hospitals and sanatoriums were built in both Colorado Springs and Denver, in addition to other towns throughout the region.
Railroads were another reason that tourists visited northeastern Colorado. Transportation companies worked hard between 1870 and 1910 to promote Colorado vacations, and these efforts were rewarded by increased numbers of visitors. The mountains became a major lure as described in rail brochures. Not only did these pamphlets talk of fine air, picturesque landscapes and outdoor activities, they also touted excursion rates and special trains to facilitate travellers' needs. In one case the railroad itself became a tourist attraction: the Georgetown Loop between Georgetown and Silver Plume. If the loop was not enough, by the 1890s passengers could continue on to the end of track and then transfer to the Argentine Central Railway and ride to the top of Mt. McClellan. Natural beauty and the railroads' promotional work came together to make tourism an important and lasting segment of the region's economy by 1900.
In addition to being centers of tourism and business, cities and towns of northeast Colorado were also the places of higher education in this state. Colorado territory's first college was located in Denver. It was established by John Evans to help give a further air of permanence and stability to Denver and to offer a service considered desirable. It was known as the Colorado Seminary, but this name was later changed to Denver University. Other educational institutions located in or near the "Queen City" during the late nineteenth and early twentieth centuries included Loretto Heights College, Regis College, Colorado Women's College, and Westminster College. This latter facility was forced to close soon after its opening in 1915. Denver was not the only city in northeastern Colorado to become a center of learning. Boulder beat several other contestants during the 1870s to become the home of the University of Colorado in 1876. Fort Collins boosters, a few years later, convinced the legislature to locate the state's new agricultural college there. This school was much later renamed Colorado State University. Not to be left out when Colorado was establishing these institutions, both Greeley and Golden made bids and received the honor of having higher education facilities located in their towns. Greeley got the State Teachers College, which later became the University of Northern Colorado. Golden became home of the Colorado School of Mines because of its proximity to the mountains and its long involvement with mining. It specialized in training geologists and mining engineers. Unable to get a state school until the mid-twentieth century, Colorado Springs nonetheless supported its own place of higher learning: Colorado College. It was begun as a religious school and is privately funded to this day. With most of these colleges in operation by 1890, it was not surprising that the cities in which they were located became the centers of thought on social and political problems faced by Coloradans of the time. However, ferment was not limited to college communities. Rather, most towns became points of political action as both the nineteenth century and northeast Colorado's frontier period drew to a close together.
1Charles N. Glaab and A. Theodore Brown, A History of Urban America, (New York: Macmillan, 1976), pp. 99-112, hereafter cited: Glaab and Brown, Urban America, and Daniel J. Boorstin, The Americans: The National Experience, (New York: Random House, 1965), pp. 119-122, hereafter cited: Boorstin, Americans.
2Duane A. Smith, Rocky Mountain Mining Camps: The Urban Frontier, (Bloomington: Indiana University Press, 1967), pp. 242-252, hereafter cited: Smith, Rocky Camps, and Mrs. Alice Griffin Buckley interview, volume 355, Civilian Works Administration Interviews, Colorado State Historical Society, hereafter cited: CWA, CSHS.
3Lyle W. Dorsett, The Queen City, A History of Denver, (Boulder: Pruett, 1977), pp. 4-6, 57, 87, hereafter cited: Dorsett, Queen; Guy Peterson, Fort Collins: The Post, The Town, (Ft. Collins: Old Army Press, 1972), pp. 58-63, hereafter cited: Peterson, Ft. Collins; A.A. Woodbury interview, vol. 343, CWA, CSHS, and Tobias Mattox interview, vol. 343, CWA, CSHS; Amanda May Ellis, The Colorado Springs Story, (Colorado Springs: House of San Juan, 1975), pp. 33-36, hereafter cited: Ellis, Springs, and James E. Fell, Jr., Ores to Metals, The Rocky Mountain Smelting Industry, (Lincoln: University of Nebraska Press, 1979), p. 53, hereafter cited: Fell, Ores.
5Lucas Brandt interview, vol. 353, CWA, CSHS; J. G. Abbott interview, vol. 352, CWA, CSHS; Kizzie Gordon Buchanan interview, vol. 341, CWA, CSHS; L. B. Gifford interview, vol. 353. CWA, CSHS; and "Larimer Towns," CWA, CSHS.
13Peterson, Ft. Collins, pp. 24-38; Mary Liz Owen and Dale Cooley (eds.), Where Wagons Rolled, The History of Lincoln County and the People Who Came Before 1925, (n.p.: Lincoln County Historical Society, 1976), pp. 9-10, hereafter cited: Owen and Cooley, Wagons, and Woodbury, CWA, CSHS.
26Herbert M. Sommers, "My Recollections of a Youngster's Life in Pioneer Colorado Springs," (Colorado Springs: Dentan & Berkeland, 1965), pp. 54-57, hereafter cited: Sommers, Youngster; Dorsett, Queen, pp. 91-94, and Ketley, CWA, CSHS.
29Morris Cafky, Colorado Midland, (Denver: Rocky Mountain Railroad Club, 1965), pp. 4-9; Cornelius W. Hauck, Narrow Gauge to Central and Silver Plume, (Golden: Colorado Railroad Museum, 1972), pp. 79-81, and Athearn, Coloradans, pp. 230-231.
31"The Colorado School of Mines," vol. 354, CWA, CSHS; James F. Willard, "Early Days at the University of Colorado," The Trail, 7 (May 1915): 5-16; Peterson, Ft. Collins, p. 63, and E. C. E., CWA, CSHS.
| http://www.nps.gov/history/history/online_books/blm/co/16/chap8.htm | 13
29 | The United States is home to people of almost every race, religion, and
nationality. The Indians and Eskimos have been here for thousands of years.
Other groups arrived later and came in hope of finding riches, adventure,
and a new life. And some, fleeing war, famine, and persecution, sought only
safety and a chance to survive. Black people alone were brought here unwillingly,
stolen from their homes and forced to live as slaves. In spite of this cruel
beginning, black Americans have played a major role in defining and shaping
American beliefs, customs and traditions. From the beginning they have helped
insure the nation's security and economic well-being. Black Americans are
among our earliest explorers and have been among the first people to expand
and settle the frontier. The black Americans profiled on United States
postage stamps were selected because of the contributions they made
to our life and culture. They were also chosen because of the part they
played at critical points in our history. U.S. postage stamps show black
Americans as explorers, settlers, slaves and as patriots in vigorous pursuit
of freedom, liberty, and equality. This Black History Tour via postage stamps
is an attempt to tell the story of a way of life developed by black people
in a white society.
Social and economic institutions often develop out of necessity, and
it is not until later that rules and justifications develop. When society
looks at labor that must be done, most often they turn those unpopular jobs
over to those who have no choice except to do them - slavery. Black American
history probably got its beginnings when a group of black slaves were forcibly
shipped from their homes in Africa to America where they were compelled
to work. They came in chains, brought to the New World as slaves. They did
not immigrate, seeking greater opportunity, like others who came to America.
They were seized from their villages and homes and not allowed to take any
possessions with them.
The ancestors of most black Americans came
from the African continent. Most of the Africans imported to the Americas
came from Gambia, the Gold Coast, Guinea or Senegal. The natives of Senegal,
who were often skilled artisans, brought the highest prices. On the other
hand, the Eboes from Calabar were rated as undesirable merchandise, as they
frequently preferred suicide to bondage. Those from the Gaboons were considered
the least desirable. The moving of African slaves to other countries began as
early as the fifteenth century, but the first slaves to land in British
America were brought to the colony of Virginia by the Dutch in 1619. Over
the next 250 years, approximately one million slaves were imported into North
America. The aim of the slave trade was to make money for the ship-owners who,
having bought slaves very cheaply in Africa, sold them again in the Americas
at a large profit to slaveowners, who would use them to do all the hard
labor on farms and cotton plantations. Since making money was the only objective,
no consideration was given to the Africans as human beings. Once in America,
many slave-owners treated slaves just like common animals, often whipping
them and even tying them up.
The slaves who arrived at the African
slave markets came from tribes all over Africa, and they were thrown together
in the slave ships without regard for tribe or language. In fact, slave-ship
captains made a point of not putting slaves from the same tribes together,
for if the slaves had been able to talk with one another, they also might
have been able to plan revolts. The same was true of slave owners in the
New World. It was in their best interests that slaves not be able to communicate
with one another. The slave-ship captains and slave owners did not understand
that the slaves were able to communicate with one another quite well through
their music. Through their songs, the slaves shared the rhythms of their sorrow,
their fear and their hopelessness. Through the rhythms of their makeshift
drums, they communicated their calls to rebellion.
For some time,
slave masters did not realize that the drums the slaves made from hollowed-out
logs or nail kegs, with animal skins tightly stretched over one end, were
being used for communication. They thought the slaves were just making their
African music. They knew these drum sounds carried far, even to the next
plantation, but it didn't occur to them that the drumbeats were a sort of
"Morse code" the slaves used to make plans for revolts or escapes.
When it finally became clear to the slave masters that the drums were being
used as a form of communication, drums were outlawed. But that didn't stop
the slaves from keeping the drumbeat alive. Instead, they used their feet.
America's colonial period starts with the establishment of the first
English settlements in the New World. Introduced to North America in 1619,
black slavery would darken the fabric of American life like a spreading
bloodstain. The nation had been founded by people who loved liberty, but
it became a place where human beings could be bought and sold. The African
slave trade began not with the English colonists but centuries earlier, when
Arabs and various African and European peoples forced blacks into servitude.
Eventually, European sugar planters in the Caribbean and South America began
to import large numbers of black slaves, men and women who were deprived
of their human rights, forced to live in deplorable conditions, and made
to work until they dropped.
The English colonists of North America
knew they needed helpers to build their homes, plow and harvest land, and
work in their homes. The colonists drew on a variety of sources for this labor.
Indentured servants gave up four to seven years of labor just to pay for
their transportation to America. The colonists also used apprentices as a
source of labor. Apprentices were orphans, or children of poor parents,
who were given to a farmer or tradesman to be trained. These apprentices
would be freed when they reached a specified age. And then there was slavery.
The English colonists in the New World
imported white indentured workers at first, but found there weren't enough
of them. The Indians in the Americas refused to work or proved to be poorly
fitted for long hours of hard labor. The Europeans found it easier and cheaper
to import Africans as slaves. By the seventeenth century, the African slave
trade was booming in the Americas. The slave dealers made so much money
from their human cargoes that soon Africans came to be known as "black
gold." Slaves could be secured in Africa for about $25 a head, or the
equivalent in merchandise, and sold in the Americas for about $150. Later
when the slave trade was declared illegal, Africans brought much higher
prices. Many slave-ship captains could not resist cramming their black cargo
into every foot of space, even though they might lose from 15 to 20 percent of the lot on the way across the ocean. It is estimated that 7,000,000
Africans were abducted during the eighteenth century alone, when the slave
trade became one of the world's great businesses.
Since England had
no laws that defined the status of a slave, the colonies made up their own.
These "slave codes" protected the property rights of the master.
The codes also made sure the white society was guarded against what was
considered a strange and savage race of people. Slaves had almost no rights
of their own. Some masters tried to treat slaves well. George Washington
freed his slaves in his will. Thomas Jefferson's slaves lived in brick cottages.
Jefferson Davis's slaves governed themselves through slave-run trial courts.
Harsh slaveowners also existed. They half-starved their slaves, worked them
hard, whipped them often, treated them worse than cattle, and enjoyed making
life miserable for them. When a master was cruel, the slaves had no legal
protection from his brutal treatment.
Enforcement of the slave codes
varied from one area to another, and even from one plantation to another.
Slaves who lived in cities and towns were less restricted than slaves who
lived in the country. Slaves on small farms enjoyed more freedom than those
on huge plantations. Plantation slaves often had little contact with their
masters. Their supervisors were drivers and overseers. Drivers were slaves whom the master made into bosses, which put them in a difficult position: if a driver went easy on the workers and the work was not done, he would be flogged; if he drove them too hard, he made enemies among his fellow slaves. Overseers were whites who took orders from the master. A few rose to become managers, but most did not. Even in the best of circumstances,
slaves were property and could be bought, sold, lent, or rented out. Their
opportunities to learn and achieve were very limited. The slaves had little
personal incentive to work hard. Slavery offered little room for promotions.
In the South, most slaves helped plant and harvest crops. The typical
slave worked on a small farm with one or two other blacks alongside the
master and his family. Other slaves worked in and around the master's house
instead of out in the fields. In Southern towns and cities, blacks served
as messengers, house servants, and craftsmen. In the North, farming was
not as important to the economy as it was in the South. Black slaves therefore
worked in a wider variety of jobs. They provided skilled and unskilled labor
in homes, ships, factories and shipyards.
How did slaves survive the uncertainty and the danger of harsh treatment?
How did they make the best of a bad situation? Music was a relief for them.
The slaves had their songs, and they would re-create their instruments and
their music to keep their hearts and souls alive through nearly two hundred
fifty years of slavery in the New World. They liked to dance, sing, and
play the banjo, drums or fiddle.
Despite their poor treatment, the
land and the culture had become part of them. And in spite of the fact that
most white Americans at the time did not consider blacks to be their equals,
whites had taken into their own hearts certain elements of black culture.
By the time the slaves were emancipated, they had given to America not just
the sweat of their brows and the strength of their backs, but the seeds
of the first truly American cultural gift to the world - American music.
Blues, jazz, and rock 'n' roll originated with blacks. And white performers and
groups from Benny Goodman to Frank Sinatra to the Beatles to Rod Stewart
to Boy George have said that they owe their biggest debt to black music.
By the time slavery was abolished, most ex-slaves would not go back to Africa,
for Africa was no longer their home. America was.
Despite the risks, some blacks constantly tried to undermine the slavery
system. Some slaves chose to destroy property or fake illness to avoid having
to work. Others took bolder steps to overthrow their masters by joining
slave revolts. Still others managed to escape. But many - perhaps most - slaves chose not to resist in the face of almost certain failure and death. Many were suspicious of whites who told them about the "Underground Railroad" that would take them to freedom. The Underground Railroad was composed of volunteers who would hide slaves traveling north to Canada. Slaves were
hidden during daylight hours at stops along the route and, using the North
Star, they moved in the dark to the next location 10 or 15 miles north.
Until they reached Canada, they were never completely safe. If they were
caught by a slave catcher or United States marshal, they would be returned
to their master, who would probably make a great display of flogging them.
It was risky for whites to be involved, but it was even more dangerous for the blacks who helped slaves to escape: facing a death sentence if captured, they showed great courage.
One of the best-known figures in the abolitionist movement to help free slaves was Harriet Tubman, an Underground Railroad conductor. Almost every year after 1830, the Underground Railroad helped hundreds of slaves escape to places
in the North. Abolitionists and Quakers established hundreds of stations
on the Underground Railroad in Illinois, Indiana, and Ohio. In Illinois,
the routes converged in Chicago, where slaves would leave by ship for Canada.
Ohio, with the largest number of stations, was the center of the Underground Railroad. Black writers and orators were also involved in the abolitionist movement, expressing themselves on such matters as colonization of Negroes, the institution of slavery, and the progress of the Negro as a group. Included in this group were Frederick Douglass and Sojourner Truth.
In 1936 Ralph Bunche
published "A World View of Race," in which he stated that racial
prejudice exists because of economic needs. He wrote, "The Negro was
enslaved not because of his race but because there were very definite economic
considerations which his enslavement served. The New World demanded his
labor power... but his race was soon used [as the reason for] the inhuman
institution of slavery."
Dr. Allison Davis challenged the cultural bias of standardized intelligence tests and fought for the understanding of human potential beyond race, class, and caste.
His work helped end legalized racial segregation and contributed to contemporary
thought on valuing the capabilities of youth from diverse backgrounds.
When the American Revolution started, some blacks were caught up in revolutionary
fervor. At Bunker Hill, slaves and free blacks participated, and Salem
Poor was praised by his superiors as "an excellent soldier."
When Washington took command, he told recruiters not to enlist blacks, but
some were already in the army. In October 1775, it was decided to bar blacks
from the Continental Army.
A month later, Governor Dunmore of Virginia
declared that any black or indentured servant who joined the British army
would be free. Slaves began deserting the plantations and enlisting in the
Royal Army. Wherever the British army went, slaves flocked in. The seriousness
of his mistake was made apparent to Washington when many of his own slaves fled to the British. Wisely reversing policy, in December 1775 Washington ordered that free blacks might be enlisted in the Continental Army. Most states
permitted both slave and free black enlistment in their militia. Black soldiers
participated in every major battle from Bunker Hill to Yorktown. They also
served in the United States Navy.
After the war for Independence ended,
black Americans began taking part in the general development of the country.
When any black stepped out from the crowd and showed that he or she could
grasp a complex idea and express himself or herself well in writing or speech,
that action weakened the premise that Africans were by nature inferior.
Stereotypes may not be destroyed but they can be weakened. Records indicate
that in 1772, on the north bank of what is now the Chicago River, Jean
Baptiste Pointe Du Sable erected a large cabin and went on to build other structures such as barns and storehouses. This made him the area's first permanent settler and the founder of the city of Chicago.
Du Sable also established a fur trading post there. It soon became a very
busy trading center, and eventually the settlement of Chicago sprang up
around the post.
Benjamin Banneker is well known as a surveyor who helped to lay out the streets of the nation's
capital. He also made the first clock constructed in this country. Banneker's
work in astronomy attracted the attention of learned men on both sides of
the Atlantic. Through the use of mathematics, he was able to plot the cycles
of the 17-year locust and thus help farmers to anticipate them. On April 6, 1909, history was made when two men, one black and one white, planted
the American Flag at the North Pole. Thus, Matthew
A. Henson, a black man, became one of the first Americans to reach
the top of the world. Yet, undoubtedly due to his race, he was for years
denied recognition of his role in this discovery.
The Civil War began on April 12, 1861, following an attack by southern troops on Fort Sumter, South Carolina. The disagreements between the North
and the South dated back many years. They grew out of a variety of economic
and political rivalries and issues, including whether a state had the right to secede from the Union. Slavery was also a source of conflict, but the
Civil War was not a war against slavery.
From the beginning of the
Civil War, black Americans in the North and South offered to volunteer for
military service. The South was nervous about how slaves would react to
the war. During the first year of the Civil War, participation by blacks
was limited almost entirely to non-military service. Some blacks went to
war as body servants to their masters, and others worked faithfully on plantations
and farms. The government feared that the border states might join the rebels
if blacks were enlisted and also that white troops might refuse to fight
alongside black troops. Black Americans were limited to labor behind the
lines as teamsters, camp attendants, waiters and cooks. Most slaves saw
the war as their chance at freedom. Rather than risk getting caught by patrollers,
they stayed on the farm and did as little work as possible, waiting for
blue uniforms to appear on the horizon. When the right moment came, they
joined a long procession of contrabands.
The Civil War finally came to an end on April 9, 1865 when Confederate
General Robert E. Lee surrendered to Union General Ulysses S. Grant at Appomattox Court House in Virginia. President Abraham Lincoln and most
other white northerners were eager to put the country back together again
as soon as possible. Their plans were to reorganize and rebuild the defeated
South. This program was known as Reconstruction. But less than a week after
the war ended, Lincoln was assassinated. His successor, Andrew Johnson,
was a southerner who promised to continue Lincoln's policies. Even though
President Johnson supported outlawing slavery, he made little effort to grant blacks civil rights protections or give them the right to vote. He tolerated
anti-black violence in the South and did nothing to stop white governments
in southern states from passing laws similar to the old slave codes. Republican leaders in Congress were afraid that President Johnson was just making it
easier for white Democrats to gain control once again throughout the South.
So they came up with a much harsher Reconstruction program. Under their
plan, southern states would not be allowed to rejoin the Union until the
Republicans had become stronger and until blacks were given the vote and
guaranteed civil rights. Thus, during the Reconstruction period, Union policy evolved to embrace the total abolition of slavery, as provided in the 13th Amendment to the Constitution, passed in 1865. Government policy also moved toward equality of rights for blacks, as reflected in the 14th Amendment, passed in 1868, and the 15th Amendment, passed in 1870. There was opposition to equal rights for blacks; it was almost universal in the South and nearly so in the North. Passage of the 14th and 15th amendments had been primarily
motivated by the desire of the Republican party to maintain political control
in the former Confederacy.
Blacks took an active part in all aspects
of public life during Reconstruction. They voted in large numbers and were
very active in the conventions that formulated new state constitutions in
the South. Many blacks held political office at the local and state levels.
Fourteen blacks were elected to the United States House of Representatives
and two were elected to the United States Senate. Blacks also pressed for and helped to establish public education systems where none had previously existed.
Rising racial tension and hard times in the postwar period caused some
blacks to go West. In the West, black faces had always been rare, but explorers
like fur traders Jean Baptiste Pointe Du Sable and James P. Beckwourth had
been there before the war.
Traditional accounts of America's westward expansion have excluded black pioneers and adventurers. Historians are now admitting that
thousands of black men and women played various roles in the exploration
and settlement of lands west of the Mississippi. There were more than 5,000
blacks among the cowboys who rode the ranges from Texas to Montana. The
work was hard, and the men were very dependent on each other. Race was not
a big issue in the bunkhouse or on the trail. One black cowboy, Bill
Pickett, added his bit to western legend when he invented bulldogging: taking a steer by the neck and throwing it down. Most were ropers like Bill
Pickett, the "Dusky Demon from Texas," while others were
horsebreakers, wranglers, cooks and trail bosses. Some became law enforcers
and others famous mountain men like James P. Beckwourth.
In the last quarter of the 19th century, the scientific interests of blacks seem to have been directed toward applied science or invention. Black Americans have made significant contributions to science despite the general absence of at least two basic conditions for scientific work: freedom from full-time pressures for personal survival, and a stimulating cultural environment. Slavery, segregation and cultural isolation have been the lot of most blacks in the United States. Nevertheless, scattered throughout history are individuals who have made contributions of a scientific nature for the benefit of all.
In the progress down the winding road from slavery toward freedom, black
Americans have relied on civil rights leaders and spokespersons to carry
the beacon of hope. Black leaders have been the means of communicating to
the nation the wishes of the inarticulate masses. Their tactics have ranged
from the petitions of free blacks during the infancy of the Republic to
the moral exhortation of Frederick Douglass; from the example and opportunism of Booker T. Washington to the rage and daring of Marcus Garvey; from the blunt anger of W.E.B. DuBois to the cool calculation of Charles H. Houston; and from the blazing zeal of Mary McLeod Bethune to the consuming pacifism of Martin
Luther King, Jr.
Although many great black scholars had written
about black history, no one had as yet treated the subject so systematically
as Carter G. Woodson, also known as the
"father of black history." Paul Laurence
Dunbar is best known for his poems in black dialect, which portray
the lives of black people in the rural South.
Following the Civil
War, a new group of black leaders came to the fore. By 1895 Booker
T. Washington was the most famous black American; however, with
Ida B. Wells writing and lecturing against
the evils of lynching, and W.E.B. DuBois
protesting the philosophy of accommodation to the status quo, the ground
was being prepared for the birth of the following organizations directly
devoted to the cause of racial advancement. Whitney
Moore Young is commonly credited with revitalizing the Urban League,
organized in 1910 to assist blacks moving into northern cities to find jobs
and housing. For almost forty years, A. Philip Randolph fought for improved working conditions and higher wages for all laborers. He was particularly vigorous in his vocal opposition to racial discrimination within the labor movement. He is best known for organizing the Brotherhood of Sleeping Car Porters in 1925. The Congress of Racial Equality (CORE) was founded in 1942, the Southern Christian Leadership Conference (SCLC) was organized by Dr. Martin Luther King, Jr. in 1957, and the Student Non-Violent Coordinating Committee (SNCC) dates from 1960. These groups and their leaders have different
emphases and tactics but their objectives are in the historic tradition
of black leadership: the achievement of a condition of freedom which would
render the question of color irrelevant in American life.
Removed from Africa and shackled as a slave to a plow and hoe in America,
the African was forever divorced from his native artistry and culture. The
creative African sculpture, metalwork, weaving and pottery were no longer his to pursue in the new land. Thus black Americans as a group were denied creative expression for more than a century, and it was not until the planters and merchants of the South grew rich and began to ornament their mansions and buildings that black artistic talents were employed. When given the opportunity, black Americans proved to be fine carpenters, cabinet makers, wood carvers, blacksmiths, harness makers and artists. There is little doubt that this handicraft was a carryover from ancestral Africa. Skill in slaves was sought and encouraged among prosperous slave holders, who recognized that a skilled slave was worth more as a worker or when sold.
From the pioneer days up to the industrial revolution in America, the nation
afforded little encouragement to the artists in the plastic and graphic
arts. Individuals who amassed fortunes and began to acquire art patronized European artists. It was not until about 1870 that American artists
began to gain recognition. Henry O. Tanner,
at the turn of the century, was the first black American artist to win international
recognition. Tanner was known for his genre paintings (studies of people's
daily life), landscapes and religious studies.
After the Civil War, military band instruments were in plentiful supply
and could be bought for the price of a little labor or cash. There were
also make-shift instruments and musicians strummed on homemade guitars and
banjos, played old pianos, blew horns, and beat on drums. The music came
not from a book, but from the heart. The main motive was pleasure rather
than financial reward or fame, but some managed to find all three.
Ragtime became popular around 1900. It was highly syncopated music, usually played on the piano: the left hand played the rhythm while the right hand played a bright and cheerful melody. The best-known ragtime writer was Scott
Joplin, whose fame came from such songs as "The Entertainer"
and "Maple Leaf Rag." Eubie Blake,
another famous ragtime musician, composed the famous "Charleston Rag."
Jazz grew out of ragtime and the blues. It is said that the blues grew
out of the songs of the slaves and has a sorrowful sound and message. W.C. Handy earned the title "Father of the Blues," and two of the most famous blues songs were written by him: "Memphis Blues" and "St. Louis Blues." The message of the blues is in both the words and the melody. Handy saw jazz as the third step on the continuum of black music: first spirituals, then ragtime and the blues, and finally jazz.
Jazz was
born in New Orleans with both African and European music as its parents.
Although the precise date of origin is unknown, it is clear that by the
start of the 20th century, jazz was emerging in New Orleans as a musical
form. Some historians believe it originated in Congo Square, where black
slaves had performed music and chants from their African roots. Others claim
the music originated in Storyville, a prostitution district, where black musicians entertained their white clients. Jazz is thought to borrow from a number of
sources: the blues, religious hymns and spirituals, and parts of old French
and Spanish music heard in Louisiana. One of the first great jazz bandleaders was Joe "King" Oliver, who took a basic melody and improvised from it.
Duke Ellington's and Count Basie's reputations
have long since surpassed Oliver's. Their bands drew international attention.
At the same time that the predominantly male jazz instrumentalists were creating
new jazz sounds, dozens of black female singers were creating new blues
sounds. The reason black women blues singers were more acceptable to the
larger public than black men blues singers is probably the same reason black
women performers were more acceptable in the nineteenth century - they were
less threatening to whites, and perhaps also to blacks. The blues is a form
of music very often about love, and whites were not ready to accept a black
man singing about love. Also in the black community, it might have been
viewed as unseemly for a black man to sing about love on a record because
there was a strong tradition of manliness, which included keeping one's
tender feelings to oneself. At any rate, women singers such as Bessie
Smith and Ma Rainey became very
popular in the 1920s. Ma Rainey also became
known as the "Mother of the Blues."
Robert Johnson's original blues compositions proved to be his most popular and enduring recordings. He was a blues singer and songwriter in the 1930s.
Coleman Hawkins was one of the first great
jazz soloists of the 1930s. He was a jazz composer and saxophonist. Early
in 1940 he put together a big band. His band was one of the first to record
bebop. Jimmy Rushing, a jazz and blues
singer, is best known as a vocalist with the Count Basie orchestra beginning in 1935.
In 1917 the United States entered World War I under the slogan "Make
the World Safe for Democracy." Within a week after the United States
entered the war, the War Department stopped accepting black volunteers because
colored army quotas were filled. No black men were allowed in the Marines, Coast Guard or Air Force. They were allowed in the Navy only as messmen. When drafting began, 31 percent of the more than 2,000,000 blacks registered were accepted, compared with 26 percent of white men. Blacks, then comprising 10 percent of the population, furnished 13 percent of the inductees. World
War I was a turning point in black American history. The small number of
blacks moving out of the South after 1877 increased enormously as war industries
and the decline of European immigration combined to produce demands for
labor in Northern cities. The coming together of large numbers of blacks in those cities, the exposure of some blacks to European whites who did
not hold the same racial attitude as American whites, and the war propaganda
to make the world safe for democracy all combined to raise the hopes, dreams,
and aspirations of blacks in America.
In the decade following World War I, an artistic explosion occurred within the Black community that produced a wealth of music, literature, poetry, dance, and visual art. The Harlem Renaissance was a period of creativity
among Black artists, writers, musicians, orators, dramatists, and entertainers
and was centered in Harlem in New York. The term renaissance was used because
the movement built on the heritage of black Americans. More books were published
by black authors during the 1920s than in any previous decade in American
history. At the end of World War I, Harlem also contained the largest black
urban population in the world and quickly became the black cultural center,
attracting immigrants from Cuba, Haiti, Puerto Rico, the British West Indies,
and elsewhere, bringing with them their languages, religions, foods, music
and literature. Music during the Harlem Renaissance ranged from jazz to
rhumbas, hymns to parlor ragtime, and from spirituals to chamber quartets.
In the field of popular music, the pianist Jelly Roll Morton and W.C. Handy, called the "Father of the Blues," both added to America's rich music. Duke
Ellington brought his first orchestra, the Washingtonians, to Harlem
and Broadway, and from that time on jazz has been on the upswing. Charlie Parker and others later brought the bop influence to bear on what became modern jazz.
President Roosevelt's various recovery and reform programs - such as
the Civilian Conservation Corps (CCC), the National Youth Administration
(NYA), and the Works Progress Administration (WPA) - helped blacks as well
as whites. Blacks welcomed the New Deal as a sign of hope and progress.
There were other reasons for optimism too. Although President Roosevelt relied
on white advisors, he also turned to a group that came to be known as his
"Black Cabinet." Among its members were prominent blacks in a
variety of fields, including educator Mary McLeod
Bethune and political scientist Ralph Bunche.
They kept the president informed about issues of interest to black Americans.
When Joe Louis won the heavyweight boxing
championship in 1937, Jesse Owens won in
the 1936 Olympics, and Jackie Robinson
and Roberto Clemente were selected as
the National League MVP in 1949 and 1966 respectively, the African American
role in sports began to be important if not dominant. The names and faces
of athletes change quickly, but it would require great imagination to think
of professional football, basketball, baseball, boxing or track and field
events without black athletes in key roles.
On December 7, 1941, Japan attacked the United States naval base at Pearl Harbor, Hawaii, and the United States entered World War II, a war that had
already been raging in Europe. Life in the United States was immediately
changed in a drastic way for nearly everyone because the energies and the
resources of the country were fully committed to the war effort. Men went
to war and women went into the factories. Food items and gasoline were rationed,
and raw materials were directed from other industries to the war industries.
After the United States entered the war in 1941, hundreds of thousands of black
Americans served in the armed forces. Their distinguished role in the victory,
along with the growing black population in American cities, a rise in the
literacy rate among blacks, and increasing economic opportunities, inspired
new efforts to end racial discrimination. Leading the way was the NAACP.
Its lawyers began challenging segregation and discrimination in the courts.
They took many of their cases all the way to the United States Supreme Court,
winning several important decisions before the war. But the big push came
after the war, when the NAACP slowly but surely demolished legalized segregation
and discrimination in all areas of American life - voting, housing, transportation,
education, and recreation. The Supreme Court's decisions on school segregation, including the landmark Brown v. Board of Education in 1954, were especially
important. They brought about changes that launched a whole new era in black
American history, the era of civil rights.
The music business was affected in a variety of ways. Many musicians
were either enlisted or drafted into the armed services. Shortages of building
materials caused new entertainment and club construction to come to a halt.
Shortages of shellac drastically reduced the production of new records and
even big companies such as Victor had to buy up old records and recycle
them. Entertainers were asked to do their part for the war effort by performing at war bond sales and for servicemen at military bases around the country and abroad.
The heyday of big-band music came to
an end. In jazz history, the 1940s are regarded as the beginning of "modern
jazz." as distinct from the "classical jazz" that had gone
before. Modern jazz grew out of the swing era, and its major practitioners
had played in the big swing bands of the 1930s. The first modern jazz style was called bop, its name drawn from the kind of scat syllables Louis Armstrong first made famous ("do wop do bop," for example). Alto saxophonist Charlie Parker
and pianist Thelonious Monk were also originators
of bop. Charlie Parker is considered by
most as the greatest contributor to the development of bop.
Those who were knowledgeable about jazz recognized that Nat
King Cole was very important in the transition from swing jazz to
modern jazz. He was one of the first pianists to introduce a lighter, more
streamlined style of playing. Cole also perfected a style of accompanying
in which chords are played in brief, syncopated bursts, a style that eventually
became known as comping. For black music, the 1940s were years of transition.
They saw the development of modern jazz from swing. They saw the popularization
of singers, due to the public interest in ballads. They saw the first successful
crossovers into the white music world by a few blacks like Nat King Cole.
In the next decade, these developments would continue and flower.
In the 1940s Erroll Garner, a keyboard artist,
played and composed by ear in the tradition of the founding fathers of jazz.
Strong and bouncy left-hand rhythms and beautiful melodies are the trademarks
of his extremely enjoyable music. John Coltrane,
a musician and composer of the 1960s, was the most influential innovator
in the development of modern jazz. He was always searching and seeking to
take his music further in what he quite consciously viewed as a spiritual quest.
Dinah Washington, an important blues
singer of this period, earned the title "Queen of the Blues."
Black Americans had plenty to be blue about in the years after World War
II. Though blacks had fought in large numbers and had distinguished themselves
in the "war to make the world safe for democracy," they found
that at home they did not enjoy the freedoms that they had fought to ensure
for people abroad. They were still second-class citizens in the North and
little more than slaves in the South. They were angry, but felt powerless
to do anything about their situation. So many black singers expressed their
feelings in their music.
By 1955, postwar prosperity had found its
way to the recording business. The 45-rpm disk was taking over from the
old 78-rpm record, and since it was lighter, more durable, and easier to make and distribute, it gave a real boost to record companies. More and more
people were buying records and record players. Every club worthy of the
name had a jukebox. New record companies sprang up and, along with the already established ones, were signing up new talent. There was a greater
interest on the part of whites in black music. For example, in Chicago,
the Chess Record Company already had contracts with bluesmen Muddy
Waters and Howlin' Wolf and was actively
seeking more. Clyde McPhatter was also
a great rhythm and blues and pop tune singer in the 1950s and 1960s, and
a huge influence on the evolution of music during that time.
As the courts destroyed what remained of legalized segregation, other
branches of government took action too. Congress passed laws to make sure
white southerners could not cheat blacks out of their right to vote. President
Harry S. Truman banned segregation in the armed forces. Later President
Dwight D. Eisenhower ended discrimination in federal assistance programs.
In addition, civil rights committees assembled to investigate and report.
Even though segregation and discrimination were against
the law, they did not just disappear. Blacks turned their attention to fighting
the kind of bias that was common in restaurants and hotels, on buses, and
in other public places. Boycotts and sit-ins became popular and effective
ways to protest. Blacks achieved so much in the area of civil rights from
1954 until 1964 that some people started to think of the decade as "The
Second Reconstruction." To them, the work of the first Reconstruction
after the Civil War had been left unfinished, and now was the time for it to be completed.
Soul came out of rhythm and blues and also out of gospel. In fact, it
was closer to gospel because it was a hopeful music, a music that celebrated
blackness in a way that black music had never done before. Otis
Redding was one of the most powerful and original rhythm and blues
singer-songwriters of the 1960s. It is no accident that soul music arose
in the 1960s, a period of unprecedented gains for black people and a great
surge in black pride. Beginning with the boycott of segregated buses in
the late 1950s, and continuing with sit-ins at segregated lunch counters
by black students in Nashville, Tennessee, and Greensboro, North Carolina, in 1960, the civil rights movement spread like wildfire across the South
and also led to the passage of a series of federal civil rights laws that
struck down at least the legal underpinning of discrimination and segregation
in America. The black Americans who marched, sat in, and boycotted in order to win equal rights followed the principles of nonviolence. Their leaders were predominantly ministers, such as the Reverend
Martin Luther King, Jr. They were proud to have won their legal
victories by moral means. The music that came to be called soul also preached
a message of love.
Violence against black and white civil rights activists was commonplace.
Three civil rights workers were brutally murdered in Philadelphia, Mississippi, in 1964. Four black children were murdered in the bombing of the 16th Street
Baptist Church in Birmingham in 1963 and dozens of black churches throughout
the South were burned or bombed. Two whites and one black were murdered
during the 1965 demonstrations in Selma, Alabama. Martin Luther King, Jr., the recognized leader of the civil rights movement, was himself assassinated in 1968.
The federal response to the violent
reaction of segregationists was the passage of several new laws. The Civil
Rights Act (1964) undermined the remaining structure of Jim Crow laws and
provided federal protection in the exercise of civil rights. This landmark
Civil Rights Law of 1964 had barely gone into effect when a serious race
riot erupted in Harlem. Racial disturbances occurred that summer in several
other northern ghettos. A year later, the black ghetto of Watts in Los Angeles,
California, exploded in violence. For the next two summers, dozens of other
riots broke out across the country. Many were sparked by fights between
blacks and white police officers.
A special presidential commission looked into the reasons behind the riots. It found that despite all of the court decisions, sit-ins, marches, and boycotts, the average black American
was still living with the crippling effects of segregation, discrimination,
and, above all, racism. The 1968 assassination of civil rights leader Martin
Luther King, Jr. - a champion of nonviolence - added to the sense
of despair most blacks felt.
When Martin Luther
King, Jr. was assassinated in 1968, a new wave of riots spread across
the country. A report by the National Advisory Commission on Civil Disorders,
appointed by President Lyndon Johnson, identified more than 150 riots between
1965 and 1968. In 1967 alone, 83 people were killed (most of them black),
1,800 were injured, and property valued at more than $100 million was destroyed.
For the most part, the 1970s and 1980s cast a shadow over the dreams of black Americans for racial justice and equality. With the exception of Jimmy Carter's presidency from 1977 to 1981, it was a time when blacks first
felt neglected, then threatened. There was little attempt to enforce existing
civil rights laws. Very few blacks were named to top positions in the federal
government. Schools and businesses felt less pressure to recruit minorities
to make up for the unfair practices of the past, especially after white
men began to complain about "reverse discrimination." Jimmy Carter's election to the presidency in 1976 held out the promise of a new way of thinking. He did name several blacks to high-level positions, but he came under fire for not doing enough to help the vast
majority of black Americans. A shaky economy marked by high inflation and
gas shortages hit blacks especially hard during his administration. The
Iran hostage crisis of 1979 added to the nation's depressed mood and paved
the way for a return to Republican control of the White House in 1980.
When President Ronald Reagan took office, blacks once again found themselves
shut out of the highest levels of government. Although he insisted that
his moves to strengthen the economy helped all Americans, blacks as well
as whites, President Reagan opposed or ignored many issues of interest to
black Americans. He also appointed conservative judges to various federal
courts who struck down many programs that had been designed to make up for
past discrimination against minorities.
The increase in racially motivated
violence against blacks during the 1980s supported the belief that racism
was alive and well in America. Many blacks seemed resigned to the fact that
America was still a nation of two societies - one black, one white - separate and unequal. Discouraged by these setbacks, some blacks decided that
the only way to make progress on issues of importance to black Americans
was to reject traditional politics. A few looked into alternative movements,
including the Nation of Islam and Afrocentrism, which stressed the value
of black culture and the black experience (especially its African roots).
The chief characteristic of the black experience in the 1970s and the early
1980s was the development of black consciousness and black pride. These
values found renewed vigor as increasing numbers of blacks came to believe
that the key to dealing with problems of race in the United States was the
way they felt about themselves as individuals and as a group. By the late 1980s, there were black mayors in many of our country's larger cities and some of its smaller ones too. Black representation in state legislatures, school boards, and state courts was also increasing, especially in the South. When George Bush took office as president in January 1989, some blacks thought
he would reverse the trends of the Reagan years and revive the "Second
Reconstruction." The early signs were really hopeful. President Bush
named General Colin Powell head of the Joint Chiefs of Staff and made Dr.
Louis Sullivan secretary of Health and Human Services. He repeatedly expressed
his admiration for the ideals of Martin Luther King,
Jr. and observed the national holiday honoring the slain civil rights
activist. President Bush also welcomed African National Congress leader
Nelson Mandela to the White House in 1990.
By mid-1990, many blacks began to question President Bush's sincerity
on issues of importance to black Americans. They thought he was too eager
to support the white minority government in South Africa. Blacks were outraged
when he vetoed the 1991 Civil Rights Bill because it contained what he felt
were unconstitutional employment quotas. In addition, many blacks did not
support America's involvement in the Persian Gulf War or the nomination of
Clarence Thomas to the United States Supreme Court.
Years of anger
and frustration came to a head in April, 1992, after four white Los Angeles
policemen were found not guilty in the 1991 beating of black motorist Rodney
King. Los Angeles experienced some of the worst riots in American history. Disturbances
broke out in several other cities too. Not since the civil rights era of
the 1950s and 1960s had there been so much protest.
By the time of the 1992 presidential election, the ongoing economic recession, not the Los Angeles riots, was the topic on everyone's mind. That November, a large number of white voters joined with an overwhelming majority of black voters to demand a change. Republican George Bush was turned out of office after only one term, and Democrat Bill Clinton was elected instead. As a result of the elections, the Congressional Black Caucus grew
from twenty-five members to thirty-nine.
Although President Clinton
chose several blacks and other minorities for positions in his cabinet,
many black Americans adopted a "wait and see" attitude toward
this new administration. Some blacks questioned the sincerity of Clinton's
commitment to a "Black Agenda." They pointed out that he campaigned
heavily among middle-class whites and avoided Jesse Jackson and other more outspoken black leaders. Clinton never presented any concrete plans for
dealing with problems unique to the black community. Many blacks also felt
that Clinton stumbled badly on a number of issues of importance to black
Americans. Many blacks were upset with Clinton's decision to return Haitian refugees to their country, a policy he had condemned during his
campaign. Others were disappointed by the defeat of his job creation bill,
which they blamed on an ineffective White House strategy. Perhaps the biggest
blow came when President Clinton withdrew Lani Guinier's nomination to head
the civil rights division of the Justice Department.
Black Americans are increasingly recognizing what they have contributed to the national culture and the global community and the extent of what more they have to
offer. One of the symbolic victories that has contributed to this new sense
of self-determination and self-recognition among black Americans was the
establishment of the Martin Luther King, Jr.
national holiday in 1983. This is our nation's tenth federal holiday, commemorated each third Monday in January.
Blacks in the United
States today are mainly an urban people. They have migrated from the rural
South to cities of the North and West during the 20th century. Their migration
constitutes one of the major migrations of people in United States history.
The black community has developed a number of distinctive cultural features
that black Americans look upon with pride. Many of these features reflect
the influence of cultural traditions that originated in Africa. Other features
reflect the uniqueness of the black American experience in the United States, such as speech, extended family arrangements, dress, and music. Black American music has never gone away in terms of its worldwide popularity, and it is still present in America today. It is back with a vengeance via the
phenomenon of rap, which is now performed by natives of China, Japan, India,
Russia, England, France, Mexico, and in many parts of Africa - in various
native tongues. Like jazz before it, rap has conquered the world.
The century is coming to an end. What does the future hold for black
Americans? Blacks are torn between being optimistic and being pessimistic
about the future. Is the future to be feared or welcomed? On the one hand,
blacks are still scaling the heights of achievement. Recently, Dr. Bernard
Harris became the first black astronaut to walk in space and Isabel Wilkerson
won a Pulitzer Prize for news features. Carlton Guthrie has become a successful auto-parts manufacturer, and Michael Johnson, often compared to Jesse Owens, set an Olympic record when he won both the 200- and 400-meter races at the
1996 Olympics in Atlanta, Georgia. On the other hand, every day brings news
of difficulties blacks face. Unemployment still remains high for too many
blacks. Added to unemployment, the twin epidemics of AIDS and crack cocaine have taken a disproportionate toll on black communities. Another
factor is the instability of black families.
Unfortunately, we do
not live in a perfect world. There are still wars, sickness, poverty, and
injustice. And yes, even today, racism and prejudice based on skin color still exist. The black Americans featured on United States postage stamps had to fight an uphill battle to obtain their basic human rights. These brave men and women honored on U.S. stamp issues came from different backgrounds.
Some were poor and lacked formal education. Some fought on different battlefronts
to bring an end to the injustices suffered by blacks in this country, but
each and every one of them believed in the principle that all men and women
are created equal. And under the guidelines of the U.S. Constitution, these
brave men and women struggled and fought for the rights of blacks. They
fought for equal education and job opportunities, equal justice under the
law, the right to vote, and open housing. They fought to make America a better nation and a better place to live.
It is common knowledge that
ignorance is the cause of much of the racism and prejudice that exists today.
However, as the world gets smaller and smaller, we must learn to live with
and respect the lives, customs, and races of other people. Black Americans, and all Americans, owe a great deal to the brave men and women who fought to end injustice and who contributed so much to America's history.