3.2 Truth Tables

A truth table lists all possible combinations of truth values. A single statement p has two possible truth values: truth (T) and falsity (F). Given two statements p and q, there are four possible truth value combinations, ranging from TT to FF, so there are four rows in the truth table. In general, given n statements, there are 2ⁿ cases (or rows) in the truth table.

3.2.1 Basic Truth Tables of the Five Connectives

Formally, the following five basic truth tables define the five connectives.

The Truth Table of Negation

The possible truth values of a negation are opposite to the possible truth values of the statement it negates. If p is true, then ∼p is false. If p is false, then ∼p is true.

p | ∼p
T | F
F | T

The Truth Table of Conjunction

A conjunction p ∙ q is true only when both of its conjuncts are true. It is false in the other three cases.

p | q | p ∙ q
T | T | T
T | F | F
F | T | F
F | F | F

The Truth Table of Disjunction

A disjunction p ∨ q is false only when both of its disjuncts are false. In the other three cases, the disjunction is true.

p | q | p ∨ q
T | T | T
T | F | T
F | T | T
F | F | F

The Truth Table of Conditional

A conditional is false only when its antecedent is true but its consequent is false. This is so because p ⊃ q says that p is a sufficient condition of q. Now if p is true but q is false, then p cannot be a sufficient condition for q. Consequently, the conditional p ⊃ q would be false.

p | q | p ⊃ q
T | T | T
T | F | F
F | T | T
F | F | T

The Truth Table of Biconditional

A biconditional p ≡ q is true only when both p and q share the same truth value. If p and q have opposite truth values, then the biconditional is false.

p | q | p ≡ q
T | T | T
T | F | F
F | T | F
F | F | T

3.2.2 Determining the Truth Value of a Compound Statement

The truth value of a compound statement is determined by the truth values of the simple statements it contains and the basic truth tables of the five connectives.

In the first example, the compound statement is the conditional (C ∙ D) ⊃ E. The statements C and D are given as true, but E is given as false. To determine the truth value of the conditional, we first write down the given truth value under each letter. Afterwards, using the truth table of conjunction, we can determine the truth value of the antecedent C ∙ D. Because both C and D are true, C ∙ D is true, and we write “T” under the dot “∙” to indicate this. Finally, since the antecedent C ∙ D is true but the consequent E is false, the conditional is false. The final truth value is written under the horseshoe “⊃”.

In the next example, the compound statement is the disjunction G ∨ (∼H ⊃ K). The statement H is given as true, but G and K are false. To figure out the truth value of the disjunction, we need to first determine the truth value of the second disjunct ∼H ⊃ K. Since H is true, ∼H is false, so we write “F” under the tilde “∼”. Next, since the antecedent ∼H is false and the consequent K is false, the conditional ∼H ⊃ K is true, so we write “T” under the horseshoe “⊃”. In the last step, we figure out that the disjunction is true because, although the first disjunct G is false, the second disjunct ∼H ⊃ K is true.

In the third example, we determine the truth value of the conjunction ∼(M ≡ A) ∙ (D ⊃ B) from the given truth values: A and D are true, but M and B are false. Since M is false but A is true, the third row in the truth table of the biconditional tells us that M ≡ A is false, and we write “F” under the triple bar “≡”. We then decide that D ⊃ B is false because D is true but B is false. Next we write “T” under the tilde “∼” to indicate that ∼(M ≡ A) is true. Finally, we can see that the whole conjunction is false because its second conjunct is false.
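These step-by-step evaluations can be mirrored in a short Python sketch. The five connectives are written as ordinary functions; the names neg, conj, disj, cond and bicond are illustrative choices rather than standard notation, and Python's True and False stand in for T and F.

    # The five truth-functional connectives as Python functions.
    def neg(p):          # ∼p: opposite of p
        return not p

    def conj(p, q):      # p ∙ q: true only when both conjuncts are true
        return p and q

    def disj(p, q):      # p ∨ q: false only when both disjuncts are false
        return p or q

    def cond(p, q):      # p ⊃ q: false only when p is true and q is false
        return (not p) or q

    def bicond(p, q):    # p ≡ q: true only when p and q share a truth value
        return p == q

    # First worked example: (C ∙ D) ⊃ E with C and D true, E false.
    C, D, E = True, True, False
    print(cond(conj(C, D), E))        # False, as determined above

    # Second worked example: G ∨ (∼H ⊃ K) with H true, G and K false.
    G, H, K = False, True, False
    print(disj(G, cond(neg(H), K)))   # True, as determined above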
3.2.3 Three Properties of Statements

In Propositional Logic, a statement is tautologous, self-contradictory or contingent. Which property it has is determined by its possible truth values.

A statement is tautologous if it is logically true, that is, if it is logically impossible for the statement to be false. If we look at the truth table of a tautology, we see that all its possible truth values are Ts. One of the simplest tautologies is a disjunction such as D ∨ ∼D. To see all the possible truth values of D ∨ ∼D, we construct its truth table by first listing all the possible truth values of the statement D under the letter “D”. Next, we derive the truth values under the tilde “∼” from the column under the second “D”; this tilde column lists all the possible truth values of ∼D. Finally, from the column under the first “D” and the tilde column, we come up with all the possible truth values of D ∨ ∼D. The column under the wedge “∨” lists all the possible truth values of D ∨ ∼D, so it is the final (or main) column in the truth table. Notice that both possible truth values in this column are Ts. Since the truth table lists all the possible truth values, it shows that it is logically impossible for D ∨ ∼D to be false. Accordingly, it is a tautology.

The next tautology, K ⊃ (N ⊃ K), has two different letters, “K” and “N”, so its truth table has four (2² = 4) rows. To construct the table, we put down “T” twice and then “F” twice under the first letter from the left, “K”, giving “TTFF” under the first “K”; we then repeat the column for the second “K”. Under “N”, the second letter from the left, we write one “T” and then one “F” in turn until the column is completed, giving “TFTF” under “N”. Next, we come up with the possible truth values for N ⊃ K, since it is inside the parentheses. Afterwards, we derive the truth values for the first horseshoe from the first K column and the second horseshoe column. The first horseshoe column is the final column, and the truth values in it are all Ts. So K ⊃ (N ⊃ K) is a tautology.

A statement is self-contradictory if it is logically false, that is, if it is logically impossible for the statement to be true. After completing the truth table of the conjunction D ∙ ∼D, we see that all the truth values in the main column under the dot are Fs. The truth table illustrates clearly that it is logically impossible for D ∙ ∼D to be true.

The conjunction G ∙ ∼(H ⊃ G) has two distinct letters, “G” and “H”, so its truth table has four rows. We write “TTFF” under the first “G” from the left, and then repeat the values under the second “G”. Under “H”, we put down “TFTF”. We then derive the truth values under the horseshoe from the H column and the second G column. Next, all the possible truth values for ∼(H ⊃ G) are listed under the tilde; notice they are opposite to the truth values in the horseshoe column. Afterwards, we use the first G column and the tilde column to come up with the truth values listed under the dot. Since they are all Fs, G ∙ ∼(H ⊃ G) is a self-contradiction.

A statement is contingent if it is neither tautologous nor self-contradictory. In other words, it is logically possible for the statement to be true and it is also logically possible for it to be false.
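These classifications can also be checked mechanically. In the sketch below, a statement is represented as a Python function of its letters, and classify (an illustrative helper name) builds the final column by enumerating every row with itertools.product, using the same TTFF/TFTF ordering as the tables above: all Ts means tautologous, all Fs means self-contradictory, and a mixture means contingent.

    from itertools import product

    def classify(statement, num_letters):
        """Label a statement tautologous, self-contradictory or contingent."""
        rows = product([True, False], repeat=num_letters)   # TTFF-style ordering
        column = [statement(*row) for row in rows]          # the final column
        if all(column):
            return "tautologous"          # no row is F
        if not any(column):
            return "self-contradictory"   # no row is T
        return "contingent"               # the column mixes T and F

    # The statements discussed in this subsection:
    print(classify(lambda D: D or not D, 1))                    # tautologous
    print(classify(lambda K, N: (not K) or ((not N) or K), 2))  # tautologous
    print(classify(lambda D: D and not D, 1))                   # self-contradictory
    print(classify(lambda G, H: G and not ((not H) or G), 2))   # self-contradictory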
The conditional D ⊃ ∼D is contingent because its final column contains both a T and an F. Since each row in the truth table represents one logical possibility, this shows that it is logically possible for D ⊃ ∼D to be true, as well as for it to be false.

In constructing the truth table for B ≡ (∼E ⊃ B), we need to first come up with the truth values for ∼E because it is the antecedent of the conditional ∼E ⊃ B inside the parentheses. We then use the truth values under the tilde and the second “B” to derive the truth values for the horseshoe column. Next we come up with the column under the triple bar from the first B column and the horseshoe column. The final column has three Ts, representing the logical possibility of being true, and one F, representing the logical possibility of being false. Since its main column contains both T and F, B ≡ (∼E ⊃ B) is contingent.

3.2.4 Relations between Two Statements

By comparing all the possible truth values of two statements, we can determine which of the following logical relations exists between them: logical equivalence, contradiction, consistency and inconsistency.

Two statements are logically equivalent if they necessarily have the same truth value. This means that the possible truth values listed in their two final columns are the same in each row. To see whether the pair of statements K ⊃ H and ∼H ⊃ ∼K are logically equivalent to each other, we construct a truth table for each statement. Notice that in both truth tables the statement K has the truth value distribution TTFF and H has the distribution TFTF. This is crucial because we need to make sure that we are dealing with the same truth value distributions in each row. For instance, in the third row, K is false but H is true in both truth tables. After we complete both truth tables, we see that the two main (or final) columns are identical, which shows that the two statements are logically equivalent.

It is important to be able to tell whether two English sentences are logically equivalent. To see whether these two statements

- The stock market will fall if interest rates are raised.
- The stock market won’t fall only if interest rates are not raised.

are logically equivalent, we first symbolize them as R ⊃ F and ∼F ⊃ ∼R. We then construct their truth tables. Since the two final columns are identical, they are indeed logically equivalent.

Two statements are contradictory to each other if they necessarily have opposite truth values. This means that their truth values in the final columns are opposite in every row of the truth tables. After completing the truth tables for D ⊃ B and D ∙ ∼B, we can see clearly from the two final columns that they are contradictory to each other.

Two statements are consistent if it is logically possible for both of them to be true. This means that there is at least one row in which the truth values in both final columns are true. To find out whether ∼A ∙ ∼R and ∼(R ∙ A) are consistent with each other, we construct their truth tables. Notice again that we have to write “TTFF” under both occurrences of “A” and “TFTF” under both occurrences of “R”. After both truth tables are completed, we can see that in the fourth row each final column has T as its truth value. Since each row stands for a logical possibility, this means that it is logically possible for both of them to be true. So they are consistent with each other.
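The four relations can be read off the two final columns in the same mechanical way. In the sketch below (relations is again an illustrative name), both statements are evaluated over the same rows, in the same TTFF/TFTF order, and their columns are then compared row by row.

    from itertools import product

    def relations(stmt1, stmt2, num_letters):
        """Compare the two final columns row by row."""
        rows = list(product([True, False], repeat=num_letters))
        col1 = [stmt1(*r) for r in rows]
        col2 = [stmt2(*r) for r in rows]
        found = []
        if all(a == b for a, b in zip(col1, col2)):
            found.append("logically equivalent")   # same truth value in every row
        if all(a != b for a, b in zip(col1, col2)):
            found.append("contradictory")          # opposite truth values in every row
        if any(a and b for a, b in zip(col1, col2)):
            found.append("consistent")             # some row makes both true
        else:
            found.append("inconsistent")           # no row makes both true
        return found

    # K ⊃ H and ∼H ⊃ ∼K (letters ordered K, H):
    print(relations(lambda K, H: (not K) or H,
                    lambda K, H: (not (not H)) or (not K), 2))
    # ['logically equivalent', 'consistent']

    # ∼A ∙ ∼R and ∼(R ∙ A) (letters ordered A, R):
    print(relations(lambda A, R: (not A) and (not R),
                    lambda A, R: not (R and A), 2))
    # ['consistent']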
If we cannot find at least one row in which the truth values in both final columns are true, then the two statements are inconsistent. That is, it is not logically possible for both of them to be true; in other words, at least one of them must be false. Therefore, if it comes to our attention that two statements are inconsistent, then we must reject at least one of them as false. Failure to do so would mean being illogical. In the final columns of the truth tables of M ∙ S and ∼(M ≡ S), we do not find a row in which both statements are true. This shows that they are inconsistent with each other, and at least one of them must be false.

There can be more than one logical relation between two statements. If two statements are contradictory to each other, then they have opposite truth values in every row of the main columns. As a result, there cannot be a row in which both statements are true, so they must also be inconsistent with each other. However, if two statements are inconsistent, it does not follow that they must be contradictory to each other. The above pair, M ∙ S and ∼(M ≡ S), are inconsistent but not contradictory: the last row of the two final columns shows that it is logically possible for both statements to be false. For logically equivalent statements to be consistent with one another, they have to meet the condition that neither of them is a self-contradictory statement. The final column of a self-contradictory statement contains no T, so it is not logically possible for a pair of self-contradictory statements to be consistent with each other.

3.2.5 Truth Tables for Arguments

A deductive argument is valid if its conclusion necessarily follows from its premises. That is, if the premises are true, then the conclusion must be true. This means that if it is logically possible for the premises to be true but the conclusion false, then the argument is invalid. Since a truth table lists all logical possibilities, we can use it to determine whether a deductive argument is valid. The whole process has three steps:

- Symbolize the deductive argument;
- Construct the truth table for the argument;
- Determine the validity: look to see if there is at least one row in the truth table in which the premises are true but the conclusion false.

If such a row is found, this would mean that it is logically possible for the premises to be true but the conclusion false. Accordingly, the argument is invalid. If such a row is not found, this would mean that it is not logically possible to have true premises with a false conclusion. Therefore, the argument is valid.

To decide whether the following argument is valid, we first symbolize each statement to come up with the argument form.

(3.2a) If young people don’t have good economic opportunities, there would be more gang violence. Since there is more gang violence, young people don’t have good economic opportunities.

Next, we line up the three statements horizontally, separating the two premises with a single vertical line, and the premises and the conclusion with a double vertical line. Afterwards, we write “TTFF” under “O” and “TFTF” under “V”. We then derive the truth values for ∼O. Next, we complete the column under the horseshoe. We then complete the columns for the second premise V and the conclusion ∼O. These three main columns list all the possible truth values for the three statements.

To determine the validity of (3.2a), we go over the three main columns row by row to see whether there is a row in which the premises are true but the conclusion false. We find such a case in the first row. This means that it is logically possible for the premises to be true but the conclusion false. So (3.2a) is invalid.
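The same verdict can be reached programmatically. The sketch below (is_valid is an illustrative name) enumerates every row of the truth table for the argument form of (3.2a), with premises ∼O ⊃ V and V and conclusion ∼O, and reports the argument invalid as soon as it finds a row with true premises and a false conclusion.

    from itertools import product

    def is_valid(premises, conclusion, num_letters):
        """Valid iff no row makes every premise true and the conclusion false."""
        for row in product([True, False], repeat=num_letters):
            if all(p(*row) for p in premises) and not conclusion(*row):
                return False              # counterexample row found
        return True

    # Argument form of (3.2a): ∼O ⊃ V, V, therefore ∼O (letters ordered O, V).
    premises = [lambda O, V: O or V,      # ∼O ⊃ V, equivalent to O ∨ V
                lambda O, V: V]           # V
    conclusion = lambda O, V: not O       # ∼O
    print(is_valid(premises, conclusion, 2))   # False: (3.2a) is invalid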
To determine whether argument (3.2b) is valid, we check the three final columns row by row to see if there is a row in which the premises are true but the conclusion false. We do not find such a row. So (3.2b) is valid.

(3.2b) Psychics can foretell the future only if the future has been determined. But the future has not been determined. It follows that psychics cannot foretell the future.

In the next example, there are three different letters in the argument form of (3.2c), so its truth table has eight (2³ = 8) rows. To exhaust all possible truth value combinations, we write “TTTTFFFF” under the first letter from the left, “E”. For the second letter from the left, “F”, we put down “TTFFTTFF”, and for the third, “P”, “TFTFTFTF”. For the first premise, we first come up with the truth values for F ∙ P and write them under the dot. We then derive the truth values under the horseshoe using the first E column and the dot column. Next, we fill out the columns for ∼E and ∼P. After the truth table is completed, we go over the three main columns to see if there is at least one row with true premises but a false conclusion. We find such cases in the fifth and seventh rows. So the argument is invalid.

(3.2c) Public education will improve only if funding for education is increased and parents are more involved in the education process. Since public education is not improving, we can conclude that there is not enough parental involvement in the education process.

Notice that the next argument, (3.2d), has three premises. After symbolization, its argument form contains three different letters, so its truth table has eight rows. After completing the truth table, we check each row of the four final columns, looking for rows with true premises but a false conclusion. We do not find any. So (3.2d) is valid.

(3.2d) If more money is spent on building prisons, then less money would go to education. But kids would not be well-educated if less money goes to education. We have spent more money building prisons. As a result, kids would not be well-educated.

- Determine the truth values of the following symbolized statements. Let A, B, and C be true; G, H, and K false; and M and N of unknown truth value. Show how you determine the truth value step by step.
  - A ∙ ∼G
  - ∼A ∨ G
  - ∼(A ⊃ G)
  - ∼G ≡ (B ⊃ K)
  - (A ⊃ ∼C) ∨ C
  - B ∙ (∼H ⊃ A)
  - M ⊃ ∼G
  - (M ∨ ∼A) ∨ H
  - (N ∙ ∼N) ≡ ∼K
  - ∼(M ∨ ∼M) ⊃ N
- Use a truth table to decide whether each of the following symbolized statements is tautologous, self-contradictory or contingent.
  - ∼G ⊃ G
  - D ⊃ (B ∨ ∼B)
  - K ∙ (∼M ∨ ∼K)
  - (S ⊃ H) ≡ (∼H ∙ S)
  - (R ⊃ E) ∨ ∼H
  - (N ∙ ∼(D ∨ ∼E)) ≡ D
- Use truth tables to determine whether each pair of statements is logically equivalent, contradictory, consistent or inconsistent. If necessary, symbolize the statements. Identify all the relations between the statements.
  - G ∙ ∼D / D ∨ ∼G
  - ∼(M ⊃ B) / ∼B ∙ ∼M
  - ∼A ≡ C / (A ∙ ∼C) ∨ (C ∙ ∼A)
  - J ⊃ ∼(L ∨ N) / (∼L ⊃ N) ∙ J
  - ⊃ K) ∙ ∼O) / O ∨ (E ∙ ∼K)
  - ∼(H ∨ ∼(R ∙ S)) / (∼S ∙ R) ⊃ ∼H
  - If Steve does not support you, then he is not your friend. (S, F) / If Steve supports you, then he is your friend. (S, F)
  - If someone loves you, then she or he is nice to you. (L, N) / If someone is nice to you, then she or he loves you. (N, L)
  - Without campaign finance reform people would not have equal access to political power. (C, E) / With campaign finance reform people would have equal access to political power. (C, E)
  - The economy will slow down unless consumer confidence stays high and inflation is under control. (S, H, U) / The economy will slow down if consumer confidence does not stay high and inflation is not under control. (S, H, U)
- Symbolize the arguments and then use truth tables to decide whether they are valid.
  - Not both the music program and the art curriculum will be cut. So if the music program is not cut, then the art curriculum will. (M, A)
  - Retailers cannot have a good holiday season unless the consumer confidence in the economy is high. Currently the consumer confidence in the economy is high. We can predict that retailers can have a good holiday season. (R, C)
  - If nations do not cut back the use of fossil fuel, then worldwide pollution will get worse. Nations are cutting back the use of fossil fuel. Therefore, worldwide pollution will not get worse. (C, W)
  - People would not demand cultural assimilation if they embrace diversity. However, people do demand cultural assimilation. Consequently, they do not embrace diversity. (A, D)
  - We can have world peace only if people are compassionate and not prejudiced. Since people are prejudiced, we cannot have world peace. (W, C, P)
  - Germ-line genetic engineering should be banned if we do not want to have designer babies. If we want to have designer babies, there would be greater social inequality. So either germ-line genetic engineering should be banned or there would be greater social inequality. (G, D, I)
  - The economy will suffer if we fall behind other nations in science, technology and innovation. If we do not improve mathematics education, we will fall behind other nations in science, technology and innovation. Therefore, the economy will suffer unless we improve mathematics education. (S, F, I)
  - If we continue the acceleration of production and consumption, nations will fight over natural resources unless alternative technologies are developed. Since we are developing alternative technologies, nations won’t fight over natural resources. (C, F, D)

This work is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 United States License.
http://www.butte.edu/~wmwu/iLogic/3.2/iLogic_3_2.html
In January 1791, President George Washington's Secretary of the Treasury Alexander Hamilton proposed a seemingly innocuous excise tax "upon spirits distilled within the United States, and for appropriating the same."1 What Congress failed to predict was the vehement rejection of this tax by Americans living on the frontier of Western Pennsylvania. By 1794, the Whiskey Rebellion threatened the stability of the nascent United States and forced President Washington to personally lead the United States militia westward to stop the rebels.

By 1791 the United States suffered from significant debt incurred during the Revolutionary War. Secretary Hamilton, a Federalist supporting increased federal authority, intended to use the excise tax to lessen this financial burden. Despite resistance from Anti-Federalists like Thomas Jefferson, Congress passed the legislation. When news of the tax spread to Western Pennsylvania, individuals immediately voiced their displeasure by refusing to pay the tax. Residents viewed this tax as yet another instance of unfair policies dictated by the eastern elite that negatively affected American citizens on the frontier.

Western farmers felt the tax was an abuse of federal authority wrongly targeting a demographic that relied on crops such as corn, rye, and grain to earn a profit. Shipping this harvest east, however, was difficult because of poor storage and dangerous roads. As a result, farmers frequently distilled their grain into liquor, which was easier to ship and preserve. While large-scale farmers could easily absorb the financial strain of an additional tax, indigent farmers were less able to do so without falling into dire financial straits.

President Washington sought to resolve this dispute peacefully. In 1792, he issued a national proclamation admonishing westerners for their resistance to the "operation of the laws of the United States for raising revenue upon spirits distilled within the same."2 However, by 1794 the protests became violent. In July, nearly 400 whiskey rebels near Pittsburgh set fire to the home of John Neville, the regional tax collection supervisor. Left with little recourse and at the urging of Secretary Hamilton, Washington organized a militia force of 12,950 men and led them towards Western Pennsylvania, warning locals "not to abet, aid, or comfort the Insurgents aforesaid, as they will answer the contrary at their peril."3

The calling of the militia had the desired effect of essentially ending the Whiskey Rebellion. By the time the militia reached Pittsburgh, the rebels had dispersed and could not be found. The militia apprehended approximately 150 men and tried them for treason. A paucity of evidence and the inability to obtain witnesses hampered the trials. Two men, John Mitchell and Philip Weigel, were found guilty of treason, though both were pardoned by President Washington. In 1802, President Thomas Jefferson repealed the excise tax on whiskey. Under the eye of President Washington, the nascent United States survived the first true challenge to federal authority.

Loyola University Chicago

Baldwin, Leland. Whiskey Rebels: The Story of a Frontier Uprising. Pittsburgh: University of Pittsburgh Press, 1939.
Hogeland, William. The Whiskey Rebellion: George Washington, Alexander Hamilton, and the Frontier Rebels who Challenged America’s Newfound Sovereignty. New York: Simon & Schuster Paperbacks, 2006.
Slaughter, Thomas.
The Whiskey Rebellion: Frontier Epilogue to the American Revolution. New York: Oxford University Press, 1986.
http://www.mountvernon.org/educational-resources/encyclopedia/presidency/whiskey-rebellion
In all the four countries being considered the most important change since 1867 has been the growth of the party system. Nearly all members of the lower houses are now elected as representatives of political parties. Party discipline in all the parliaments has been greatly strengthened, and in some of the parliaments it is almost unknown for an MP to fail to support the agreed party position, that is, the position agreed by a majority of the parliamentary party. In some of the parties, an MP may be expelled from the party for failing to support the party line. Nevertheless, there have been differences in the ways the various parliaments have developed, and it is worth looking at these before considering the performances of the various parliaments in their key roles.

Three big developments in the political system of the UK since Bagehot’s day have been the emasculation of the House of Lords, the devolution of power to Scotland and Wales (without any move towards the UK becoming a federation), and the loss of sovereignty resulting from membership of the European Union.

The House of Lords

The Lords turned out to be far from the politically timid body that Bagehot had described. In 1893 Gladstone’s Liberals, aided by most of the Irish members, carried a bill to give home rule to Ireland. The bill was rejected by the Lords, but no action was taken against them, for it could be said that they were reflecting popular opinion more accurately than were the Commons. The situation was very different in 1909. The Liberal government had become increasingly restive as the Conservative-dominated Lords rejected or mutilated its bills. The Chancellor of the Exchequer, Lloyd George, skilfully manoeuvred the Lords into rejecting the 1909 budget. Two elections were held in 1910, the first to give authority to force through the ‘people’s budget’ (the Lords yielded), and the second to end such struggles between the two houses. The Parliament Act of 1911 provided that bills which had passed the Commons unaltered in three successive sessions would become law after two years even if the Lords did not agree, and all power of the Lords over money bills was effectively lost, being reduced to a mere one month’s ‘suspensive veto’. The Lords very reluctantly agreed, but the alternative was the creation of perhaps 400 or 500 new peers, who would pass the bill. In 1949 the delaying powers of the Lords were further reduced from two years to one and from three sessions to two, as a result of the Lords delaying a 1947 proposal of the Attlee Labour Government to nationalise the steel industry.

Of course there have been many inquiries into the role and composition of the Lords. Russell produced a reform scheme in 1869 and Rosebery in 1884 and 1888. The Lords themselves tried in 1907. The preamble to the Parliament Act of 1911 announced the intention of making the upper house elective, ‘constituted on a popular instead of an hereditary basis’, and the Bryce Conference was appointed in 1917 to produce a scheme, but nothing came of it. In 1968 an all-party plan was produced for a nominated upper house with a six-month suspensory veto. Nominations were to be controlled so that the government of the day had a narrow majority over the opposition, with the balance of power held by Independents. In 1958 life peers had been introduced, a measure advocated by Bagehot a century earlier.
Before this change (and it took some time to have an effect) the Lords met for only 60 days a year, rarely for more than three hours a day, and only about 60 members attended at all regularly. It seemed to be dying, peacefully, in its sleep. But the influence of the life peers was eventually decisive. There were Labour peers, and thus some party conflict. The ‘crossbench’ Independent peers played an important role, and there were now ‘working’ peers, once almost a contradiction in terms. The result was a much livelier house, prepared to challenge the government, whether Labour or Conservative, where there was evidence of strong public support. The quality of inquiries by the Lords also improved, as did the pool of potential ministerial talent, the latter particularly important for a Labour government, which could expect to find few supporters among the hereditary peers.

In May 1997 the Labour Party, led by Tony Blair, won an overwhelming victory in the general election. The new Lord Chancellor tried to modernise the dress of his office. ‘I feel that ... the days of breeches, tights and buckled shoes should go’, he told a parliamentary committee, but the House of Lords was still very conservative on matters which seemed to erode its dignity and power. Eventually the Lord Chancellor was allowed to jettison his half-pants, stockings and slippers in favour of ordinary black trousers and well-polished black shoes, but when he was presiding over the Lords he still had to wear his long, heavy robe and his long, heavy wig.

One of the promises in the 1997 Labour manifesto was the removal of the right of the 758 hereditary peers to sit in the House of Lords, but some negotiations were necessary to get the bill through the Lords, for the Conservative Party opposed reform, the House of Lords being one of their few effective forums of opposition. Eventually a deal was struck with Lord Cranborne, the Leader of the Conservatives, that 92 hereditary peers, elected by their colleagues, would remain in the Lords as an interim measure. Lord Cranborne was sacked by the leader of the opposition, William Hague, for negotiating the agreement. This was only the first stage in the Lords reform for, as Tony Blair said, the government was ‘perfectly prepared to agree that in the first stage one in ten hereditaries stays, and in the second stage they go altogether.’ A royal commission was set up to make recommendations by December 1999 on full-scale reform of the upper house. The Blair Government promised that a reformed upper house would be in place by the next general election, but this election was held in 2001, without the reform of the House of Lords being completed.

The House of Commons

Bagehot thought that the effects of the 1867 Reform Act would take some time to become evident, but in fact there were almost immediate changes. The 90 per cent increase in the number of voters completely changed the relationship between a member and his constituents. To gain the support of such a number of voters there had to be a mass organisation, and the Conservative National Union was formed in 1867 and the National Liberal Federation in 1877 to meet this need. These new organisations had to offer the voters some policies, and to offer some prospect of the promises being kept. This in turn necessitated a disciplined parliamentary party which would support the government in implementing the promises, and MPs began to be elected as representatives of a party rather than as individuals.
The change in voting patterns in the House of Commons was dramatic. In 1860 there were party votes, normally defined as divisions in which at least 90 per cent of a party voting do so on the same side, in only 6 per cent of the divisions. This rose to 35 per cent in 1871, 47 per cent in 1881 and 76 per cent in 1894. By 1967, a hundred years after Bagehot wrote, party discipline was taken for granted, and many thought that MPs were mere robots and that the possibility of significant cross voting was negligible.

The House of Commons now consists of 659 members, from single member constituencies with roughly equal numbers of voters, the boundaries being drawn by independent commissioners. Yet it took a long while to get there, and in all the changes the UK lagged years behind the more developed of its colonies. It will be remembered that in 1867 less than a third of the adult male population could vote, and voting was in public. The secret ballot was introduced in 1872, and in 1884 the electorate was increased from three to five million by enfranchising rural workers, but voters still had to be householders. In the following year there was an attempt to redistribute electoral districts so they would be equal on a population basis and each have one MP. However, some universities and a score of towns retained two MPs.

Women had a very difficult time gaining the vote. From 1903 onwards the suffragettes fought with increasing vigour, but the decisive event was the First World War. After the success of women in performing jobs previously exclusively done by men, they could scarcely any longer be regarded as incompetent to vote. The Representation of the People Act of 1918 gave the vote to women over 30 who were local government electors (or whose husbands were) and also effectively gave adult male suffrage. These changes increased the electorate from eight million to 21 million. Women were given the vote on equal terms with men in 1928, and as a result there are now more women voters than men. Until 1948, second votes were possible for university graduates and for owners of business premises, and in 1950 the last of the double-member constituencies were abolished. The voting age was lowered to eighteen in 1969.

Since 1944 electorate boundaries have been adjusted regularly by independent commissions with the intention of ensuring equality of representation. The populations of Scotland, Wales and Northern Ireland have been falling in comparison with that of England. Because the distribution of seats between the four countries is done by act of parliament and changes are always controversial, Scotland, Wales and Northern Ireland have been able to resist reductions in their numbers of seats and are relatively over-represented, while England is under-represented.

The voting has always been first-past-the-post and voluntary, though there has been some recent pressure for proportional representation. In its manifesto for the 1997 election, the Blair Labour Government promised to set up an independent commission ‘to recommend a proportional alternative to the first-past-the-post system.’ This was done, and the commission reported in October 1998, with a proposal which the commission described as ‘alternative vote with top-up members’. Each elector would have two votes, the first for choice of a constituency MP, the other either for individuals or a party list.
The commission envisaged that 80-85 per cent of the MPs should be constituency members and the remaining 15-20 per cent top-up members. When the report was debated in the House of Commons in November 1998, there was a great deal of criticism. The Conservatives were strongly opposed to the whole idea, and the Labour Party had a range of views. The only significant party strongly supporting the report was the Liberal Democrats. Winding up for the government, George Howarth said that ‘the people should make the decision. It is appropriate that there will be a referendum at the right time’. The right time has evidently not yet arrived.

Devolution of power to Scotland and Wales

The 1997 Labour election manifesto also contained promises to give Scotland ‘a parliament with law-making powers’ and Wales an assembly to ‘provide democratic control of the existing Welsh Office functions’. Referendums on these matters were held in September 1997. In Scotland, 60 per cent voted and of these 74 per cent were in favour of a Scottish Parliament, and 63 per cent were in favour of that Parliament having the power to vary taxes imposed by Westminster. The Welsh voted a week later, and narrowly supported their new assembly. Only just over 50 per cent of those eligible voted, and 50.3 per cent of these were in favour of the assembly, a margin of less than 7 000 votes. The Blair Government nevertheless decided to proceed with both the Scottish Parliament and the Welsh Assembly, and the bills duly passed the UK Parliament.

Elections for the Scottish Parliament were held in May 1999, for a single house. The 129 members were elected in two different ways, broadly on the lines recommended by the Proportional Representation Commission for the UK Parliament. The majority (73) were elected by a ‘first-past-the-post’ system from constituencies which were broadly the same as those for the UK Parliament, while the remaining 56 members were elected by proportional representation, seven of them from each European Parliament constituency. Elections will be held every four years.

The powers of the Scottish Parliament were ‘devolved’ from the UK Parliament, and in these areas the Scottish Parliament is allowed to make laws for Scotland. It can legislate on a wide range of matters of importance to the people of Scotland, including law and order, local government, support for industry, education, health and the promotion of tourism and exports. A devolution could of course be revoked at any time by the UK Parliament if it was felt that the actions of the Scottish Parliament were unacceptable, though this revocation might present political difficulties. The main source of revenue of the Scottish Parliament is a block grant from the UK Parliament, although it has the power to vary the basic rate of income tax by up to three percentage points either side of what is charged south of the border.

Wales too has a single house, the Welsh Assembly, with 60 members elected for a four-year term. It is chosen on a similar system to the Scottish Parliament, with 40 members elected from constituencies by the ‘first-past-the-post’ system, topped up with four members elected by proportional representation from each of the five European Parliament constituencies. The Welsh Assembly however has very much less power than the Scottish Parliament. It cannot pass acts dealing with Welsh matters, which remain the responsibility of the UK Parliament.
It does have a secondary legislative capacity, being able to draw up different orders and statutory instruments to those which apply in England, but these will have to be in conformity with the acts passed by the UK Parliament. Really what the Welsh Assembly has done is to take over the administrative functions of the Welsh Office in Westminster, and, with a budget of over seven billion pounds a year, it will take decisions on issues such as education and the health service in Wales, agriculture, transport and roads and the environment. The size of the annual block grant is decided by the UK government, and the Welsh Assembly has no power to vary taxes, an open invitation when voters are fretful to pass the blame to London for providing too little cash.

Northern Ireland Parliament

There was some feeling that these constitutional changes, particularly the establishment of the Scottish Parliament, were a dramatic breakthrough. In fact, Britain had already had 50 years’ experience of a similar parliament. A parliamentary system modelled on Westminster was established in Northern Ireland in 1921, following the separation of the Irish Free State. There were two houses, a Senate with 26 members and a House of Commons with 52 members. There were two ex-officio senators, the Mayors of Belfast and Londonderry, and the remaining 24 were elected by the Commons by proportional representation. The 52 members of the Commons came from single member constituencies. The powers of the Northern Ireland Parliament were similar to those now given to the Scottish Parliament. Most powers were transferred to the Northern Ireland Parliament, but Westminster kept control over such matters as constitutional and security issues, law and order, policing and relations with the European Union.

The Northern Ireland Parliament lasted for 50 years, but in 1972 the level of sectarian violence persuaded the Heath Government in London to prorogue the Northern Ireland Parliament and impose direct rule. There were sustained efforts to restore self-government to Northern Ireland, which eventually achieved something in June 1998, when a 108-member Assembly from eighteen six-member constituencies was elected. There were delays in restoring self-government, but in December 1999 power was returned to the elected Assembly, with a ten-strong Cabinet voted in by the Assembly, containing three ministers from each of the Ulster Unionists and the Irish-nationalist Social Democratic and Labour Party, and two each from the pro-Irish and militant Sinn Fein and the hardline Democratic Unionist Party. Unfortunately this lasted for only a very brief time before problems over disarming the militants caused direct rule from London to be reimposed, but after three months, when the IRA had agreed to disarm, self-government was restored. But the IRA proved very reluctant actually to give up their weapons, and the situation remains uncertain. The Northern Ireland problem is religious, and religious wars are always the most difficult to solve.

There is no serious pressure towards the United Kingdom becoming a federation. There seems to be no desire in England, except possibly in the north-east, for the establishment of regional parliaments. The Irish are encouraging moves towards independence for Scotland and Wales. Dublin’s motive seems to be a belief that if those two countries become independent countries in the European Union, it will become almost impossible for England to retain control of Northern Ireland.
But independence is a long way off for Wales. It is too early to say how effectively the Scottish Parliament and the Welsh Assembly will work, but it seems certain that if they do not satisfy their constituents the pressure will be for the devolution of more powers, not the return of the present powers to Westminster. Scotland may move towards becoming an independent country in the European Union, though whether it would then retain the British monarch as its head of state is very doubtful.

Heads of state

Looking at the performance of the British heads of state, Queen Victoria’s successors have been much more meticulous in observing the limitation of the rights of the monarch to the right to be consulted, to encourage and to warn. There have been no occasions on which a prime minister’s or Cabinet’s request for a dissolution has been refused, a discretion which Bagehot thought rested with the sovereign. George V was prepared to agree to Prime Minister Asquith’s request for the creation of perhaps 500 peers in 1911, though it is far from certain that Edward VII, had he survived, would have been so acquiescent.

This is not to say that there has not been a need for royal decisions, for the selection of a prime minister was difficult if no party had a majority: there were no fewer than eight minority and two coalition governments during Victoria’s reign. The Labour Party has always had an elected leader, but the Conservative leader was, until 1964, supposed to ‘emerge’. On one occasion no one did clearly emerge as leader of the Conservatives. In 1923 Conservative Prime Minister Bonar Law resigned, mortally ill, too ill to be consulted about his successor. The party was split between Stanley Baldwin and Lord Curzon. Although there was much consultation, the final selection was King George V’s, and he chose Baldwin, finally ending any thought that a prime minister could come from the House of Lords. On other occasions, such as Macmillan’s succession to Eden, or Douglas-Home’s succession to Macmillan, although the royal prerogative was used, in fact the process of consultation and elimination had resulted in a single name emerging.

Election of parliamentary leaders

The Conservative method of choosing party leaders was, though, a confusing and in fact undemocratic process, and was replaced by the formal election of a Conservative parliamentary leader by the party members in the House of Commons. To win on the first ballot a candidate had to obtain a simple majority of the number of Conservative MPs and have a lead of at least 15 per cent over his or her nearest challenger. If a winner did not emerge from the first ballot a second ballot was held, for which fresh nominations were called. Two leaders (Heath and Thatcher) were removed by this system.

In the Labour Party, until 1982 the parliamentary party had elected the leader. In that year the responsibility was transferred to an electoral college of MPs (30 per cent), party members (30 per cent) and block votes from the trade unions (40 per cent). After a bitter fight the block votes from the trade unions were eliminated by the Labour Party Conference in 1993, and a one-member-one-vote system introduced, with voting by mail. Something nevertheless had to be done to weight the votes, for there were four million trade unionists paying the political levy as compared with 270 000 individual party members and only a few hundred MPs at Westminster and in the European Parliament.
The final solution was that the votes would be weighted so that a third came from trade unionists (voting as individuals), a third from local party members and a third from the MPs and MEPs. The first leader to be elected under this system was Tony Blair.

The European Union

Before we leave the United Kingdom to look at developments in Canada, Australia and New Zealand, it is necessary to mention one change which has limited the sovereignty of the UK Parliament. On 28 October 1971 the House of Commons approved the terms for entry into the European Economic Community (which has been known since 1993 as the European Union). In effect they were voting to join an embryo federation, with the federal government having designated powers, which could be expanded by agreement, and the member nations retaining the remaining powers. There is a parliament, but there certainly is not responsible government. Citizens of any EU country have the right to live and work and be educated anywhere within the Union, and are entitled to medical treatment there.

The EU now has fifteen members, and has membership applications from twelve more countries, ten of them from Central and Eastern Europe; the other two are Cyprus (the Greek part only, at the moment) and Malta. Five of them have been short-listed, and may join as early as 2004. And when the twelve have been dealt with, there will be another queue of similar length. Before membership negotiations can start, the EU has to be satisfied that the applicant has met the political requirements of ‘democracy, the rule of law, human rights and ... protection of minorities’. Turkey would like to join the EU, and has had a preliminary agreement since 1963, but as it has not yet met the political requirements, membership talks have not yet begun.

As far as the sovereignty of the UK Parliament is concerned, European Union membership means that EU laws can override British laws in areas within the EU’s powers, and disputes over law-making powers are decided in the EU’s own court of justice, thus limiting the traditional sovereignty of the UK Parliament. The UK Parliament has no direct power over proposed EU legislation, but committees of the Lords and Commons examine drafts of important proposed laws and make recommendations to their respective houses, which in turn may give advice to the UK minister who will be attending the EU Council of Ministers. The amendment of UK laws rendered inappropriate by EU legislation is left to the government, which usually does it by statutory instrument, as authorised by the European Communities Act of 1972. As an additional measure, to avoid problems in the courts, which would be interpreting human rights under local law, the European Convention on Human Rights has been incorporated into English and Scottish statute law.

The UK Parliament has no direct influence on EU policies, and the European Parliament, based in Strasbourg, has proved to be not very effective, although its members have more practical opportunity to influence the content of European legislation than the members of the UK House of Commons have over theirs. Its influence on the EU’s budget, too, is much greater than the UK Parliament has over its national budget. Prime Minister Tony Blair has proposed a second chamber, where the European Union nations would be equally represented, so as to prevent the major nations dominating the smaller ones, but there is no sign of this second chamber being set up.
European Union voters have shown little interest in voting for the European Parliament, and the MEPs are surprisingly unreliable in their attendance at parliamentary sessions, particularly as weekends approach. The bureaucracy, the European Commission, is based in Brussels, and has 16 000 professional staff. The commissioners who head it are nominated by national governments, but are supposed to be independent. The European Commission has the sole right to propose legislation for the EU, though it is for the Council of Ministers and the European Parliament to decide what is enacted. The European Commission was becoming very corrupt in the 1990s, and the European Parliament, using one of its few effective powers, managed to have the sixteen commissioners removed.

The governments of the EU member countries have become more involved as the power of the European Commission was restrained, particularly as the EU moved into new areas such as a common currency and foreign and defence policy. The European Council is composed of the heads of government of the member countries, with the chairman chosen from among them on a six-month rotating basis. The Council provides only broad guidelines. Detailed policy aspects are dealt with by councils of ministers comprising appropriate representatives of the member nations, the membership depending on the subject matter: thus trade ministers discuss trade, farm ministers agriculture, and so on. Some policies are decided by a majority vote of member countries, others require unanimity. There is a General Affairs Council of Ministers, made up of foreign ministers, which is supposed to co-ordinate the activities of the various councils of ministers, but it does not work very effectively.

The question of whether member countries should have power of veto over EU policies is very divisive in Britain. The Blair Labour Government says that there is a good case for reducing the policy areas in which governments have a veto. It is hard enough, it is argued, to achieve unanimity among the present fifteen countries. Achieving it among twenty could prove impossible. For instance, the Blair Government suggests that European court procedures, transport, and even changes to the EU’s fundamental treaty, should be decided by majority voting, though issues such as economics and defence and foreign policy should be subject to national veto. The Conservatives, on the other hand, oppose the extension of majority voting and the enlargement of common policies. They also want member countries to be able to opt out of new EU legislation.

The EU became a single market on 1 January 1993, and the Maastricht Treaty, negotiated in 1991 and finally ratified in 1993, was intended to move towards a common currency by 1999, the establishment of an EU bank, and the formulation of common foreign and defence policies. The new currency, the euro, was introduced for electronic and paper transactions in 1999, and in 2002 notes and coins will replace national equivalents. When monetary union was introduced, eleven member countries joined but Britain stayed out, together with Sweden, Denmark and Greece. Greece wanted to join, but was delayed until it could meet the economic criteria. Public opinion in Sweden and Denmark seems to be swinging in favour of joining the monetary union. Prime Minister Blair has promised a referendum before the next election, but this may not happen if public opinion remains strongly against joining. Governments do not like the humiliation of losing referendums.
Britain may find itself the solitary outsider, though it might be joined by several of the EU applicant countries.

The development of common foreign and defence policies has not moved as fast as monetary union, but after NATO’s war in Kosovo the leading EU countries began to feel strongly that they should possess a capability for collective military action which was independent of NATO, and did not necessarily depend on the military leadership of the United States. British Prime Minister Tony Blair has declared his support for this, departing from the previous British position that such moves should be resisted for fear of damaging NATO. There have also been formal moves for the development of a common foreign and security policy for the EU, though this will take some time to be effective, with ancient national prejudices to be overcome. It will not be easy, for Britain and France are used to being in a position of power, as both permanent members of the UN Security Council and as nuclear powers, and will not yield their influence easily, particularly as an increasing number of EU members, such as Sweden, Finland, Ireland and Austria, are neutral.

As an indication of the declining power of the European Commission, the EU governments handled monetary union themselves, instead of consigning it to the European Commission. So they wrote the rules for the new currency, and set up a new independent central bank to manage it. Governments have reserved to themselves the development of the EU defence structure, and the common foreign and security policy. The Scottish government has followed the example of other autonomous regions of the EU by establishing an office in Brussels, to represent Scottish interests on devolved matters, and to ensure the implementation in Scotland of EU obligations which concern such matters. Westminster is beginning to find out what it is like to be a provincial parliament.

In the new dominion of Canada several constitutional problems emerged over the years: the status and method of amendment of the Constitution; disputes over the status of the Province of Quebec; the composition and role of the Senate; and the removal of the power of the British Privy Council to interpret the Canadian Constitution.

The Constitution Act 1867 (usually referred to as the BNA Act) was an Act of the UK Parliament, and could be amended only by that body. Unlike the New Zealand Parliament from 1857 onwards, the Canadian Parliament had no power to amend the national Constitution. It was not that the British made any difficulties. If a proposed constitutional amendment was passed by the Canadian Parliament (House of Commons, Senate and Governor-General) the necessary new Constitution Act was passed at Westminster without delay, or much interest. On no occasion did a Governor-General refuse to approve, or Westminster fail to enact, a constitutional amendment passed by the two Canadian houses. In 1949 both the UK and Canadian parliaments passed the BNA (No. 2) Act which gave the Canadian Parliament the power to amend the Constitution in matters lying solely within federal jurisdiction. Yet the position remained anomalous, particularly as the Statute of Westminster in 1931 had made Canada otherwise completely independent. The UK Parliament grew increasingly uneasy about the exercise of its remaining power. What if one or more of the provincial governments objected to a constitutional amendment requested by the Canadian Federal Parliament?
After all, the Constitution was supposed to be a pact between the federation and the provinces. How many provinces had to object before the UK Parliament should take notice? When the Trudeau Government first approached the UK government to have the Canadian Constitution amended and ‘patriated’, eight of the ten provinces lobbied Westminster MPs against the proposal. It seems certain that the UK Parliament would not have passed the necessary act, but the issue was resolved by the Canadian Supreme Court, which ruled that constitutional convention required that there must be substantial support among the provinces for such a change to the Constitution to be accepted. Trudeau was forced to modify his proposals, and managed to get the final version approved by nine of the ten provinces, Quebec of course being the dissenter. It was with some relief that the UK Parliament passed the act and relinquished the remainder of its power over the Canadian Constitution.

The Constitution Act of 1982 contains several amending formulas, depending on the subject matter. Typically a constitutional amendment has to be passed by the House of Commons and authorised by at least two-thirds of the provincial legislatures, representing at least half of the total population of all the provinces, but some amendments have to be unanimous, some can be agreed by a majority of provinces, and others which affect only some of the provinces may be agreed by the legislatures concerned. A provincial legislature can exclude its province from the operation of a constitutional amendment which affects the powers of provinces. The Senate was given only a 180-day suspensive veto over constitutional amendments, though it retained all its existing rights over other legislation. The Constitution Act also incorporated a Charter of Rights and Freedoms. The successful formula was the result of the accord signed by the federal government and the provinces, with the exception of Quebec, in November 1981.

Quebec was the second of the constitutional problems of the dominion. It was not easy to incorporate a province of largely different language, religion and social attitudes, particularly as the province did not wish to be assimilated. There were ‘two nations warring in the bosom of a single state,’ as Lord Durham put it. The original confederation settlement had given a unique status to Quebec, permitting it to preserve its own civil law and to retain the use of the French language. The other original provinces received no such special privileges, though provinces which later joined the confederation were sometimes able to make special deals. Manitoba, for instance, received a guarantee of the protection of religious education and the French language, and special land was set aside for the Métis (the offspring of French fur-traders and native Indian women).

The Meech Lake Accord was an attempt to induce the province of Quebec to accept the Constitution Act of 1982, by which Quebec is legally bound, despite refusing to ratify it. Quebec produced five proposed constitutional changes, which, if accepted, would persuade it to accept the whole Constitution. The proposed changes covered the special status of Quebec, a provincial veto on constitutional changes affecting a province, a voice for the provinces in Supreme Court and Senate appointments, increased power for the provinces over immigration, and limits on federal spending in areas of exclusive provincial jurisdiction.
These conditions were agreed by Prime Minister Mulroney and all the provincial premiers at Meech Lake in 1987, and were passed overwhelmingly by the House of Commons. However, ratification required unanimous agreement by the provincial legislatures, and in 1990 Manitoba and Newfoundland refused to do so, basically because they did not agree with the special advantages for Quebec and francophones. After the collapse of the Meech Lake Accord, another attempt was made to hold Quebec in the federation by reforming the Senate and offering other baits to Quebec. In July 1992, under the Charlottetown Agreement, the other provinces offered Quebec a 'Triple E' Senate: elected, equal and effective. Each province would elect eight senators, and there would be no ministers in the Senate. The Senate would have only a 30-day suspensive veto over money bills, but Ontario (which, like Quebec, would have had to accept a reduction in the number of its senators from 24 to eight) also insisted that a 70 per cent Senate majority be required before ordinary legislation could be rejected. Whether this is compatible with an effective Senate is very debatable. The baits for Quebec were provisions that Quebec would be recognised as a 'distinct society' with some special privileges, that federal legislation dealing with French culture and language would have to be approved by a majority of French-speaking senators, and that each province would have a veto over any future changes to federal institutions, thus returning to Quebec a veto power it had lost in 1982. There was also recognition of the inherent right of aboriginal self-government. The Quebec government was involved in the constitutional negotiations, for the first time in two years, and accepted the Charlottetown offer, though it insisted on more seats in the House of Commons to compensate for the lost senators. The agreement was put to the voters in a non-binding referendum. A major problem was that the referendum asked the voters to approve 50 pages of proposals covering everything from Senate reform to aboriginal self-government. A voter needed to find only one proposal to disagree with in those 50 pages to be persuaded to vote 'no'. The referendum was defeated, both nationally (with 54 per cent of the voters against the agreement) and in six of the ten provinces (including Quebec). The idea of a constitutional amendment was dropped. Of course the Quebec problem did not go away. In October 1995 there was a referendum in Quebec province on the question: 'Do you agree that Quebec should become sovereign after having made a formal offer to Canada for a new economic and political partnership ... ?' The referendum was narrowly defeated by a vote of 50.6 per cent to 49.4 per cent. There was an extraordinarily high participation rate of 94 per cent of eligible voters. It may be, though, that the result of this referendum did not really represent the number of Quebec voters who wanted to secede from the Canadian federation. There was considerable misrepresentation in the 'yes' campaign about the consequences of secession. A poll conducted at the end of the campaign revealed that 80 per cent of the Quebec voters who were planning to vote 'yes' were under the impression that Quebec would continue to use the Canadian dollar after secession; 90 per cent thought that economic ties with Canada would be unchanged, and 50 per cent thought that they would be able to use Canadian passports. 
More than 25 per cent of 'yes' voters believed that Quebec would continue to elect members to the Parliament in Ottawa. Of course none of these would have automatically continued after secession. After the referendum, Prime Minister Chrétien kept a promise he had made during the referendum campaign, and introduced a package into the Parliament which included recognition of Quebec as a 'distinct society' and the granting of a veto over constitutional changes to four regions (Quebec, Ontario, the Western Provinces and the Atlantic Provinces). The package was passed, though Quebec dismissed it as meaningless, and British Columbia successfully campaigned for its inclusion as a fifth veto area. The legal right of Quebec to secede was challenged in the Supreme Court in 1997. The government of Quebec boycotted the proceedings, so the Supreme Court appointed a 'friend of the court' to argue Quebec's case. In its judgment the Supreme Court ruled that Quebec did not have the right to secede unilaterally under either the Canadian Constitution or international law, but it also ruled that should a future referendum in Quebec produce a clear majority on a clear question in favour of secession, then the federal government and the other provinces would have a duty to enter into negotiations with Quebec on constitutional change. The momentum for secession seems to be failing. In the Quebec election in November 1998, although the Parti Québécois won government, the Liberal Party, which is opposed to secession, won a larger share of the vote. Premier Bouchard admitted after the election that the voters 'are not prepared to give us the conditions for a referendum right now.' So far there have been no further referendums. The original composition of the Senate had been in part an attempt to soothe Quebec's fears. One of the key figures of confederation, George Brown, said that Quebec had 'agreed to give us representation by population in the lower house, on the express condition that they could have equality [with Ontario] in the upper house. On no other condition could we have advanced a step.' Although the Quebec representation (originally 24 out of 72 senators) has been maintained, its influence has been reduced as new provinces have joined or been created, and have been granted an entitlement to Senate positions. Manitoba was created in 1870, British Columbia joined in 1871 and Prince Edward Island in 1873, Alberta and Saskatchewan were created in 1905, and Newfoundland joined in 1949. The Senate now has 104 members, so that Quebec's representation has dropped from one-third to less than a quarter. Not that it matters much, for the Senate has become almost totally ineffective and is another unsolved constitutional problem. In the early days of confederation the Senate did exercise a significant legislative role. There were five senators in Macdonald's first cabinet, and senators have held most important cabinet posts, including the prime ministership. But since the early days the Senate's importance has greatly diminished. The reason is of course the non-elective character of the Senate, which has usually led it to back away from any direct confrontation with the Commons. The Senate's lack of prestige has been exacerbated by its highly party political nature. Senators appointed since 1965 retire at 75, but before that they were appointed for life. 
The appointments are in the gift of the prime minister, and prolonged rule by one party causes serious imbalances in the Senate, since appointments are usually made to reward loyal party service. Worse still, from the point of view of Senate prestige, the prime minister sometimes does not even bother to fill vacancies. Under the Meech Lake Accord, new senators were to be chosen from lists of names provided by the provinces. There was a vacancy for a senator from Alberta, and that province held a Senate election in October 1989 in an attempt to speed up reform of the Senate. The winner was appointed to the Senate, but after the collapse of the Meech Lake Accord Prime Minister Mulroney announced that he would not be bound by such elections in future. Alberta did not happily accept this, and in 1998 the provincial government announced its intention to elect two 'senators in waiting', available to fill Alberta vacancies in the Senate as they arose. A vacancy arose just before the election was due, and Liberal Prime Minister Chrétien, who had never supported the concept of the election of senators, named a replacement without waiting for the election. The premier of Alberta regarded this as a 'slap in the face for Albertans', but in fact it is unrealistic to think that the Constitution can be changed by piecemeal acts by individual provinces. Although the Senate is under severe criticism, it is not because it does nothing. It provides occasional ministers, usually because there is not a suitable member of the Commons from a particular province. The Senate reviews complex bills, and sometimes suggests amendments. It conducts public inquiries, many of them useful, and it helps to watch over delegated legislation. But in the mid-1980s things changed dramatically. In 1984 the Progressive Conservatives under Brian Mulroney were swept into power in Ottawa, after more than half a century of Liberal dominance, broken only by the governments of John Diefenbaker and the very short one of Joe Clark. As a consequence there was a substantial Liberal majority in the Senate, and this majority was used when the Mulroney government endeavoured to pass a bill to ratify the free-trade pact with the USA. The Liberal-dominated Senate refused to pass the bill until there had been an election on the issue. This was held, the Mulroney Government was returned with a comfortable majority, and the bill was re-introduced and speedily passed by both houses. Things became even more dramatic a few years later, when the Mulroney Government introduced a bill to implement a goods and services tax. When it reached the Senate it was referred to its Standing Committee on Banking, Trade and Commerce. The committee toured Canada hearing witnesses, who of course were largely opposed, as voters nearly always are when new taxes are proposed. The Liberal senators on the committee saw a wonderful opportunity to exploit the political situation, and the committee, by a majority, duly recommended the rejection of the tax bill. The Mulroney Government clearly had to do something about the Senate, for not only was the Goods and Services Tax Bill held up, but so were two other important tax bills. There were fifteen vacancies in the Senate, and Mulroney filled them with Progressive Conservative supporters. Even then his party was still in a minority in the Senate, which had 46 Conservative senators, 52 Liberals and six senators not supporting either of the major parties. 
Mulroney then used the deadlock-breaking power, by which he could ask the Queen of Canada to authorise the Governor-General to appoint either four or eight more senators. He chose eight, and as they were of course nominated by him, the Progressive Conservatives gained an effective majority in the Senate. The three bills were duly passed, after an astonishing filibuster by Liberal senators. These events brought Senate reform to the forefront of the political debate, but there were still great difficulties, for there was no general agreement on what should be done. Nearly everyone agrees that there should be a Senate. Nearly everyone agrees that it should be elected. Everyone agrees that its original role as protector of property interests is no longer desirable. Everyone agrees that it should have no power to remove a government. But there agreement stops. What are to be the Senate's powers? Are provinces to be represented equally, or on a population basis? Would a suspensive veto enable the Senate to perform a useful role? Are senators to be elected by voters or by provincial parliaments, and what is to be the method of election? Should there be a requirement for two majorities, both overall and of francophones, for legislation dealing with linguistic matters? It will be a long time, it seems, before there will be sufficient agreement for a constitutional amendment to have any chance of success. In the abortive Charlottetown Agreement, it was proposed that senators should be elected, with the same term as the House of Commons. There were to be six senators from each province and one from each territory, with the possibility of additional senators from the aboriginal peoples. Elections could be either by the voters or by provincial legislatures. According to a government pamphlet:

the Senate would be able to block key appointments, including the heads of key regulatory agencies and cultural institutions. It would also be able to veto bills that result in fundamental tax policy changes directly related to natural resources. In addition, it would have the power to force the House of Commons to repass supply bills. Defeat or amendment of ordinary legislation would lead to a joint sitting process with the House of Commons. At a joint sitting a simple majority would decide the matter.

These Senate reforms sank with the rejection of the Charlottetown Agreement.

The Privy Council

The other original constitutional problem has disappeared. Since the various Constitution Acts were enacted by the UK Parliament, appeals on constitutional matters lay with the Judicial Committee of the Privy Council in London, via the Canadian Supreme Court, after its establishment in 1875. In a federation, the division of powers between the various governments is a frequent source of dispute, and in the early years the Privy Council showed a remarkable bias towards the provinces, creating some surprising consequential powers to add to the specific powers given to the provinces under the 1867 Constitution. Nevertheless on one occasion at least the Privy Council had a benign influence, when in 1929 it overturned a decision of the Canadian Supreme Court which held that women were not 'persons' under the Constitution, and therefore could not be appointed to the Senate. The first woman senator was appointed in 1930. Appeals to the Privy Council were finally abolished in 1949, when the Supreme Court of Canada became the country's final court of appeal, and the 'patriation' of the Canadian Constitution in 1982 removed the last vestige of British involvement. 
The Governor-General, in the beginning, exercised power over foreign affairs and international trade on behalf of the British government, but it was a sign of the times when the first prime minister of Canada, Sir John Macdonald, was one of the British negotiating commission which signed the Treaty of Washington in 1871. By the 1870s Canada was imposing protective tariffs and trying to negotiate trade agreements with the United States. The British declaration of war in 1914 automatically involved Canada, but the war changed things. The Imperial War Conference of 1917 decided, largely at Canadian insistence, that after the war there should be 'a full recognition of the dominions as autonomous nations of an Imperial Commonwealth', and that the dominions and India should have 'an adequate voice in foreign policy'. Canada signed the Versailles Treaty as an independent nation and became an inaugural member of the League of Nations. As early as 1920 the right to separate Canadian diplomatic representation was established, though it was not until 1926 that the first legation (in Washington) was opened, to be followed by one in Paris in 1928 and another in Tokyo in 1929. At the 1926 Imperial Conference it was declared that the dominions and Britain were equal in status, bound together only by an allegiance to the Crown, an arrangement which was formalised in 1931 by the Statute of Westminster. Governors-General have generally been punctilious in following the principles set out by Bagehot, with two notable exceptions. In 1873 Lord Dufferin was prepared to dismiss the prime minister (Sir John Macdonald) over allegations of electoral bribes. The crisis was averted when the prime minister resigned. In 1926 Lord Byng refused a request for an election by Prime Minister Mackenzie King, who had lost the confidence of the House of Commons. Byng commissioned the leader of the opposition to form a government, but this collapsed after three days and an election was unavoidable. Unfortunately for Byng, Mackenzie King won the election. Since 1952 the Governor-General has always been a Canadian. The Governor-General is the representative of the Queen, but the selection is made by the Canadian prime minister, the Queen merely rubber-stamping the name put forward to her. Six provinces have joined the federation since 1867, an expansion not without pain. There were two civil wars between the English-speaking settlers and the Métis, in what is now Manitoba in 1869-70 and in what is now Saskatchewan in 1885. As new provinces joined, or the population increased, the number of members of the House of Commons was increased from 181 in 1867 to 301 in 2000. The total number of members is now determined by parliamentary commissions which review the decennial census figures and adjust electorate boundaries and the number of electorates accordingly, with the proviso that no province should have fewer MPs than it has senators. Most Canadians have always voted in single member constituencies, on a first-past-the-post basis. The last two-member constituencies were abolished in 1966. Some of the provinces tried, but abandoned, preferential voting (the single transferable vote). The secret ballot was introduced federally in 1874, but until 1917 the federal franchise was determined by the various provinces, except for the 1885-1898 period. 
This of course resulted in variations between the provinces, though in all provinces in the early days the vote was confined to adult males who met income or property requirements, which meant that only about 15 per cent of the population could vote. The franchise restrictions were gradually lowered and women were given the vote in four provinces in 1916-17. Women in the armed forces and close female relatives of servicemen were given the federal vote in 1917. In 1920 the electoral law, now under federal control, was changed to universal adult suffrage with a minimum voting age of 21. The voting age was lowered to eighteen in 1970. The maximum federal parliamentary term is five years. This provision is entrenched in the Constitution with the proviso that 'in time of real or apprehended war, invasion or insurrection' the Parliament may, provided there is a two-thirds majority in the House of Commons, extend the life of the House indefinitely. There are many unusual features about Canadian elections. The long-term stability of the two main political parties, the Conservatives and the Liberals, is remarkable. They were there in the early days of federation, and are still there, though the Conservatives were nearly wiped out in the 1993 federal election and have still not recovered. Then there is the remarkable turnover of members of the House of Commons, there being, by international standards, very few 'safe' seats. A study has shown that only 23.6 per cent of seats in the Canadian House of Commons are secure for a particular party, compared with 77 per cent in Britain. This estimate seems much too high for Canada, for in the 1993 election the Progressive Conservatives retained only two of their 157 seats, and the New Democrats only nine of their 44. The resultant parliamentary inexperience of many Canadian MPs has a significant effect on all the activities of the House of Commons. The bulk of MPs (over three-quarters) are likely to have served less than seven years, and the proportion of new MPs in a parliament averages about 40 per cent, with a peak of 68 per cent in 1993. After the 1993 election, the new prime minister, Jean Chrétien, delayed the first meeting of the new parliament on the grounds that '200 members are brand new ... and have to do their homework to be ready ... The same thing is true for the cabinet.' This is a very different pattern from that of the other countries we are considering. In Britain, 70 per cent of MPs are likely to have served for at least ten years, and the proportion of new members after an election is rarely greater than a fifth. The longevity of governments is also unusual. The Conservatives ruled from 1867-73 and 1878-96, and the Liberals from 1896-1911 and 1935-58. This was perhaps a factor in the development of widespread political patronage. In 1871 Prime Minister Macdonald claimed that there was a constitutional principle that whenever an office was vacant it belonged to the party supporting the government. This principle is still adhered to, though since 1910 with less rigour. It was still a major issue in the 1984 election, when the Liberals were ousted by the Progressive Conservatives. Finally, perhaps the most unusual feature of all is the failure to develop a nationwide party system. Parties tend to be based in particular provinces or groups of provinces, with very little strength elsewhere. A group such as the Bloc Québécois can be formed to represent the interests of a particular province, and may be strong enough to become the official opposition for a time. 
A government may have no MPs at all in half the provinces. This does not make for national unity.

Provincial upper houses

There are no surviving upper houses in the Canadian provinces, which has removed an important restraint on the behaviour of provincial governments. The provincial heads of state, the lieutenant-governors, are appointed by, and responsible to, the federal government. On joining the dominion, the provinces had various parliamentary structures. Each, of course, was given a lieutenant-governor appointed by the federal government. All had elected lower houses, called legislative assemblies. Of the four original provinces, Nova Scotia and New Brunswick were authorised by the BNA Act of 1867 to retain their existing structures, which contained nominated upper houses called legislative councils. On the partition of the old Province of Canada in 1867, Quebec and Ontario took different paths. Ontario chose not to have an upper house in order to eliminate resistance to the Cabinet, and for reasons of economy. Quebec chose to have a Legislative Council, primarily to protect the English-speaking minority. Of the provinces to enter the Confederation after 1867, British Columbia (1871) had never had an upper house. Manitoba was granted an upper house by the Act creating the province and admitting it to the Confederation, while Prince Edward Island was the only province to have an elected Legislative Council, which it retained. Alberta and Saskatchewan, created in 1905 and joining the dominion at the same time, have never had upper houses. Newfoundland proved reluctant to join the dominion of Canada. It had been annexed by England in 1583, was granted responsible government in 1855, and had an upper house. In 1869 the voters rejected the idea of joining the Canadian Confederation:

Hurrah for our native isle, Newfoundland.
Not a stranger shall hold an inch of its strand.
Her face turns to Britain, her back to the gulf,
Come near at your peril, Canadian wolf!

Economic reality eventually forced a modification of these views. Newfoundland became bankrupt in 1933, responsible government was suspended, and for sixteen years the country was governed by an autocratic commission, aided by British subsidies. Responsible government, without an upper house, was restored in 1949 so that Newfoundland could join Canada. There are now no provincial upper houses. The reasons for abolition have been their lack of prestige caused by party political appointments, the dislike of governments at having their will frustrated, and economy. Abolition was by no means always easy, for the Legislative Councils had veto power over the legislation necessary to abolish themselves. Success was achieved in various ways. In New Brunswick and Nova Scotia the government-appointed legislative councils had unlimited numbers, and it was possible for the government to 'swamp' the councils by appointing new members pledged to vote for abolition. In Manitoba sufficient members of the Council were bribed by being offered comparable salaries elsewhere in the government service. In tiny Prince Edward Island the two houses were merged into a single Assembly. The rights of property were protected by having two members from each electoral district, an assemblyman and a councillor. Voters for the assemblymen had to have a small property qualification, designed merely to deny the vote to transients, whereas to vote for a councillor required substantial property. These property requirements have only recently been removed. 
The last Legislative Council to disappear was that of Quebec. There had been intermittent smouldering disputes with the Quebec government, and the Legislative Council was abolished in 1968 by the simple expedient of offering councillors annual pensions equal to their salaries. Lieutenant-governors are appointed by the federal government for a five year term, and are expected to heed its instructions. By the BNA Act of 1867 the federal government could veto any provincial bill within a year of its passage. As Sir John Macdonald put it in 1873: ‘if a bill is passed which conflicts with the Lieutenant-Governor’s instructions or his duty as a dominion officer, he is bound to reserve it, whatever the advice tendered to him [by the provincial government] may be.’ Seventy provincial bills have been vetoed since 1867, the last being in 1961. The power of veto in fact became increasingly difficult to use, as advocates of provincial rights managed to focus the debate on the question of interference by Ottawa in local matters. Disputes over jurisdiction are now settled by the Supreme Court, and the power to veto provincial legislation has become politically unusable. In the early days after Confederation, lieutenant-governors often took an active role in politics, in such ways as refusing assent to bills and dismissing ministers. They no longer do so, but between 1867 and 1903 five provincial governments were dismissed, and before 1945, 27 provincial bills were refused assent. Lieutenant-governors may refuse a request for a dissolution from a premier who has lost the support of the Legislative Assembly if another leader is likely to have the support of the Assembly. Such refusals were fairly common in the early days, but lieutenant-governors have been more wary since the furore over Governor-General Byng’s action in 1926, and there have in fact been no refusals of requests for dissolutions since that date. The provincial electoral systems have gradually changed to universal suffrage for all those aged over eighteen. The electoral districts in all provinces are organised with a strong rural or remote area bias. In the 1999 election in New Brunswick, for instance, one riding had 13 786 eligible voters while another had only 3444. In 1995 the province of Ontario adopted the federal electorates for the provincial parliament, reducing the number of seats from 130 to 99 by means of the ‘Fewer Politicians Act 1996’. The federal electoral system has a strong rural bias, and a rural vote in Ontario is worth as much as six urban votes. The number of registered voters in 1996 in the largest riding was 129 108 and the smallest 19 406. The development of responsible government in the provinces has been caustically criticised by Professor Mallory, who has written that: the chaotic politics of British Columbia, which has never cheerfully accepted a two party system on national lines, has modified from time to time the normal operation of cabinet government. In British Columbia, as in Manitoba, coalition governments have eroded the clear lines of collective responsibility which cabinet government requires. In the prairies the powerful impact of agrarian reform movements with their distrust of party politicians and firm belief in constituency autonomy has undermined party discipline and authority of cabinets. In the Atlantic provinces, politics still wears the raffish air of the eighteenth century. 
The scent of brimstone hangs about the hotel-rooms and caucus-rooms of politicians who have yet to receive the gospel of political reform. In Quebec, even among French Canadians, the phrase 'boss-rule' is in common currency. Ontario has had, within the last twenty years, a regime at once radical, demagogic and corrupt, in which it was difficult to distinguish the sober lineaments of the British cabinet system. This was written in 1957, but the situation does not seem to have changed very much since then. The Liberals, the Reform Party and the Progressive Conservatives have not been organised nationally, and give virtually no assistance or direction to their provincial organisations. This has led to the emergence of provincial parties. In Quebec the separatist Parti Québécois is a potent force. In Alberta there was an extraordinary 36-year dominance by the Social Credit Party from 1935 to 1971, but the party has since virtually disappeared, winning only 0.8 per cent of the vote (and no seats) in 1986. A Social Credit Party (the Socreds) survived in British Columbia until the 1990s, ruling that province almost continuously from 1952, but it too has almost disappeared, and the battle there is now between the New Democrats and the Liberals. The Progressive Conservatives and the Liberals contend for power in Ontario and the Maritime Provinces, though there are special features. In Prince Edward Island, policy differences are hard to find, for 'each has advocated and opposed everything, depending on whether it was the party in power or in opposition at the time.' New Brunswick politics tend to concentrate on personalities rather than issues. One successful Progressive Conservative premier who had lasted for four terms was defeated in 1987 because of allegations of a liking for drugs and parties with young boys, the Liberals winning all 58 seats. It is difficult to make responsible government work if there is no opposition.

The great change in Australia since Bagehot's day has been the federation of the six colonies. Australia is one of the few countries to achieve a federation by negotiation rather than as the result of violence. Responsible government was adopted, although the Constitution never actually said so. As Australia became effectively independent of the UK, there was increasing pressure to become a republic, but this question is still unresolved. The first timid step towards Australian federation was taken by the UK Parliament in 1885 when it set up the Federal Council of Australasia. This had two representatives from each self-governing colony and one from each crown colony, but it had no executive powers and no revenue, and was of very limited effectiveness. A contemporary wrote that it was little more than a debating society. Neither New South Wales nor New Zealand ever joined it and South Australia was a member only from 1888 to 1890. Perhaps it may have helped the federal idea but by 1890 it was clear that an Australian federation would not grow from the Federal Council of Australasia. The Council met for the last time in January 1899 and thereafter disappeared unmourned. 
In 1889 the veteran premier of New South Wales, Sir Henry Parkes, proposed a national convention to devise a scheme of federal government, which he thought ‘would necessarily follow close on the type of the dominion government of Canada.’ Such a conference was held in Sydney in 1891, with delegates from all six Australian colonies and observers from New Zealand, and a draft Constitution was produced, composed largely by Sir Samuel Griffith. The Canadian model was substantially modified. There was to be a House of Representatives representing the people, and a Senate (with equal powers except over some money matters) representing the states. The states were to have equal representation in the Senate. Specific powers were given to the federal Parliament, some were given concurrently to the federal and state parliaments, and all remaining powers left to the states-the opposite to the Canadian model. There was deliberately no mention of responsible government. Griffith wanted the matter left open. After success, anti-climax. It is not necessary here to trace the events of the next few years and to try to apportion blame between the various forces which delayed federation: the decline of the political power of Parkes, the rise of the Labor Party, the devastation wrought by the economic depression of the 1890s and the resentment of the colonial parliaments at being asked to approve a constitution in whose drafting most of them had had no hand. Federation was recovered from the grave, or perhaps from limbo, largely by the activities of the Australian Natives Association and the Federation Leagues. A conference of premiers in 1895 agreed that federation was ‘the great and pressing question’. More importantly, they agreed to a procedure that would make the convention they proposed likely to be effective. The lessons of 1891 had not been forgotten. The convention was to consist of ten representatives from each colony directly chosen by the electors, and they would have the duty of framing a draft federal constitution. The convention would then adjourn for not more than 60 days so that there would be an opportunity for changes to the draft constitution to be proposed by interested people. The constitution finally agreed by the convention would then be put to the voters of each colony for acceptance or rejection by direct vote, and if passed by three or more colonies would be sent to the Queen, with the request that the necessary act be passed by the UK Parliament. Colonial parliaments would not be able, by mere inaction, to stop the process after it had begun. In a series of conventions in Adelaide, Melbourne and Sydney in 1897-98 the constitution, largely based on the 1891 draft, was finally hacked out. Responsible government was extensively discussed by the conventions. Most delegates wanted it, but some doubted whether it was compatible with a federation and a powerful Senate. The smaller colonies were insisting on a strong Senate, and they also wanted responsible government, though one delegate did say that he would rather kill responsible government than federation. It was implicit in the arguments of those fighting for the combination of responsible government and a strong Senate that the Senate would restrict its use of its power so as not to imperil responsible government. 
In the event, there was no mention in the draft constitution of responsible government-or a Cabinet, or a prime minister-the only clue being the provision that a minister must be or become a member of one of the houses of Parliament. What emerged was a House of Representatives of 75 members, elected for three year terms, and apportioned among the states on a population basis (excluding Aborigines), though each state had to have a minimum of five MPs. The provision continues to this day and Tasmania has always fought against an increase in the number of Representatives, because it diminishes Tasmanian influence. Even now, when there are 148 Representatives, Tasmania is over-represented with five MPs. The senators were elected on a state-wide basis for six year terms, with half elected every three years. The state-wide electorate was a change from the 1891 draft, by which senators were to have been selected by state parliaments, the system generally in use at that time in the United States of America. State-wide elections were not universally adopted there until 1913, when the Seventeenth Amendment to the US Constitution was ratified. The powers of the two houses were almost identical, except in financial matters where the Constitution provided that appropriation and taxation bills must originate in the lower house. The Senate, although it could reject bills for the ordinary annual services of the government, could not amend them. It could only request that the Representatives make amendments. It was soon established, in the First Parliament, that the Senate could press its requests after rejection by the House of Representatives. The distinction between requests and amendments became almost invisible. The Constitution was passed by referendum in Victoria, South Australia and Tasmania. It also had a majority in New South Wales, but the New South Wales government had inserted a new condition-a minimum number of affirmative votes-which was not met. New South Wales then used the opportunity to press for some changes to the draft Constitution, which were considered at a special premiers’ conference in January 1899. Eight changes were agreed, on matters such as adjusting the arrangements for solving deadlocks between the two houses over legislation, easing the way for Queensland to join the federation, and permitting the federal Parliament to make financial grants to any state ‘on such terms and conditions as the parliament thinks fit’. This last change, although it was not realised at the time, paved the way for the financial dominance of the federal government over the states. The referendum on the revised Constitution was passed in all states except Western Australia, which did not put it at this time. To be sure, there were still difficulties. A delegation had to visit Britain to discuss objections raised by the imperial government. After all, the Australian Constitution was to be an act of the UK Parliament, and eyebrows were raised there at giving the new Australian Parliament power over ‘external affairs’. Surely this was a matter for the imperial government. They had some reason for concern, too, for only seventeen years earlier, in 1883, Queensland had actually annexed the eastern half of New Guinea, to forestall what it saw as German (or possibly French) expansion in the south-west Pacific. Westminster had first rather huffily annulled the annexation, and then agreed to accept Papua, the south-eastern portion, as a protectorate. 
The Germans soon seized the remainder of the eastern half of the island. But the imperial spirit was changing, and the British government eventually agreed to all the powers being sought, the only significant change being over the right of appeal to the Privy Council in certain cases. Western Australia tried fruitlessly to induce the British government to insist that if Western Australia entered the federation as an 'original state' it should be allowed to levy its own tariffs for five years. This proposal was resisted by the other colonies, and by a referendum in September 1900 Western Australia finally decided to join as an original state on the terms laid down in the Constitution. The Constitution, after enactment, proved much more difficult to amend than its authors had expected. Unlike the BNA Act of 1867 and the New Zealand Constitution Act of 1852, the method of amendment was laid down in the Constitution itself. Amendments could be made only if passed in a referendum approved by an overall majority of votes and by a majority of votes in a majority of states (four out of six). There have been eighteen attempts to amend the Constitution, with 42 questions being submitted to the voters. Nearly all were to give increased power to the federal Parliament, but only eight have been successful. The successful ones were Senate elections (1906), state debts (1910), state borrowings (1928), social services (1946), Aborigines (1967) and Senate casual vacancies, referendums and the retiring age of judges (1977). Australia is not unique in making infrequent amendments to its Constitution. Since 1901 the US Constitution has been amended nine times compared with Australia's eight times. The only amendments to the US Constitution which gave increased power to the federal government were the Sixteenth and Eighteenth Amendments, which gave power to impose income tax, and power to enforce Prohibition. The latter power has since been withdrawn. As the Australian Constitution was an act of the UK Parliament it could, in theory, have been amended by that Parliament. Such action was never taken, though in 1916 the wartime Australian government passed a resolution in the House of Representatives asking the UK Parliament to extend the life of the Australian Parliament. The idea was dropped when it became evident that the Senate would not support it. In 1933, during the Great Depression, Western Australia voted to secede from the federation in a referendum organised by the state government. The federal government took no notice, and a request to the UK Parliament was pigeon-holed by being referred to a committee of the two houses, which (after two years) declared itself incompetent to consider the Western Australian petition. In fact, decisions by the High Court have made greater changes to the Constitution than have been achieved by referendums. The High Court has given the federal government control over taxation, tying the states to the chariot wheels of the federal Parliament (as Prime Minister Deakin once wrote, anonymously). The interpretation of the external affairs power by the High Court, by which the negotiation of an international agreement gives the federal Parliament the necessary power to implement the agreement, even in areas which are state powers under the Constitution, also has the potential for enormously increasing federal power. There have been some restraints on the use of this power since the establishment of the Joint Standing Committee on Treaties in the federal Parliament in 1996. 
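The double-majority test for amending the Constitution is simple enough to set out schematically. The sketch below is purely illustrative and is not drawn from the source: it encodes the requirement of an overall national majority plus majorities in at least four of the six states, and applies it to the 1999 republic referendum discussed later in this chapter, using the national figure reported there (54.87 per cent 'no') and hypothetical state-by-state shares, all below 50 per cent, since the proposal failed in every state.

# Minimal sketch (illustrative only) of the double-majority test for
# amending the Australian Constitution: an amendment is carried only if it
# wins an overall national majority AND a majority of votes in a majority
# of the six states (that is, in at least four of them).

def referendum_carried(state_yes_shares, national_yes_share):
    """state_yes_shares: 'yes' vote share (0-1) in each of the six states.
    national_yes_share: overall national 'yes' share, territories included."""
    states_carried = sum(1 for share in state_yes_shares if share > 0.5)
    return national_yes_share > 0.5 and states_carried >= 4

# The 1999 republic referendum: roughly 45.13 per cent 'yes' nationally
# (54.87 per cent 'no'), with no state recording a 'yes' majority.
# The state figures below are hypothetical placeholders, not actual results.
print(referendum_carried([0.46, 0.49, 0.45, 0.44, 0.42, 0.41], 0.4513))  # False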
These matters are discussed in more detail in Chapter 4. The unwritten understanding about restraint in the use of Senate powers was put to the test on a few occasions. There were successful attempts in 1974 and 1975 by the Senate to force the government to a premature election by threatening to block supply, though in each case the technical grounds for the election were deadlocks between the two houses over other bills. There were similar actions by the legislative councils of Victoria in 1947 and 1952, South Australia in 1912 and Tasmania in 1949. These events are discussed in more detail in Chapter 8. The number of members of the House of Representatives is determined by the Parliament, with a constitutional proviso that the number of Representatives must be as nearly as practicable twice the number of senators. An attempt in 1967 to remove this ‘nexus’ was rejected at a referendum, despite being supported by all the major parties. The original Parliament comprised 75 representatives and 36 senators. This was increased to 125 representatives and 60 senators in 1949, and 148 Representatives and 76 senators in 1983. The six original states have maintained equal numbers in the Senate. Two senators from each of the Northern Territory and the Australian Capital Territory were added in 1975. The voting for the First Parliament was necessarily done under state legislation and one of the early tasks of the new federal Parliament was to lay down its own rules. All non-Aboriginal adults, male and female, were given the vote, after some displays of male chauvinism. But, after all, women already had the vote in South Australia and Western Australia and attempts to achieve it had been made in all the other states except Queensland. Preferential voting was introduced in 1918 and the vote was made compulsory for non-Aborigines in 1924. The voting age was lowered to eighteen in 1973. Although it is much used, the description ‘compulsory voting’ is not strictly accurate. It is compulsory to register, to attend at a polling place (or apply for a postal vote), and to receive a ballot paper. What is written on the ballot paper is up to the voter. Racism was evident in discussions on Aborigines, with remarks like ‘halfwild gins living with their tribes’ being made. The final compromise was to give Aborigines the vote in states where they already had it, which did not include the states (Queensland, Western Australia and South Australia) where most of them lived. All Aborigines were given the right to enrol in 1962, but enrolment was not made compulsory. It was not until 1984 that the voting rights and responsibilities of Aborigines were made the same as the rest of the community. At normal Senate elections, each state elected three senators (increased to five in 1949 and six in 1983). There was an early proposal for proportional representation in the Senate, but this was howled down as an instance of ‘new-fangled notions for which the great majority of the people of the Commonwealth have no knowledge’, although proportional representation was already in use in Tasmania, for the lower house. First-past-the-post voting was rapidly adopted, to be changed in 1919 to preferential voting. This change did nothing to stop the radical swings in party numbers in the Senate, and sometimes overwhelming majorities: 35 to 1 in 1919, 33 to 3 in 1934 and again in 1946 are examples. 
The solution finally adopted in 1949 was proportional representation, which has had the predictable result of making the major parties evenly balanced and making it possible for minority groups to gain Senate seats. Indeed, in the first 50 years of proportional representation in the Senate, the government has had a majority for only twelve years, and it seems unlikely that in the foreseeable future any government will have a Senate majority. This creates obvious problems, and, as we shall see, opportunities. The position of the head of state was clarified in 1973 with the statutory declaration of Elizabeth II as Queen of Australia. The early appointments of governors-general were made by the UK government, and were English or occasionally Scots. They were never Welsh or Irish. They were rarely of the first rank, though perhaps rather better than suggested by Hilaire Belloc:

Sir! You have disappointed us!
We had intended you to be
The next Prime Minister but three ...
But as it is! ... My language fails!
Go out and govern New South Wales!

Since the 1930s the appointment of the Governor-General has rested with the federal government, and the Governor-General is now always an Australian. At one time an exception might have been made for a royal appointment, but that now seems inconceivable. The governors-general have generally followed Bagehot's principles, with four notable exceptions: the refusals, in 1904, 1905 and again in 1908, of a prime minister's request for a dissolution after being defeated in the House, the Governor-General believing, correctly in each case, that an alternative government could be formed; and, most dramatic of all, the dismissal of Prime Minister Whitlam in 1975, because he would not recommend a general election when the Senate refused to pass his budget. Australia gradually moved to an independent foreign policy, though the Statute of Westminster was not ratified until 1942. As late as 1939 Prime Minister Menzies could say: 'Great Britain is at war; as a result Australia is at war.' As with New Zealand, the Second World War dramatically changed such attitudes. The Australia Act 1986 and corresponding state and UK Acts, passed at the request of the state and federal parliaments, removed any residual power the UK Parliament had to make laws affecting Australia, any residual executive power, and any remaining avenues of appeal from Australian courts to the Privy Council.

The republic issue

The question of Australia becoming a republic was first publicly raised by a prime minister when Paul Keating, who had just taken over the office from Bob Hawke, broached it in a speech of welcome to the Queen at a parliamentary reception in Canberra in February 1992. Keating spoke of Australia being 'necessarily independent', and his words were interpreted as giving, as Liberal leader John Hewson put it, 'a tilt in favour of republicanism in front of the Queen'. Keating was of Irish descent, and had no great regard for British institutions. Keating's proposal was for a minimal change, with the president exercising the power of the Governor-General. He wished the prime minister to have the right to select the president, though his selection would have to be agreed by both houses of Parliament. In a speech to Parliament in 1995 he proposed a national referendum during the next Parliament, aiming for a republic to be achieved by 1 January 2001, the centenary of federation. 
Although personally opposed to a republic, John Howard, as leader of the opposition, had to respond to Keating's campaign. He promised that the next coalition government would set up a convention to consider the republic issue, and if it recommended a republic the matter would be put to a referendum. With the victory of the Coalition in the 1996 election, a Constitutional Convention was held in February 1998 to consider the question of Australia becoming a republic. There were 152 delegates, half elected by a voluntary national postal ballot and the other half nominated by the government, 36 of them non-parliamentary. The convention considered three questions: whether Australia should become a republic; if so, which republican model should be put to the voters; and the time frame of any change. The convention supported, in principle, the idea of Australia becoming a republic. The method of selection of the president that it recommended was controversial. In the proposal to be put to the voters, anyone could be nominated for the post. The prime minister, after discussions with the leader of the opposition, would put forward a single name to a joint sitting of the two houses of Parliament, where it would have to be agreed by a two-thirds majority of the joint sitting. The powers of the president were not defined, being left as they were for the Queen and Governor-General in the existing Constitution. Of course these are sweeping powers, most of which the president was not expected to use except on advice from the government. The question of the dismissal of the president was also the subject of debate. The Republic Advisory Committee, set up by Prime Minister Keating, reported that it had encountered an almost universal view that the head of state should not hold office at the prime minister's whim, and that he must be safe from instant dismissal to ensure appropriate impartiality. Yet because of the fear that the president might use some of the enormous powers he would have under the existing Constitution, the proposal put to the voters was that the prime minister should be given the power of instant dismissal of the president, the president's position then being taken by the senior state governor until a new president could be chosen. The prime minister's action would have to be approved by the House of Representatives within 30 days. It should be noted that the approval of the prime minister's action would have come from the House of Representatives, normally controlled by the government, not from the joint sitting of both houses which appointed the president. Even if the House of Representatives disagreed with the dismissal the dismissed president could not be reappointed. He could stand for re-nomination, but it is inconceivable that the prime minister who dismissed him would nominate him. None of the existing republics with a separate head of state and head of government gives the head of government the power to dismiss the president. The convention recommended a referendum in 1999 on the proposed changes to the Constitution, and that if the changes were accepted the republic should come into effect on 1 January 2001. Although public-opinion polls showed that the voters were in favour of a republic by a narrow majority, this did not necessarily mean that all the republicans wanted this republic. 
With the well-established difficulty in amending the Constitution, the republican objectors believed they would be stuck with the republican model being presented, and that it would prove impossible to amend. The main objection was the method of selection of the president, which many republicans thought should be by nationwide vote. Some objected to the failure to set out clearly the powers of the new president, and others objected to the power given to the prime minister to dismiss the president. Still others objected to the failure to tackle the problems that had emerged with the 100-year-old Constitution, feeling that if the opportunity was not seized when making the major transition to a republic the chance would be lost for ever. These republican objectors, plus the royalists and the many voters who did not understand the issues but had the stalwart habit of voting no on such matters, were enough to reject the proposed republic. The referendum failed with a 54.87 per cent 'no' vote, losing in all six states and in one of the two territories, the Australian Capital Territory being the odd one out. It was interesting that there was a clear correlation between the average education level of voters in an electorate and the voting for a republic in that electorate, the better the average education the higher the 'yes' vote. For instance, in John Howard's electorate the voting was strongly 'yes', despite the fact that Howard was opposed to the republic, while in Kim Beazley's electorate the voting was strongly 'no', despite the leader of the opposition's campaign in favour of the republic. Rural and regional electorates showed little interest in Australia becoming a republic. It seems that the republican issue is dead for the moment. But with the strong support in the community for a republic, it seems certain that the issue will not lie down. As leader of the opposition, Kim Beazley suggested an indicative referendum on a republic, followed by a new convention to develop the necessary constitutional changes, a second plebiscite to determine the preferred republican model and mode of appointment of the head of state, and finally a constitutional referendum based on the outcome of the two plebiscites. This might work if the convention is given plenty of time to work out the constitutional changes, and consults frequently with the community (by indicative referendums if necessary) to ensure the model being produced has majority community support. After all, it took seven years and four conventions to produce the present Constitution.

It will not be necessary to trace the political histories of the states. All that is needed is a sketch of the background to events which have influenced or illuminated the development of responsible government, so that events discussed in later chapters can be seen in perspective. Unlike the Canadian provinces, five of the six states have upper houses. Queensland is the exception, and most of the time has been an excellent example of an elective dictatorship. Tasmania is the only one of our twenty parliaments to use proportional representation for the lower house, which has caused inevitable instability in government. Even after the Statute of Westminster was ratified by the Commonwealth in 1942, the Australian states continued to be excluded from its provisions. The Colonial Laws Validity Act and certain other UK Acts still applied to the states, and continued to do so until the passage of the Australia Act in 1986. 
Unlike the Canadian provinces, only one of which has an entrenched written constitution-and that an incomplete one-all six Australian states have written constitutions. In four of the six states amendments are made by referendum, after the terms of a proposed amendment have been agreed by the Parliament. In the other two states amendments are totally in the hands of the Parliament. At federation all the states had two houses of parliament. Queensland abolished its appointed upper house in 1921 by ‘swamping’ the Legislative Council with new councillors who would vote for its abolition. Swamping was used after the abolition proposal had been five times defeated in the Council, and a referendum had also failed. New South Wales also made attempts to abolish its upper house, but failed three times, in 1925, 1930 and 1959. So five of the six states still have upper houses. The upper houses had been seen largely as defenders of the rights of property, with legislative councillors either appointed by the government or elected by voters with a substantial property qualification. The property qualification for voters in upper house elections has been abandoned in all states, South Australia being the last to do so, in 1973. All upper houses are now elected by the same voters who choose the lower house. New South Wales held on for some time with an appointed upper house, only changing to an elected model in 1933. Proportional representation was used, but even then they would not trust the ordinary voters, preferring to have the current members of the two houses as the electorate. It was not until 1978 that a change was made. Now the New South Wales Legislative Council consists of 42 members, with fourteen elected by state-wide proportional representation at each election for the lower house. One of the most difficult electoral problems in all the states has been the heavy concentration of the populations in the capital cities. In most states more than half of the population are resident there. The country voters, who regard themselves as the real wealth-creators, feel threatened by this city dominance, while a secondary problem is the enormous area of some remote electorates. The improvement in communications has reduced this second problem, and all the states except Western Australia and Queensland now have reasonably numerically-equal electorates for the lower house. Queensland is a special case. Not only does it have no upper house, but until 1992 it had an electoral system so skewed that a vote in western Queensland was worth four times as much as one in Brisbane. The result was a quarter of a century of dictatorial rule by the rural-based National Party, first in coalition with the Liberals, later on its own. Parliament met as infrequently as possible, and was used as a rubber stamp, denied even such fundamental scrutiny bodies as a public accounts committee. Since 1909 Tasmania has had proportional representation for its lower house. Until 1989 this did not have the usual effect of giving the balance of power to minor parties and Independents, but the rise of the environmental movement caused a change, and there was a succession of minority governments. The Tasmanian government proposed to reduce the total number of MPs, ostensibly for economy reasons but really to reduce the number of minor party members and Independents in the lower house. 
In November 1993 Liberal Premier Ray Groom introduced a measure to reduce the size of the lower house from 35 to 30 and the upper house from nineteen to fifteen. The bait for MPs was a 40 per cent increase in their salaries. The lower house passed the bill, but the upper house rejected the new scheme, though its members were prepared to accept the pay rise. After this failure, there were several inquiries into whether the number of parliamentarians should be reduced, and if so, how.

To the surprise of many, in July 1998 Liberal Premier Rundle, who had been heading a minority government, announced that he would recall Parliament for a special two-day session to pass an act reducing the number of assemblymen from 35 to 25 (that is, five from each electorate instead of seven) and reducing the upper house from nineteen to fifteen members, to be achieved over three years. The passage of this Act was to be followed by an election, which was in fact held eighteen months early. The Act was formally passed by both houses, and the election results partly justified Rundle’s action.

With only five members from each electorate instead of seven, the quota of votes required to be elected was increased from 12.5 per cent to 16.7 per cent (under the Hare-Clark system the quota is, in round terms, 100 per cent divided by one more than the number of seats to be filled: one-eighth with seven seats, one-sixth with five). The Greens (the environmental party) had held four seats, and the balance of power, in the previous Parliament. They were reduced to one seat, and lost the balance of power. To dramatise the intention to eliminate the minor parties, the cross benches were actually removed from the lower house at the time of the election. The one Green who did manage to be re-elected brought a folding chair into the chamber so that she would not be obliged to sit with either government or opposition. The trouble for Liberal Premier Rundle was that it was the Labor Party, not his Liberals, who gained the absolute majority, with fourteen seats out of 25.

Tasmania has not been the only state to have minority governments in the 1990s. Four of the other five states have had that experience, Western Australia being the only exception. Perhaps the most interesting case was Queensland. In the July 1995 election the Goss Labor Government’s majority was reduced to one, with 45 of the 89 seats. The Labor government was paralysed when the Court of Disputed Returns declared that in a seat in Townsville, held by a Cabinet minister, there had been voting irregularities and that there was to be another election for that seat. The government lost the seat, and the situation in the Parliament was 44 Labor, 44 Liberal-National Coalition, and one Independent. The Independent supported the Coalition, and the government was out. The situation was reversed after the June 1998 election, when the Labor Party won 44 of the 89 seats and formed a government with the support of an Independent (a different member to the one who decided the issue in 1995).

State governors are now appointed by the Queen of Australia on the advice of state premiers, though until the passage of the Australia Act 1986 the state governments had the curious practice of approaching the Queen of Australia through the UK government. The state governors, at least in the twentieth century, have generally followed Bagehot’s principles. There has been only one occasion when a Governor has refused a premier’s request for an election. This occurred in Victoria in 1952. The upper house had blocked supply, and the Governor refused the premier’s request for an election because supply was not secure.
The leader of the opposition was then made premier and he too was refused an election. The original premier was then reinstated, and granted an election, supply having been passed. In 1926 the Governor of New South Wales, Sir Dudley de Chair, refused the request of Premier Jack Lang for the creation of a new batch of life members of the Legislative Council so that they could vote to abolish it, four of a previous batch having changed their minds after receiving life appointments. The Governor relied on his royal instructions which included the direction that ‘if in any case he shall see significant cause to dissent from the opinion of the [Executive] Council, he may act ... in opposition to the opinion of the Council.’ More controversial was the 1932 decision of another Governor, Sir Philip Game, to dismiss the same premier because ‘I cannot possibly allow the Crown to be placed in the position of breaking the law of the land.’ In fact, this action was the culmination of a period of disastrous financial mismanagement by Lang, with government cheques being dishonoured, the budget for 1931-2 still not passed by the lower house, the government surviving through temporary supply bills, and ministers lining up at the Treasury for their salaries because the government did not dare to use the banks for fear the federal government would seize the funds. Game was in frequent contact with the Dominions Office in London, but personally took the decision to dismiss Lang. Game used the authority given in Letters Patent issued in 1879, but still in force: ‘The governor may, so far as we ourselves lawfully may, upon sufficient cause to him appearing, remove from his office ... any person exercising any office ... in the State.’ Game was lucky that the opposition won the ensuing election. It cannot be said that Australian state governments are generally held in high regard. At the start of the last decade of the twentieth century a royal commission in Queensland had recently ended, having revealed widespread corruption in the National Party government, with three former ministers already having been sentenced to jail, and with more former ministers (including the former premier) awaiting trial. In Victoria and South Australia royal commissions had been appointed to investigate disastrous losses by state-owned banks. In Western Australia another royal commission was uncovering corrupt business involvement by the state Labor government, and extortion of hefty party donations from businesses seeking contracts with government agencies. In Tasmania yet another royal commission was investigating an attempt to bribe a Labor MP to change sides. It was a very depressing picture. The state parliaments concerned had obviously been unable, or unwilling, to restrain gross abuses of power by governments which were supposed to be responsible to them. There was a very interesting state election in Victoria in 1999, which showed that the voters could respond effectively to abuses of power. The Liberal state premier, Jeff Kennett, had been very successful in restoring and developing Victoria’s economy, but he was becoming increasingly arrogant. Worse still, he was dismantling the checks there should be on any democratic government, sharply restricting the powers of the Auditor-General to investigate government activities. He was narrowly defeated in the election, despite two very effective terms in office. New Zealand does not have an entrenched constitution, for it can be amended by a vote in the House of Representatives. 
It is also the only one of our four national parliaments to have abolished its upper house. It was a world leader in the development of democratic voting systems, and has now adopted a partly-proportional system for the election of its MPs. There is no serious move in New Zealand towards republicanism.

The New Zealand Constitution Act, passed by the UK Parliament in 1852, was amended in 1857 to give the New Zealand Parliament power to amend or repeal all but 21 sections of the Act, though any bill taking such action had to be reserved for Crown (that is, UK government) approval. These entrenched sections were gradually whittled away by amending acts of the UK Parliament, until full powers of amendment, without reservation, were given to the New Zealand Parliament in 1947. A Constitution Act which can be amended by a unicameral legislature by a simple majority is of course not entrenched. There has been an attempt to entrench provisions covering such matters as the life of parliament, the electoral redistribution provisions, the adult franchise, and secret ballots. By an Electoral Act passed in 1956 these important provisions cannot be repealed or amended except by a 75 per cent majority of the House of Representatives, or by a majority of the electorate at a referendum. Despite the Act being passed unanimously, these provisions are not fundamentally entrenched. No parliament can bind its successor, unless it is prepared to enact a complicated double entrenchment procedure. Such entrenchment as there is comes from fear of the wrath of voters at a subsequent election. Before the passage of the 1956 Act, parliament had no such inhibitions.

The abolition of the provincial governments in 1876 was probably inevitable. They were altogether too parochial, and in any case it is unlikely that any federation will survive unless provincial rights are effectively entrenched in the constitution. The provincial governments were replaced by a ‘confused multitude of road boards, rabbit boards, drainage, harbour, hospital and education boards, borough, country and city councils.’

There was a slow movement towards full responsible government in the early days. The New Zealand government took over complete responsibility for Maori affairs after the Maori wars, with some reluctance because the New Zealanders did not want to pay for the wars. Foreign affairs and overseas trade lagged far behind. There were attempts, in 1868-73, and again at the first Colonial Conference in 1887, to give the New Zealand government the right to negotiate trade agreements with foreign countries, initially with the United States. The proposals were firmly rejected by the British government, although there was a minor concession so that tariffs could be negotiated with the Australian colonies.

The upper house

Originally the members of the upper house, the Legislative Council, were appointed for life, but this was reduced to seven years in 1891, and in 1950 the Legislative Council was abolished, the necessary support being obtained by the usual technique of ‘swamping’. It was not clear whether the abolition was to be temporary or permanent. ‘Let’s see how we get along’, said Prime Minister Holland. Over the next decade there were many proposals to re-establish an upper house, but there was no agreement on its composition or its powers. Worse still, there was very little public interest. Attention shifted to trying to make the unicameral system work better. There have also been substantial changes to voting rights.
The secret ballot was adopted in 1869, though not for the Maori electorates until 1937. In 1879 the term of parliament was reduced from five to three years and the property qualifications for voters were abolished. Nevertheless plural voting continued, for ownership of property entitled an adult man to be placed on the electoral roll in every electorate in which he owned property. This multiple voting, later changed to a choice of where to vote, was finally abolished in 1893. In the same year women were given the vote. The only women to have the vote before the New Zealanders were those of the American State of Wyoming, the Isle of Man, and the tiny British colony of Pitcairn Island. In the case of Pitcairn Island, the vote was granted in 1838 under the island’s first constitution, and the voting age for both sexes was eighteen. It was not until 1919 that women were permitted to be MPs, but since then women have advanced further than in any other country. In the year 2000 the prime minister, the leader of the opposition, the Governor-General, the Chief Justice and the Attorney-General were all women.

Voting is voluntary and, until recently, was first-past-the-post; typically over 90 per cent of electors vote. In the elections of 1908 and 1911 there were provisions for a second ballot where no candidate gained an absolute majority on the first ballot. But by the 1980s New Zealanders were becoming concerned at the lack of representation of substantial minor parties in their single house. For instance, in 1978 the Social Credit Party won 16 per cent of the vote but only one out of 92 seats, and in 1984 the New Zealand Party won 12.3 per cent of the vote without winning a seat. A royal commission in 1986 recommended that New Zealand adopt the West German Additional Member System, which it called the Mixed Member Proportional System, usually shortened to MMP. In 1993 a referendum was narrowly carried to adopt this system, which was first used in the 1996 election. The consequences of the adoption of this system will be described in Chapter 3.

In foreign affairs New Zealand has been less innovative. The first overseas post, in London, was opened in 1871. From the 1880s until the First World War New Zealand pressed ineffectively for imperial federation. A loose federation it would certainly have been, for the New Zealanders wished to retain their autonomy. Their real aim was to have some influence on British foreign policy. The idea of having a foreign policy of their own was not yet an option they would consider. New Zealand was an original member of the League of Nations, and occasionally took an independent stand on such matters as sanctions, but remained essentially a political satellite of Britain. The change of New Zealand’s title in 1907 from colony to dominion made no real difference, although New Zealand began timidly conducting its own foreign policy in 1935. It was not until 1942 that New Zealand opened its first legation in a foreign country (in Washington) and an embryo Foreign Affairs Department was set up in 1943, though negotiation of foreign commercial treaties had started in the 1920s. The 1931 Statute of Westminster, which gave formal independence to New Zealand, was not ratified by the New Zealand Parliament until 1947. Since the Second World War New Zealand has pursued an independent but pro-Western foreign policy.
New Zealand was reluctant to join the ANZUS Treaty with Australia and the United States unless Britain also joined, and other signs of New Zealand’s former dependence occasionally surfaced. The dramatic banning of visits by nuclear-powered or nuclear-armed ships, which caused New Zealand to be suspended from membership of the ANZUS Treaty, was out of character, though it is now generally accepted in New Zealand. As Britain moved into the European Community New Zealand argued for favoured treatment because of a special economic relationship with Britain. This was successful for a time, but New Zealand is favoured no longer, and is now facing the problem of having First-World living standards while the exports to finance these living standards have to come largely from primary products for which the traditional markets have substantially disappeared. After this brief historical background on responsible government in our four chosen countries, it is time to turn to a more detailed examination of how it has actually worked in modern times, from 1970 until the end of the century. Let us look first at how these parliaments have performed what Bagehot regarded as their fundamental duty: choosing a government.
http://www.aph.gov.au/About_Parliament/Senate/Research_and_Education/hamer/chap02
Federal Reserve history

In 1913, Congress passed the Federal Reserve Act, creating the Federal Reserve System (Fed) in response to several banking panics in the late 1800s and early 1900s. Its main purpose was to act as a lender of last resort, or supplier of liquidity when banks faced temporary financial problems. Since the early 1900s the role of the Fed in the U.S. economy has grown to one of chief economic watchdog.

There are three main parts of the Federal Reserve System: the board of governors in Washington, D.C., 12 regional Federal Reserve banks, and the Federal Open Market Committee (FOMC). The board of governors is made up of seven individuals nominated by the president and confirmed by the Senate to formulate monetary policy, supervise and regulate member banks, and oversee the smooth functioning of the payment system in the economy. The most powerful member of the board of governors is the chairman. The 12 regional banks act as the operating branches of the Fed. They can be thought of as a banker’s bank, managing reserve accounts and currency levels in their regions. The most well-known part of the Fed is the FOMC. The FOMC meets regularly during the year to set monetary policy. The board of governors and five of the 12 regional bank presidents make up the voting members of the FOMC. The FOMC meetings have become some of the most watched and anticipated events in financial markets. At each meeting, the FOMC now sets a target for the federal funds rate, a key overnight interest rate that affects the cost of borrowing throughout the economy. For this reason, financial market participants closely scrutinize the motives of the FOMC.

There are several key moments in the history of the Fed. Prior to 1929, the Federal Reserve had no clear notion of its role in responding to cyclical forces. This resulted in a policy that allowed the money supply to contract dramatically over the first few years of the Great Depression. After the election of President Roosevelt in 1932, the Federal Reserve System was reorganized to resemble the structure we observe today. The Eccles Act was passed in 1935, enlarging some of the powers of the Fed and giving it greater control over the system of 12 branch banks. During World War II the Fed pegged interest rates in order to manage the wartime economy, a peg that lasted until the end of the Korean War. Banks were also allowed to hold TREASURY BONDS in exchange for a relaxation of reserve requirements.

During the 1940s, the Federal Reserve moved from keeping Treasury borrowing costs low toward seeking to achieve full employment. The latter of these goals was in response to the Employment Act of 1946, which set as a responsibility of the federal government the stabilization of employment at near-full employment levels. These goals of low borrowing costs and stable employment at near-full employment levels sometimes clashed, until March 1951, when an “Accord” was reached between the Treasury and the Federal Reserve System in which the Fed could actively and independently set monetary policy.

The 1950s and 1960s were an era of relatively good economic outcomes for the U.S. economy. During the 1950s, the Fed developed open market operations (the buying and selling of U.S. government securities on the open market) as the main policy tool used to affect interest rates. The next major challenge for the Federal Reserve was the “Great Inflation” of the 1970s. The inflation rate in the United States rose to 12.5 percent in 1974 and was 11 percent in 1980.
In 1979, in response to the spiraling inflation rate, Federal Reserve chairman Paul VOLCKER instituted an era of “tight money” in which the growth rate of the money supply was reduced. This policy was intended to slow the growth of output and reduce the inflation rate. It succeeded very well. In the early 1980s, the United States suffered a severe RECESSION that many economists credit (or blame) the Federal Reserve for creating. By 1984, inflation was less than 4 percent.

The final years of Paul Volcker’s term as chairman and the appointment of Alan Greenspan to replace him in 1987 mark the beginning of a very successful period of monetary policy. The goal of inflation stability that drove the 1979 tightening produced historically high interest rates until 1984, but it has since been reinforced with the additional goal of stabilizing the growth of output.

Currently the Federal Reserve actively uses open market operations as its main tool in meeting its goals. Also at the disposal of monetary policy makers are two additional tools: the discount rate (the rate at which banks can borrow from the Federal Reserve) and the required reserve ratio (the proportion of bank deposits that must be held as reserves against possible withdrawals). By far the most often used tool is open market operations. In accordance with directions given by the FOMC, the Federal Reserve Bank of New York actively enters the market for U.S. government securities as a buyer or seller in an effort to influence the level of interest rates. The main target of the Federal Reserve is the federal funds rate, an overnight rate directly affected by open market operations. The New York bank either buys or sells securities to move the federal funds rate to the target level set by the FOMC.

The power of monetary policy is then transmitted to the economy by the changes in interest rates. An increase (or decrease) in interest rates reduces (increases) the level of consumer and business expenditures that require borrowing. This in turn decreases (increases) the level of output in the economy, reducing (increasing) pressure on prices to rise (fall). The FOMC sets the target federal funds rate in accordance with its assessment of the direction of the U.S. economy. If the FOMC believes inflation is on the upswing, it will raise interest rates to slow the economy. If it believes unemployment is too high (reducing pressure on inflation), it will lower interest rates to increase economic activity. For this reason, financial market participants pay very close attention to economic activity to gain some insight into the future actions of the Federal Reserve in setting interest rates.

The Fed also acts as agent for the U.S. Treasury in the marketplace. It intervenes in the FOREIGN EXCHANGE MARKET when requested and also auctions Treasury securities for the government.

The Federal Reserve has a long history of intervening in the U.S. economy. From overseeing a dramatic decrease in the money supply during the early years of the Great Depression, to participating in producing monetary growth rates that allowed the Great Inflation to continue, to engineering a dramatic recession to lower inflation rates in the early 1980s, the Federal Reserve has been instrumental in the evolution of economic activity in the United States. Much of the expertise used by the Federal Reserve has been developed over its long history. This has culminated in perhaps the greatest period of economic expansion in U.S. history.
From 1983 to 2000, gross domestic product grew steadily with only a slight interruption in the early 1990s, and inflation steadily fell.

See also COMMERCIAL BANKING; ECCLES, MARRINER S.

- Beckner, Steven. Back from the Brink: The Greenspan Years. New York: John Wiley & Sons, 1996.
- Greider, William. Secrets of the Temple: How the Federal Reserve Runs the Country. New York: Simon & Schuster, 1987.
- Meltzer, Allan H. A History of the Federal Reserve, 1913–1951. Chicago: University of Chicago Press, 2003.
- Meulendyke, Ann-Marie. U.S. Monetary Policy and Financial Markets. New York: Federal Reserve Bank of New York, 1989.
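The policy reaction sketched in this entry (raise the federal funds target when inflation is climbing, lower it when unemployment is the larger worry) can be illustrated with a toy calculation. The sketch below is not the FOMC's actual procedure or any published Federal Reserve rule; the function name, the coefficients, and the assumed 2 percent inflation goal, 5 percent natural rate of unemployment, and 2 percent neutral real rate are all illustrative assumptions in the spirit of a simple Taylor-type rule.

# Illustrative only: a stylized Taylor-type reaction function, not the FOMC's
# actual decision process. All coefficients and targets below are assumptions.
def stylized_funds_rate_target(inflation, unemployment,
                               inflation_goal=2.0,        # assumed goal, percent
                               natural_unemployment=5.0,  # assumed natural rate, percent
                               neutral_real_rate=2.0):    # assumed neutral real rate, percent
    """Return a hypothetical federal funds rate target, in percent."""
    # Start from a neutral nominal rate: the assumed neutral real rate plus current inflation.
    target = neutral_real_rate + inflation
    # Lean against inflation running above the goal...
    target += 0.5 * (inflation - inflation_goal)
    # ...and lean toward stimulus when unemployment exceeds its assumed natural rate.
    target -= 0.5 * (unemployment - natural_unemployment)
    return max(target, 0.0)  # this toy model ignores negative-rate territory

# High inflation with low unemployment calls for a higher target ("tight money");
# low inflation with high unemployment calls for a lower one.
print(stylized_funds_rate_target(inflation=6.0, unemployment=4.5))  # 10.25
print(stylized_funds_rate_target(inflation=1.5, unemployment=7.5))  # 2.0

In the same stylized spirit, a higher target makes the borrowing-financed spending described above more expensive, which is the transmission channel through which a rate change reaches output and prices.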
http://american-business.org/2971-federal-reserve-history.html
Overview | What is the current status of the Occupy Wall Street movement? What effects has it had so far, and where is it going from here? In this lesson, students react to current news, read an Opinion piece about its role in history, and do an analysis of the strengths and weaknesses of the movement to make predictions about its future.

Note to Teacher | This lesson involves students exploring and expressing their political views. Please establish ground rules that will allow for the full range of views expressed to be welcomed and heard, and permit students to keep their opinions to themselves if they are not comfortable sharing them with the group.

Materials | Student journals, computer with Internet access and projector.

Warm-Up | Ask students what they have heard about Occupy Wall Street, also sometimes known merely as “Occupy,” or “OWS,” and jot ideas on the board. Next, show the slide show that goes with the article “Police Clear Zuccotti Park of Protesters.” Discuss what the photos depict and what happened when the police moved into the park. Explain that Zuccotti Park is located near Wall Street in Manhattan, and that it was the initial location of what has become a movement in numerous cities and on college campuses around the United States and around the world. (The Times Topics page on Occupy Wall Street has a lengthier overview if needed.)

What questions does this latest news raise for them? Jot students’ responses on the board. What do the protesters want? Note that though their grievances are generally about the financial system, income inequality and a sense that the poor and middle class have been disenfranchised, protesters have not made specific demands.

Tell students that they will now express their views on the movement, regardless of how extensive their background knowledge may be. They will be using their journals to “take a stand” based on their feelings, values and impressions, but will not need to share these with others unless they choose to.

Tell students to open their journals and make a chart with five columns and five rows, labeling the columns Strongly Disagree, Disagree, Neutral, Agree, Strongly Agree and the rows Strongly Oppose, Oppose, Neutral, Support, Strongly Support. Explain to the class that you will read aloud five statements. Upon hearing the first statement, they should mark the box that best represents both how much they agree or disagree about the movement’s main goals (the columns) and the extent to which they support or oppose the methods that have been used by the protesters (the rows). They should mark each box with the number of the statement they are responding to. After they have marked a box, they should jot down some notes about why they chose the boxes that they did. Continue the process through the remaining statements.

The statements are as follows:

1. Income inequality has contributed to the country’s problems.
2. Congress has contributed to the country’s problems.
3. The White House has contributed to the country’s problems.
4. Large banks have contributed to the country’s problems.
5. The wars in Iraq and Afghanistan have contributed to the country’s problems.

After the exercise is over, have students write about their choices in their journals, or discuss as a class, using the following questions: Which boxes did you find yourself going to most often? Why might that be? How would you summarize your stance on the Occupy Wall Street Movement at this point? Why do you feel that way?
What do you need to know or understand in order to solidify your views one way or the other?

Update | Nov. 16: We now have a Student Opinion question, which we will leave open, to which you can invite your students to post: Do You Sympathize With the Occupy Wall Street Movement?

To place these decisions in some context, share with students the Times interactive feature “Public Opinion and the Occupy Movement,” on which Times readers placed themselves on a similar grid in response to the same five statements. Ask: What similarities do you notice between the Times feature and your own chart? What differences? Are you surprised by the similarities and differences? Why or why not?

A recent New York Times/CBS News poll found that almost half of the public thinks the views of the Occupy activists generally reflect the views of most Americans. Why do they think that might be? After this exercise, would they include themselves in that group or not?

Related | The Sunday Review Op-Ed “The New Progressive Movement” examines the potential future of the Occupy Wall Street movement by comparing the movement to past progressive eras in American history:

Occupy Wall Street and its allied movements around the country are more than a walk in the park. They are most likely the start of a new era in America. Historians have noted that American politics moves in long swings. We are at the end of the 30-year Reagan era, a period that has culminated in soaring income for the top 1 percent and crushing unemployment or income stagnation for much of the rest. The overarching challenge of the coming years is to restore prosperity and power for the 99 percent.

Read the entire article with your class, using the questions below.

Questions | For discussion and reading comprehension:
- What evidence does the writer, Jeffrey D. Sachs, offer to support his claim that the Occupy Wall Street movement might be marking the start of a new era in American history?
- Do you think the author’s argument is controversial? What arguments do you think could be made to counter his claims?
- What similarities does the author offer between contemporary economic realities and those of past eras?
- What does the author believe the Occupy Wall Street Movement should focus on moving forward? Do you agree? Why or why not?
- How would you characterize the main argument of this author? Do you think he offers enough evidence to support his claims? Why or why not? Do you agree with him? Why or why not?

From The Learning Network
- Who Are the 99%? Ways to Teach About Occupy Wall Street
- Income and the Top 1%
- From Wall Street to the World
- A Weekend of Arrests and Confrontations at Occupy Protests Around the U.S.
- Occupy Wall Street Protesters Shifting to College Campuses
- Whatever Happened to Discipline and Hard Work?

Around the Web
- The Occupied Wall Street Journal
- The Mother Jones Occupy Wall Street Arrest Map
- Occupy Together
- Meet the Ad Men Behind Occupy Wall Street

O.W.S. Lesson Plans Around the Web

Activity | At this point in the lesson, there are many directions you could take for deeper inquiry. You may wish to move directly to a SWOT (strengths, weaknesses, opportunities, threats) analysis of the Occupy movement overall (detailed below); choose a movement or time period from the related article to explore as a comparison; examine the decisions to clear the encampments in Zuccotti Park and Oakland, focusing on the tactics used by protesters and police, the government’s decision and Americans’ Constitutional rights; or skim our Oct.
11 lesson on Occupy Wall Street for still more ideas that can be adapted to the latest news.

To begin the SWOT analysis, tell students that they will work in groups to analyze the Occupy movement and then make predictions about its future. Explain that the acronym SWOT stands for “Strengths,” “Weaknesses,” “Opportunities” and “Threats.” These categories are areas of analysis that businesses and not-for-profit organizations use to determine their internal strengths and weaknesses and their external opportunities and threats for strategic planning purposes. This form of analysis can also be applied to departments or branches of larger institutions, as well as to individuals within these institutions. If desired and time permits, delve a bit further into how SWOT analyses are performed. You can also see our previous lesson plans using the SWOT format, like one from the 2010 midterm elections or a 2007 lesson on China’s readiness for the Beijing Olympics.

Create a chart like this one, with four boxes, labeled “Strengths,” “Weaknesses,” “Opportunities” and “Threats,” on the board. Inform students that they will be working in groups to fill in their own SWOT chart for the Occupy Wall Street Movement, but that the whole class will work together to come up with a few initial ideas before breaking up into smaller groups.

You may wish to model the process using the latest reporting you can find, though students will be digging deeper in small groups. For example, using this Nov. 15 Times article, you might add the line “The [Nov. 15 police operation] in and around the park struck a blow to the Occupy Wall Street movement, which saw the park as its spiritual heart” to the “Threats” category, while the fact that the police action was quickly challenged by lawyers could be an “Opportunity.”

Tell students they may use information from the article they read as a class and other New York Times articles and features about Occupy Wall Street to complete their charts. (Teachers without Internet access might consider printing several articles beforehand for use by student groups during class.) You may also want to pull in elements from some of our other recent lesson ideas for teaching about Occupy Wall Street. For example, you might tell groups to look at different opinions on the movement’s potential or examine the facts and figures underlying the focus on the top 1 percent of the economy as part of the analysis.

When students have completed their charts, bring the class back together so groups can share their analyses with the class. Encourage all students to ask each other clarifying questions and to take notes to refer to for the next part of the activity.

Next, have students reconvene in their groups to make predictions about the Occupy Wall Street movement, given the movement’s strengths and weaknesses and existing opportunities and threats. Have each small group write down at least five predictions about any aspect of the movement and its future that they investigated. For each, have them provide evidence of some sort, whether in the form of a related article, statistics, a historical parallel, etc.

When the predictions are complete, reconvene the class and have each group post or otherwise share their work. Provide an opportunity for students to read and comment on each other’s ideas, then display the full list of class predictions and a class SWOT chart for future reference.

Conclude by asking students to reflect on what they have learned. Ask: What propels movements forward?
What are the necessary ingredients for making a social movement powerful and effective? What roles might Occupy and other socio-political movements, like the Tea Party, play in the upcoming presidential election? Why?

Students follow the news about the Occupy Wall Street movement in the Times and keep personal and class logs whenever events occur connected to their predictions or to the strengths, weaknesses, opportunities and threats identified in class. After several weeks or months, return to your SWOT chart and discuss whether the strengths, weaknesses, opportunities and threats are the same. Have some strengths turned into weaknesses, or vice versa? Have some threats become opportunities, or vice versa? Did any of the groups’ predictions come to pass? What new predictions can the class make?

1. Understands that group and cultural influences contribute to human development, identity and behavior.
2. Understands various meanings of social group, general implications of group membership and different ways that groups function.
4. Understands conflict, cooperation, and interdependence among individuals, groups and institutions.

1. Understands that scarcity of productive resources requires choices that generate opportunity costs.
2. Understands characteristics of different economic systems, economic institutions, and economic incentives.
3. Understands the concept of prices and the interaction of supply and demand in a market economy.
4. Understands basic features of market structures and exchanges.
5. Understands unemployment, income, and income distribution in a market economy.
6. Understands the roles government plays in the United States economy.
7. Understands savings, investment and interest rates.
8. Understands basic concepts of United States fiscal policy and monetary policy.
9. Understands how Gross Domestic Product and inflation and deflation provide indications of the state of the economy.

United States History
31. Understands economic, social, and cultural developments in the contemporary United States.
http://learning.blogs.nytimes.com/2011/11/15/moving-the-movement-analyzing-the-future-of-occupy-wall-street/?nl=learning&emc=learninga1
The Unfinished Nation, Part I

A study of the main cultural, economic and political trends and events from the rise of civilization in the Near East to the eve of the French Revolution. All videos are closed captioned.

Lessons 1 - 13

Lesson 1 - From Days Before Time
Early human habitation of the North American continent. The civilizations of the North and South and tribal cultures. The journeys of Christopher Columbus. Exploration and exploitation of the west by the Spanish; impact on native population. Early slave trade. Biological and cultural exchanges between the Spanish and native cultures.

Lesson 2 - Turbulent Virginia: Pirate Base…Royal Colony
Incentives for English colonization. French and Dutch presence in North America. British attempts to establish a base at Roanoke and the outcome. Subsequent Virginia Company settlement at Jamestown. The people who come; the struggle to survive; the response of Native Americans. Establishing the royal colony in 1619.

Lesson 3 - Saints and Strangers
Discontent of Puritan Separatists; their journey to Plymouth. The Mayflower Compact; life once off the ship. Experience with Irish influences attitude toward Native Americans. Massachusetts Bay Company as structure for a colony. Creating a godly community; the role of the town meeting and church. Expansion and growth bring colonists into conflict with Native Americans.

Lesson 4 - The Lure of Land
English interest in colonization hindered by Civil War and rise of Oliver Cromwell; resumes under Charles II. Expanded list of immigrants find home in middle colonies. Agricultural interests take hold in Carolinas and Georgia. James II attempts to exert greater control over colonies, parliament, and European neighbors; overthrown in Glorious Revolution. Colonies exerting more independence, becoming more diverse.

Lesson 5 - Coming to America: Portrait of Colonial Life
Immigration key to American history. Contrasting experiences of different groups. The challenges of indentured servitude; their contributions. Family groups in New England; roles within the family unit. The emergence of a slave society in North America. Upsurge in Scotch Irish and German immigration.

Lesson 6 - Divergent Paths
Contrast between large scale agricultural operations in South vs. small family farms and commercial operations in North. Emergence of cities; absence of rigid class differences of Europe. Distinctly American forms of communities: the plantation society of the South, the tightly knit New England towns governed by town covenants. Witch trials in Salem. The Great Awakening.

Lesson 7 - Strained Relations
Symbiotic relationship between the colonies and the British government in the 1750s. French and Indian War alters British holdings in North America, spawns questions related to cost of colonial defense and administration. Proclamation of 1763 attempts to control westward migration. Colonists upset by Revenue Acts; boycott of British goods. Boston Massacre, Boston Tea Party, and Intolerable Acts stiffen resolve of some colonists to resist British control. British attempts to appease colonists yet regain authority. First shots fired at Lexington and Concord.

Lesson 8 - Not Much of a War
Colonists ill equipped for armed conflict with Great Britain. Meetings of Continental Congress; Declaration of Independence. British response to incidents of hostility in northeast and mid-Atlantic states. Leadership of George Washington. Support of the French government. Defeat of British forces at Saratoga and Yorktown.
Destiny now in colonists’ hands.

Lesson 9 - A Precarious Experiment
Peace Treaty (1783) with Great Britain negotiated by Richard Oswald, John Jay, and Benjamin Franklin. Uncertainties facing young country. Effects of war on new nation’s people, politics, and economy. Difficulty of functioning as “nation” under Articles of Confederation. Partial resolution of controversies related to western lands, but not problem of mounting national debt. Annapolis Convention to modify Articles of Confederation fails to attract enough delegates.

Lesson 10 - Vision for a Nation
Delegates meet in Philadelphia, May 1787 to “create a government with sufficient power to govern vast territory.” The convention, closed to public, chaired by George Washington. Controversy over basis for representation in House of Representatives and slavery issues; over sovereignty and limiting national power. Ground rules for ratification process. Debate between Federalists and Anti-federalists; winning approval. George Washington, first elected president, puts plan to work. Different philosophies within cabinet, Federalist vs. Republican. Washington declines to run for third term.

Lesson 11 - Rivals and Friends
Election of 1796, first contested presidential election, won by John Adams. His opponent, Thomas Jefferson, becomes vice-president. Adams’s difficult relationship with cabinet, Congress, and Jefferson. Struggle with France on the high seas; the Alien and Sedition Acts set stage for election of 1800, won narrowly by Jefferson. Lame-duck Congress changes court structure; Adams appoints Federalists to new posts before he leaves office. Jefferson challenges appointments. Role of Chief Justice John Marshall and Marbury vs. Madison decision in strengthening Supreme Court.

Lesson 12 - Best Laid Plans…
Republican desire to minimize national government and encourage small town/agrarian lifestyles overtaken during Jefferson’s presidency by economic vitality and growth of cities, beginnings of industrialization, the Louisiana Purchase, expansion to the west. Jefferson’s embargo in response to Napoleonic Wars hurts economy of northeast. Election of James Madison in 1808; Jefferson lifts embargo as he leaves office. The War of 1812.

Lesson 13 - Pressure From Within
War of 1812 magnifies need for internal improvements and stronger national government. Distinct patterns of American growth and expansion: areas of fur trading, plantations, and farms. Monroe elected president; “Era of Good Feelings.” War with Indians and economic depression of 1819. The Missouri Compromise temporarily resolves sectional differences; Monroe Doctrine asserts U.S. preeminence in hemisphere.

Lessons 14 - 26

Lesson 14 - He Brought The People With Him
The controversial presidency of John Quincy Adams. Changing party politics ushers in “Age of Jackson.” Struggles between the establishment and the Washington outsider. South Carolina’s Nullification challenge, and Jackson’s response.

Lesson 15 - Legacy of an Autocratic Ruler
President Jackson’s policy related to Native Americans. The Trail of Tears. Henry Clay’s introduction of bill to renew charter of Bank of United States as ploy prior to presidential election of 1832. Jackson’s veto of bill and reelection victory. Jackson’s attempts to deplete power of bank after election. Van Buren, Jackson’s successor, and economic crises. Political battles between the Whigs and Democrats; Harrison’s election in 1840 and his brief term in office; John Tyler, his successor.
Lesson 16 - Revolution of a Different Sort
Growth in population and surge in transportation and technology which occur during the first half of the 19th century. Rising immigration, and incidence of nativism. Growth of canal system, emergence of railroads, and its priming of the news industry. Industrialists, the emerging factory system, and its effect on workers and artisans.

Lesson 17 - Worlds Apart
Changes in social order in first half of the nineteenth century. The wealthy class as a derivative of industrial success; contrast with the very poor. Emergence of the middle class. Concept of a leisure wife and the cult of domesticity. Portrait of life on the farm in the North in comparison with life on the Southern plantation.

Lesson 18 - Master and Slave
Southern life and the differences in class and culture that arose. Plantations not all pillared mansions with hundreds of slaves. Life of poor white families. The slave family in agricultural and city settings. The contrast in living conditions; the harshness of slave laws. The rich culture built by slaves: religion, family, music and art.

Lesson 19 - Voices of Reform
Emergence of romanticism in literature and art. The transcendentalists and utopian societies. Reform movements that emerged from the Second Great Awakening. A closer look at the temperance movement, the abolitionist movement, and women’s suffrage efforts: their leaders and effects on each other.

Lesson 20 - Manifest Destiny?
American expansion into the west. Tensions between Mexico and the United States over Texas. The Mexican War and its aftermath. Discovery of gold in California in 1848, and the rush for riches. How the gold rush changed lives of the Californios and Native Americans, the Chinese workers and those lured by the luster of gold.

Lesson 21 - Decade of Discord
Sectional tensions over slavery subside with the Compromise of 1850; resurface as western states seek territorial and then state status. Railroad expansion plans ignite bloodshed in Kansas and Missouri. Parties sectionalized; Whigs disappear, Republicans emerge. Lincoln-Douglas debates foreshadow the future; Harper’s Ferry Raid convinces South they’ll never be safe in the Union. Election of 1860 divides country; Abraham Lincoln wins ballot.

Lesson 22 - House Divided
Election of Abraham Lincoln spawns secession of seven states led by South Carolina. Confederate States form provisional government before Lincoln is inaugurated, take over two southern forts. Neither side believes there will be war, but attack on Fort Sumter by Confederate forces signals beginning of conflict. Both sides mobilize quickly. Union has material advantage; Confederates win early battles. Lincoln and Union generals want to minimize damage to South so they can come back into Union. Influence of border states. Confiscation Acts address problem of captured and runaway slaves. War becomes harder with no end in sight.

Lesson 23 - Battle Cry
South needs support of England and France to win war; North needs to maintain status quo. North needs to win militarily; South needs to avoid defeat. Most of American West removed from fighting. Lee appointed commander of Army of Northern Virginia; takes war into Northern territory. Union forces have parade of generals. Battles extremely costly to both sides; Lincoln uses near victory at Antietam to issue Emancipation Proclamation. Both sides resort to draft.

Lesson 24 - Final Stages
1863 pivotal year in determining outcome of war. Decisive battles at Vicksburg, Gettysburg, and Chattanooga. Ulysses S.
Grant named Commander of Western Theater for Union after victory at Vicksburg. Material superiority of North begins to make a difference. South devastated by Sherman’s march through Georgia. Grant and Meade pursue Lee in Virginia in 1864. Vastly outnumbered, Lee holds on behind fortifications at Petersburg; finally surrenders to Grant at Appomattox in April of 1865. In military terms, long war is over.

Lesson 25 - What Price Freedom
Reconstruction plans discussed long before war’s end, with little agreement. Lincoln’s assassination brings Johnson into office. New president grants amnesty to thousands of former Confederates: plantation owners, politicians, generals. Angers Radical Republicans in Congress. Mid-term Republican victories give party clout to override Johnson’s vetoes of their proactive Reconstruction efforts. Fourteenth and Fifteenth Amendments to Constitution passed. Johnson impeached; conviction narrowly fails in Senate. Grant elected his successor.

Lesson 26 - Tattered Remains
White Southerners consider Reconstruction vicious and destructive; former slaves see it as small step toward gaining economic and political power. Improvements occur in Southern public education; blacks rebuild family structures. Sharecropping initiated to get the land into production without cash outlay; abuses of system. Hayes elected president and withdraws last of troops. Violence used to subvert voting privileges of black citizens. Legal barriers enacted, supported by Supreme Court, diminish the rights blacks had gained.
http://www.lamission.edu/itv/video/info@video=13.html
During World War II, the United States and Great Britain carried on a massive supply program for the USSR based on the rationale that the Soviet Union's continuance in the war as an active and powerful ally was a fundamental condition for victory over Hitler's Germany. Until May 1945, common agreement on the necessity for defeating Germany totally and finally tended to obscure differences in political aims. American and British leaders, both military and political, agreed that without involvement of the major portion of the German Army on the Eastern Front, any invasion of Fortress Europe from the west would be rendered practically impossible. They therefore accorded aid to the USSR a claim of extremely high priority on Anglo-American material resources. But getting the promised supplies delivered and satisfying the demands of the Soviet Government, a most exacting ally, was an onerous task. It involved some of the most difficult decisions that the Western Allies had to make. One of the most important of these, reached in August and September 1942, was to give the U.S. Army control of the movement of munitions and supplies to the USSR through the Persian Corridor, and to accord that project one of the highest priorities in the Allied scale. (See Map 6.)

The present study is based primarily on Richard M. Leighton and Robert W. Coakley, Global Logistics and Strategy, 1940-1943 (Washington, 1956), and T. H. Vail Motter, The Persian Corridor and Aid to Russia (Washington, 1952), both in UNITED STATES ARMY IN WORLD WAR II. Some use has also been made of two other volumes in the same series: Joseph Bykofsky and Harold Larson, The Transportation Corps: Operations Overseas (Washington, 1957), for certain details relating to the transportation problem in Iran; and Maurice Matloff and Edwin M. Snell, Strategic Planning for Coalition Warfare, 1941-1942 (Washington, 1953), for the story of the development of Anglo-American strategy. On the convoys to North Russia, Samuel Eliot Morison, The Battle of the Atlantic, September 1939-May 1943 (Boston: Little, Brown and Company, 1947) and Winston S. Churchill, The Hinge of Fate (Boston: Houghton Mifflin Company, 1950) contain useful information. Guides to the original source material beyond those cited herein may be found in the footnotes and bibliographies of UNITED STATES ARMY IN WORLD WAR II. For the text of the Soviet protocols see U.S. Dept of State, WARTIME INTERNATIONAL AGREEMENTS, Soviet Supply Protocols, Publication 2759, European Ser. 22 (Washington, no date).

This decision was made at a critical juncture of the war against Germany, in a period before the tide had definitely turned in favor of the Allies, when any commitment, however small, of ships, supplies, and trained men had to be carefully weighed in the strategic balance. Only the President and Prime Minister could make the basic decision that the Americans should have responsibility. But before this basic decision could be given any practical effect, military agencies at several different levels had to formulate plans and estimate the impact of fulfilling them. And it was the Combined Chiefs of Staff (CCS) who gave the project the final stamp of approval after the military plan was drawn up. The whole process serves as a prime example of the complexity of the processes by which such politico-military decisions are arrived at in the conduct of coalition warfare.

The supply program for the USSR took the form of a series of protocols, definite diplomatic commitments negotiated at the highest governmental levels stipulating exact quantities of specific types of supplies to be made available to the USSR by the United States and Great Britain over a given period of time. The First Protocol, signed at Moscow on 1 October 1941 while the United States was still at peace, covered the nine-month period from that date until 30 June 1942. The Second Protocol was negotiated to cover the period from 1 July 1942 to 30 June 1943 and the Third and Fourth for similar annual periods in 1943-44 and 1944-45. These protocols were the bibles, so to speak, by which supply to the USSR was governed. In this way they differed from any other of the lend-lease commitments of the United States Government before and during World War II. To be sure, adjustments in protocol quantities could be made by negotiation with the Russians, and each protocol contained a safeguarding clause stipulating that the fortunes of war might make delivery impossible, but neither adjustments nor safeguarding clauses provided any genuine avenue of escape from commitments except when the Russians were willing to agree. For instance, in the Second Protocol the safeguarding clause read as follows: "It is understood that any program of this sort must be tentative in character and must be subject to unforeseen changes which the progress of the war may require from the stand-point of stores as well as from the standpoint of shipping." And pressure from the Russians was relentless not only for fulfillment of existing commitments to the letter but for additional quantities and for new weapons that the developing war on the Eastern Front led them to think desirable.

The rationale behind the program gave these pressures almost irresistible force despite the sacrifices involved for the Anglo-American effort in the West. These sacrifices were greatest during the years 1941 and 1942 when British and American resources were under heavy strain to meet even the minimum requirements of their own forces. Every military move required a close calculation of the availability of troops, of equipment, and of shipping to transport them. The supplies and equipment promised to the Soviets could be made available only at considerable sacrifice of an American Army in training and a British Army fighting for its life in the Middle East. Shipping, the most crucial resource of all in the period following Pearl Harbor, could also be put on the run to the USSR only by accepting limitations on the deployment of American and British forces to danger spots round the globe. Yet furnishing the supplies and the shipping in the end proved to be the less difficult part of the task of supplying the USSR; by mid-1942 the central problem had become that of opening or keeping open routes of delivery over which these ships and supplies, made available at such sacrifice, could move to the USSR.

These routes of delivery were long, roundabout, and difficult. With the Germans in control of most of western Europe and of French North Africa, the Mediterranean and the Baltic were closed to Allied cargo vessels. This left three main alternative routes for the transport of supplies from the United States to the Soviet Union.
The first ran across the Atlantic and around the coast of Norway to Soviet Arctic and White Sea ports, principally Murmansk and Archangel, the second across the Pacific to Vladivostok and over the Trans-Siberian Railway to European Russia, the third around the coast of Africa to the Persian Gulf and thence across Iran to the Soviet border. (See Map III, inside back cover.)

Each of these routes had its definite limitations. The northern route around Norway was the shortest but it also was the most vulnerable to attack by German submarines and land-based aircraft. Moreover, winter cold and ice frequently blocked Soviet harbors and rendered sailing conditions for Allied merchantmen scarcely tolerable even without the German threat. The route to Vladivostok ran directly past the northern Japanese island of Hokkaido. Ships flying American or British flags could not proceed through waters controlled by the Japanese once Japan had gone to war against Britain and the United States. And even in Soviet flag shipping, a very scarce commodity in 1941-42, the United States did not dare risk supplies and equipment definitely identifiable as for military end use. Moreover, the rail line from Vladivostok to European Russia had initially a very limited capacity. The southern route via the Persian Gulf was the only one relatively free of the threat of enemy interference, but in 1941 it possessed an insignificant capacity. Iranian ports were undeveloped and the Iranian State Railway running north to the USSR was rated in October 1941 as capable of transporting but 6,000 tons of Soviet aid supplies monthly, hardly the equivalent of a single shipload.

In August 1941, by joint agreement with the USSR, the British moved into control of southern Iran while the Soviet Union took over the northern portion of the country. This joint occupation, regularized by treaty arrangements between the two powers and a new Iranian Government, secured the land area through which supplies transported by sea over the southern route could be carried on to the USSR. The question of the effort the British and Americans should devote to developing the necessary facilities in Iran to make any considerable flow of aid through this area possible was therefore a basic one from the moment the Western Allies committed themselves to a large-scale Soviet aid program. For a year after the initial occupation, preoccupation with other tasks in a period of scarcity of men and materials combined with Soviet intransigence to delay any positive decision or practicable plan. During that year the major effort was devoted to forwarding supplies to the USSR over the more vulnerable northern route. Only after the Germans had demonstrated beyond any reasonable doubt that they could make the northern route prohibitively costly did the United States and Britain decide on a concentrated effort to develop the Persian Corridor as an alternate route.

American and British transportation experts in September 1941 freely predicted that the southern route would eventually provide the best avenue for the flow of supplies to the USSR, but there was little immediate follow-up on this prediction. The Russians insisted on the use of the northern route, evidently both because it promised quicker delivery of supplies closer to their fighting fronts and because they feared the establishment of a strong British or American position in Iran so close to the Soviet border.
The British, faced with the necessity of developing adequate supply lines for their own hard-pressed forces dispersed through the Middle East from Egypt to India, lacked resources to devote to developing facilities for Soviet aid. On the borders of Egypt and in Libya, the British Eighth Army was engaged in a seesaw battle with the Afrika Korps; in Syria and Iraq the British Tenth Army stood guard against a German drive southward through the Caucasus to the oilfields of Iraq and Iran whence the very lifeblood of the Commonwealth war effort flowed. Immediately after entry into southern Iran, the British prepared a plan for developing transport facilities through their zone to a point where they could carry by the spring of 1942, 72,000 long tons of Soviet aid supplies in addition to essential cargoes for British military forces and the Iranian civilian economy, but this plan proved to be more a hope than a promise. Soviet insistence on the use of the northern route left the British with no strong incentive to push developments in Iran when the limited manpower and materials available to them were sorely needed to develop supply lines more vital to their own military effort in the Middle East. Initially the American position in Iran was anomalous and it remained so even after Pearl Harbor. The United States was not a party to the agreement with the Iranian Government. The American Government therefore had to limit its actions in Iran to supporting the British. And before American entrance into the war against Germany, this support had to be rendered through lend-lease channels in such a way as not to compromise the neutrality of the United States. At the urgent request of the British, two missions were dispatched to the Middle East in the fall of 1941, one to Egypt under Brig. Gen. Russell L. Maxwell and the other to Iran under Brig. Gen. Raymond A. Wheeler, with the justification that they were necessary to make lend-lease aid "effective." These missions were instructed to aid the British in the development of their lines of communication, under conditions where British desires as to projects to be undertaken were to govern. Projects were to be financed with lend-lease funds and carried out by civilian contractors. The British plan for development of Iranian facilities was conditioned on the expectation of the assistance of Wheeler's mission as well as of large-scale shipments of American lend-lease supplies and equipment. Elaborate plans were drawn up but Pearl Harbor completely disrupted them. Mission projects were shoved far down the scale of priorities while the United States carried out its initial deployments to the Pacific and the British Isles. Mission personnel and materiel waited at dockside for shipping that could not be allocated. And even when initial U.S. deployments were completed, these priorities were advanced very little. Under arrangements made by the Combined Chiefs of Staff shortly after Pearl Harbor, the whole Middle East was designated a British area of strategic responsibility just as the Pacific was designated an American one. American strategic plans placed their emphasis on concentration of resources for an early invasion of Europe and Army planners sought to keep their commitments in support of the British Middle East to a minimum. In the running argument between the British and American Chiefs of Staff over a peripheral strategy versus one of concentration, the Americans won at least a temporary victory in April 1942. 
In a conference in London at that time, it was agreed that preparations should be made for both an emergency entrance onto the Continent in 1942 to prevent Soviet collapse (SLEDGEHAMMER) and for full-scale invasion in 1943 (ROUNDUP). The build-up in the British Isles for both these purposes (designated BOLERO) was placed at the top of the American priority scale from April through July and the Middle East missions continued to be treated as poor relatives. A War Department decision in February 1942 that the missions should be militarized served only to produce additional delays and confusion. Requisite numbers of service troops to perform the tasks planned for civilian contractors were simply not available under the priority the missions were granted. Against a request for something over 25,000 men submitted by General Wheeler as the requirement to carry out projects planned, the War Department decided it could allot but 6,950 in the troop basis and only 654 of these could be moved to Iran before 1 September 1942. This decision, predicated on continuing use of contractor personnel, gradual rather than immediate militarization of contractor projects, and utmost use of indigenous labor, meant that the great bulk of Wheeler's projects had to be placed in a long-deferred second priority. Few even of the contractor personnel had arrived in the Persian Gulf by April 1942. During that month General Wheeler himself was transferred to India to become head of the Services of Supply there and was succeeded as head of the Iranian mission by Col. Don G. Shingler. Without the extensive American assistance expected, the British were unable to devote sufficient resources to the development of Iranian facilities to increase significantly the transit capacity through their zone in Iran. Almost inevitably they concentrated their resources in the area on supply installations and facilities and the port of Basra in Iraq, designed to serve their own Tenth Army. The few American contractor personnel who did arrive were assigned the task of developing the port of Umm Qasr in Iraq, designed as a subsidiary port in the Basra complex. Thus the first opportunity to develop Persian Gulf facilities went largely by default. On these early developments see Motter, Persian Corridor, pp. 13-15, 28-100; Leighton and Coakley, Global Logistics, pp. 108-14, 503-07, 552- 56, 567-59. While the Persian Gulf languished, the Americans and British devoted their main energies toward forwarding supplies over the difficult northern route, basically in accordance with Russian desires. This effort mounted to its crescendo in April and May 1942, when the Americans, having completed initial deployments and finally found supplies and ships to transport them to the USSR, attempted to make up previous deficits in their commitments under the First Protocol. During April some 63 ships cleared American ports headed for north Russia, and plans were laid to send almost as many in May. For the long pull, the President proposed that some 50 American ships be placed in regular monthly service over the northern route from March through November each year, 25 from November through the following March. The Persian Gulf was given but a small role. The Russians indicated they wanted only trucks and planes delivered via this route. 
In accordance with their desires, the goal for the southern route was set, in January 1942, at 2,000 trucks and 100 bombers monthly, these to be shipped knocked down, assembled in plants to be operated by contractor personnel under the Iranian mission, and driven or flown to the Soviet Zone; only small additional quantities of general cargo were to be forwarded over the Iranian Railway and in the assembled trucks.
This planning in early 1942 ignored latent German capabilities to interrupt shipments around the coast of Norway. Shipping over the northern route proceeded under convoy of the British Navy from Iceland onward. During 1941 and the early part of 1942, these convoys were virtually unmolested by the Germans. As of the end of March 1942, only one ship had been lost out of the 110 that had sailed over the route. But in February Hitler began to shift the weight of his naval and air strength to Norway, and the March convoys, although they suffered small loss, were subject to heavy attack. As the daylight hours in the far north lengthened during April and May, attacks were stepped up, losses mounted, and each convoy became a serious fleet operation posing a heavy drain on British naval resources. Churchill and the British Admiralty, fearing that if British naval strength was concentrated too heavily in protecting the Murmansk convoys the Germans would shift their naval strength to the mid-Atlantic, decided in late April that only three convoys of 25 to 35 ships each could be sent through every two months.
(1) Rpt on War Aid Furnished by the United States to the USSR, prepared by the Protocol and Area Information Staff, USSR Br, and the Div of Research and Rpts, Dept of State, 28 Nov 45 (hereafter cited as Rpt on War Aid to USSR, 28 Nov 45). (2) Leighton and Coakley, Global Logistics, pp. 555-56, 567-68.
Since planned loadings in the United States had been going forward on the supposition that 107 ships would move in these convoys during May alone, the proposed curtailment came as a heavy blow to Roosevelt's hopes that American commitments under the First Protocol could be fulfilled. But deplore the decision as he might, the American President was in no position to offer American naval convoy as a supplement to British, and on 3 May 1942 he acquiesced in Churchill's decision, expressing at the same time the hope that the convoys could at least be kept to the maximum of 35 ships. Even this hope was doomed to disappointment. In the two convoys started out from Iceland in May only 57 ships sailed rather than 70, and of these 9 were lost, despite heavy naval convoy. Many of the 63 ships sent out from the United States in April merely served to create a log jam of shipping at the Iceland convoy rendezvous, a log jam that was liquidated only by unloading many cargoes in British ports.
Curtailment of the northern convoys made it impossible for the United States to fulfill its promises under the First Protocol. Yet in the midst of these difficulties a Second Protocol was negotiated covering the period 1 July 1942 through 30 June 1943, based on the premise, as stated by the President, that strategic considerations required that "aid to Russia should be continued and expanded to the maximum extent possible."
The British and American shipping authorities, basing their calculations on the British plan to send through three convoys every two months, estimated the capacity of the northern route at slightly over three million short tons over the protocol year and optimistically added another million short tons to be carried via the Persian Gulf. The Pacific route was left entirely out of their calculations. Accepting these tenuous shipping figures as gospel, the President and his advisers offered the USSR a total of 4.4 million short tons over the Second Protocol year, about three times as much as was actually delivered under the First Protocol. Though this Second Protocol was not officially signed until October 1942, it actually went into effect in July when the first expired, and from that date forward the Americans and British stood committed to the delivery of this massive tonnage to the USSR. And in contrast to the First Protocol, in which British and American obligations were approximately equal, the great majority of supplies under the second were to come from American sources.
(1) Churchill, The Hinge of Fate, pp. 256-66. (2) Morison, Battle of the Atlantic, pp. 158-71. (3) Leighton and Coakley, Global Logistics, pp. 557-58. Ltr, President to SW, 24 Mar 42, AG 400.3295 (8-14-41), Sec. 1.
This first crisis on the northern route inevitably threw the spotlight on the Persian Gulf as the only important alternative for forwarding war supplies to the USSR. The Russians, now taking a more realistic view of the situation, reversed their previous position and asked that not only planes and trucks but all sorts of military equipment in the largest quantities possible come via the southern route. In cutting back shipments scheduled to move over the northern route in May 1942, the shipping authorities decided to divert 12 ships to the Persian Gulf and to follow with 12 more in June. Harry Hopkins, the President's confidential adviser, wanted to increase this rate and send 8 more monthly if the Persian Gulf could handle them. The Second Protocol schedules, as noted above, proposed shipment of a million short tons via the southern route over the year beginning 1 July 1942.
This decision in May 1942 to speed up shipments to the Persian Gulf was a premature one made in an atmosphere of crisis. It was soon obvious that even the cargoes of the twenty-four ships sent out in May and June could not be unloaded and sent on to the USSR unless more drastic steps were taken to develop Iranian facilities. An effort began almost immediately to push this development, but it was unaccompanied by any realistic appraisal of what was needed, any fundamental upgrading of priorities, or any more logical division of responsibilities. The major effort was simply devoted to accelerating unfulfilled plans already on the books. On the American side, the Iranian mission was given a clear directive stating that its primary responsibility would be to facilitate the flow of aid to the USSR and not to aid the British, and that projects in Iran should be placed in first priority and those in Iraq and elsewhere in second. Colonel Shingler was told of the new million-ton goal for the Second Protocol year and designated the American representative for executing the program for "receipt, assembly and forwarding" of the material to be shipped through Iran under these arrangements. As a consequence, the handful of American construction personnel at Umm Qasr quickly transferred the center of their activities to the port of Khorramshahr in Iran.
Nevertheless, the position of both the mission and the American Government remained anomalous. The British retained strategic responsibility for the area and direction of the effort to forward supplies to the USSR; the American mission's task was still only that of aiding them to effect these deliveries. If the primacy of the task of forwarding supplies to the USSR was recognized on the American side, the British were still in no position to place it above their own military needs.
Leighton and Coakley, Global Logistics, pp. 560-69. Msg 100, AGWAR to AMSIR, 10 Apr 42; Msg 177, 9 May 42; and Msg 208, 20 May 42, all in AG 400.3295 (8-9-41), Secs. 4 and 5.
Nevertheless, when the American mission shifted its activities from Iraq to Iran in April 1942, the dimensions of the task to be performed in developing Iranian facilities had at least been generally defined. Reduced to bare essentials, this task involved development of port facilities and of egress roads, increase of the capacity of the Trans-Iranian Railway as far north as Tehran at least tenfold, improvement of existing roads and construction of new ones north from the ports to the Soviet Zone, construction and operation of aircraft and truck assembly plants, and development of trucking facilities to supplement the carrying capacity of the railroad.
The best developed Iranian port was on the island of Abadan, the site of what was then one of the world's largest oil refineries, owned by the Anglo-Iranian Oil Company. But Abadan figured in British plans for supply to the USSR only as a site for delivery of cased aircraft for assembly and of particularly heavy equipment that could not be unloaded elsewhere. The rest of the capacity of the port was reserved for oil shipments. Similarly Basra in Iraq, the only other well-developed port in the area, was already overloaded with cargo for the British Army, although it also had to serve initially as the principal reliance for handling Soviet-aid cargoes. Any really significant augmentation of shipments to the USSR would require development of the Iranian ports proper-Khorramshahr, Bandar Shahpur, Ahwaz, and Bushire-and of the lighterage basin at Tanuma (or Cheybassi) across the Shatt-al-Arab from Basra in Iraq. Khorramshahr and Bandar Shahpur were the key ports, and each initially possessed only one berth capable of handling large vessels. Ahwaz was a small barge port one hundred miles up the Karun River from Khorramshahr; Bushire, a small port on the west shore of the Persian Gulf whence the main highway in Iran ran north to Tehran.
From Bandar Shahpur the railway ran north via Ahwaz and Andimeshk to Tehran and thence through the Soviet Zone to Bandar Shah on the Caspian Sea, through some of the most difficult mountainous terrain in the world. The railway was without adequate high-powered locomotives and rolling stock, the line was laid with light rail, and it lacked an automatic signal system to speed traffic. The British had placed the railway under military control and assigned a force of 4,000 soldiers to run it, but the locomotives and rolling stock promised from the United States were slow in arriving, and the increase in rail capacity came equally slowly.
Motter, Persian Corridor, pp. 59-64.
To supplement the railroad, the British had four trucking routes under development, all operated by a quasi-governmental corporation, the United Kingdom Commercial Corporation, using native drivers.
Two routes ran wholly within Iran, from Bushire and Andimeshk, respectively, to Tabriz within the Soviet Zone. A third started at Khanaqin on the Iraqi railway, ran north from Basra through Baghdad, and also terminated at Tabriz. The fourth involved a devious route running by rail out of Karachi, India, to Zahidan in southeastern Iran and thence by truck to Meshed in the Soviet Zone in the northwest. This last route was used but infrequently and the Russians objected that deliveries over it provided supplies too far from the fighting fronts. All the routes were over the poorest sort of dirt roads, and United Kingdom Commercial Corporation operations were seriously handicapped by lack of trucks and efficient drivers.
Once it had been concentrated in Iran, the American mission was assigned some of the most essential tasks-construction of additional docks at Khorramshahr, operation of truck assembly plants at Andimeshk and Khorramshahr and of an aircraft assembly plant at Abadan, construction of highways connecting Khorramshahr, Ahwaz, Andimeshk, Tanuma, and Tehran, and assistance to the British in the performance of a variety of other tasks. The British Army and the United Kingdom Commercial Corporation remained in control of all transport operations.
When queried by Lt. Gen. Brehon B. Somervell in May 1942 about Hopkins' project for sending twenty ships per month via the Persian Gulf, Shingler replied that the ports would not be prepared to handle that many (120,000 tons of Soviet cargo) until the end of October 1942, when planned improvements were scheduled for completion, and that even then inland clearance would be limited to 78,000 tons monthly and there would be insufficient storage for the excess until clearance capacity had been improved. He offered little hope that the ports would be able to unload and clear in expeditious fashion the 87,000 long tons of Soviet aid dispatched from the United States during May and the 91,000 tons shipped in June when these cargoes arrived in July and August. British shipping representatives in the area were even more pessimistic. Undeterred, the Washington authorities cut back these shipments only slightly in July and August, to 63,000 and 66,000 long tons, respectively.
Bykofsky and Larson, Transportation Corps: Operations Overseas, pp. 379-82, 403-04. (1) Ibid., pp. 380-81. (2) Motter, Persian Corridor, p. 84. Leighton and Coakley, Global Logistics, pp. 569-70.
While forwarding these tonnages, Washington and London contributed more by way of pressure for accomplishment than they did by way of sending men and materiel to accelerate the pace of development. The British remained unable to spare men or resources, and the Americans were reluctant to commit significant additional resources to the Middle East. The handful of Americans present in Iran in April had grown to only slightly more than 1,000 by 1 July, 817 civilians and 190 military personnel. Though shipments of necessary transportation, construction, and port equipment were expedited, all too frequently delays developed in shipping the most critical items such as port cranes, rail equipment, and heavy construction supplies. The effects of a lack of centralized responsibility and a coordinated plan with high priority were all too apparent. As a result, in no particular did progress during the three months after the May decision justify optimism. The heavy shipments to the Gulf ports inevitably brought an increasing threat of port congestion.
Development of the ports lagged behind Shingler's predictions, and inland clearance, ever the biggest bottleneck, lagged even further. The Iranian State Railway, necessarily the primary reliance, was carrying, as late as August 1942, only 35,770 long tons of supplies for all purposes, and of these only 12,440 were supplies for the USSR. The trucking operations of the United Kingdom Commercial Corporation, never characterized by a high degree of efficiency, were but a poor supplement. While the need for capacity for Soviet aid rose, the British found it necessary to add the burden of supply for the Polish Army they were evacuating through Iran to that of the British military and the Iranian civilian economy. While the two U.S. truck assembly plants at Andimeshk and Bandar Shahpur and the plane assembly plant at Abadan began operations in April, their capacity continued low and was further limited by the lack of adequate port and inland clearance facilities.
Such was the situation in the Persian Corridor when the Allies found themselves facing a new and more serious crisis in their effort to maintain even a limited schedule of convoys over the northern route. On 27 June 1942, convoy PQ-17, the third of the three convoys the British had promised to push through during the two-month period of May-June, departed Reykjavik, Iceland, for the long run over the northern route. The convoy contained 33 merchantmen, 22 American and 11 British, and had an unusually large naval escort. In a grim running battle with German air and sea raiders, 22 of the 33 merchant vessels were lost.
(1) Ibid., pp. 570-73. (2) Motter, Persian Corridor, pp. 85-101, and App. A, Table 5.
Shocked by these heavy losses, the British Admiralty decided to suspend the northern convoys "at least till the northern ice-packs melted and receded and until perpetual daylight passed." On 17 July Churchill informed Stalin of the decision, saying that continuation "would bring no benefit to you and would only involve dead loss to the common cause." Stalin's reply was a brutal rejection of the British reasons for halting the convoys and a bitter protest, in the strongest language, against the action taken.
The decision to suspend the northern convoys came at a critical juncture in the affairs of the Anglo-American coalition, at a time when the entire strategic concept for the year 1942 was undergoing drastic revision. In June the war in the Middle East took a dangerous turn. Field Marshal Erwin Rommel launched a drive into Egypt, opening up a new threat to the Suez Canal and the Middle East oilfields. At the same time, the German drive through the USSR was plowing relentlessly forward through the Caucasus, threatening these same oilfields from another direction and raising the possibility of complete defeat of the Soviet Army. In this critical situation, the American staff was forced to reconsider its position and take immediate emergency steps to bolster the British position in the Middle East. Supply aid was stepped up and an American air force (the Ninth) established in Egypt. A new command was set up, United States Army Forces in the Middle East (USAFIME), under Maj. Gen. Russell L. Maxwell, formerly head of the North African mission, and Maxwell was allotted the quota of service troops he had previously been denied. The crisis in the Middle East gave the final death blow to any hopes that SLEDGEHAMMER, the plan to invade the Continent in 1942, could be carried out.
The American staff continued to hope that commitments to the Middle East could be kept from interfering with the execution of ROUNDUP, the invasion plan in 1943. But this hope ran afoul of the President's determination that American troops must be put in action against the Germans in 1942. In instructions given to his staff for conferences with the British at London in mid-July, Roosevelt made it quite clear that unless SLEDGEHAMMER could be carried out either an American Army must be committed to the Middle East or the invasion of North Africa undertaken. The decision taken at the conference (18-25 July 1942) was for the invasion of North Africa in the fall (Operation TORCH).
(1) Churchill, Hinge of Fate, pp. 262-71. (2) Morison, Battle of the Atlantic, pp. 179-92. On the strategic developments of this period see Matloff and Snell, Strategic Planning, particularly pages 233-84; for the TORCH decision, see above, pages 173-98.
The TORCH decision vastly complicated relations with the Russians at precisely the same time that the northern convoys were suspended. In conversations in May with Soviet Foreign Commissar Vyacheslav M. Molotov, President Roosevelt had given more positive assurances of the opening of a second front in 1942 than the British or even his own staff thought justified. The TORCH decision, in the Russian view, did not conform to these assurances nor did it promise to take much of the pressure off the USSR. While both Roosevelt and Churchill continued to hope that it would not prevent invasion of the Continent in 1943, both the American and British military staffs were convinced that it would. Thus the TORCH decision and the cancellation of the northern convoys created a doubly embarrassing situation for the President and Prime Minister vis-a-vis Stalin. Even if the convoys were resumed in September, they would probably have to be suspended again for at least two months to provide the requisite naval support for TORCH in November. Thus, while the Russians battled for their very existence, the second front in Europe that they had been clamoring for was not to become a reality, nor would they receive the supply aid promised under the Second Protocol unless some new means of delivery were found. It promised to be, as Churchill told Roosevelt in September, "a formidable moment in Anglo-American-Soviet relations."
The July crisis evoked a diligent and almost frantic search for alternate means of delivery of supplies to the USSR. Churchill had long supported an operation (JUPITER) to secure the northern fringes of Norway and thus clear the route for the northern convoys, but neither his own nor the American staff ever looked with favor on this plan. It could, in any event, hardly be carried out except as a substitute for TORCH. The Pacific route to the USSR also inevitably came in for increased consideration. Plans were developed for delivering the majority of all planes to the USSR via an air ferry from Alaska to Siberia, but the Russians were at first uncooperative and development of the ferry route was distressingly slow. For a brief moment, the Americans considered sending vessels on the long route through the Bering Sea and around the northern fringes of Siberia and actually turned over seven vessels to the Russians for this purpose, but the Russians themselves evidently found the route impractical and placed the vessels instead on the run to Vladivostok.
The transfer of more ships to the Soviet flag in the Pacific for use on this Vladivostok run was of course a possibility, but in July and August 1942 it had little to recommend it. The greatest Soviet needs seemed clearly to be for military equipment and supplies that could not be risked on the Pacific route; Vladivostok was a long way from the critical fighting front in the Caucasus; and the outright transfer of ships to the USSR involved a complete loss of control over their future use, a very serious thing in view of the general shortage of cargo shipping in 1942.
Msg 151, Prime Minister to President, 22 Sep 42, ABC 381 (7-25-42), Sec. 4-B.
The finger thus pointed to the Persian Gulf as the only logical alternative to the northern route for the shipment of military supplies; indeed it had already been pointing in that direction since the first difficulties with the northern convoys in April. But each turn of the strategic wheel had brought some new demand on British and American resources that prevented the assignment of sufficiently high priority, and the diffusion of responsibility between British and Americans had prevented the development of any co-ordinated plan. Paradoxically enough, the decision to commit additional American resources to the Middle East in June had the practical immediate effect of slowing shipments of men and material to the Persian Gulf, for the highest priority went to getting the Ninth Air Force to Egypt and supporting the British effort in the desert. There were no significant accretions of American personnel in Iran in July and August 1942. And under the new command arrangement, Colonel Shingler's Iranian mission was made a service command in the USAFIME Services of Supply.
The crisis in July produced a situation in which either facilities in the Persian Gulf would have to be extensively developed or else the United States and Britain would have to renege completely on their promises under the Second Protocol. Whereas, under the shipping estimates that originally lay behind the Second Protocol, one million tons were to be forwarded through Iran during the protocol year, that goal had now to be more than doubled if the southern route was to compensate for the deficiencies of the northern. It was set, in fact, at 200,000 tons monthly, in a situation where the previous goal of 72,000 tons a month, proposed by the British in the fall of 1941, was still far short of attainment.
The question for decision, by mid-July 1942, was less whether Iranian facilities should be developed than how, by whom, and to what extent. The welter of confused responsibilities that had characterized the earlier effort had to be resolved, and a clear-cut decision rendered on the priority to be given the project. From the very beginning it had been clear that only the Americans had the resources to accomplish the task; but to turn it over to them would require delicate adjustments in relationships as long as the area remained one of British strategic responsibility and the military forces there under British command.
Leighton and Coakley, Global Logistics, pp. 564-66.
In terms of priority, the basic question was the extent to which the BOLERO build-up for invasion of the Continent in 1943, already subordinated to TORCH, should be further subordinated to the effort to ensure continued deliveries of supplies to the USSR. These were questions that only the President, the Prime Minister, and the Combined Chiefs of Staff could decide.
And both because it was primarily American commitments for delivery of supplies that were concerned and because only the Americans had the resources adequate to the task of developing the facilities in Iran to the desired extent, the responsibility for decision lay mainly with the President of the United States. The President showed no inclination to view the obstacles that had arisen to the continued delivery of supplies to the USSR as insuperable. In his instructions to his staff for the London negotiations in July he answered categorically the question of whether a serious effort should be made to meet the Second Protocol:
British and American materiel promises to Russia must be carried out in good faith.... This aid must continue as long as delivery is possible and Russia must be encouraged to continue resistance. Only complete collapse, which seems unthinkable, should alter this determination on our part.
In taking this position, the President indicated clearly that he thought this aid must flow mainly via the Persian Gulf until the northern convoys could be resumed. An intensive exploration of the question of how this could be accomplished followed. On 13 July 1942, evidently anticipating the British decision to suspend the northern convoys, Averell Harriman, the President's personal lend-lease representative in London, cabled Harry Hopkins calling attention to the need for speed in expanding transit facilities through Iran. His recommendation was that the U.S. Army should take over operation and control of the Iranian State Railway in the British Zone. Admiral King, General Marshall, and General Somervell agreed generally that steps must be taken to increase Iranian transit facilities, but they stopped short of any positive recommendation that the Americans should take over the railroad, pending further study. The President, nonetheless, readily accepted Harriman's proposal.
Memo, President for Hopkins, Marshall, and King, 16 Jul 42, quoted in Robert E. Sherwood, Roosevelt and Hopkins: An Intimate History (New York: Harper & Brothers, rev. ed., 1950), pp. 603-05. See also Matloff and Snell, Strategic Planning, pp. 273-78.
Replying on 16 July to Churchill's formal notification of the suspension of the northern convoys, he placed it before the Prime Minister. Churchill accepted the proposal immediately with some enthusiasm and informally communicated his views to Harry Hopkins, then in London, though he delayed a formal response to the President until the whole matter had been subjected to further study. In a sense, then, the basic decision that the Americans would take over the task of developing facilities in the Persian Gulf had been taken by the President and agreed to by the Prime Minister by mid-July. But it took two months more to make that decision final enough to give it practical effect. Recognition on both sides that the matter needed further study reflected the immense complications of the problem and the fact that it seemed unlikely that merely turning the Iranian State Railway over to the Americans would provide an adequate solution. It was clear that a much more far-reaching decision was needed which would delineate clearly the dimensions of the task of supplying the USSR through Iran, the cost of carrying out such a task, the division of responsibility and the best organization for it, and the priority to be accorded this effort in relation to other essential military and civilian activities in the area. The "further study" consequently took over a month.
Many hands entered into it. Brig. Gen. Sidney P. Spalding, Assistant Executive of the Munitions Assignments Board, went out to the Middle East on a special mission in late July as the personal representative of General Marshall and Harry Hopkins to determine on the spot what steps should be taken to increase Persian Gulf capacity for Soviet aid. Churchill and Harriman, after a visit to Stalin in August, returned via Tehran and Cairo also to investigate the situation at first hand. At the hub of the fact-finding stood General Maxwell in Cairo, who, as commander of USAFIME, had a newly assigned responsibility for American operations in the Persian Gulf. It was less on the highly placed dignitaries, nevertheless, than on the pick and shovel men, British and American, in the Persian Gulf, that the real job of fact-finding fell.
The final estimates on which action was based were gathered together by Colonel Shingler, largely on information received from British transportation authorities in the area. Shingler's tables were postulated on the use of all the Iranian ports and partial use of Basra in Iraq and Karachi in India for cargoes to be cleared through Iran. Against a current (August 1942) capacity of 189,000 long tons for all these ports, Shingler proposed a target of 399,500 tons for June 1943. Rail clearance, currently running at little more than 35,000 long tons, he thought could be increased to 180,000 in the same period under American operation. By providing trucking lines to haul 139,500 tons per month, he would bring total monthly inland clearance capacity to 319,500 tons. Deducting estimated essential requirements of the British military in Iran and the Iranian civilian economy, Shingler figured it would ultimately be possible to forward 241,000 long tons of supplies monthly to the USSR. This would provide enough capacity to meet the currently accepted goal of 200,000 long tons per month of Soviet-aid supplies via the southern route, but it must be kept in mind that Shingler did not believe that target could be met until June 1943, much too late to meet the immediate need for an alternate to the northern route.
(1) Motter, Persian Corridor, pp. 177-78, 190n. (2) Sherwood, Roosevelt and Hopkins, pp. 544, 600. (3) Leighton and Coakley, Global Logistics, p. 574.
In mid-August most of the interested parties-Harriman, Churchill, Spalding, Maxwell, Shingler, and British commanders in the Middle East-gathered at Cairo in a conclave that lasted several days. Using Shingler's estimates as their point of departure, but modifying them in several ways, they arrived at a general estimate and plan for action. This plan and estimate Maxwell forwarded to the War Department on 22 August. Excluding Shingler's figures for Basra and Karachi, it set the target for the Iranian ports at 261,000 long tons monthly. The monthly target for the railroad remained at 180,000 tons, but the trucking goal was expanded from 139,500 to 172,000 tons, making a total inland clearance target of 352,000 long tons monthly. To achieve these goals, Maxwell recommended that the U.S. Army take over the operation not only of the railway, but also of the ports-Khorramshahr, Bandar Shahpur, Bushire, and Tanuma-and operate a truck fleet to supplement that of the United Kingdom Commercial Corporation.
Troop requirements to meet these objectives were calculated to be 3 port battalions, 2 railway operating battalions, 1 engineer battalion, and 2 truck regiments-a total of approximately 8,365 men, all of whom, Maxwell said, had been included in the troop basis for the Middle East on a deferred priority. Materiel requirements, in addition to organizational equipment for the service troops, were set at 75 additional steam locomotives, 2,200 20-ton freight cars or their equivalent, and 7,200 trucks averaging seven tons in capacity. Maxwell's recommendations were contingent on receipt of "specific requests . . . from the British authorities." The specific request came from Winston Churchill to the President on the same day.
(1) Motter, Persian Corridor, pp. 180-90. (2) Leighton and Coakley, Global Logistics, pp. 574-75.
Churchill said:
I have delayed my reply until I could study the Trans-Persian situation on the spot. This I have now done both at Tehran and here, and have conferred with Averell, General Maxwell, General Spalding and their railway experts. The traffic on the Trans-Persian Railway is expected to reach three thousand tons a day for all purposes by the end of the year. We are all convinced that it ought to be raised to six thousand tons. Only in this way can we ensure an expanding flow of supplies to Russia while building up the military forces which we must move into North Persia to meet a possible German advance. To reach the higher figure, it will be necessary to increase largely the railway personnel and to provide additional quantities of rolling stock and technical equipment. Furthermore, the target will only be attained in reasonable time if enthusiasm and energy are devoted to the task and a high priority accorded its requirements. I therefore welcome and accept your most helpful proposal contained in your telegram, that the railway should be taken over, developed and operated by the United States Army; with the railroad should be included the ports of Khorramshahr and Bandar Shahpur. Your people will thus undertake the great task of opening up the Persian Corridor, which will carry primarily your supplies to Russia. All our people here agree on the benefits which would follow your approval of this suggestion. We should be unable to find the resources without your help and our burden in the Middle East would be eased by the release for use elsewhere of the British units now operating the railway. The railway and ports would be managed entirely by your people, though the allocation of traffic would have to be retained in the hands of the British military authorities for whom the railway is an essential channel of communication for operational purposes. I see no obstacle in this to harmonious working.
Harriman followed with a cable to the President the next day, strongly reinforcing the Prime Minister's arguments and Maxwell's recommendations.
Maxwell, Spalding, and he all agreed, Harriman said:
(a) that with proper management and personnel and with additional equipment the capacity of the railroad to Teheran can be increased to six thousand long tons a day, (b) that the British have not the resources or personnel to carry out this program even if we should supply the equipment, (c) that unless the United States Army undertakes the task the flow of supplies to Russia will dry up as the requirements of the British forces in the theatre increase, (d) that the importance of the development of the railroad to its maximum cannot be over-emphasized, (e) that the condition in the Prime Minister's cable of the British retaining control of traffic to be moved is reasonable, offers no practicable difficulty and should be accepted.
Msg, Churchill to Roosevelt, 22 Aug 42, quoted in Motter, Persian Corridor, p. 190. Msg, Harriman (signed Maxwell) to President, 22 Aug 42, CM-IN 8657, 23 Aug 42.
While placing his main emphasis on the railroad, Harriman also recommended the dispatch of the three port battalions and asked favorable action on the request for trucks and personnel to increase road transport, though he placed the last in a priority second to the railroad and the ports. On 25 August, the President turned both Churchill's and Harriman's cables over to General Marshall with a request that he have a plan drawn up to accomplish what was being proposed and give his judgment as to whether the United States should accede to the request. Marshall assigned the task to General Somervell's Services of Supply (SOS). Within the SOS primary responsibility fell on Col. D. O. Elliot, head of the Strategic Logistics Division, working under the general supervision of Brig. Gen. LeRoy Lutes, Assistant Chief of Staff for Operations, SOS. Somervell told his subordinates that he wanted to present the Chief of Staff with "a complete study in every respect . . . one that can be regarded as a model."
The resultant SOS Plan, presented by Lutes to Somervell on 4 September 1942, met this high standard in almost every respect. It brought all the earlier proposals on the railroad, ports, and trucking organization together in one single plan for a balanced and self-contained American service command in the Persian Gulf. This command was to be formed in the United States and shipped to Iran by increments to take over from Shingler's sparsely staffed mission, absorbing the latter in the process. Thus, while the SOS Plan was built on the recommendations drawn up at Cairo, it expanded the personnel and materiel recommendations contained in those recommendations considerably, producing a far more accurate estimate of what the project to develop the Persian Gulf would actually cost other efforts.
(1) Memo, Somervell for Lutes, 29 Aug 42, Hq Army Service Forces (ASF) Folder Operations. ASF records have been retired to National Archives. (2) MS Index to Hopkins Papers, V, Aid to Russia, Item 69. A copy of this index to Hopkins papers in the Hyde Park Library is in OCMH. (3) Motter, Persian Corridor, pp. 191-92. (1) Plan for the Operation of Certain Iranian Communications Facilities Between Persian Gulf Ports and Tehran by U.S. Army Forces, 3 Sep 42, Persian Gulf Command Folder 235 (hereafter cited as SOS Plan). The Persian Gulf Command folders (PGF), at present with Army records in Federal Records Center, Alexandria, Va., consist of some of the documents used by T. H. Vail Motter in preparation of the volume The Persian Corridor and Aid to Russia. (2) Control Division, ASF, folder of same title as above contains most of the papers used by Colonel Elliot in preparation of the plan, including memorandums from various persons who rendered advice and from chiefs of technical services. It is to be found with the rest of the ASF records in National Archives.
In order to provide for a balanced service command, troop requirements were expanded from the 8,365 in the Maxwell cable to 23,876. While 4,515 of these, road maintenance personnel, were placed in a deferred category, they were to prove in the end as necessary as any of the others. Though the target figures and the estimated numbers of trucks, rail cars, and locomotives remained the same in the SOS Plan as in the Cairo recommendations, the additional organizational equipment for the service troops vastly expanded the total amount of materiel required.
Meeting these requirements, the planners found, would be difficult. The pool of service troops available was small, and a large proportion of those activated had either been earmarked for BOLERO or would be necessary for TORCH. The production of heavy equipment-locomotives, rail cars, and large trucks-was limited, and, outside domestic requirements, most of that under production had been earmarked for the British. As usual, shipping was the most critical commodity of all. The SOS Plan noted that "all troop and cargo ships have been assigned missions [and] any new operations must be at the expense of other projects." If the project was to succeed it must be given a priority second only to the operational requirements of TORCH, and above those of the build-up for the invasion of the Continent in 1943.
Presupposing this high priority, the SOS planners proposed that of the 19,361 troops considered essential for early shipment, 8,969 could be made available by diversion from BOLERO, 8,002 from other troop units already activated (mainly for second priority objectives in Iran and North Africa), and 1,501 from new activations. Of the road maintenance troops in a delayed category, 1,503 would also have to come from the BOLERO troop basis, the rest from miscellaneous sources. One port battalion of 889 men was to be diverted from General Wheeler's Services of Supply in Karachi, where it was reportedly not doing port work. Provision of the locomotives and rail cars would also require diversions from other sources in the Middle East and India, but the major portion would have to come from new production or from the domestic railroads. Trucks presented the most difficult problem of all. It was thought that 500 of 10-ton capacity could be repossessed from a number turned over to the British under lend-lease and 600 of capacity unknown withdrawn from a stock at Karachi intended for shipment into China; but for the rest it would be necessary to substitute 2 1/2-ton cargo trucks in larger numbers for the trucks of 7-ton capacity requested. Shipping requirements for men and materials added up, according to Transportation Corps estimates, to a total of 471,000 ship tons. The SOS Plan provided for movement of 11,000 men on the West Point and Wakefield in late October, the rest on British troopships to be released from deployment of U.S. air forces to the Middle East in January 1943. Both movements represented diversions from BOLERO.
Cargo shipments should begin 1 October and continue through January at the rate of 110,000 tons, approximately 10 ships per month-again a diversion from BOLERO, but partially compensated by the fact that the shipping pool would be increased by release of cargo ships originally scheduled to sail over the northern route. The most difficult question of all was timing. Shingler had estimated that the final targets for port capacity and inland clearance could not be met before June 1943. Both General Spalding and Averell Harriman insisted that this target date could be moved forward to February 1943 and Spalding presented estimates to Somervell on this basis on his return to Washington. The British, remembering their own experience, were extremely dubious that more than half the target could be achieved by February and felt that June would be far more realistic. The SOS planners refused to commit themselves definitely but postulated a "material advancement" of the June target date set by Shingler. The SOS Plan for movement was geared to this "material advancement." Priorities were proposed as, first, rail operations; second, ports; and third, road operations. The 5,004 troops required for the railroads and the 5,016 for the ports (including miscellaneous service and headquarters elements necessary to complement them) could be taken care of in the first troop movement scheduled for October. The equipment for their operations could be made available and shipped in coordination with the troops. They should be in the theater and ready to take over operation of the ports and railroads by the end of the year. The 8,114 troops primarily for truck operations and in third priority, would follow in January and should be in the theater at least by early March. The heavy trucks, whose availability was most in question, or smaller substitutes for them, could probably be made available by this time. The essence of the conclusion dictated by General Somervell was that, if high enough priority was given to the movement of troops and supplies, the whole operation was feasible. He ended with the recommendation "that this plan be accepted as the basis of future operation of supply routes in the Persian Corridor." Memo, Maj Gen C. P. Gross, CofT, for Gen Somervell, 30 Aug 42, sub: Transportation Service for Persian Gulf. Control Div, ASF, folder on SOS Plan. (1) SOS Plan, par. 4. (2) Memo, Spalding for Somervell, 4 Sep 42, sub: Target Estimates of Persian Gulf Supply Routes, and Memo, Spalding for Elliot, 5 Sep 42, with Incl, Comments by Lt Col W. E. V. Abraham of British Middle East Command on American Estimates, both in Control Div, ASF, folder on SOS Plan. SOS Plan, Synopsis, pars. 7, 8. Somervell submitted the plan to the Chief of Staff with a draft cable for the President to send to the Prime Minister indicating his acceptance of the latter's proposal and his approval of a plan to put it into effect. But final approval of the plan was not to come through this channel. The Persian Gulf project involved both matters of combined strategy and division of military responsibility in the Middle East that required consideration by the Combined Chiefs of Staff. General Marshall therefore placed the plan before them, and it was they who rendered the final decision. In consideration of the plan before the Combined Staff Planners (CPS), the British had their opportunity to present their views. The British planners in general accepted the SOS Plan but they remained more pessimistic about the possible rate of development. 
They pointed to one glaring contradiction in the American calculations. Persian ports would be unable, during the months following, to handle shipments of supplies on the scale contemplated for the new American command "without cutting the scheduled Russian shipments and the essential civil and military maintenance commitments." They therefore insisted on a reduction in the schedule of these cargo shipments from ten to five ships per month at the outset, stipulating that it might be increased later at the discretion of the authorities on the spot. Beyond this, the only other revision in the plan proposed by the CPS was to add the barge port of Ahwaz to the list of facilities to be operated by the American command.
Having made what they considered necessary revisions in the plan, the CPS turned to the question of its strategic implications and the problem of division of responsibility between British and Americans in Iran. Its strategic implications were clear. It would "increase the dispersion of . . . U.S. military resources" and "divert personnel, equipment and ships that are at present set up for other theaters." The greatest effect would be on the build-up in the British Isles for invasion in 1943 and this effect would be felt most severely in the diversion of cargo shipping. The CPS noted:
Transportation required for this plan will reduce the number of sailings for BOLERO to the extent of about 2 1/2 times the number of sailings for the project. On the assumption that 44 cargo ship sailings are required to complete the move the cost to BOLERO would therefore be a total of 110 sailings during the period of the move. The longer turnaround to the Persian Gulf results in a proportionately larger quantity of shipping being removed from other military operations. The number of cargo sailings monthly may be increased in direct proportion to the reduction of ships allocated to the Persian Gulf for handling lend-lease to Russia. Personnel shipping in ships of 20 knots or better can be made available without interfering with present planned operations of higher priority.
CCS 109/1, Rpt by CPS, 22 Sep 42, title: Development of Persian Transportation Facilities.
The planners could see no alternative to accepting this cost. "If our shipping losses continue at their present excessive rate along the Northern Russian route," they noted, "it may become necessary to use the Persian Gulf route entirely." This statement had additional force in view of developments since July when the planning for American operation of Persian Gulf facilities had begun. After the two months' suspension during July and August, the British resumed the northern convoys in September using a very heavy naval escort, only to lose some thirteen cargo vessels out of forty. Neither this scale of escort nor this rate of loss could be sustained during the early stages of TORCH and the President and Prime Minister were forced to the decision that the convoys must be canceled again during October and November. While the Combined Staff Planners were weighing the Persian Gulf plan, Churchill and Roosevelt were pondering the question how to break the bad news of the new suspension of convoys to Stalin. "My persisting anxiety is Russia," wrote the British Prime Minister, "and I do not see how we can reconcile it with our consciences or with our interests to have no more PQ's till 1943, no offer to make joint plans for JUPITER [the invasion of Norway], no signs of a spring, summer or even autumn offensive in Europe."
It is not therefore surprising that the CPS recommended to the Combined Chiefs "favorable consideration" of the proposition that "the U.S. Army accept responsibility for developing and operating the transportation and port facilities in the Persian Corridor" in accordance with the SOS Plan. In making this recommendation, the CPS had to resolve finally the old question of the relative responsibilities of British and Americans for movement control. The general principle of Churchill's cable that the United States should operate the transport facilities subject to British allocation of traffic required some definition, and the first version of the plan presented by the British raised definite fears on the American side that they wished to control shipments from the United States as well as internal traffic through the corridor.
Ibid. (1) Msg 151, Prime Minister to President, 22 Sep 42, ABC 381 (7-25-42), Sec. 4-B. (2) Leighton and Coakley, Global Logistics, p. 583. CCS 109/1, 22 Sep 42.
American strategic planners, in General Marshall's Operations Division, never very enthusiastic about this diversion of American resources from their primary objective of a cross-Channel invasion, thought the United States should not undertake the project with responsibility so divided in the theater. They wished to give first priority to supplies for the USSR. But British counter-objections to this produced a compromise, if not satisfactory to all at least acceptable, in the tradition of nearly all Anglo-American wartime relations. The British were to continue to exercise strategic responsibility for the defense of the area against enemy attack and for security against internal disorders. In view of this responsibility, the British commander-in-chief of the Persia-Iraq Command would control "priority of traffic and allocation of freight" for movement from the Gulf ports northward. But, recognizing the primary objective of the United States as increasing and ensuring the uninterrupted flow of supplies to Russia, the CPS proposed this statement: "It is definitely understood that the British control of priorities and allocations must not be permitted to militate against attainment of such objective, subject always to the military requirements for preparing to meet a threat to the vital Persian Gulf oil areas." The U.S. commanding general in the Persian Gulf was granted the right of appeal through the Joint Chiefs of Staff to the Combined Chiefs of Staff on any British decision which he thought would prejudice the flow of supplies to the USSR. General priority of movement was stated as follows: "Over and above the minimum requirements for British forces consistent with their combat mission, and essential civilian needs, Russian supplies must have highest priority." These provisions meant that the British control over allocation of freight would not be exercised except in case of imminent threat of a German attack or an Axis-inspired uprising. In normal circumstances there would be fixed allocations for British military forces and for civilian needs which would be transported as first priority, and all additional capacity would go to the movement of Soviet-aid supplies.
This delicate matter decided, the Combined Chiefs approved the CPS recommendations on 22 September 1942 without any recorded discussion. In rendering its decision, the CCS made no definite stipulation as to the priority to be given the Persian Gulf project, apparently on the theory that this must be left to the President.
They had, nevertheless, in mid-August adjusted their shipping priorities to give Soviet aid cargoes going via the southern route priority equal to that of military shipments for TORCH and the Middle East and above those for the BOLERO build-up. [(1) All quotations above from CCS 109/1. (2) Memo, Elliot for Lutes, 4 Sep 42, OPD 334.8, CCS, Case 16. (3) CCS 109, 2 Sep 42, title: Development of Persian Transportation Facilities. (4) Research Draft prepared by OPD Hist. Unit, USSR in U.S.-British Plans and Operations in 1942, pp. 83-85, MS, OCMH.] A Presidential directive giving virtually the same priority to the project for developing Iranian facilities to handle these shipments was therefore almost a foregone conclusion. It was forthcoming on 2 October 1942 when the President instructed the Secretary of War that "the project for the operation and enlargement of the Persian Corridor be given sufficient priority and support in the form of men, equipment and ships to insure its early and effective accomplishment." With this directive the SOS Plan as modified by the CCS decision was put into action.

Persian Gulf facilities under American operation eventually provided an adequate substitute for the northern route for delivery of supplies to the USSR, but this development came much later than the planners in August and September 1942 had hoped and much too late to permit fulfillment of American commitments under the Second Protocol. Shipments of both supplies and personnel for the Persian Gulf Command were delayed well beyond the schedule proposed in the SOS Plan. Mistakes were made both in the planning and in the early operations. A trucking fleet that would carry anything like the 172,000 tons proposed in the SOS Plan was never sent. The transition from British to American operation took longer than planned, and the Americans also took longer to make their operation effective.

Under British operation, improvement was slow during the latter half of 1942. Approximately 40,000 long tons of Soviet aid were delivered through the Corridor in September 1942, only 51,000 in January 1943. Total tonnage on the Iranian State Railway expanded only from 36,000 in August 1942 to 52,000 in January 1943. Between January and May 1943, the Americans assumed operation step by step, and the turnover was generally complete by 1 May. During this transition period total tonnage delivered to the Russians expanded to 101,000 in April, while the railroad carried 65,000 tons in March. Under complete American operation, the figure for tonnage delivered to the USSR was nearly doubled by September 1943, reaching 199,000, and the railroad achieved a capacity of 175,000 tons in October. This achievement of the target loads came six months after the date predicted by Harriman and Spalding and three months after the more pessimistic goal proposed by Shingler in August 1942. After October 1943 the Persian Gulf was in a position to forward even more cargo than it proved necessary to send by that route. In the peak month of July 1944 some 282,097 long tons of supplies were delivered to the USSR through the Corridor. [(1) Memo, President for SW, 2 Oct 42, AG 400.3295 (9-1-42), Sec. 12. (2) Motter, Persian Corridor, p. 180. (3) Leighton and Coakley, Global Logistics, pp. 578, 584.] The effects of the ultimate success achieved by the American command are clearly apparent in the figures on performance under the various protocols.
On the first and second, deliveries were only about 75 percent of the material promised, while on the third the United States exceeded its promises by 30 percent and on the fourth had already met 95 percent of its commitments when the war in Europe came to an end on 8 May 1945 and the schedules were revised. True, a shift in Soviet priorities after Stalingrad from military equipment to civilian-type supplies, which made possible a far greater use of the Pacific route during this later period, also influenced the result, but in large measure it was the opening of the Persian Gulf that made possible so high a scale of shipments, with the northern route intermittently closed throughout the war.

Despite the delays in fulfillment of the goals, then, the decision must be evaluated as a sound one if the rationale of the program of aid to the USSR is accepted. The cost involved to the build-up for invasion of the Continent was not a determining factor in postponing that operation until mid-1944. By the time the decision was finally rendered, there were so many other diversions and dispersions of American resources under way or in prospect that the Persian Corridor project was simply one of the minor factors contributing to the delay in concentration in the British Isles.

The principal criticism of the decision, then, must be that it was belated, unduly slow both in the making and in execution. It was reasonably apparent in October 1941 that the Persian Corridor would have to be extensively developed if supply commitments to the USSR were to be met, and the crisis on the northern convoy route in April and May 1942 made it doubly certain. For a long period there was a clear contradiction in American policy on aid to the USSR. Supplies and the ships to carry them were accorded almost the highest priority possible, while the means of developing the only secure route for delivery of the supplies were accorded one of the lowest. This situation had reached the point, by July 1942 when the northern convoys had to be suspended, where only a decision at the highest level could resolve it. The President made that decision, ending the contradiction in policy. Yet the necessity for carrying out, almost de novo, a survey of the requirements needed to perform the task and the means of meeting them delayed even the beginning of fulfillment of the decision for almost two months. It took another year for its complete effects to be felt. [(1) Estimate for August 1942 is based on Msg, AMSIR to AGWAR, 12 Oct 42, CM-IN 05027. All other figures are from Motter, Persian Corridor, App. A, Tables 4, 5. (2) Leighton and Coakley, Global Logistics, pp. 577-83. (1) State Dept Rpt on War Aid to USSR, 28 Nov 45. (2) Leighton and Coakley, Global Logistics, pp. 583-97.]

The lesson, then, appears to be that the plans for development of any line of communications must be prepared well in advance and a decision taken as early as possible on the means to fulfill them. Otherwise, amidst the competing claims of a global conflict, the relatively small requirements of such a project tend to get lost in the shuffle of major undertakings despite the importance they may have for over-all strategy. In 1942 the importance of the Persian Corridor project for over-all strategy was not inconsiderable. The need for speed must be evaluated in terms of developments on the Russian front in that year. While the Persian Gulf decision was in the making, the Germans were moving steadily forward to their rendezvous with destiny at Stalingrad.
If the Persian Gulf facilities had been ready, the amount of British and American supplies reaching the Russians during this critical battle would have been much greater. As it was, the Russians won with what they had and what the British and Americans did in fact contribute. But had the battle gone the other way, British and American leaders might well have had good cause to regret the fact that the decision to make a concentrated effort to develop the southern route had not been made earlier. ROBERT W. COAKLEY, Historian with OCMH since 1948. William and Mary College, University of Virginia, Ph.D. Taught: University of Virginia, Tulane University, University of Arkansas, Fairmont State College. Historian, Headquarters, European Theater of Operations, U.S. Army, and U.S. Forces, European Theater, 1945-46. Coauthor: Global Logistics and Strategy, 1940-1943 (Washington, 1955) and Global Logistics and Strategy 1943-1945 (in preparation), UNITED STATES ARMY IN WORLD WAR II.
http://www.history.army.mil/books/70-7_09.htm
Boron-based chemical compounds rarely form simple structures. Boron is an electron-deficient element, and, as electrons are the glue that holds compounds together, this leads to some unusual bonding behavior. Using a new method developed in Japan to link two boron atoms together by a regular, single covalent bond, the element can be forced into more conventional behavior. The method was developed by a team of researchers including Yoshiaki Shoji, Tsukasa Matsuo, and Kohei Tamao at the RIKEN Advanced Science Institute, Wako.

The compound that the researchers made features two boron atoms held together by a shared pair of electrons. For other elements (carbon, for example) that would be a typical bond, but electron-poor boron tends to prefer a more complex arrangement. In the boron compound diborane (B2H6), for example, two boron atoms are bridged by hydrogen atoms, with each boron-hydrogen-boron bond sharing a single pair of electrons across three atoms rather than the usual two. Theory has long predicted that by pumping extra electrons into a compound such as diborane, the boron-hydrogen-boron structure should break down to form a boron-boron single bond. Until now, however, all attempts to make and isolate such a structure had failed, instead generating clusters or single boron species.

Matsuo and Tamao's strategy for generating the boron-boron bond was to start with a borane precursor in which each boron atom was fitted with a bulky side-group known as an Eind group. The researchers suspected that previous attempts had probably succeeded in generating the boron-boron single bond but had failed to protect that structure from quickly falling apart through over-reaction. Using the bulky side-groups, they were able to block these over-reaction processes and successfully isolate the desired boron-boron single bond (Fig. 1).

"Having discovered a new way to make the boron-boron bond, the next step will be to assess its chemistry and reactivity, and to explore related structures," says Shoji. The bond has already proved to be relatively stable: the team has shown that, if protected from air and moisture, the boron-boron compound can be stored for months at ambient temperature. It can also be converted into a three-membered ring, in which a bridging hydrogen atom is the third member, forming a molecule with potentially useful properties. "We think that the hydrogen-bridged boron-boron bond has a double-bond character," says Matsuo. "We would like to explore the new reaction chemistry of multiply bonded boron species."

More information: Shoji, Y., et al. Boron-boron σ-bond formation by two-electron reduction of a H-bridged dimer of monoborane. Journal of the American Chemical Society 133, 11058-11061 (2011).
http://phys.org/news/2011-10-unprecedented-formation-boron-boron-covalent-bond.html
In this video adapted from Need to Know, artist Steve Brodner uses simple drawings to compare the size of the 2010 BP oil spill to more familiar things, like a football field, a shopping mall, the state of Texas, and Earth's moon. On March 24, 1989, the oil tanker Exxon Valdez grounded on a reef in Prince William Sound, Alaska, spilling more than 11 million gallons of crude oil that eventually spread over approximately 11,000 square miles of ocean. Yet the size of this spill pales in comparison to the 2010 BP oil spill—also referred to as the Deepwater Horizon spill—which poured about 185 million gallons of oil into the Gulf of Mexico, covering over 78,000 square miles. The volume of the BP oil spill would fill 145 typical-sized gymnasiums; the Exxon Valdez, only 9. Even so, the Exxon Valdez spill had a huge environmental impact, and may offer some understanding of the eventual ecological toll of the BP spill. To clean up the oil in Prince William Sound after Exxon Valdez, workers tried three methods: burning, mechanical cleaning (such as skimmers to collect and remove the oil), and chemical dispersants. Despite the extensive cleanup effort, only a small percentage of the oil was recovered. According to the National Oceanic and Atmospheric Administration (NOAA), more than 26,000 gallons of oil remained in shoreline soil nearly a decade after the accident. Both the long- and short-term effects of the oil spill were enormous. Hundreds of thousands of animals died within the first few months. It is estimated that 100,000 to 250,000 seabirds perished, along with sea otters, harbor seals, bald eagles, orcas, and billions of salmon and herring eggs. Effects from the spill continue to be felt years after the accident. Some animals, such as sea otters, showed higher rates of mortality in the years following the spill, likely a result of ingesting contaminants, and reduced populations of many other ocean animals have been observed. Scientists estimate that it may take up to 30 years for habitats to recover. Based on this history, what can we expect from the BP oil spill? Mainly that it may take decades to assess the full impact of this disaster. As demonstrated by Exxon Valdez, oil can persist in the environment for many years, and the long-term impacts on fish and wildlife cannot be overlooked. In the Gulf of Mexico, thousands of species of fish, birds, crustaceans, sea turtles, and marine mammals are at risk from the oil itself, as well as from toxic dispersants and loss of habitat. It is still unclear what will happen in the coming decades and how this will affect the Gulf ecosystems and the health and economic welfare of humans who depend upon it.
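The spill-size comparisons above can be checked with simple division. The sketch below does so; the volume assumed for a "typical-sized gymnasium" (about 1.28 million gallons) is not stated in the source and is inferred here from the article's own figures, so it is an illustrative assumption only.

```python
# Back-of-the-envelope check of the spill comparisons (gym volume is an assumed figure).
BP_SPILL_GALLONS = 185_000_000       # 2010 BP / Deepwater Horizon spill
EXXON_VALDEZ_GALLONS = 11_000_000    # 1989 Exxon Valdez spill
GYM_VOLUME_GALLONS = 1_280_000       # assumed volume of a "typical-sized gymnasium"

print(BP_SPILL_GALLONS / EXXON_VALDEZ_GALLONS)           # ~16.8 times the Exxon Valdez spill
print(round(BP_SPILL_GALLONS / GYM_VOLUME_GALLONS))      # ~145 gymnasiums, as in the video
print(round(EXXON_VALDEZ_GALLONS / GYM_VOLUME_GALLONS))  # ~9 gymnasiums for the Exxon Valdez
```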
http://www.teachersdomain.org/resource/envh10.health.oilspillsize/
Late Middle Ages

The Late Middle Ages was the period of European history generally comprising the 14th and 15th centuries (c. 1300–1500). The Late Middle Ages followed the High Middle Ages and preceded the onset of the early modern era (and, in much of Europe, the Renaissance).

Around 1300, centuries of prosperity and growth in Europe came to a halt. A series of famines and plagues, such as the Great Famine of 1315–1317 and the Black Death, reduced the population to around half of what it was before the calamities. Along with depopulation came social unrest and endemic warfare. France and England experienced serious peasant uprisings: the Jacquerie, the Peasants' Revolt, as well as over a century of intermittent conflict in the Hundred Years' War. To add to the many problems of the period, the unity of the Catholic Church was shattered by the Western Schism. Collectively these events are sometimes called the Crisis of the Late Middle Ages.

Despite these crises, the 14th century was also a time of great progress within the arts and sciences. Following a renewed interest in ancient Greek and Roman texts that took root in the High Middle Ages, the Italian Renaissance began. The absorption of Latin texts had started before the 12th Century Renaissance through contact with Arabs during the Crusades, but the availability of important Greek texts accelerated with the capture of Constantinople by the Ottoman Turks, when many Byzantine scholars had to seek refuge in the West, particularly Italy. Combined with this influx of classical ideas was the invention of printing, which facilitated dissemination of the printed word and democratized learning. These two things would later lead to the Protestant Reformation.

Toward the end of the period, an era of discovery began (the Age of Discovery). The growth of the Ottoman Empire, culminating in the Fall of Constantinople in 1453, eroded the last remnants of the Byzantine Empire and cut off trading possibilities with the east. Europeans were forced to discover new trading routes, as was the case with Columbus's travel to the Americas in 1492 and Vasco da Gama's voyage around Africa to India in 1498. Their discoveries strengthened the economy and power of European nations.

The changes brought about by these developments have caused many scholars to see the period as leading to the end of the Middle Ages and the beginning of modern history and early modern Europe. However, the division will always be a somewhat artificial one for scholars, since ancient learning was never entirely absent from European society. As such there was developmental continuity between the ancient age (via classical antiquity) and the modern age. Some historians, particularly in Italy, prefer not to speak of the Late Middle Ages at all, but rather see the high period of the Middle Ages transitioning to the Renaissance and the modern era.

Historiography and periodization

The term "Late Middle Ages" refers to one of the three periods of the Middle Ages, the others being the Early Middle Ages and the High Middle Ages. Leonardo Bruni was the first historian to use tripartite periodization in his History of the Florentine People (1442). Flavio Biondo used a similar framework in Decades of History from the Deterioration of the Roman Empire (1439–1453). Tripartite periodization became standard after the German historian Christoph Cellarius published Universal History Divided into an Ancient, Medieval, and New Period (1683).
For 18th century historians studying the 14th and 15th centuries, the central theme was the Renaissance, with its rediscovery of ancient learning and the emergence of an individual spirit. The heart of this rediscovery lies in Italy, where, in the words of Jacob Burckhardt: "Man became a spiritual individual and recognized himself as such" (The Civilization of the Renaissance in Italy, 1860). This proposition was later challenged, and it was argued that the 12th century was a period of greater cultural achievement. As economic and demographic methods were applied to the study of history, the trend was increasingly to see the late Middle Ages as a period of recession and crisis. Belgian historian Henri Pirenne introduced the now common subdivision of Early, High and Late Middle Ages in the years around World War I. Yet it was his Dutch colleague Johan Huizinga who was primarily responsible for popularising the pessimistic view of the Late Middle Ages, with his book The Autumn of the Middle Ages (1919). To Huizinga, whose research focused on France and the Low Countries rather than Italy, despair and decline were the main themes, not rebirth. Modern historiography on the period has reached a consensus between the two extremes of innovation and crisis. It is now (generally) acknowledged that conditions were vastly different north and south of the Alps, and "Late Middle Ages" is often avoided entirely within Italian historiography. The term "Renaissance" is still considered useful for describing certain intellectual, cultural or artistic developments, but not as the defining feature of an entire European historical epoch. The period from the early 14th century up until – sometimes including – the 16th century, is rather seen as characterised by other trends: demographic and economic decline followed by recovery, the end of western religious unity and the subsequent emergence of the nation state, and the expansion of European influence onto the rest of the world. The limits of Christian Europe were still being defined in the 14th and 15th centuries. While the Grand Duchy of Moscow was beginning to repel the Mongols, and the Iberian kingdoms completed the Reconquista of the peninsula and turned their attention outwards, the Balkans fell under the dominance of the Ottoman Empire. Meanwhile, the remaining nations of the continent were locked in almost constant international or internal conflict. The situation gradually led to the consolidation of central authority, and the emergence of the nation state. The financial demands of war necessitated higher levels of taxation, resulting in the emergence of representative bodies – most notably the English Parliament. The growth of secular authority was further aided by the decline of the papacy with the Western Schism, and the coming of the Protestant Revolution. After the failed union of Sweden and Norway of 1319–1365, the pan-Scandinavian Kalmar Union was instituted in 1397. The Swedes were reluctant members of the Danish-dominated union from the start. In an attempt to subdue the Swedes, King Christian II of Denmark had large numbers of the Swedish aristocracy killed in the Stockholm Bloodbath of 1520. Yet this measure only led to further hostilities, and Sweden broke away for good in 1523. Norway, on the other hand, became an inferior party of the union, and remained united with Denmark until 1814. Iceland benefited from its relative isolation, and was the only Scandinavian country not struck by the Black Death. 
Meanwhile, the Norwegian colony on Greenland died out, probably under extreme weather conditions in the 15th century. These conditions might have been the effect of the Little Ice Age.

The death of Alexander III of Scotland in 1286 threw the country into a succession crisis, and the English king, Edward I, was brought in to arbitrate. When Edward claimed overlordship over Scotland, this led to the Wars of Scottish Independence. The English were eventually defeated, and the Scots were able to develop a stronger state under the Stewarts.

From 1337, England's attention was largely directed towards France in the Hundred Years' War. Henry V's victory at the Battle of Agincourt in 1415 briefly paved the way for a unification of the two kingdoms, but his son Henry VI soon squandered all previous gains. The loss of France led to discontent at home, and almost immediately upon the end of the war in 1453 followed the dynastic struggles of the Wars of the Roses (c. 1455–1485), involving the rival dynasties of Lancaster and York. The war ended in the accession of Henry VII of the Tudor family, who could continue the work started by the Yorkist kings of building a strong, centralized monarchy. While England's attention was thus directed elsewhere, the Hiberno-Norman lords in Ireland were becoming gradually more assimilated into Irish society, and the island was allowed to develop virtual independence under English overlordship.

The French House of Valois, which followed the House of Capet in 1328, was at its outset virtually marginalized in its own country, first by the English invading forces of the Hundred Years' War, later by the powerful Duchy of Burgundy. The appearance of Joan of Arc on the scene changed the course of the war in favour of the French, and the initiative was carried further by King Louis XI. Meanwhile, Charles the Bold, Duke of Burgundy, met resistance in his attempts to consolidate his possessions, particularly from the Swiss Confederation formed in 1291. When Charles was killed in the Burgundian Wars at the Battle of Nancy in 1477, the Duchy of Burgundy was reclaimed by France. At the same time, the County of Burgundy and the wealthy Burgundian Netherlands came into the Holy Roman Empire under Habsburg control, setting up conflict for centuries to come.

Bohemia prospered in the 14th century, and the Golden Bull of 1356 made the king of Bohemia first among the imperial electors, but the Hussite revolution threw the country into crisis. The Holy Roman Empire passed to the Habsburgs in 1438, where it remained until the Empire's dissolution in 1806. Yet in spite of the extensive territories held by the Habsburgs, the Empire itself remained fragmented, and much real power and influence lay with the individual principalities. Financial institutions, such as the Hanseatic League and the Fugger family, also held great power, both on an economic and a political level.

The kingdom of Hungary experienced a golden age during the 14th century. In particular the reigns of the Angevin kings Charles Robert (1308–42) and his son Louis the Great (1342–82) were marked by greatness. The country grew wealthy as the main European supplier of gold and silver. Louis the Great led successful campaigns from Lithuania to Southern Italy and from Poland to Northern Greece. He had the greatest military potential of the 14th century, with enormous armies often numbering over 100,000 men.
Meanwhile Poland's attention was turned eastwards, as the union with Lithuania created an enormous entity in the region. The union, and the conversion of Lithuania, also marked the end of paganism in Europe. However, Louis did not leave a son as heir after his death in 1382. Instead, he named as his heir the young prince Sigismund of Luxemburg, who was 11 years old. The Hungarian nobility did not accept his claim, and the result was an internal war. Sigismund eventually achieved total control of Hungary and established his court in Buda and Visegrád. Both palaces were rebuilt and improved, and were considered among the richest of their time in Europe. Inheriting the throne of Bohemia and the Holy Roman Empire, Sigismund continued to conduct his politics from Hungary, but he was kept busy fighting the Hussites and the Ottoman Empire, which was becoming a serious menace to Europe at the beginning of the 15th century. King Matthias Corvinus of Hungary was known to hold the biggest mercenary army of his time (the Black Army of Hungary), which he used to conquer Bohemia and Austria and to fight off the Ottoman threat. The glory of the kingdom ended in the early 16th century, when King Louis II of Hungary was killed at the Battle of Mohács in 1526 against the Ottoman Empire. Hungary then fell into a serious crisis and was invaded, ending not only its significance in central Europe but its medieval era with it.

The 13th century had seen the fall of the state of Kievan Rus', in the face of the Mongol invasion. In its place would eventually emerge the Grand Duchy of Moscow, which won a great victory against the Golden Horde at the Battle of Kulikovo in 1380. The victory did not end Tartar rule in the region, however, and its immediate beneficiary was Lithuania, which extended its influence eastwards. It was under the reign of Ivan III, the Great (1462–1505), that Moscow finally became a major regional power, and the annexation of the vast Republic of Novgorod in 1478 laid the foundations for a Russian national state. After the Fall of Constantinople in 1453 the Russian princes started to see themselves as the heirs of the Byzantine Empire. They eventually took on the imperial title of Tsar, and Moscow was described as the Third Rome.

Balkans and Byzantines

The Byzantine Empire had for a long time dominated the eastern Mediterranean in politics and culture. By the 14th century, however, it had almost entirely collapsed into a tributary state of the Ottoman Empire, centred on the city of Constantinople and a few enclaves in Greece. With the Fall of Constantinople in 1453, the Byzantine Empire was permanently extinguished. The Bulgarian Empire was in decline by the 14th century, and the ascendancy of Serbia was marked by the Serbian victory over the Bulgarians in the Battle of Velbazhd in 1330. By 1346, the Serbian king Stefan Dušan had been proclaimed emperor. Yet Serbian dominance was short-lived; the coalition of Balkan armies led by the Serbs was defeated by the Ottomans at the Battle of Kosovo in 1389, where most of the Serbian nobility were killed and the south of the country came under Ottoman occupation, like Bulgaria before it. Serbia fell in 1459, Bosnia in 1463, and Albania was finally conquered in 1479, a few years after the death of Skanderbeg. Belgrade, then a Hungarian domain, was the last Balkan city to fall under Ottoman rule, in 1521. By the end of the medieval period, the entire Balkan peninsula was annexed by, or became vassal to, the Ottomans.
Avignon was the seat of the papacy from 1309 to 1376. With the return of the Pope to Rome in 1378, the Papal State developed into a major secular power, culminating in the morally corrupt papacy of Alexander VI. Florence grew to prominence amongst the Italian city-states through financial business, and the dominant Medici family became important promoters of the Renaissance through their patronage of the arts. Also other city states in northern Italy expanded their territories and consolidated their power, primarily Milan and Venice. The War of the Sicilian Vespers had by the early 14th century divided southern Italy into an Aragon Kingdom of Sicily and an Anjou Kingdom of Naples. In 1442, the two kingdoms were effectively united under Aragonese control. The 1469 marriage of Isabella I of Castile and Ferdinand II of Aragon and 1479 death of John II of Aragon led to the creation of modern-day Spain. In 1492, Granada was captured from the Moors, thereby completing the Reconquista. Portugal had during the 15th century – particularly under Henry the Navigator – gradually explored the coast of Africa, and in 1498, Vasco da Gama found the sea route to India. The Spanish monarchs met the Portuguese challenge by financing Columbus’s attempt to find the western sea route to India, leading to the discovery of America in the same year as the capture of Granada. Late Medieval European society Around 1300–1350 the Medieval Warm Period gave way to the Little Ice Age. The colder climate resulted in agricultural crises, the first of which is known as the Great Famine of 1315-1317. The demographic consequences of this famine, however, were not as severe as those of the plagues of the later century, particularly the Black Death. Estimates of the death rate caused by this epidemic range from one third to as much as sixty percent. By around 1420, the accumulated effect of recurring plagues and famines had reduced the population of Europe to perhaps no more than a third of what it was a century earlier. The effects of natural disasters were exacerbated by armed conflicts; this was particularly the case in France during the Hundred Years' War. As the European population was severely reduced, land became more plentiful for the survivors, and labour consequently more expensive. Attempts by landowners to forcibly reduce wages, such as the English 1351 Statute of Laborers, were doomed to fail. These efforts resulted in nothing more than fostering resentment among the peasantry, leading to rebellions such as the French Jacquerie in 1358 and the English Peasants' Revolt in 1381. The long-term effect was the virtual end of serfdom in Western Europe. In Eastern Europe, on the other hand, landowners were able to exploit the situation to force the peasantry into even more repressive bondage. The upheavals caused by the Black Death left certain minority groups particularly vulnerable, especially the Jews. The calamities were often blamed on this group, and anti-Jewish pogroms were carried out all over Europe; in February 1349, 2,000 Jews were murdered in Strasbourg. Also the state was guilty of discrimination against the Jews, as monarchs gave in to the demands of the people, the Jews were expelled from England in 1290, from France in 1306, from Spain in 1492 and from Portugal in 1497. While the Jews were suffering persecution, one group that probably experienced increased empowerment in the Late Middle Ages was women. 
The great social changes of the period opened up new possibilities for women in the fields of commerce, learning and religion. Yet at the same time, women were also vulnerable to incrimination and persecution, as belief in witchcraft increased. Up until the mid-14th century, Europe had experienced a steadily increasing urbanisation. Cities were of course also decimated by the Black Death, but the urban areas' role as centres of learning, commerce and government ensured continued growth. By 1500 Venice, Milan, Naples, Paris and Constantinople probably had more than 100,000 inhabitants. Twenty-two other cities were larger than 40,000; most of these were to be found in Italy and the Iberian peninsula, but there were also some in France, the Empire, the Low Countries plus London in England. Through battles such as Courtrai (1302), Bannockburn (1314), and Morgarten (1315), it became clear to the great territorial princes of Europe that the military advantage of the feudal cavalry was lost, and that a well equipped infantry was preferable. Through the Welsh Wars the English became acquainted with, and adopted the highly efficient longbow. Once properly managed, this weapon gave them a great advantage over the French in the Hundred Years' War. The introduction of gunpowder affected the conduct of war significantly. Though employed by the English as early as the Battle of Crécy in 1346, firearms initially had little effect in the field of battle. It was through the use of cannons as siege weapons that major change was brought about; the new methods would eventually change the architectural structure of fortifications. Changes also took place within the recruitment and composition of armies. The use of the national or feudal levy was gradually replaced by paid troops of domestic retinues or foreign mercenaries. The practice was associated with Edward III of England and the condottieri of the Italian city-states. All over Europe, Swiss soldiers were in particularly high demand. At the same time, the period also saw the emergence of the first permanent armies. It was in Valois France, under the heavy demands of the Hundred Years' War, that the armed forces gradually assumed a permanent nature. Parallel to the military developments emerged also a constantly more elaborate chivalric code of conduct for the warrior class. This new-found ethos can be seen as a response to the diminishing military role of the aristocracy, and gradually it became almost entirely detached from its military origin. The spirit of chivalry was given expression through the new (secular) type of chivalric orders; the first of which was the Order of St. George founded by Charles I of Hungary in 1325, the best known probably the English Order of the Garter, founded by Edward III in 1348. Christian conflict and reform The Papal Schism The French crown's increasing dominance over the Papacy culminated in the transference of the Holy See to Avignon in 1309. When the Pope returned to Rome in 1377, this led to the election of different popes in Avignon and Rome, resulting in the Great Schism (1378–1417). The Schism divided Europe along political lines; while France, her ally Scotland and the Spanish kingdoms supported the Avignon Papacy, France's enemy England stood behind the Pope in Rome, together with Portugal, Scandinavia and most of the German princes. At the Council of Constance (1414–1418), the Papacy was once more united in Rome. 
Even though the unity of the Western Church was to last for another hundred years, and though the Papacy was to experience greater material prosperity than ever before, the Great Schism had done irreparable damage. The internal struggles within the Church had impaired her claim to universal rule, and promoted anti-clericalism among the people and their rulers, paving the way for reform movements. Though many of the events were outside the traditional time-period of the Middle Ages, the end of the unity of the Western Church (the Protestant Reformation) was one of the distinguishing characteristics of the medieval period.

The Catholic Church had long fought against heretical movements, but in the Late Middle Ages it started to experience demands for reform from within. The first of these came from the Oxford professor John Wyclif in England. Wyclif held that the Bible should be the only authority in religious questions, and he spoke out against transubstantiation, celibacy and indulgences. In spite of influential supporters among the English aristocracy, such as John of Gaunt, the movement was not allowed to survive. Though Wyclif himself was left unmolested, his supporters, the Lollards, were eventually suppressed in England.

Richard II of England's marriage to Anne of Bohemia established contacts between the two nations and brought Lollard ideas to this part of Europe. The teachings of the Czech priest Jan Hus were based on those of John Wyclif, yet his followers, the Hussites, were to have a much greater political impact than the Lollards. Hus gained a great following in Bohemia, and in 1414, he was requested to appear at the Council of Constance to defend his cause. When he was burned as a heretic in 1415, it caused a popular uprising in the Czech lands. The subsequent Hussite Wars fell apart due to internal quarrels and did not result in religious or national independence for the Czechs, but both the Catholic Church and the German element within the country were weakened.

Martin Luther, a German monk, started the German Reformation by posting the 95 theses on the castle church of Wittenberg on October 31, 1517. The immediate provocation behind the act was Pope Leo X's renewal of the indulgence for the building of the new St. Peter's Basilica in 1514. Luther was challenged to recant his heresy at the Diet of Worms in 1521. When he refused, he was placed under the ban of the Empire by Charles V. Receiving the protection of Frederick the Wise, he was then able to translate the Bible into German. To many secular rulers, the Protestant Reformation was a welcome opportunity to expand their wealth and influence. The Catholic Church met the challenges of the reforming movements with what has been called the Catholic or Counter-Reformation. Europe became split into a northern Protestant and a southern Catholic part, resulting in the Religious Wars of the 16th and 17th centuries.

Trade and commerce

The increasingly dominant position of the Ottoman Empire in the eastern Mediterranean presented an impediment to trade for the Christian nations of the west, who in turn started looking for alternatives. Portuguese and Spanish explorers found new trade routes – south of Africa to India, and across the Atlantic Ocean to America. As Genoese and Venetian merchants opened up direct sea routes with Flanders, the Champagne fairs lost much of their importance.
At the same time, English wool export shifted from raw wool to processed cloth, resulting in losses for the cloth manufacturers of the Low Countries. In the Baltic and North Sea, the Hanseatic League reached the peak of their power in the 14th century, but started going into decline in the fifteenth. In the late 13th and early 14th centuries, a process took place – primarily in Italy but partly also in the Empire – that historians have termed a 'commercial revolution'. Among the innovations of the period were new forms of partnership and the issuing of insurance, both of which contributed to reducing the risk of commercial ventures; the bill of exchange and other forms of credit that circumvented the canonical laws for gentiles against usury, and eliminated the dangers of carrying bullion; and new forms of accounting, in particular double-entry bookkeeping, which allowed for better oversight and accuracy. With the financial expansion, trading rights became more jealously guarded by the commercial elite. Towns saw the growing power of guilds, while on a national level special companies would be granted monopolies on particular trades, like the English wool Staple. The beneficiaries of these developments would accumulate immense wealth. Families like the Fuggers in Germany, the Medicis in Italy, the de la Poles in England, and individuals like Jacques Coeur in France would help finance the wars of kings, and achieve great political influence in the process. Though there is no doubt that the demographic crisis of the 14th century caused a dramatic fall in production and commerce in absolute terms, there has been a vigorous historical debate over whether the decline was greater than the fall in population. While the older orthodoxy was that the artistic output of the Renaissance was a result of greater opulence, more recent studies have suggested that there might have been a so-called 'depression of the Renaissance'. In spite of convincing arguments for the case, the statistical evidence is simply too incomplete that a definite conclusion can be made. Arts and sciences In the 14th century, the predominant academic trend of scholasticism was challenged by the humanist movement. Though primarily an attempt to revitalise the classical languages, the movement also led to innovations within the fields of science, art and literature, helped on by impulses from Byzantine scholars who had to seek refuge in the west after the Fall of Constantinople in 1453. In science, classical authorities like Aristotle were challenged for the first time since antiquity. Within the arts, humanism took the form of the Renaissance. Though the 15th century Renaissance was a highly localised phenomenon – limited mostly to the city states of northern Italy – artistic developments were taking place also further north, particularly in the Netherlands. Philosophy, science and technology The predominant school of thought in the 13th century was the Thomistic reconciliation of the teachings of Aristotle with Christian theology. The Condemnation of 1277, enacted at the University of Paris, placed restrictions on ideas that could be interpreted as heretical; restrictions that had implication for Aristotelian thought. An alternative was presented by William of Ockham, who insisted that the world of reason and the world of faith had to be kept apart. Ockham introduced the principle of parsimony – or Occam's razor – whereby a simple theory is preferred to a more complex one, and speculation on unobservable phenomena is avoided. 
This new approach liberated scientific speculation from the dogmatic restraints of Aristotelian science, and paved the way for new approaches. Particularly within the field of theories of motion great advances were made, when such scholars as Jean Buridan, Nicole Oresme and the Oxford Calculators challenged the work of Aristotle. Buridan developed the theory of impetus as the cause of the motion of projectiles, which was an important step towards the modern concept of inertia. The works of these scholars anticipated the heliocentric worldview of Nicolaus Copernicus. Certain technological inventions of the period – whether of Arab or Chinese origin, or unique European innovations – were to have great influence on political and social developments, in particular gunpowder, the printing press and the compass. The introduction of gunpowder to the field of battle affected not only military organisation, but helped advance the nation state. Gutenberg's movable type printing press made possible not only the Reformation, but also a dissemination of knowledge that would lead to a gradually more egalitarian society. The compass, along with other innovations such as the cross-staff, the mariner's astrolabe, and advances in shipbuilding, enabled the navigation of the World Oceans, and the early phases of colonialism. Other inventions had a greater impact on everyday life, such as eyeglasses and the weight-driven clock. Visual arts and architecture A precursor to Renaissance art can be seen already in the early 14th century works of Giotto. Giotto was the first painter since antiquity to attempt the representation of a three-dimensional reality, and to endow his characters with true human emotions. The most important developments, however, came in 15th century Florence. The affluence of the merchant class allowed extensive patronage of the arts, and foremost among the patrons were the Medici. The period saw several important technical innovations, like the principle of linear perspective found in the work of Masaccio, and later described by Brunelleschi. Greater realism was also achieved through the scientific study of anatomy, championed by artists like Donatello. This can be seen particularly well in his sculptures, inspired by the study of classical models. As the centre of the movement shifted to Rome, the period culminated in the High Renaissance masters da Vinci, Michelangelo and Raphael. The ideas of the Italian Renaissance were slow to cross the Alps into northern Europe, but important artistic innovations were made also in the Low Countries. Though not – as previously believed – the inventor of oil painting, Jan van Eyck was a champion of the new medium, and used it to create works of great realism and minute detail. The two cultures influenced each other and learned from each other, but painting in the Netherlands remained more focused on textures and surfaces than the idealised compositions of Italy. In northern European countries gothic architecture remained the norm, and the gothic cathedral was further elaborated. In Italy, on the other hand, architecture took a different direction, also here inspired by classical ideals. The crowning work of the period was the Santa Maria del Fiore in Florence, with Giotto's clock tower, Ghiberti's baptistery gates, and Brunelleschi's cathedral dome of unprecedented proportions. The most important development of late medieval literature was the ascendancy of the vernacular languages. 
The vernacular had been in use in France and England since the 11th century, where the most popular genres had been the chanson de geste, troubadour lyrics and romantic epics, or the romance. Though Italy was later in evolving a native literature in the vernacular language, it was here that the most important developments of the period were to come. Dante Alighieri's Divine Comedy, written in the early 14th century, merged a medieval world view with classical ideals. Another promoter of the Italian language was Boccaccio with his Decameron. The application of the vernacular did not entail a rejection of Latin, and both Dante and Boccaccio wrote prolifically in Latin as well as Italian, as would Petrarch later (whose Canzoniere also promoted the vernacular and whose contents are considered the first modern lyric poems). Together the three poets established the Tuscan dialect as the norm for the modern Italian language. The new literary style spread rapidly, and in France influenced such writers as Eustache Deschamps and Guillaume de Machaut. In England Geoffrey Chaucer helped establish English as a literary language with his Canterbury Tales, which tales of everyday life were heavily influenced by Boccaccio. The spread of vernacular literature eventually reached as far as Bohemia, and the Baltic, Slavic and Byzantine worlds. Music was an important part of both secular and spiritual culture, and in the universities it made up part of the quadrivium of the liberal arts. From the early 13th century, the dominant sacred musical form had been the motet; a composition with text in several parts. From the 1330s and onwards, emerged the polyphonic style, which was a more complex fusion of independent voices. Polyphony had been common in the secular music of the Provençal troubadours. Many of these had fallen victim to the 13th century Albigensian Crusade, but their influence reached the papal court at Avignon. The main representatives of the new style, often referred to as ars nova as opposed to the ars antiqua, were the composers Philippe de Vitry and Guillaume de Machaut. In Italy, where the Provençal troubadours had also found refuge, the corresponding period goes under the name of trecento, and the leading composers were Giovanni da Cascia, Jacopo da Bologna and Francesco Landini. In the British Isles, plays were produced in some 127 different towns during the Middle Ages. These vernacular Mystery plays were written in cycles of a large number of plays: York (48 plays), Chester (24), Wakefield (32) and Unknown (42). A larger number of plays survive from France and Germany in this period and some type of religious dramas were performed in nearly every European country in the Late Middle Ages. Many of these plays contained comedy, devils, villains and clowns. Morality plays emerged as a distinct dramatic form around 1400 and flourished until 1550. The most interesting morality play is The Castle of Perseverance which depicts mankind's progress from birth to death. However, the most famous morality play and perhaps best known medieval drama is Everyman. Everyman receives Death's summons, struggles to escape and finally resigns himself to necessity. Along the way, he is deserted by Kindred, Goods, and Fellowship - only Good Deeds goes with him to the grave. At the end of the Late Middle Ages, professional actors began to appear in England and Europe. Richard III and Henry VII both maintained small companies of professional actors. 
Their plays were performed in the Great Hall of a nobleman's residence, often with a raised platform at one end for the audience and a "screen" at the other for the actors. Also important were Mummers' plays, performed during the Christmas season, and court masques. These masques were especially popular during the reign of Henry VIII, who had a House of Revels built and an Office of Revels established in 1545. The end of medieval drama came about due to a number of factors, including the weakening power of the Catholic Church, the Protestant Reformation and the banning of religious plays in many countries. Elizabeth I forbade all religious plays in 1558, and the great cycle plays had been silenced by the 1580s. Similarly, religious plays were banned in the Netherlands in 1539, the Papal States in 1547 and in Paris in 1548. The abandonment of these plays destroyed the international theatre that had hitherto existed and forced each country to develop its own form of drama. It also allowed dramatists to turn to secular subjects, and the reviving interest in Greek and Roman theatre provided them with the perfect opportunity.

After the Middle Ages

After the end of the Late Middle Ages, the Renaissance would spread unevenly over continental Europe from the southern European region. The intellectual transformation of the Renaissance is viewed as a bridge between the Middle Ages and the Modern era. Europeans would later begin an era of world discovery. Combined with the influx of classical ideas was the invention of printing, which facilitated dissemination of the printed word and democratized learning. These two things would lead to the Protestant Reformation. Europeans also discovered new trading routes, as was the case with Columbus's travel to the Americas in 1492 and Vasco da Gama's voyage around Africa to India in 1498. Their discoveries strengthened the economy and power of European nations.

The period 1300–1500 in Asia and North Africa

At the time of the fall of the Ayyubids, the Mamluk Sultanate rose to rule Egypt. The Mamluks, Arabs and Kipchak Turks, were purchased, but they were not ordinary slaves. Mamluks were considered to be "true lords," with social status above freeborn Egyptian Muslims. Mamluk regiments constituted the backbone of the late Ayyubid military. Each sultan and high-ranking amir had his private corps. The Bahri mamluks defeated Louis IX's crusaders at the Battle of Al Mansurah. The Sultan proceeded to place his own entourage and Mu`azzami mamluks in positions of authority, to the detriment of Bahri interests. Shortly thereafter, a group of Bahris assassinated the Sultan. Following the death of the Sultan, a decade of instability ensued as various factions competed for control. The sultanate was officially established after the defeat of the Mongols at the Battle of Ain Jalut, and the Sultan Qutuz was assassinated by Baibars in 1260. The decline of the Sultanate began after the first Ottoman-Mamluk War and with the rise of Portuguese power in the Indian Ocean. The Ottomans conquered Syria at the end of the Middle Ages when they defeated the Mamluks at the Battle of Marj Dabiq near Aleppo, after having fought alongside the Mamluks against the Portuguese. The Ottoman campaign against the Mamluks continued, and they conquered Egypt following the Battle of Ridanieh, bringing an end to the Mamluk Sultanate.
Ottomans and Europe

Over the course of the 15th century the Ottoman Empire advanced across eastern Europe, conquering the Byzantine Empire and extending its control over the states of the Balkans. Hungary eventually became the last bastion of the Latin Christian world and fought to keep its rule over its territories for two centuries. After the tragic death of the young King Vladislaus I of Hungary at the Battle of Varna in 1444 against the Ottomans, the kingdom, left without a monarch, was placed in the hands of Count John Hunyadi, who became Hungary's regent-governor (1446–1453). Hunyadi was considered by the pope one of the most important military figures of the 15th century (Pope Pius II awarded him the title of Athleta Christi, or Champion of Christ), since he was the only hope of maintaining resistance against the Ottomans in Central and Western Europe. Hunyadi prevailed at the Siege of Belgrade in 1456 against the Ottomans, the biggest victory against that empire in decades. The battle became a real crusade against the Muslims, as the peasants were motivated by the Franciscan monk Saint John of Capistrano, who had come from Italy preaching holy war. The fervour he created at the time was one of the main factors in achieving the victory. However, the premature death of the Hungarian lord left that area of Europe defenseless and in chaos.

In an event highly unusual for the Middle Ages, Hunyadi's son Matthias was elected King of Hungary by the nobility. For the first time, a member of an aristocratic family, rather than a royal one, was crowned. King Matthias Corvinus of Hungary (1458–1490) was one of the most prominent figures of the age. He directed campaigns to the west, conquering Bohemia in answer to the Pope's call for help against the Hussites, and, to resolve political hostilities with the German emperor Frederick III of Habsburg, he invaded his western domains. Matthias organized the Black Army, composed of mercenary soldiers and considered the biggest army of its time. Using this powerful tool, the Hungarian king led wars against the Turkish armies and stopped the Ottomans during his reign. The Ottoman Empire nevertheless grew in strength, and with the end of the Black Army after Matthias's death, Central Europe was left defenseless. At the Battle of Mohács, the forces of the Ottoman Empire annihilated the Hungarian army, and Louis II of Hungary drowned in the Csele Creek while trying to escape. The leader of the Hungarian army, Pál Tomori, also died in the battle. This episode is considered one of the final ones of the medieval era.

The Mongol Great Yuan Empire, founded by Kublai Khan, spanned the High and Late Middle Ages. A division of the Mongol Empire and an imperial dynasty of China, the Yuan followed the Song Dynasty and preceded the Ming Dynasty. During his rule, Kublai Khan claimed the title of Great Khan, supreme Khan over the other Mongol khanates. This claim was only truly recognized by the Il-Khanids, who were nevertheless essentially self-governing. Although later emperors of the Yuan Dynasty were recognized by the three virtually independent western khanates as their nominal suzerains, each continued its own separate development. The Empire of the Great Ming began its rule in China during the mid-to-late Middle Ages, following the collapse of the Yuan Dynasty.
The Ming was the last dynasty in China ruled by ethnic Han Chinese. Ming rule saw the construction of a vast navy and army. Although private maritime trade and official tribute missions from China had taken place in previous dynasties, the tributary fleet under the Muslim eunuch admiral Zheng He surpassed all others in size. In the Ming Dynasty, there were enormous construction projects, including the restoration of the Grand Canal and the Great Wall and the establishment of the Forbidden City in Beijing. Society was fashioned into self-sufficient rural communities in a rigid, immobile system that would have no need to engage with the commercial life and trade of urban centers. Rebuilding of China's agricultural base and strengthening of communication routes through the militarized courier system had the unintended effect of creating a vast agricultural surplus that could be sold at burgeoning markets located along courier routes. Rural culture and commerce became influenced by urban trends. The upper echelons of society, embodied in the scholarly gentry class, were also affected by this new consumption-based culture. In a departure from tradition, merchant families began to produce examination candidates to become scholar-officials and adopted cultural traits and practices typical of the gentry. Parallel to this trend involving social class and commercial consumption were changes in social and political philosophy, bureaucracy and governmental institutions, and even arts and literature.

The Kenmu restoration was a three-year period of Japanese history between the Kamakura period and the Muromachi period. The political events of the period centred on the restoration effort, marred by many serious political errors, to bring the Imperial House and the nobility it represented back into power, thus restoring civilian government after almost a century and a half of military rule. The attempted restoration ultimately failed and was replaced by the Ashikaga shogunate. This was to be the last time the Emperor had any power until the Meiji restoration in the modern era. Beginning around the mid-to-late Middle Ages, the Muromachi period marks the governance of the Ashikaga shogunate. The Muromachi shogunate was established by Ashikaga Takauji, two years after the brief Kenmu restoration of imperial rule was brought to a close. The period ended with the last shogun, Ashikaga Yoshiaki, driven out of the capital in Kyoto by Oda Nobunaga. The early Muromachi period, or the Northern and Southern Court period, saw continued resistance from the supporters of the Kenmu restoration. The end of the Muromachi period, or the Sengoku period, was a time of warring states. The Muromachi period had two cultural phases, the medieval Kitayama and the modern Higashiyama periods.
HYDROCARBONS

Hydrocarbons are compounds of carbon and hydrogen only. Hydrocarbons are obtained mainly from petroleum, natural gas and coal. Examples of hydrocarbons are methane, ethane, acetylene and benzene. Important fuels like petrol, kerosene, coal gas, oil gas, CNG (compressed natural gas) and LPG (liquefied petroleum gas) are all hydrocarbons or their mixtures. Hydrocarbons are divided into two main categories, aliphatic and aromatic. The aliphatic hydrocarbons are further classified into saturated (alkanes), unsaturated (alkenes and alkynes) and alicyclic (cycloalkanes) hydrocarbons.

ALKANES AND CYCLOALKANES

Alkanes are also known as paraffins, from the Latin words 'parum' (little) and 'affinis' (affinity), that is, with little affinity. The name is justified because under normal conditions alkanes are inert towards reagents like acids, bases, and oxidising and reducing agents. Under drastic conditions, however, i.e., at high temperature and pressure, alkanes undergo different types of reactions such as halogenation, nitration, sulphonation and pyrolysis. Alkanes have a tetrahedral arrangement of bonds around each carbon atom, with average C-C and C-H bond lengths of 154 pm and 112 pm respectively. Alkanes form a homologous series with the general formula CnH2n+2, where n is the number of carbon atoms in the molecule; methane, CH4 (n = 1), and ethane, C2H6 (n = 2), are the first two members of the series. Cycloalkanes are cyclic hydrocarbons which form a homologous series with the general formula CnH2n, whose first member is cyclopropane (n = 3). Nomenclature was discussed in Unit 14.

1. Write the IUPAC names for the following structures.
2. Write the structures for the compounds having the following IUPAC names.
3. Write the structures of the different chain isomers of alkanes corresponding to the molecular formula C6H14, and write their IUPAC names.
4. Write the structures of the different isomeric alkyl groups corresponding to the molecular formula C5H11. Write the IUPAC names of the alcohols obtained by attachment of -OH groups at different carbons of the chain.
5. Write the IUPAC names of the following compounds: (i) (CH3)3CCH2C(CH3)3 (ii) (CH3)2C(C2H5)2
6. Write the structural formulae of the following compounds: (i) 3,4,4,5-tetramethylheptane
7. Write the structures for each of the following compounds. Why are the given names incorrect? Write the correct IUPAC names.
8. Write the IUPAC names of the following compounds.
9. For the following compounds, write the structural formulae and IUPAC names of all possible isomers having the number of double or triple bonds indicated: (a) C4H8 (one double bond) (b) C5H8 (one triple bond)

PREPARATION OF ALKANES

Alkanes can be prepared on a small scale in the laboratory by the following methods.

1. Hydrogenation of alkenes and alkynes
Hydrocarbons containing double or triple bonds can be hydrogenated to the corresponding alkanes; the addition of hydrogen to unsaturated compounds is called hydrogenation. Alkenes and alkynes are hydrogenated to alkanes by the Sabatier-Senderens reaction, the addition of hydrogen in the presence of a nickel catalyst at 570 K. For example, ethylene (C2H4), when mixed with hydrogen and passed over a nickel catalyst at 570 K, gives ethane (C2H6), and acetylene (C2H2) on hydrogenation likewise yields ethane.

2. From alkyl halides
Alkyl halides are halogen derivatives of alkanes with the general formula R-X. They can be used to prepare alkanes by the following methods.
(a) By the Wurtz reaction
This involves the reaction between an alkyl halide (usually a bromide or iodide) and metallic sodium in the presence of dry ether (the Wurtz reaction). The product is the symmetrical alkane containing twice the number of carbon atoms present in the alkyl halide. If two different alkyl halides are taken in order to prepare an alkane with an odd number of carbon atoms, a mixture of three alkanes is produced. This method is therefore not suited to preparing alkanes containing an odd number of carbon atoms, and methane cannot be prepared by it at all.

(b) By the reduction of alkyl halides
This involves the reduction of alkyl halides (usually bromides or iodides) by suitable reducing agents such as H2/Pd, or nascent hydrogen obtained from Zn and HCl or from a Zn-Cu couple and ethanol. Reduction of an iodo derivative can also be carried out with hydroiodic acid (HI) in the presence of red phosphorus. The red phosphorus removes the iodine formed and pushes the reaction in the forward direction; if not removed, the iodine would convert the ethane back into ethyl iodide.

(c) By the use of Grignard reagents
Alkyl halides, especially bromides and iodides, react with magnesium metal in diethyl ether to form alkyl magnesium halides (RMgX), known as Grignard reagents. In a Grignard reagent the carbon-magnesium bond is highly polar, the carbon atom being relatively more electronegative than magnesium. The reagent reacts with water, or with other compounds having active hydrogen (H atoms attached to N, O or F, or to a triply bonded carbon, are known as active hydrogens), to give an alkane; in these reactions the alkyl group of the alkyl magnesium halide is converted into the alkane.

3. From fatty acids (monocarboxylic acids)
(i) By heating with soda lime: Sodium salts of carboxylic acids on heating with soda lime give alkanes. Soda lime is prepared by soaking quick lime (CaO) in a solution of sodium hydroxide (NaOH). For example, sodium acetate, the sodium salt of acetic acid (CH3COOH), on heating with soda lime gives methane, and sodium propanoate gives ethane. In this reaction CaO does not participate, but helps in the fusion of the reaction mixture.
(ii) Electrolytic method (Kolbe's method): An alkane is obtained when an aqueous solution of the sodium or potassium salt of a carboxylic acid is electrolysed. The reaction is known as a decarboxylation reaction (Kolbe's reaction).
2 RCOONa + 2 H2O → R-R + 2 CO2 + 2 NaOH + H2
For example, ethane is obtained when a solution of sodium acetate is electrolysed.
2 CH3COONa + 2 H2O → CH3CH3 + 2 CO2 + 2 NaOH + H2
At the anode the carboxylate ion (RCOO-) gives up one electron to produce the free radical RCOO·, which decomposes to give an alkyl radical and carbon dioxide; two such alkyl radicals then combine to yield the higher alkane. These methods are normally useful for preparing alkanes containing an even number of carbon atoms.

10. How do you account for the formation of ethane during the chlorination of methane?

PROPERTIES OF ALKANES

The following are the physical properties of alkanes.

State: The lower members of the alkane series are gases (methane, ethane, propane and butane). Alkanes containing 5 to 17 carbon atoms are liquids, while the higher members are waxy solids. Thus alkanes change from the gaseous to the solid state with increasing molecular mass.

Non-polar nature: Alkanes are non-polar in nature and are therefore soluble in non-polar solvents such as benzene, ether, chloroform and carbon tetrachloride.
Liquid alkanes are themselves good solvents for other non-polar substances.

Boiling points: Lower alkanes have lower boiling points, and the boiling point gradually increases with increasing molecular mass, by roughly 20° to 30° for each -CH2- unit added to the chain. The reason is that alkane molecules are non-polar and are attracted to each other only by weak van der Waals forces; for alkanes of higher molecular mass these forces are stronger because of the larger surface area. (Figure: variation in the boiling points of n-alkanes with increasing number of carbon atoms per molecule; a short numerical sketch of the same trend is given at the end of this section.) Moreover, straight-chain alkanes have higher boiling points than the corresponding branched-chain isomers: the greater the branching of the chain, the lower the boiling point. Branching makes the molecule compact and brings it closer to a sphere, which decreases the surface area and hence the magnitude of the intermolecular van der Waals forces, leading to a decrease in boiling point. The boiling points of the isomeric pentanes illustrate this (approximately 36 °C for n-pentane, 28 °C for 2-methylbutane and 10 °C for 2,2-dimethylpropane).

Melting points: The intermolecular forces in a crystal depend not only on the size of the molecules but also on how they pack into the crystal. The rise in melting point with increasing number of carbon atoms is therefore not as regular as the rise in boiling point is for the liquids. A plot of the melting points of n-alkanes against the number of carbon atoms in the chain gives a sawtooth pattern (Figure: variation in the melting points of n-alkanes with increasing number of carbon atoms). The increase in melting point is much greater on going from an alkane with an odd number of carbon atoms to the next higher alkane than on going from an alkane with an even number of carbon atoms to the next higher one. This implies that molecules with an odd number of carbons do not fit well into the crystal lattice. The carbon chains in alkanes are zig-zag rather than straight, and in n-alkanes the terminal methyl groups lie on the same side of the chain when the number of carbon atoms is odd and on opposite sides when the number is even. The relative positions of the terminal CH3 groups in odd- and even-carbon compounds appreciably affect the forces of interaction and hence the melting temperature: the energy required to break up the crystal structure and melt the alkane is smaller for alkanes with an odd number of carbon atoms, because their molecules do not fit as well into the crystal lattice.

Colour: Alkanes are colourless gases, liquids or solids.

Density: The density of alkanes increases with increasing chain length.

Solubility: Since 'like dissolves like', alkanes, being non-polar in character, are soluble in non-polar solvents such as ether and carbon tetrachloride, and are insoluble in water and other polar solvents.

Alkanes are quite unreactive and do not react with the usual reagents. This inertness is due to the presence of strong carbon-carbon and carbon-hydrogen bonds, which are difficult to break. However, hydrogen can be substituted by other atoms or radicals under drastic reaction conditions such as high temperature or the presence of ultraviolet light or a catalyst.
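To make the boiling-point trend concrete, the short Python sketch below is offered as an illustration only: it tabulates the first eight n-alkanes with their CnH2n+2 formulae, approximate molar masses and rounded literature boiling points (values not taken from this unit), and prints the rise on each added -CH2- unit, which settles towards the 20-30° range quoted above as the chain grows.

# Approximate, rounded literature boiling points (deg C) of the first eight
# n-alkanes, used only to illustrate the trend described in the text.
N_ALKANES = [
    ("methane", 1, -162), ("ethane", 2, -89), ("propane", 3, -42),
    ("butane", 4, -1), ("pentane", 5, 36), ("hexane", 6, 69),
    ("heptane", 7, 98), ("octane", 8, 126),
]

previous_bp = None
for name, n, bp in N_ALKANES:
    formula = f"C{n}H{2 * n + 2}"                  # general formula CnH2n+2
    molar_mass = 12.01 * n + 1.008 * (2 * n + 2)   # approximate molar mass, g/mol
    step = "" if previous_bp is None else f"  (+{bp - previous_bp} deg on adding -CH2-)"
    print(f"{name:8s} {formula:6s} M = {molar_mass:6.1f} g/mol  b.p. = {bp:4d} deg C{step}")
    previous_bp = bp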
Alkanes undergo few reactions, the most important being halogenation, oxidation and thermal cracking.

Halogenation
Alkanes react with chlorine or bromine in the presence of sunlight or ultraviolet light to give halogen-substituted products, i.e., alkyl halides containing one or more halogen atoms. Halogenation of alkanes is a substitution reaction: hydrogen atoms of the alkane are replaced by halogen atoms. Alkanes also undergo halogenation if a mixture of the alkane and the halogen is heated to 600 K. For example, when a mixture of methane and chlorine is exposed to diffused sunlight or ultraviolet light, or heated to 600 K, the hydrogen atoms are replaced by chlorine atoms one after another. The nature of the products depends upon the amount of chlorine: if an excess of chlorine is used, the product contains larger amounts of carbon tetrachloride, whereas smaller amounts of chlorine give more of the less chlorinated products (such as methyl chloride). If the reaction is allowed to proceed for a shorter time, the less chlorinated products are likewise formed in larger amounts. The order of reactivity of halogens with alkanes is F2 > Cl2 > Br2 > I2. Fluorine reacts with a violent explosion; the reaction with chlorine is less vigorous than with fluorine, and with bromine less vigorous than with chlorine. The reaction with iodine is slow and reversible, so iodination is carried out in the presence of an oxidising agent like iodic acid (HIO3) or nitric acid, which converts the HI formed back to I2 and pushes the reaction in the forward direction.
5 HI + HIO3 → 3 H2O + 3 I2
2 HNO3 + 2 HI → 2 H2O + 2 NO2 + I2
Fluorination takes place with almost explosive violence to produce fluorinated compounds; it also involves rupture of C-C bonds in the case of higher alkanes. The reaction can be made less violent by diluting the fluorine with nitrogen. The ease of substitution of a hydrogen atom by a halogen atom is: tertiary > secondary > primary.

Mechanism of halogenation
Halogenation of alkanes is a free-radical chain substitution. The generally accepted mechanism for the chlorination of methane is as follows. In the first step the Cl2 molecule breaks homolytically to give two Cl· free radicals; the step in which Cl-Cl bond homolysis occurs is called the initiation step. Each chlorine atom formed in the initiation step has seven valence electrons and is very reactive. Once formed, a chlorine atom abstracts a hydrogen atom from methane; hydrogen chloride, one of the isolated products of the overall reaction, is formed in this step, together with a methyl radical. The methyl radical formed in step 2 then attacks a molecule of Cl2, giving chloromethane, the other product of the overall reaction, along with a chlorine atom that cycles back to step 2, repeating the process. Steps 2 and 3 are called the propagation steps of the reaction and, when added together, give the overall reaction. Since one initiation step can result in a large number of propagation steps, the overall process is called a free-radical chain reaction. In actual practice, some side reactions also take place and reduce the efficiency of the propagation steps. The chain sequence is interrupted whenever two odd-electron species (free radicals) combine to form an even-electron species; reactions of this type are called chain-terminating steps, and they are listed after the short bookkeeping sketch below.
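The claim that the two propagation steps add up to the overall reaction can be checked with a little bookkeeping. The minimal Python sketch below is illustrative only; the species labels such as "Cl." and "CH3." are simply strings chosen for this example. It sums the reactants and products of steps 2 and 3 and cancels the chain-carrying radicals that appear on both sides.

from collections import Counter

# Propagation steps of methane chlorination, written as (reactants, products).
step2 = (Counter({"Cl.": 1, "CH4": 1}), Counter({"HCl": 1, "CH3.": 1}))
step3 = (Counter({"CH3.": 1, "Cl2": 1}), Counter({"CH3Cl": 1, "Cl.": 1}))

reactants, products = Counter(), Counter()
for r, p in (step2, step3):
    reactants += r
    products += p

# Cancelling the radicals common to both sides leaves the overall reaction.
print(dict(reactants - products), "->", dict(products - reactants))
# prints {'CH4': 1, 'Cl2': 1} -> {'HCl': 1, 'CH3Cl': 1}, i.e. CH4 + Cl2 -> CH3Cl + HCl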
Some commonly observed chain-terminating steps in the chlorination of methane are the combination of two chlorine atoms to give Cl2, of two methyl radicals to give ethane (CH3-CH3), and of a methyl radical with a chlorine atom to give chloromethane. It may be noted that termination steps are, in general, less likely to occur than propagation steps: each termination step requires two very reactive free radicals to encounter each other in a medium that contains far greater quantities of other materials, such as methane and chlorine molecules, with which they can react. Thus, although some chloromethane undoubtedly forms by direct combination of methyl radicals with chlorine atoms, most of it is formed in steps 2 and 3.

Nitration
Nitration is a substitution reaction in which a hydrogen atom of an alkane is replaced by a nitro (-NO2) group. When alkanes having six or more carbon atoms are boiled with fuming nitric acid, a hydrogen atom is replaced by the nitro group. Lower alkanes cannot be nitrated by this method; however, when a mixture of such an alkane and nitric acid vapour is heated to about 723-773 K in a sealed tube, one hydrogen atom of the alkane is substituted by a nitro group. The process is called vapour-phase nitration and the compounds obtained are called nitroalkanes. Since the reaction is carried out at high temperature, rupture of carbon-carbon bonds occurs during the process and the reaction yields a mixture of different products; for example, nitration of propane results in a mixture of four nitro compounds, and nitration of ethane yields a mixture of nitroethane and nitromethane.

Sulphonation
When alkanes having six or more carbon atoms are heated with fuming sulphuric acid (H2S2O7), a hydrogen atom is replaced by the sulphonic acid group (-SO3H). The compound (RSO3H) thus formed is known as an alkane sulphonic acid. Lower alkanes, except those having a tertiary hydrogen, cannot be sulphonated.

Oxidation
Oxidation of alkanes gives different products under different conditions.
(a) Combustion or complete oxidation: Alkanes readily burn with a non-luminous flame in excess air or oxygen to form carbon dioxide and water with the evolution of a large quantity of heat; this is the basis of the use of alkanes as fuels. (i) Burning in excess of oxygen: Alkanes burn with a blue flame in air or oxygen and are oxidised to carbon dioxide and water.
CH4 + 2 O2 → CO2 + 2 H2O ; ΔH° = -890.4 kJ/mol
2 C2H6 + 7 O2 → 4 CO2 + 6 H2O ; ΔH° = -1580.0 kJ/mol
The cooking gas often called LPG (liquefied petroleum gas) is a mixture of propane and butane. (ii) Burning in a limited supply of oxygen: Alkanes burn with a sooty (smoky) flame, and incomplete oxidation gives carbon black, a variety of carbon used in the manufacture of printing inks and tyres.
CH4 + O2 → C + 2 H2O
(b) Catalytic oxidation: On controlled oxidation, alkanes give alcohols, which are further oxidised to aldehydes (or ketones), acids and finally carbon dioxide and water.
CH4 → CH3OH → HCHO → HCOOH → CO2 + H2O
(methane → methanol → formaldehyde → formic acid)
Alkanes also undergo oxidation under special conditions to yield a variety of products; for example, alkanes having a tertiary hydrogen can be oxidised to alcohols in the presence of potassium permanganate.

FRAGMENTATION OF ALKANES
When the higher members of the alkane family are heated to high temperatures (700-800 K), or to slightly lower temperatures in the presence of catalysts like alumina or silica, they break down to give alkanes and alkenes with fewer carbon atoms. The fragmentation of alkanes is also called pyrolysis or cracking; its free-radical character is taken up again after the short worked example below.
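Before returning to cracking, here is a small worked example based on the combustion data quoted above. The Python sketch is a rough estimate only, with an approximate molar mass; it simply converts the enthalpy of combustion of methane from a per-mole to a per-gram basis, the figure usually compared when judging fuels.

DELTA_H_COMBUSTION_CH4 = -890.4       # kJ per mole of CH4, from the equation above
MOLAR_MASS_CH4 = 12.01 + 4 * 1.008    # g per mole, approximate

heat_per_gram = abs(DELTA_H_COMBUSTION_CH4) / MOLAR_MASS_CH4
print(f"Heat released per gram of methane burnt: {heat_per_gram:.1f} kJ/g")  # about 55.5 kJ/g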
The chemical reactions taking place in cracking are mostly free-radical reactions involving rupture of carbon-carbon and carbon-hydrogen bonds.

Isomerisation
Alkanes isomerise to branched-chain alkanes when heated with anhydrous aluminium chloride and hydrogen chloride.

Aromatisation
Alkanes containing six or more carbon atoms, when heated to about 773 K under high pressure (10-20 atm) in the presence of catalysts like oxides of chromium, vanadium or molybdenum supported on alumina, are converted into aromatic hydrocarbons. This process is called aromatisation; under such conditions n-heptane yields toluene.

REACTION WITH STEAM
Methane reacts with steam at 1273 K in the presence of a nickel catalyst, forming carbon monoxide and hydrogen. The method is used for the industrial preparation of hydrogen.

CONFORMATIONS IN HYDROCARBONS
A single covalent bond is formed between two atoms by the axial overlap of half-filled atomic orbitals. In alkanes, the C-C bond is formed by the axial overlap of sp3 orbitals of adjacent carbon atoms. The electron distribution in the molecular orbital of an sp3-sp3 sigma bond is cylindrical around the internuclear axis, so the single covalent bond allows freedom of rotation about it because of its axial symmetry. As a result of rotation about the C-C bond, the molecule can have different spatial arrangements of the atoms attached to the carbon atoms. Such different spatial arrangements of atoms, which arise from rotation around a single bond, are called conformers or rotational isomers (rotamers), and the molecular geometry corresponding to a conformer is known as a conformation. The rotation around a sigma bond is not completely free; it is in fact hindered by an energy barrier of 1 to 20 kJ mol-1, arising from weak repulsive interactions between the bonds or the electron pairs of the bonds on adjacent carbon atoms. This type of repulsive interaction is referred to as torsional strain.

CONFORMATION OF ETHANE
In ethane (CH3CH3) the two carbon atoms are bonded by a single covalent bond and each carbon atom is further linked to three hydrogen atoms. If one carbon atom is held still and the other is allowed to rotate around the C-C bond, a large number of different spatial arrangements of the hydrogen atoms of one carbon with respect to those of the other can be obtained. The basic structure of the molecule, however, including the C-C bond length and the H-C-H and H-C-C bond angles, does not change during rotation. Out of the infinite number of possible conformations of ethane, two represent the extremes: the staggered conformation and the eclipsed conformation. In the staggered conformation the hydrogen atoms of the two carbon atoms are oriented so that they lie as far apart from one another as possible; they are staggered with respect to one another. In the eclipsed conformation the hydrogen atoms of one carbon atom lie directly behind those of the other; the hydrogen atoms of one carbon eclipse those of the other. Conformations can be represented by sawhorse and Newman projections.

1. Sawhorse projection
In this projection the molecule is viewed along the axis of the model from above and to the right. The central C-C bond is drawn as a straight line, slightly tilted to the right and drawn somewhat longer for the sake of clarity. The front carbon is shown as the lower left-hand carbon and the rear carbon as the upper right-hand carbon.
Each carbon has three lines attached to it, corresponding to its three atoms or groups (H in the case of ethane). The sawhorse projection formulae of the two extreme conformations of ethane are shown in the figure (sawhorse projections of ethane).

2. Newman projection
Newman proposed simpler formulae, called Newman projection formulae, for representing conformations. In a Newman projection the two carbon atoms forming the σ bond are represented by two circles, one behind the other, so that only the front carbon is seen. The hydrogen atoms attached to the front carbon are represented by C-H bonds drawn from the centre of the circle, while the C-H bonds of the back carbon are drawn from the circumference of the circle (figure: Newman projections of the staggered and eclipsed conformations of ethane). One conformation of ethane can be converted into the other by a rotation of 60° about the C-C bond. The infinite number of other conformations of ethane lying between the two extremes are called skew conformations.

Relative stabilities of the conformers of ethane
The conformers of ethane do not all have the same stability. In the eclipsed conformation the hydrogen atoms eclipse each other, resulting in crowding, whereas in the staggered conformation the hydrogen atoms are as far apart as possible; the infinite number of intermediate conformations between eclipsed and staggered are called skew conformations. The staggered conformation of ethane is more stable than the eclipsed conformation because in the staggered form the H atoms are far apart, so the magnitude of the repulsion is smaller than in the eclipsed form. Hence the order of stability is: staggered > gauche (skew) > eclipsed. The repulsive interaction between electron clouds, which affects the stability of a conformation, is called torsional strain; its magnitude depends upon the angle of rotation about the C-C bond, also called the dihedral or torsional angle. Of all the conformations of ethane, the staggered form has the least torsional strain and the eclipsed form the most. It may thus be inferred that rotation around the C-C bond in ethane is not completely free. The difference in energy content between the staggered and eclipsed conformations is 12.5 kJ mol-1 (figure: changes in energy during rotation about the C-C bond in ethane). This difference in energy between the conformations constitutes an energy barrier to rotation, but the barrier is not large enough to prevent rotation: even at ordinary temperature the molecules possess sufficient thermal and kinetic energy to overcome it through molecular collisions. Conformations therefore keep changing from one form into another very rapidly and cannot be isolated as separate conformers.

CONFORMATIONS OF PROPANE AND BUTANE
The Newman projections of propane and butane are constructed in the same way (figures: Newman projections of propane; Newman projections of butane, whose relative energy changes with the dihedral angle between the methyl groups).

11. How many eclipsed conformations are possible in butane?
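The rotational energy profile just described is often approximated by a simple threefold cosine potential. The Python sketch below is a minimal illustration under that assumption, taking the barrier height as the 12.5 kJ mol-1 quoted above; it reproduces maxima at the eclipsed geometries (dihedral angles of 0°, 120° and 240°) and minima at the staggered ones (60°, 180° and 300°).

import math

V3 = 12.5  # kJ/mol, eclipsed-staggered energy difference quoted in the text

def torsional_energy(theta_deg):
    """Approximate torsional energy of ethane, V(theta) = (V3/2)(1 + cos 3*theta)."""
    return (V3 / 2.0) * (1.0 + math.cos(math.radians(3.0 * theta_deg)))

for theta in range(0, 361, 30):
    print(f"dihedral angle {theta:3d} deg : V = {torsional_energy(theta):5.2f} kJ/mol")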
ALKENES

The unsaturated hydrocarbons containing a C=C bond and having the general formula CnH2n are called alkenes. The simplest member of the alkene family is ethene, C2H4, which contains 5 σ bonds and 1 π bond; the carbon-carbon double bond is made up of one σ bond and one π bond. The bond enthalpy of the C=C bond is 681 kJ mol-1, while the carbon-carbon bond enthalpy of ethane is 348 kJ mol-1. As a result, the C-C bond length in ethene (134 pm) is shorter than the C-C bond length in ethane (154 pm). The presence of the π bond makes alkenes behave as sources of loosely held, mobile electrons, so alkenes are attacked by reagents or compounds that are in search of electrons; such reagents are called electrophilic reagents. The relatively weak π bond makes alkenes less stable than alkanes, and alkenes can therefore be converted into singly bonded compounds by combining with electrophilic reagents, even though the total strength of the double bond (681 kJ mol-1) is greater than that of the carbon-carbon single bond in ethane (348 kJ mol-1). Orbital diagrams of the ethene molecule are shown in the figures (orbital picture of ethene depicting σ bonds only; orbital picture of ethene showing the formation of the π bond, the π cloud, and the bond angles and bond lengths).

ISOMERISM IN ALKENES

Alkenes generally show the following types of isomerism: position isomerism, chain isomerism and geometrical isomerism.

1. Position isomerism
The first two members of the alkene series (ethene and propene) do not show this type of isomerism. Butene, however, exhibits position isomerism, since but-1-ene and but-2-ene differ in the position of the double bond.

2. Chain isomerism
Butene also shows chain isomerism, since isobutene has a branched chain.

3. Geometrical isomerism
Alkenes exhibit geometrical isomerism due to restricted rotation around the C=C bond. This results in two possible arrangements of the groups attached to the two doubly bonded carbon atoms, as in but-2-ene. This isomerism is called cis-trans isomerism: when groups of a similar nature are on the same side of the double bond the isomer is designated cis, and when they are on opposite sides, trans. The necessary and sufficient condition for geometrical isomerism is that the two groups attached to the same doubly bonded carbon must be different, i.e., olefins of the type abC=Cab, abC=Cax or abC=Cbx show geometrical isomerism. When the two groups of highest priority are on the same side of the double bond the isomer is designated Z (German Zusammen, together), and when they are on opposite sides, E (German Entgegen, opposite). The priority of the groups is decided by the sequence rules of Cahn, Ingold and Prelog. According to these rules, the atom with the higher atomic number gets the higher priority; thus, between carbon (atomic number 6) and oxygen (atomic number 8), oxygen gets priority over carbon. If the relative priority of two groups cannot be decided because their first bonded atoms are the same, then the next atoms in the groups (and so on) are compared. Thus, between the methyl (-CH3) and ethyl (-CH2CH3) groups, the latter gets priority, since the next atom in methyl is H while in ethyl it is C, and carbon has a higher atomic number than hydrogen.

There are two systems for naming alkenes.
1. The common system: The common names of the first few alkenes are derived from the corresponding alkanes by replacing 'ane' with 'ylene'; e.g., H2C=CH2 is ethylene.
2. IUPAC system: In this most commonly used system, the ending 'ane' of the corresponding alkane is replaced by 'ene'. The parent chain is the longest chain of carbon atoms containing the double bond; a shorter chain may be selected instead provided it contains the maximum number of double bonds present. The chain is then numbered starting from the end nearest to the double bond.
The position of the double bond is indicated by the number of the carbon atom preceding the double bond. If there are two or more double bonds, the ending 'ane' is replaced by 'adiene', 'atriene', etc.; the remaining rules are the same as those applicable to alkanes. The locant for the double bond may be written preceding or following the name of the parent alkane, or following the root name before the suffix 'ene', 'diene' or 'triene'; all three names 1,3-butadiene, butadiene-1,3 and buta-1,3-diene are correct for butadiene.

12. Which of the following compounds will show cis-trans isomerism? (i) (CH3)2C=CHCH3 (ii) CH2=CCl2 (iii) C6H5CH=CHCH3 (iv) CH3CH=CBr(CH3)
13. Classify the following as Z or E isomers.
14. Write the IUPAC names of the following compounds.
15. Calculate the number of σ and π bonds in the above structures (i-iv).
16. Draw the cis and trans isomers of the following compounds and write their IUPAC names: (i) CHCl=CHCl (ii) C2H5C(CH3)=C(CH3)C2H5
17. Which of the following compounds will show cis-trans isomerism?

PREPARATION OF ALKENES

Some of the general methods of preparation of alkenes are given below.

1. From alkyl halides
Alkenes can be prepared from alkyl halides (preferably bromides or iodides) by treatment with an alcoholic solution of caustic potash (KOH) at about 353-363 K. The reaction is known as dehydrohalogenation of alkyl halides.
CH3CH2Br + KOH (alc.) → H2C=CH2 + KBr + H2O
Similarly, treatment of n-propyl bromide with an ethanolic solution of potassium hydroxide produces propene. This is an elimination reaction. The ease of dehydrohalogenation for the different halides is iodide > bromide > chloride, while for the carbon bearing the halogen it is tertiary > secondary > primary; a tertiary alkyl iodide is thus the most reactive.

2. From alcohols
Alkenes can be prepared by the dehydration (removal of a water molecule) of alcohols. The two common ways of carrying out the dehydration are to heat the alcohol with either alumina or a mineral acid such as phosphoric acid or concentrated sulphuric acid. In the dehydration reaction the OH group is lost from the α-carbon while an H atom is lost from the β-carbon, creating a double bond between the α- and β-carbons. This is an elimination reaction, i.e., a molecule of water is eliminated.

3. From dihalogen derivatives
Dihalogen derivatives are derivatives of alkanes containing two halogen atoms. Alkenes can be prepared from vicinal dihalogen derivatives (having halogen atoms on adjacent carbon atoms) by the action of zinc; the process is called dehalogenation of vicinal dihalides.

4. From alkynes
Alkynes on partial reduction with a calculated amount of dihydrogen, in the presence of palladised charcoal partially deactivated with poisons like sulphur compounds or quinoline, give alkenes. The partially deactivated palladised charcoal is known as Lindlar's catalyst, and the alkenes thus obtained have cis geometry. Alkynes reduced with sodium in liquid ammonia, however, give trans alkenes.

Physical properties of alkenes
Alkenes as a class have physical properties similar to those of alkanes. (i) The first three members are gases, the next fourteen are liquids and the higher ones are solids. (ii) They are lighter than water and insoluble in it, but soluble in organic solvents like benzene and ether. (iii) Their boiling points increase with the number of carbon atoms; for each added -CH2- group the boiling point rises by 20° to 30°, and the boiling points are comparable to those of alkanes with the corresponding carbon skeleton. (iv) Alkenes are weakly polar.
The π electrons of the double bond can easily be polarised, so the dipole moments of alkenes are higher than those of alkanes. (v) The dipole moments, melting points and boiling points of alkenes depend on the positions of the groups bonded to the two doubly bonded carbons. Thus cis-but-2-ene, with its two methyl groups on the same side, has a small resultant dipole moment, while in trans-but-2-ene the bond moments cancel out. Being less polar and more symmetrical, the trans isomer generally fits better into the crystalline lattice and therefore usually has a higher melting point than the cis isomer; this is not a strict rule, however, and there may be exceptions. The measurement of dipole moment is one method of assigning configurations to geometrical isomers.

Chemical properties of alkenes
Alkenes contain two types of bonds, sigma (σ) and pi (π). The π electrons are loosely held between the carbon atoms and are quite mobile, so electron-deficient reagents are attracted by them and readily react with alkenes to give addition products; the presence of these mobile, loosely held π electrons is responsible for the reactive nature of alkenes. The most important reactions of alkenes are electrophilic additions and free-radical additions; other important types of reactions are oxidation and polymerisation.

A. Electrophilic addition

1. Addition of hydrogen halides
Hydrogen halides (HCl, HBr, HI) readily add to alkenes, forming alkyl halides. The order of reactivity of the hydrogen halides in this reaction is HI > HBr > HCl.
CH2=CH2 + HX → CH3CH2X (X = I, Br, Cl)
This is an electrophilic addition reaction and proceeds through the formation of carbocations. Addition of HX takes place in two steps: in step (i) the addition of a proton (the electrophile) to the double bond is slow, while in step (ii) the addition of the nucleophile (X-) to the carbocation is fast. Any addition in which the electrophile adds first is an electrophilic addition. In the case of unsymmetrical alkenes the addition takes place according to Markovnikov's rule: the more electronegative part of the addendum adds to the carbon that carries the smaller number of hydrogen atoms. Thus a molecule of HBr adds to propene in such a way that the bromine goes to the central carbon atom, which has the smaller number of hydrogen atoms.
CH3CH=CH2 + HBr → CH3CHBrCH3
Propene has one methyl group attached to one of the doubly bonded carbons, so there are two possibilities for the carbocation formed in step (i): the proton adds either to the terminal carbon (path 1) or to the central carbon (path 2). The two carbocations are (CH3)2C+H, a secondary (2°) carbocation, and H2C+CH2CH3, a primary (1°) carbocation. The secondary carbocation (CH3)2C+H is more stable than the primary carbocation H2C+CH2CH3, so the cation formed in step (i) is CH3C+HCH3, and the nucleophile (the negatively charged X:-) adds to the central carbon. A tertiary (3°) carbocation is even more stable than secondary (2°) and primary (1°) carbocations. Markovnikov's rule can therefore also be interpreted in terms of electronic interactions: electrophilic addition to a carbon-carbon double bond involves the intermediate formation of the more stable carbocation. The decreasing order of reactivity of alkenes towards electrophilic addition is:
R2C=CR'2 > R2C=CHR' > R2C=CH2 ≥ RCH=CHR > RCH=CH2 > CH2=CH2 > CH2=CHX
where R and R' are alkyl groups and X is a halogen.
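Markovnikov's rule as stated above can be turned into a toy decision procedure. The Python sketch below is only an illustration for simple unsymmetrical alkenes and is not a general predictor: each doubly bonded carbon is described by nothing more than the number of hydrogens it carries, the proton is sent to the hydrogen-richer carbon, and the halogen therefore ends up on the carbon that would give the more substituted (more stable) carbocation.

def markovnikov_product(name, h_on_c1, h_on_c2, hx="HBr"):
    """Toy prediction of H-X addition across C1=C2 for a simple alkene."""
    halogen = hx[1:]  # e.g. "Br" from "HBr"
    if h_on_c1 == h_on_c2:
        return f"{name}: the double bond is symmetrically substituted, one product"
    x_carbon = "C2" if h_on_c1 > h_on_c2 else "C1"
    return f"{name}: H adds to the H-richer carbon, so {halogen} goes to {x_carbon}"

# Propene, CH2=CH-CH3: C1 (=CH2) carries 2 H, C2 (=CH-) carries 1 H,
# so Br ends up on the central carbon, i.e. 2-bromopropane is the major product.
print(markovnikov_product("propene", 2, 1, "HBr"))
print(markovnikov_product("but-2-ene", 1, 1, "HBr"))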
Free-radical addition - the peroxide effect
Markovnikov's rule is general, but not universal. In 1933 M. S. Kharasch discovered that the addition of HBr to unsymmetrical alkenes in the presence of an organic peroxide (R-O-O-R) takes a course opposite to that suggested by Markovnikov's rule. This phenomenon of anti-Markovnikov addition of HBr in the presence of peroxides is known as the peroxide effect. For example, when propylene reacts with HBr in the presence of a peroxide, the major product is n-propyl bromide, whereas in the absence of a peroxide the major product is isopropyl bromide. Note that HCl and HI do not give anti-Markovnikov products in the presence of peroxides.

2. Addition of sulphuric acid
Alkenes react with sulphuric acid to produce alkyl hydrogen sulphates; Markovnikov's rule is followed in the case of unsymmetrical alkenes. Alkyl hydrogen sulphates on hydrolysis yield alcohols, so the overall result of the reaction is the Markovnikov addition of H2O (hydration) across the double bond. This method is used for the industrial preparation of alcohols.

3. Addition of halogens
Halogens (chlorine and bromine only) add to alkenes at ordinary temperature, without exposure to UV light, to give vicinal dihalides. The order of reactivity is fluorine > chlorine > bromine > iodine.
CH2=CH2 + Br2 → CH2BrCH2Br
The red-brown colour of the bromine solution is discharged as colourless 1,2-dibromoethane is formed, so this reaction serves as a test for unsaturation.

4. Hydroboration-oxidation
Diborane adds to alkenes to give trialkyl boranes, which on oxidation give alcohols. The net addition is that of a water molecule, but it follows anti-Markovnikov orientation.

Oxidation
Alkenes can be oxidised by different reagents to give different products.
(a) Burning in excess air or oxygen gives carbon dioxide and water; the reaction is exothermic.
CH2=CH2 + 3 O2 → 2 CO2 + 2 H2O
(b) Baeyer's reagent: Baeyer's reagent is a cold, dilute and weakly alkaline solution of KMnO4. It oxidises alkenes to diols: two hydroxyl groups are introduced where the double bond was in the alkene. This reaction is called hydroxylation of the alkene.
(c) Hot alkaline KMnO4: Hot alkaline KMnO4 oxidises alkenes to carbonyl compounds. If a hydrogen atom is attached to a carbon atom of the double bond, that hydrogen is replaced by a hydroxyl group, so a carboxylic acid is formed on oxidation. Thus, by identifying the products formed on oxidation, it is possible to fix the location of the double bond in the alkene molecule.

Ozonolysis
Ozonolysis is a method of locating unsaturation in a hydrocarbon. Ozone reacts with alkenes (and also with alkynes) to form ozonides, which on reduction with zinc and water give aldehydes or ketones or both. By identifying the products it is possible to fix the position of the double bond in the alkene: if a hydrogen atom is attached to a carbon atom forming the double bond an aldehyde results, otherwise a ketone is formed.

18. What products are obtained when the following molecules are subjected to ozonolysis? (i) 1-pentene (ii) 2-pentene (iii) 2-methyl-2-butene (iv) ethene
19. Write the IUPAC names of the products obtained by the addition of HBr to hex-1-ene (i) in the absence of peroxide and (ii) in the presence of peroxide.
20. Write the IUPAC names of the products obtained by the ozonolysis of the following compounds: (i) pent-2-ene (ii) 3,4-dimethylhept-3-ene (iii) 2-ethylbut-1-ene (iv) 1-phenylbut-1-ene
21. An alkene 'A' on ozonolysis gives a mixture of ethanal and pentan-3-one.
Write the structure and IUPAC name of 'A'.
22. An alkene 'A' contains three C-C σ bonds, eight C-H σ bonds and one C-C π bond. 'A' on ozonolysis gives two moles of an aldehyde of molecular mass 44 u. Write the IUPAC name of 'A'.
23. Propanal and pentan-3-one are the ozonolysis products of an alkene. What is the structural formula of the alkene?

Hydrogenation
The addition of hydrogen is called hydrogenation. Alkenes add hydrogen to give alkanes when heated under pressure in the presence of a suitable catalyst such as finely divided nickel, platinum or palladium; in the absence of a suitable catalyst the hydrogenation reaction is extremely slow.

Polymerisation
Polymerisation is the process in which a large number of simple molecules combine under suitable conditions to form a large molecule, known as a macromolecule or polymer; the simple molecules are known as monomers. Alkenes undergo polymerisation in the presence of catalysts. In this process one alkene molecule links to another, as represented by the following reaction of ethene:
2 CH2=CH2 → -CH2-CH2-CH2-CH2- or (-CH2-CH2-)2
When ethene is heated in the presence of traces of oxygen at about 500-675 K under pressure, 'n' molecules of ethene take part in the reaction:
n CH2=CH2 → (-CH2-CH2-)n (polyethylene or polythene)
A variety of polymers are obtained by using substituted ethenes in place of ethene.

Uses of various polymers
Polythene is used in making electrical insulators and laboratory articles such as funnels, burettes and beakers. Polyvinyl chloride (PVC) is used in making plastic bottles, plastic syringes, raincoats, pipes, etc. Polystyrene is used in making household goods, toys and models. Polytetrafluoroethylene, also known as Teflon, is inert towards the action of chemicals and is used in making chemically resistant pipes and some surgical tubes; its high thermal stability and chemical inertness also make it suitable for non-stick cooking utensils, in which a thin layer of Teflon is coated on the interior of the vessel.

24. An alkene with molecular formula C7H14 gives propanone and butanal on ozonolysis. Write down its structural formula.

ALKYNES

The hydrocarbons having a carbon-carbon triple bond (C≡C) and the general formula CnH2n-2 are called alkynes. The simplest member of this class is ethyne, C2H2, which has a total of 3 σ bonds and 2 π bonds; the triple bond is made up of one σ and two π bonds. C2H2 is a linear molecule in which the C≡C bond has a bond strength of 823 kJ mol-1, stronger than the C=C bond of ethene (681 kJ mol-1) and the C-C bond of ethane (348 kJ mol-1). In the IUPAC system the suffix 'ane' of the corresponding alkane is replaced by 'yne'; the remaining rules of nomenclature are the same as for alkanes and alkenes. Ethyne and propyne each have only one structure, but there are two possible structures for butyne, (i) but-1-yne and (ii) but-2-yne; these two compounds differ in the position of the triple bond and are known as position isomers.

25. Write the structures of the different isomers corresponding to the fifth member of the alkyne series. What type of isomerism is exhibited by the different pairs of isomers?

Structure of the triple bond
Ethyne is the simplest molecule of the alkyne series (figure: orbital picture of ethyne showing (a) sigma overlaps and (b) pi overlaps). Each carbon atom of ethyne has two sp hybridised orbitals. The carbon-carbon sigma (σ) bond is obtained by the head-on overlap of two sp hybridised orbitals, one from each carbon atom.
The remaining sp hybridized orbital of each carbon atom undergoes overlapping along the internuclear axis with 1s orbital of each of the two hydrogen atoms forming two C – H sigma bonds. H – C – C bond angle is 180°. Each carbon has two unhybridised p-orbitals which are perpendicular to each other as well as to the plane of the C – C sigma bond. The 2p-orbitals of one carbon atom are parallel to 2p orbitals of other carbon atom, which undergo lateral or sideways overlapping to form two pi (p) bonds between two carbon atoms. Thus ethyne molecule consists of one C –C s -bond, two C – H s- bonds and two C – C p- bonds. The strength of CºC bond (bond enthalpy 823 kJ mol-1) is more than those of C = C (bond enthalpy 681 kJ mol-1) and C – C bond(bond enthalpy 348kJ mol-1). The CºC (133 pm) and C – C (154 pm) . The electron cloud between two carbon atoms is cylindrically symmetrical about the internuclear axis. Thus ethyne is a linear molecule. PREPARATION OF ALKYNES 1. From vicinal dihalides : Acetylene and its higher homologoues can be prepared by treatment of alcoholic alkali with vicinal dihalides. 2. By dehalogenation of tetrahalides From tetrahalides, the alkynes can be prepared by the action of zinc. 3. Synthesis from carbon and hydrogen Acetylene can be prepared by passing a stream of hydrogen through electric arc struck between carbon electrodes. 4. By electrolysis of potassium salt of fumaric acid 5. Industrial preparation : Acetylene is obtained by the action of water on calcium carbide(CaC2) . Calcium carbide is prepared by heating quick lime(CaO) with carbon at high temperature. CaO + 3 C ® CaC2 + CO CaC2 + 2 H2O ® H-CºC-H + Ca(OH) 2 6. Formation of higher alkynes Higher alkynes can be prepared by the action of alkyl halides on sodium acetylide. Sodium acetylide can be obtained from acetylene by the action of sodamide. H-CºC-H + NaNH2 ® H-CºC-Na + NH3 ethyne sodium acetylide H-CºC-Na + BrCH2CH3 ® H- CºC- CH2-CH3 + NaBr Physical properties of alkynes Alkynes have the following general properties. 1. State : Lower members in alkynes series are gases, while higher members are liquids and solids. 2. Colour : Alkynes have no colour. 3. Non-polar nature : Alkynes are non-polar in nature. Therefore, alkynes are soluble in non-polar(organic) solvents such as benzene. Alkynes are unsaturated compounds. Therefore, like alkenes, they are quite reactive. The most common type of reactions of alkyne are addition reactions. On addition, alkynes ultimately give saturated compound. Alkynes also undergo oxidation and polymerisation. 1. Hydrogenation : In the presence of a catalyst, hydrogen adds to alkynes ultimately give alkane. 2. Addition of Halogen acids : This adds on to alkynes to give a dihalide. The addition follows Markowinkoff’s rule. 3. Addition of Halogen : Halogen adds to alkynes to give halogen substituted alkanes. 4. Hydration : Alkynes react with water in the presence of mercuric sulphate and sulphuric acid to form aldehyde or ketone. When acetylene is bubbled through 40% sulphuric acid in the presence of mercuric sulphate(HgSO4) acetaldehyde is obtained. The reaction can be considered as the addition of water to acetylene. 5. Reaction with alcohols, hydrogen cyanide and carboxylic acids (vinylation) Ethyne adds a molecule of alcohol in the presence of alkali to give vinyl ether. With HCN , ethyne gives vinyl cyanide. Similarly, alkynes add acids in the presence of a Lewis acid catalyst or Hg2+ ions to give vinyl esters. 6. 
Oxidation : Alkynes can be oxidised to different products using different reagents and conditions of oxidation. (i) Burning in air : Acetylene burns in air with suity flame, emitting a yellow light. For this reason it is used for illumination. (ii) Burning in excess of air : Acetylene burns with a blue flame when burnt in excess of air or oxygen. A very high temperature (3000 K) is obtained by this method. Therefore, in the form of oxy-acetylene flame, it is used for welding and cutting metals. 2 H-C º C-H + 5 O2 ® 4 CO2+ 2 H2O + heat (iii) Degradation with KMnO4 : The oxidation of alkynes with strong alkaline potassium permanganate give carboxylic acids generally containing lesser number of carbon atoms. (iv) Ozonolysis : Alkynes react with ozone to give ozonides which are decomposed by water to form diketones and hydrogen peroxide. Diketones are oxidised by hydrogen peroxide to carboxylic acids by the cleavage of carbon-carbon bonds. 6. Self addition or Polymerisation Alkynes polymerise to give linear or cyclic compounds depending upon their temperature and catalyst used. However, these polymers are different from the polymers of alkenes as they are usually low molecular weight polymers. When acetylene is passed through red hot tube of iron or quartz, it trimerises to benzene. Similarly, propyne polymerises to form mesitylene. Polymerisation of acetylene produces linear polymer polyacetylene. It is high molecular weight conjgated polymer containing repeating units (-CH=CH- CH=CH -)n. Under proper conditions this material conducts electricity. The films of polyacetylene can be used as electrodes in batteries. Having much higher conductance than metal conductors, lighter and cheaper, batteries can be made from it. 26. How will you convert ethanoic acid into benzene ? ACIDIC NATURE OF ACETYLENE Acetylene forms salt like compounds because hydrogen atoms are slightly acidic. Therefore hydrogen atoms directly attached to carbon atoms linked by triple bond can be replaced by highly electropositive metals such as sodium, silver, copper etc. Salts of acetylenes are known as acetylides. (i) Sodium acetylide is formed when acetylene is passed over molten sodium. H-CºC-H + Na ® H-CºC-Na + ½ H2 ethyne sodium acetylide Sodium acetylide is also formed when acetylene is treated with sodamide (NaNH2). Sodamide is obtained by dissolving sodium metal in liquid ammonia. Na + NH3 ® NaN H2 + ½ H2 NaNH2 + H-CºC-Na ® Na-CºC-Na + NH3 Other alkynes can also react in a similar manner. NaNH2 + CH3 -CºC-H ® CH3 -CºC-Na + NH3 (ii) Silver acetilide is obtained as a white precipitate, when acetylene is passed through ammoniacal silver nitrate solution(Tollen’s reagent). H-CºC-H + 2 AgNO3 +2 NH4OH ® Ag-CºC-Ag+ NH4NO3 + 2 H2O silver acetylide (white ppt) (iii) Copper acetilide is obtained as a red precipitate when acetylene is passed through ammoniacal solution of cuprous chloride. H-CºC-H + Cu2Cl2 +2 NH4OH ®Cu-CºC-Cu+ NH4Cl + 2 H2O copper acetilide (red ppt) Alkynes form insoluble silver acetylide when (alkynes having acidic hydrogen) passed through ammoniacal silver nitrate (Tollen’s reagent). R-Cº C-H + Ag+® R-Cº C- Ag + H+ An alkyne having no hydrogen atom attached to carbon linked by a triple bond does not form acetylide e.g., CH3-CºC-CH3 (2-Butyne) as it lacks an acidic hydrogen(acetylinic hydrogen). Thus, salt formation by alkyne is shown by those alkynes which contain a triple bond at the end of the molecule (i.e., when triple bond is terminal group). 
Alkynes having a non-terminal triple bond do not yield acetylides. Therefore, this reaction is used for distinguishing terminal alkynes from non-terminal alkynes. Cause of acidic character Hydrogen attached to carbon atoms by sp-hybrid orbital is slighly acidic, the reason for acidic character is that sp-hybrid orbital has greater s-character than sp2 or sp3 hybrid orbitals. The s-character in sp3, sp2 and sp hybrid orbitals are 25, 33.3 and 50%respectively. An s-orbital tends to keep electrons closer to the nucleus than a p-orbital. It means that greater the share of s-orbital in the hybrid orbital, the nearer will be the shared pair of electrons to the nucleus of carbon atom. This makes the sp-hybrid carbon more electronegative than sp2 or sp3 hybridised carbon atom. Thus, the hydrogen attached through sp-hybid orbital acquires a slight positive charge. Therefore, it can be replaced by highly electropositive metals (such as sodium, copper, silver etc). The acidic character of hydrocarbons varies as : Alkane < Alkene < Alkyne However , alkynes are extremely weak acids. Compared to carboxylic acids like acetic acid, ethyne is 1020 times less acidic. Ethene is 1040 times less acidic than acetic acid. Test for alkane, alkene and alkynes Alkanes, alkenes and akynes can be distinguished from one another by the following tests. Alkanes do not decolourise bromine water. Baeyer’s reagent (alkaline KMnO4) remains unchanged on treating with alkanes. Alkenes and alkynes decolourise bromine water. It is used to distinguish alkanes from unsaturated compounds such as alkenes and alkynes. Alkenes and alkynes decolourise Bayer’s reagent. Therefore, this test is used for distinguishing alkenes from alkanes. 27. How will you separate propene from propyne ? The name alkadiene is often shortened as diene. These are unsaturated hydrocarbons having two carbon-carbon double bonds per molecule. The general formula of alkadiene is similar to alkynes CnH2n-2, hence these are isomeric with alkynes. Classification of Dienes The dienes are classified into three types depending upon the relative positions of the two double bonds. 1. Isolated dienes The double bonds are separated by more than one single Penta-1,4-diene Hexa - 1,5-diene 2. Cumulated dienes The double bonds between successive carbon atoms are called cumulated double bonds. E.g. Propa-1,2-diene (allene) Buta-1,2-diene (methylallene) 3. Cojugated dienes The dienes in which double bond is alternate with single bond are called conjugated dienes, e.g CH2=CH- CH = CH2 CH3- CH=CH-CH=CH-CH3 Relative stabilities of Dienes A conjugated diene is more stable as compared with non-conjugated dienes. The relative order of stability of diene is Cojugated diene > Isolated diene > Cumulated diene Exceptional stability of conjugated dienes can be explained in terms of orbital structure. Consider Buta-1,3-diene as an example of conjugated diene. All the four carbon atoms in Buta-1,3-diene are sp2 hybridised. Each carbon atom also has an unhybridised p-orbital which is used for p bonding. The hybrid orbitals of each carbon atom are used for the formation of s bonds. The p-orbital of C-2 can overlap equally with the p-orbitals of C-1 and C-3 . Similarly, the p-orbital of C-3 can overlap equally with the p-orbitals of C-2 and C-4 as shown in Fig. As a result the p-electrons in conjugated dienes are delocalised. 
Delocalisation of p - electrons in conjugated diens The delocalisation of p-electrons in conjugated dienes makes them more stable because now p-electrons feel simultaneous attraction of all the four nuclei of four carbon atoms. This type of delocalisation of p-electrons is not possible in isolated or cumulated dienes. The non-conjugated dienes behave exactly in the same way as simple alkenes except that the attacking reagent is consumed in twice the amount required for one double bond. But due to the mutual interactions of the double bonds, i.e., delocalisation of the electrons in conjugated dienes, their properties are entirely different. 1,3-Butadiene and HBr taken in equimolar amounts yield two products 3-Bromo-but-1-ene and 1-Bromo-but-2-ene through 1,2 and 1,4-addition respectively. The resonating structures of the intermediate carbocation , formed after addition of the electrophile, i.e., H+ , can add to the anion in two alternative ways (path a or path b ) to yield 1,2 and 1,4 addition products. 27. A conjugated alkadiene having molecular formula C13H22 on ozonolysis yielded ethyl methyl ketone and cyclohexanal. Identify the diene, write its structural formula and give its IUPAC name. The term aromatic (Greek ; aroma means fragrance) was first used in compounds having pleasant odour although structure was not known. Now the term aromatic is used for a class of compounds having a characteristic stability despite having unsaturation. These may have one or more benzene rings (benzanoid) or may not have benzene ring (non-benzanoid) . Benzanoid compounds include benzene and its drivatives having aliphatic side chains (arenes) or polynuclear hydrocarbons, e.g. naphthalene, anthracene etc. The nomenclature of aromatic compounds discussed in Unit 12. AROMATIC HYDROCARBONS (ARENES) Aromatic hydrocarbons or arenes are the compounds of carbon and hydrogen with at least one benzene type ring (hexagonal ring of carbons) in their molecules. We have hydrocarbons of benzene series (containing benzene rings) and so on. STRUCTURE OF BENZENE The molecular formula of benzene is C6H6 . It contains eight hydrogen atoms less than the corresponding parent hydrocarbon, i.e., hexane(C6H14). It took several years to assign a structural formula to benzene because of its peculiar properties. In 1865 Kekule suggested the first structure of benzene. In this structure, there is a hexagonal ring of carbon atoms distributed in a symmetrical manner, with each carbon atom carrying one hydrogen atom. The fourth valence of the carbon atom is fulfied by the presence of alternate system of single and double bond as shown. The above formula had many draw backs as described below : (i) The presence of three double bonds should make the molecule highly reactive towards addition reactions. But contrary to this, benzene behaves like saturated hydrocarbons. (ii) The carbon-carbon bond lengths in benzene should be 154 pm (C-C) and 134 pm (C=C) . It implies that the ring should not be regular hexagon but actually all carbon-carbon distances in benzene have been found to be 139 pm. 
(iii) Finally two isomers should result in a 1,2-disubstituted benzene as shown in Fig While Kekule formula could not explain the differences in properties between benzenes and alkenes based on this structure, he explained the lack of isomers as in figure by postulating a rapid interchange in the postion of the double bond as shown below: Orbital picture of Benzene According to orbital structure , each carbon atom in benzene assumes sp2 hybrid orbitals lying in one plane and oriented at an angle of 120º. There is one unhybridised p-orbital having two lobes lying perpendicular to the plane of hybrid orbitals for the axial overlap with 1s-orbital of hydrogen atom to form C- H sigma bond. The other two hybrid orbitals are used for axial overlap of similar orbital of two adjacent carbon atoms on either side to form C- C sigma bonds. The axial overlapping of hybrid orbitals to form C- H and C- C bonds have been shown in Fig. Sigma bond formation in benzene As is evident, the frame work of carbon and hydrogen atoms is coplanar with H-C-C or C-C-C bond angle as 120º. The unhybridised p-orbital on each carbon atom can overlap to a small extent with p-orbital of the two adjacent carbon atoms on either side to constitute pi bonds as shown in Fig. Side-wise orverlapping of p-orbitals The molecular orbitals containing pi-electrons spreads over the entire carbon skeleton as shown in Fig. Orbital picture of benzene The spreading of pi-electrons in the form of ring of pi electrons above and below the plane of carbon atoms is called delocalisation of pi-electrons. The delocalisation of pi-electrons results in the decrease in energy and hence accounts for the stability of benzene molecule. The delocalised structure of benzene accounts for the X-ray data (all C- C bond lengths equal) and the absence of the type of isomerism shown in Fig ( Page 18 ). Furthermore molecular orbital theory predicts that those cyclic molecules which have alternate single and double bonds with ( 4 n + 2 ) ( where n = 0, 1, 2, ….) electrons in the delocalised p-cloud are particularly stable and have chemical properties different from other unsaturated hydrocarbons. Resonance structure of Benzene Structure of benzene can also be explained on the basis of resonance. Benzene may be assigned following structures A and B . Structure A and B have same arrangement of atoms and differ only in electronic arrangement. Any of these structures alone cannot explain all properties of benzene. Resonance structure of benzene According to these structures , there should be three single bonds (bond length 154 pm) and three double bonds (bond length 134 pm) between carbon atoms in the benzene molecule. But actually, it has been found that all the carbon-carbon bonds in benzene are equivalent and have a bond length 139 pm. Structures A and B are known as resonating or cannonical structures of benzene. The actual structure of benzene lies somewhere between A and B may be represented as C referred to as resonance hybrid. To indicate two structures which are resonance forms of the same compound, a double headed arrow is used as shown in Fig (above). The resonance hybrid is more stable than any of the contributing (or canonical) structures. The difference between the energy of the most stable contributing structure and the energy of the resonance hybrid is known as resonance energy. In the case of benzene, the resonance hybrid (actual molecule) has 147 kJ/mol less energy than either A or B. Thus, resonance energy of benzene is 147 kJ/mol. 
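Written as an equation, the definition just given reads as follows (this is simply a restatement, in LaTeX form, of the definition and the 147 kJ/mol figure quoted above):

E_{\text{resonance}} \;=\; E_{\text{most stable contributing structure (A or B)}} \;-\; E_{\text{resonance hybrid}} \;\approx\; 147\ \mathrm{kJ\ mol^{-1}}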
It is this stabilization due to resonance which is responsible for the aromatic character of benzene.
STRUCTURAL ISOMERISM IN ARENES
Benzene forms a number of mono-, di- or poly-substituted derivatives by replacement of one, two or more hydrogen atoms of the ring by other monovalent atoms or groups. In many cases, the derivatives of benzene exist in two or more isomeric forms. The isomerism of these derivatives is discussed as follows. The molecule of benzene is symmetrical, and the six carbon atoms as well as the six hydrogen atoms occupy equivalent positions in the molecule. If one hydrogen atom is substituted by a monovalent group or radical (say a methyl group), the resulting monosubstitution product exists in one form only. The position assigned to the substituent group does not matter because of the equivalent positions of the six hydrogen atoms. Thus, we have only one compound having the formula C6H5X, where X is a monovalent group. The various positions in a monosubstituted derivative are, however, not equivalent with respect to the position already occupied by the substituent. For example, taking the position of the substituent as number 1, the other positions are as shown. A close examination of the structures reveals that: (i) positions 2 and 6 are equivalent and are called ortho (o-) with respect to position 1; (ii) positions 3 and 5 are equivalent and are called meta (m-) with respect to position 1; (iii) position 4 is called para (p-) with respect to position 1. Consider, for example, dimethylbenzene, (CH3)2C6H4, commonly known as xylene. There can be three xylenes depending upon the positions of the methyl groups, as shown below. Besides the three possible dimethylbenzenes, a fourth isomer, ethylbenzene, is also known. In the case of naphthalene, even monosubstituted compounds display positional isomerism, as in 1-methyl- and 2-methylnaphthalene.
AROMATICITY OR AROMATIC CHARACTER
The term aromatic was first used for a group of compounds having a pleasant odour. These compounds have properties which are quite different from those of the aliphatic compounds. The set of these properties is called aromatic character or aromaticity. Some typical properties of aromatic compounds are:
· They are highly unsaturated compounds, but do not give addition reactions easily.
· They give electrophilic substitution reactions very easily.
· They are cyclic compounds containing five-, six- or seven-membered rings.
· Their molecules are flat (planar).
· They are quite stable compounds.
The aromaticity of benzene is considered to be due to the presence of six delocalised π-electrons. The modern theory of aromaticity was given by Erich Hückel in 1931. According to this theory, for a compound to exhibit aromaticity, it must have the following properties:
· delocalisation of the π-electrons of the ring;
· planarity of the molecule. To permit sufficient or total delocalisation of the π-electrons, the ring must be planar to allow cyclic overlap of the p-orbitals.
HUCKEL RULE or (4n + 2) RULE
This rule states that for a compound to exhibit aromatic character, it should have a conjugated, planar cyclic system containing (4n + 2) (where n = 0, 1, 2, 3, …) delocalised π-electrons forming a cyclic cloud of delocalised π-electrons above and below the plane of the molecule. This is known as the Hückel rule of (4n + 2) π-electrons. Benzene, naphthalene, anthracene and phenanthrene are aromatic as they contain (4n + 2), i.e., 6, 10 and 14 π-electrons respectively, in a conjugated cyclic array. (A quick arithmetic check of these counts against the (4n + 2) rule is sketched below.)
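The (4n + 2) count can be checked mechanically, as the minimal sketch below illustrates. It checks only the electron count; it does not test the planarity and cyclic conjugation requirements that the rule also demands, and the function name is purely illustrative.

def satisfies_huckel_count(pi_electrons):
    """Return True if the pi-electron count equals 4n + 2 for some n = 0, 1, 2, ..."""
    return pi_electrons >= 2 and (pi_electrons - 2) % 4 == 0

# Counts quoted above: benzene (6), naphthalene (10), anthracene/phenanthrene (14);
# 8 is a 4n count and therefore fails the rule.
for count in (6, 10, 14, 8):
    print(count, satisfies_huckel_count(count))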
The cyclopentadiene and cyclooctatetrene are non-aromatic as instead of (4n+2) p-electrons, these have 4n p-electrons. Moreover they are non-planar. 28. Predict which of the following systems would be aromatic and PREPARATION OF BENZENE AND ITS HOMOLOGUES 1. From alkyne Alkynes polymerize at high temperatures to yield arenes e.g. benzene is obtained from ethyne. 2. Decarboxylation of aromatic acids In laboratory benzene is prepared by heating sodium benzoate with sodalime. 3. Reduction of Benzene diazonium salts In presence of hypophosphorus acid, benzene diazonium chloride is convered to benzene (diazo group is replaced by H) 4. Friedel Craft’s reaction Benzene can yield alkyl benzene by treating with alkyl halide in presence of anhydrous aluminium chloride. 5. Wurtz-Fittig reaction Arenes can be obtained by the action of sodium metal on a mixture of aryl halide and alkyl halide in ether 6. From Grignard reagents Arenes can be prepared by reacting Grignard reagent with alkyl halide, e.g. Aromatic hydrocarbons are usually colouless , insoluble in water but soluble in organic solvents. They are inflammable , burn with sooty flame and have characteristic odour. They are toxic and carcinogenic in nature. Their boiling points increase with increase in molecular mass. Aromatic hydrocarbons are unsaturated cyclic compounds. They are also known as arenes. Benzene and its homologoues are better solvents as they dissolve a large number of compounds. This is because the electron clouds makes it polar to some extent and therefore even polar molecules are attracted towards it. This helps in dissolving in them a large number of compounds. The reason for this inertness of benzene and its homologues is due to the presence of pi-electron clouds above and below the plane of the ring of carbon atoms. Therefore, nucleophilic species (electron rich species such as Cl-, OH-, CN- etc ) cannot attack the benzene ring due to repulsion between the negative charge on the nucleophile and delocalised pi-electron clouds. However, electrophiles (such as H+, Cl+, NO2+ etc) can attack the benzene ring and for this reason benzene and its homologues can be replaced by nitro(-NO2), halogen(-X), sulphonic acid group (-SO3H) etc. However, arenes also undergo a few addition reactions under more drastic conditions, such as increased concentration of the reagent, high pressure, high temperature, the presence of catalyst etc. ELECTROPHILIC SUBSTITUTION REACTIONS The chemical reaction which involves the replacement of an atom or group of atoms from organic molecule by some other atom or group with out changing the structure of the remaining part of the molecule is called substitution reaction. The new group which finds place in the molecule is called substituent and the product formed is referred to as substitution product. MECHANISM OF ELECTROPHILIC AROMATIC SUBSTITUTION According to experimental evidences SE ( S = substitution ; E = electrophilic) reactions are supposed to proceed via the following three steps : · Generation of electrophile · Formation of carbocation intermediate · Removal of proton from the carbocation intermediate (a) Generation of electrophile E+ During chlorination, alkylation and acylation of benzene , anhydrous AlCl3 , being a Lewis acid helps in generation of electrophile Cl+ , R+ , RC+O (acylium ion) respectively by combing with the attacking reagent. 
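The equilibria just described for halogenation, alkylation and acylation can be written as shown below. This is the standard textbook formulation; the explicit equations are not given in the text itself.

\mathrm{Cl_2 + AlCl_3 \;\rightleftharpoons\; Cl^{+} + [AlCl_4]^{-}}
\mathrm{R{-}Cl + AlCl_3 \;\rightleftharpoons\; R^{+} + [AlCl_4]^{-}}
\mathrm{RCOCl + AlCl_3 \;\rightleftharpoons\; RCO^{+}\ (\text{acylium ion}) + [AlCl_4]^{-}}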
In the case of nitration, the electrophile, the nitronium ion NO2+, is produced by transfer of a proton (from sulphuric acid) to nitric acid in the following manner. In the process of generation of the nitronium ion, sulphuric acid serves as an acid and nitric acid as a base. Thus it is a simple acid-base equilibrium.
(b) Formation of the carbocation (arenium ion)
Attack of the electrophile results in the formation of a σ-complex or arenium ion in which one of the carbon atoms is sp3 hybridised. The arenium ion is stabilized by resonance. The σ-complex or arenium ion loses its aromatic character because the delocalization of electrons stops at the sp3 hybridised carbon.
(c) Removal of the proton
To restore the aromatic character, the σ-complex releases a proton from the sp3 hybridised carbon on attack by [AlCl4]- (in the case of halogenation, alkylation and acylation) or [HSO4]- (in the case of nitration).
1. Halogenation
Chlorine or bromine reacts with benzene in the presence of Lewis acids such as the ferric or aluminium salts of the corresponding halogens, which act as catalysts, to give chlorobenzene or bromobenzene. The function of Lewis acids like AlCl3, FeCl3, FeBr3 etc. is to carry the halogen to the aromatic hydrocarbon; hence they are called halogen carriers. Fluorine is very reactive and hence this method is not suited for the preparation of fluorobenzene. Iodobenzene is obtained by heating benzene with iodine in the presence of an oxidising agent such as nitric acid or mercuric oxide.
2. Sulphonation
The replacement of a hydrogen atom of benzene by a sulphonic acid group (-SO3H) is known as sulphonation. The reaction is carried out by treating an arene with concentrated sulphuric acid containing dissolved sulphur trioxide, or with chlorosulphonic acid.
3. Nitration
A nitro group (-NO2) can be introduced into the benzene ring using nitrating mixture (a mixture of concentrated sulphuric acid and concentrated nitric acid).
4. Friedel-Crafts reaction
On treatment with an alkyl halide or acid halide (acyl halide) in the presence of anhydrous aluminium chloride as catalyst, benzene forms an alkyl or acyl benzene as described below. The reaction is known as Friedel-Crafts alkylation or acylation respectively.
Arenes can be oxidised to different products depending upon their structure and the conditions of the reaction.
(i) Combustion: Aromatic hydrocarbons burn with a luminous and sooty flame.
2 C6H6 + 15 O2 → 12 CO2 + 6 H2O
C6H5CH3 + 9 O2 → 7 CO2 + 4 H2O
(ii) Side chain oxidation: Arenes with a side chain, on oxidation with strong oxidising agents such as alkaline potassium permanganate, give carboxylic acids. However, a hydrocarbon without a side chain remains unaffected. If there are more carbon atoms in the side chain, then on oxidation all the side-chain carbon atoms are oxidised to carbon dioxide except the one that is directly attached to the aromatic ring.
(iii) Catalytic oxidation: Benzene can be catalytically oxidised to a non-aromatic compound, maleic anhydride. When benzene is heated with excess of air at about 800 K in the presence of vanadium pentoxide (V2O5), it gives maleic anhydride.
Addition Reactions of Arenes
i. Addition of chlorine
Benzene and its homologues undergo some addition reactions similar to alkynes. However, extremely drastic conditions are required for carrying out addition reactions of arenes. For example, benzene can be chlorinated in the presence of sunlight to form benzene hexachloride, BHC.
ii. Addition of hydrogen
Hydrogen adds on to benzene when heated (475 K) under pressure in the presence of a nickel catalyst to give cyclohexane (hexahydrobenzene). Similarly, toluene gives methylcyclohexane (hexahydrotoluene).
iii) Addition of ozone Benzene slowly reacts with ozone to form triozonide. The triozonide on hydrolysis with water gives glyoxal. ORINENTATION IN BENZENE RING The arrangement of substituents on the benzene ring is termed as orientation. The term is often used for the process of determining the position of the substituents on the benzene ring. Whenever substitution is made in the benzene ring , the substituent can occupy any position –all position being equivalent. The position taken by the second substituent depends upon the nature of the substituent already present in the ring. In other words, the substituent already present on the ring directs the incoming substituent. This is called directive influence of the substituents. Depending up on the directive influence of the substituent already present in the ring, the substituents are classified into following groups. Ortho and para directing groups The substituents that direct the incoming substituent to ortho (o) and para (p) positions relative to theirs are called ortho and para directing groups. The ortho and para directing substituents permit the electrophilic substitution at the ortho and para positions. If the substituent S is an ortho and para directing, then the overall reaction may be described as : The proportion of the ortho and para disubstituted products depends upon the reaction conditions. The ortho and para directing substituents are arranged in the decreasing directing influence as follows: The ortho and para directing substituents have ‘electron-donating influence’ on the aromatic ring. Thus ortho and para directing substituents increase the electron density on the ring and therefore are called activating groups (or activators). Activating groups enhance the rate of reaction. Ortho and para directing but deactivating groups The only exception to the above rule is the halogen substituents. The halogens are ortho and para directing but deactivate the benzene ring relative to benzene. For example, if we carry out nitration of toluene, a mixture of ortho and para nitro toluene is formed. Meta directing Groups The substituents that direct the incoming substituents to meta(m-) relative to theirs are called meta directing groups. The meta directing groups permit the electrophilic substitution at meta position. If S is the meta directing group, then the overall reaction may be described as : The meta-directing substituents are arranged in the decreasing directing influences as follows : The meta directing substituents have ‘electron-withdrawing influence ‘ on the benzene ring. Thus meta directing substituents decrease the overall electron density on the ring and are therefore called deactivating groups(or deactivators) . The deactivating groups decrease (or lower) the rate of reaction. For example, nitration of benzoic acid produces m-nitro benzoic acid. Examples of Directing influence of substituents It is seen that : · -OH (phenolic) group is o- and p-directing. · -NO2 group is meta-directing The o- and p-directing influence of –OH group in phenol Phenolic ( -OH) group has lone pairs of electrons on its oxygen atom. The lone pairs of electrons on the O atom of the –OH group interact with p-electrons of the benzene ring and give rise to various resonance forms as shown below. This shows that the electron density is more concentrated on the two ortho- and para- positions. The electrophilic substitution takes place at these positions. 
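As a concrete illustration of this directing effect (a standard example, not worked out in the text itself), nitration of phenol with dilute nitric acid gives mainly the ortho- and para-substituted products:

\mathrm{C_6H_5OH + HNO_3\ (dil.) \;\longrightarrow\; \mathit{o}\text{-}O_2N{-}C_6H_4{-}OH \;+\; \mathit{p}\text{-}O_2N{-}C_6H_4{-}OH \;+\; H_2O}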
The m-directing influence on –NO2 group The –NO2 group is a meta directing and deactivating group. The more electronegative O atoms in–NO2 group withdraw electronic density from N atom and places a slight positive charge on it. This atom pulls electrons from the benzene ring and gives rise to various resonance forms as shown below. resonance in nitrobenzene These resonance structures show that the electron density at o- and p-positions is generally reduced. As a result , electrophilic attack is at meta-position. The o- and p- directing but deactivating effect of halogen atom Halogen atoms are o- and p- directing but deactivating substituents. This anomalous behaviour of halogen atoms (when present in the ring) is due to two opposing effects operating at the same time. A halogen substituent , due to its strong electronegative character , pulls electron from the ring due to inductive effect. This decreases the electron density on the ring. Due to the availability of lone-pairs of electrons, the halogen atom releases electrons to the ring giving rise to various resonance structures. Resonance structures of chlorobenzene Due to poor 2p (C) - 3p (Cl) ovelap , the halogen atom of the ring is not a good electron releasing substituent. As a result , in the case of halogen substituent , the inductive effect predominates and the benzene ring gets deactivated despite o- and p- directing effect of halogen substituent. POLYNUCLEAR AROMATIC HYDROCARBONS Polynuclear aromatic hydrocarbons contain more than one benzene rings and have two carbon atoms shared by two or three aromatic rings. Anthracene and phenanthrene have two pairs of carbon atoms shared by two two rings, each pair is shared by a different pair of rings. Coal tar is the main source of naphthalene (6-10%) , anthracene (1%) and phenanthrene. Naphthalene is obtained from the middle oil fraction (b.p 443 – 503 K) of coal tar by cooling. Crude naphthalene is washed successively with dil H2SO4 , sodium hydroxide and water. The dried sample is then purified by sublimation. Naphthalene is used as moth-balls to protect woollenns. It is also used for the manufacture of phthalic anhydride, 2-naphthol, dyes etc. Anthracene is obtained by cooling green oil fraction (b.p. 543-633K) of coal tar distillation. The crude sample of anthracene is purified by washing successively with solvent naphtha (it removes phenanthrene) and pyridine. Phenanthrene is obtained from its solution in solvent naphtha (obtained during purification of crude anthracene crystals) by evaporation. CARCINOGENICITY AND TOXICITY All chemical substances are believed to be harmful in one way or other. On finding that prolonged exposure to coal tar could cause skin cancer, it was discovered that the high boiling fluorescent fraction of coal tar was responsible for causing cancer. This fraction of coal tar was found to contain 1,2-benzanthracene. Later some more compounds were also found to be carcinogenic. The names and formulae of some carcinogenic compounds are given below. There is no rule available so far to predict the carcinogenic activity of any hydrocarbon or its derivatives. However, the number of groups like –CH3, -OH, -CN, -OCH3 etc. has been found to influence carcinogenic activity of compounds. 29. Bring out the following conversions : (i) Methane to ethane (ii) Ethyne to methane (iii) Ethane to ethene (iv) Ethane to butane 30. Suggest a method to separate a mixture of ethane, ethene and ethyne. 31. Describe a method to distinguish between ethene and ethyne. 32. 
What is Grignard reagent ? How it is prepared ? 33. How is propane prepared from Grignard reagent ? 34. Arrange the following in the increasing order of boiling points : Hexane, heptane, 2-Methylpentane, 2,2-Dimethyl pentane 35. How would you obtain : i) ethene from ethanol ii) ethyne from ethylene dibromide 36. Write equations for the preparation of propyne from ethyne. 37. How are the following conversions carried ? (i) propene to propane (ii) ethyne to ethane (iii) ethanol to ethene (iv) sodium acetate to methane (v) benzene to nitrobenzene 38. How will you convert benzene to : (v) Benzoic acid 39. What happens when : (i) 1-Bromopropane is heated with alcoholic KOH. (ii) 2-Propanol is heated with alumina at 630 K. (iii) Benzene is treated with a mixture of Con. Sulphuric acid and nitric acid. (iv) Ethene is treated with an alkaline solution of cold KMnO4. 40. Write chemical equations for combustion reaction of the following hydrocarbons ? i) butane ii) pentene iii) hexyne iv) Toluene 41. Draw the cis and trans structures of hex-2-ene. Which isomer will have higher b.p and why ? 42. Why is benzene extra ordinarily stable though it contains three double bonds ? 43. Explain why the following systems are not aromatic ? 44. How will you convert benzene into : i) p-nitrobromobenzene ii) m-nitrochlorobenzene iii) p-nitrotoluene iv) acetophenone 45. In the alkane H3CCH2C(CH3)2CH2CH(CH3)2 , identify 1°, 2° , 3° carbon atoms and give the number of H atoms bonded to each one of these. 46. Addition of HBr to propene yields 2-bromopropane, while in the presence of benzoyl peroxide, the same reaction yields 1-bromopropane. Explain and give mechanism. 47. Write ozonolysis products of 1,2-dimethylbenzene (o-Xylene). How does the result support Kekule structure for benzene ? 48. Arrange benzene, n-hexane and ethyne in decreasing order of acidic behaviour. Also give reason for this behaviour. 49. Why does benzene undergo electrophilic substitution reactions easily and nucleophilic substitutions with difficulty ? 50. How would you convert the following compounds into benzene ? i) ethyne ii) ethane iii) hexane 51. Write the structures of all the alkenes which on hydrogenation give 2-methylbutane. 52. Arrange the following set of compounds in order of decreasing relative reactivity with electreophile E+, (a) chlorobenzene , 2,4-dinitrochlorobenzene , (b) toluene , p-CH3C6H4-NO2 , p-O2N-C6H4-NO2 53. Out of benzene, m-dinitrobenzene and toluene which will undergo nitration most easily and why ? 54. Suggest the name of a Lewis acid other than anhydrous aluminium chloride which can be used during ethylation of benzene. 55. Why Wurtz reaction is not preferred for the preparation of alkanes containing odd number of carbon atoms ? Illustrate your answer by taking one example. 1. Why carbon forms a large number of compounds ? Write a note on optical isomers of tartaric acid. 2. Bring out the following conversions : (i) Methane to ethane (ii) Ethane to ethane (iii) Ethyne to methane (iv) Ethane to butane 3. Suggest a method to separate a mixture of ethane, ethene and ethyne. 4. Describe a method to distinguish between ethene and ethyne. 5. What is Grignard reagent ? How it is prepared ? 6. How is propane prepared from Grignard reagent ? 7. Arrange the following in the increasing order of boiling points : Hexane, heptane, 2-Methylpentane, 2,2-Dimethyl pentane 8. How would you obtain : (i) ethene from ethanol (ii) ethyne from ethylene dibromide 9. Write equations for the preparation of propyne from ethyne. 10. 
How are the following conversions carried ? (i) propene to propane (ii) ethyne to ethane (iii) ethanol to ethene (iv) sodium acetate to methane (v) benzene to nitrobenzene 11. How will you convert benzene to : (iii) Benzoic acid 12. What happens when : (i) 1-Bromopropane is heated with alcoholic KOH. (ii) 2-Propanol is heated with alumina at 630 K. (iii) Benzene is treated with a mixture of Con. Sulphuric acid and nitric acid. (iv) Ethene is treated with an alkaline solution of cold KMnO4. 13. What is coal ? 14. What are the natural sources of hydrocarbons ? 15. What is petroleum ? 16. Compare the composition of coal and petroleum. 17. Give the origin of coal. 18. Give the origin of petroleum. 19. How are aromatic hydrocarbons obtained from coal ? 20. What do you understand by pyrolysis and discuss it with coal 21. Give a brief account of petroleum refining. Name the various useful products obtained from it. 22. What is straight-run gasoline ? Describe the principle of obtaining straight-run gasoline from petroleum. 23. Explain the following processes: 24. Explain the term ‘knocking’ What is the relationship between the structure of a hydrocarbon and knocking ? 25. Explain the term ‘knocking’. A sample of petrol produces, the same knocking as a mixture containing 30% n-heptane and 70% iso-octane. What is the octane number of sample. 26. Describe different methods to improve the quality of a fuel used in gasoline engine. 27. How can you obtain aliphatic hydrocarbon from coal ? 28. Describe two methods by which petroleum can be obtained artificially from coal. 29. Write a note on synthetic petrol. 30. Discuss various methods for laboratory preparation of alkanes. 31. How can you obtain alkane from (I) Unsaturated hydrocarbons (ii) alkyl halides (iii) carboxylic acids 32. Describe the laboratory preparation of methane. 33. Decribe various methods for laboratory preparation of alkenes. 34. How can alkenes be prepared from : (i) alcohols (ii) alkyl halides ? 35. Decribe the laboratory preparation of ethene ? 36. Give the methods of preparation of ethyne. 37. Describe the laboratory preparation of acetylene. 38. Give the general methods of preparation of higher alkynes. 39. Give the important chemical reactions of alkanes. 40. Give the important chemical properties of alkenes. 41. Give the important chemical properties of alkynes. 42. Give important uses of (i) ethene (ii) ethyne 43. Write a note on halogenation of alkanes. 44. Give the addition reactions of benzene. 45. Discuss the halogenation of benzene. 46. What is sulphonation ? Discuss the sulphonation of benzene. 47. Describe a method to distinguish between ethene and ethyne. 48. Why does ethene decolourise bromine water ; while ethane does not do so ? 49. Give one example each of (i) an addition reaction of chlorine (ii) substitution reaction of chlorine 50. Explain the term ‘polymerisation’ with two examples. 51. Give one chemical equation in which chlorine adds to a hydrocarbon by substitution. 52. What is Grignard reagent ? How is propane prepared from a Grignard reagent ? 53. Give the reason for the following : (i) The boiling points of hydrocarbons decrease with increase in branching. (ii) Unsaturated compounds undergo addition reactions. 54. Account for the following : (i) Boiling points of alkenes and alkynes are higher than the corresponding alkanes. (ii) Hydrocarbons with odd number of carbon atoms have low melting points than those with even number of carbon atoms. 
(iii) The melting points of cis-isomer of an alkene is lower than that of trans-isomer. 55. Why does acetylene behave like a very weak acid ? 56. Acetylene is acidic in character. Give reason. 57. Write a reaction of acetylene which shows its acidic character. 58. What is Baeyer’s test ? Give its utility. 59. What happens when bromine water is treated with : (i) ethylene (ii) acetylene ? What is its utility ? 60. Explain the terms (i) acetylation (ii) alkylation 61. Write notes on (i) Friedel-Craft’s reaction (ii) Addition reaction. (iii) Markowinkoff’s reaction. 62. Explain the following with examples (i) Wurtz reaction (ii) Kolbe’s electrolytic method 63. Write the equations for the preparation of propyne from acetylene. 64. How are the following conversions carried out ? (i) Propene to propane. (ii) acetylene to ethane (iii) Ethanol to ethene (iv) Methane to ethane (v) Propene to 2-Bromopropane (vi) Methane to tetrachloromethane 65. How will you convert : (i) Acetylene into acetaldehyde. (ii) Methane to ethane (iii) acetic acid to methane. (iv) ethene to ethanol. (v) acetylene to acetic acid. (vi) 2-Chlorobutane to 1-butene. (vii) Ethylene into glyoxal. (viii) Phenol to benzene 66. How will you convert : (i) Ethane to ethene (ii) Ethyl iodide to ethane (iii) Ethyl iodide into butane (iv) Ethyl alcohol into ethyne (v) Propyl chloride to propene 67. What happens when : (i) 1-Bromopropane is heated with alcoholic KOH. (ii) 2-Propanol is heated with alumina at 630 K (iii) Benzene is treated with a mixture of Con. Sulphuric acid and nitric acid. (iv) Ethane is treated with alkaline pot. Permanganate soln (cold). (v) benzene is treated with bromine in presence of aluminium bromide as catalyst. 68. What happens when : (i) Ethyl bromide is treated with alcoholic KOH. (ii) Ethylene dibromide is treated with zinc dust. (iii) Propene reacts with water in presence of a mineral acid. (iv) Ethylene is passed through alkaline KMnO4.Acetylene is hydrated in presence of mercuric sulphate and dil Sulphuric acid. (v) Ethanol is heated with Con. Sulphuric acid. (vi) Ethene is passed through Baeyer’s reagent (vii) Ethyne is passed through ammoniacal silver nitrate. (viii) Methyl bromide is treated with sodium. (ix) Ethanol is treated with HI acid. (x) Ozone is passed in ethylene in an organic solvent. 69. Use Markowinkoff’s rule to predict the product of reaction of (i) HCl with CH2C(Cl)=CH2 (ii) HCl with CH2CH=C(CH3) 2 70. Write chemical equation describing the general mechanism for electrophilic substitution in benzene ring. 71. Why are o- and p-directing groups called activating groups, whereas m-directing groups are deactivating groups ? 72. Name polynuclear hydrocarbon obtained from coal tar. 73. Name some carcino genic compounds. 74. What effect does branching of alkane chain has its boiling point ? 75. What happens when : i) 1-Bromopropane is heated with alcoholic KOH. ii) 2-Propanol is heated with alumina at 630 K. iii) Benzene is treated with a mixture of Con. Sulphuric acid and nitric acid. iv) Ethene is treated with an alkaline solution of cold KMnO4.
http://yourchemistrymaster.blogspot.jp/2009/11/unit-13-hydrocarbons.html
Compared to other natural hazards (tropical storms, floods, droughts, etc.), destructive tsunamis occur relatively rarely. But the recent tsunami in the Indian Ocean dramatically showed what can happen when tsunami waves triggered by a major earthquake reach coastal areas without any early warning. The highly energetic tsunami waves struck the coastal areas, devastating everything in their path. The coastal population lost everything; most of the poorly built houses could not withstand the massive flood, or they were destroyed by floating debris. Because the flooding reached several kilometres inland in some areas, wide districts are affected by salinity. Crops, soil and wells for drinking water are contaminated with salt water; it will take years until they can be used again.
What is a tsunami? Tsunamis (Japanese for "harbour wave") are a series of very large waves with extremely long wavelength; in the deep ocean, the length from crest to crest may be 100 km or more, while the height is only a few decimetres or less. That is why tsunamis cannot be felt aboard ships, nor can they be seen from the air in the open ocean. They are generated by any rapid, large-scale disturbance of the sea. The waves can travel away from the triggering source at speeds exceeding 800 km/h over very long distances. They can be extremely destructive and damaging when they reach the coast, because when a tsunami enters the shallower water of coastal areas the velocity of its waves decreases and the wave height therefore increases. In shallow waters a large tsunami can crest to heights exceeding 30 m, or the water level can rise by several tens of metres in a very short time.
Most tsunamis, including the most destructive ones, are generated by large and shallow earthquakes which usually occur near geological plate boundaries, or fault lines, where geological plates collide. When the seafloor abruptly deforms, the sudden vertical displacement over large areas disturbs the ocean's surface, displaces water, and generates destructive tsunami waves. The main factor determining the initial size of a tsunami is the amount of vertical sea floor deformation which, in turn, is controlled by the earthquake's magnitude, depth, and fault characteristics. Parameters which influence the size of a tsunami along the coast are the shoreline and bathymetric configuration, the velocity of the sea floor deformation, the water depth near the earthquake source, and the efficiency with which energy is transferred from the earth's crust to the water column. Usually, it takes an earthquake with a Richter magnitude exceeding 7.5 to produce a destructive tsunami.
Volcanic eruptions, landslides or asteroid impacts can also trigger a tsunami, but much less frequently. Even so, one of the largest and most destructive tsunamis was generated on August 26, 1883 by the eruption of Krakatoa (Indonesia). Major earthquakes are suspected to cause underwater slides or slumps of sediment. It is interesting to note that the largest tsunami wave ever observed was triggered by a rockfall in Alaska on July 9, 1958. A huge block of rock (about 40 million cubic metres) fell into the sea, generating an enormous wave, but the tsunami's energy diminished rapidly away from the source and it was barely registered by tide gauge stations.
Tsunamis can be generated in all parts of the world's oceans and inland seas. Because the majority of tsunamis are triggered by submarine earthquakes, most occur in the Pacific Ocean.
The Pacific Ocean is mainly bounded by subducting geological plates, a region also called the "ring of fire". Even if not very frequent, destructive tsunamis have also been generated in the Atlantic Ocean (Portugal, 1755) and the Indian Ocean (Sumatra, 2004). For further information see also the global seismic hazard map.
How tsunami waves travel across the ocean
Once a disruption of the ocean floor has generated a tsunami, the waves travel outward from the source – similar to the ripples caused by throwing a rock into a pond. The wavelength and the period of the tsunami waves depend on the generating source. A high-magnitude earthquake along a long fault line will cause a greater initial wavelength (around a hundred kilometres) and period (from 5 to 90 minutes), similar to what a huge landslide could generate. The deeper the water, the faster the tsunami wave will travel. In deep oceans, waves can travel at speeds of up to 800 km/h and lose very little energy while travelling. (A rough numerical illustration of this depth-speed relationship is given a little further below.) The great tsunami waves generated in 1960 in Chile reached Japan, 16,800 km away, in less than 24 hours. In many cases, despite tsunami waves travelling very fast, people living in high-risk coastal areas can be warned if adequate communication structures are established and the people are aware of the risks they face.
How tsunamis behave as they approach land
The speed of the tsunami is related to the depth of the water. As the water depth decreases, the speed of the tsunami declines. Because the total energy of the tsunami is largely conserved, this slowing leads to a growth in wave height. Tsunami waves will, however, normally not reach the coast as a huge wall of water. A tsunami may appear as a rapidly rising or falling tide, a series of breaking waves, or even a bore. Reefs, bays, entrances to rivers, undersea features and the gradient of the slope of the beach all help to modify the tsunami as it approaches the shore. The rise of the water level on shore varies; in extreme cases the water level can rise to more than 15 metres for tsunamis of distant origin and over 30 metres for a tsunami generated near the earthquake's epicentre. The maximum vertical height reached onshore above sea level, called the run-up height, can thus be on the order of 30 metres. Tsunamis consist of a series of waves, the first of which may not be the largest. Tsunamis have great erosional potential. The flooding of an area can extend inland by many hundreds of metres, covering large expanses of land with water and debris. Flooding tsunami waves tend to carry loose objects as well as people out to sea when they retreat.
What is a Tsunami Early Warning System?
Early warning is much more than just a prediction. PPEW defines a complete and effective early warning system as a package of four elements, spanning knowledge of the risks faced through to preparedness to act on early warnings. Strong linkages between the four elements are essential. Therefore the major players concerned with the different elements need to meet regularly to ensure they understand what all of the other components and parties need from them, and to agree on specific responsibilities throughout all four elements. Key activities of all types of early warning systems include: (i) construction of risk scenarios; (ii) improvements to the early warning system itself by adjusting it according to data and analysis from studies of past events; (iii) development and publishing of manuals; (iv) dissemination of information; and (v) practicing and testing of operational procedures such as evacuations.
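Before turning to the institutional requirements below, a brief numerical aside on the wave physics described earlier. The deep-ocean speeds and the slow-down in shallow water quoted above are consistent with the standard shallow-water approximation v ≈ √(g·d). The sketch below uses that approximation; the depth values are illustrative assumptions, not figures from the text.

import math

def tsunami_speed_kmh(depth_m, g=9.81):
    """Shallow-water approximation: wave speed v = sqrt(g * d), converted to km/h."""
    return math.sqrt(g * depth_m) * 3.6

# Illustrative depths (assumed): open ocean vs. coastal shelf vs. near shore
for depth in (5000, 4000, 200, 10):
    print(f"depth {depth:>5} m  ->  ~{tsunami_speed_kmh(depth):.0f} km/h")

For a 5,000 m deep ocean this gives roughly 800 km/h, matching the figure quoted above, while near the shore the speed drops to a few tens of km/h, which is why the wave height grows.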
All these activities need to have a solid base of political support, institutional responsibility, availability of trained people as well as necessary laws and regulations. Early warning systems are most effective when established and supported as a matter and when preparedness to respond is engrained in society. The same basic factors are also valid for tsunami early warning systems. When designing an effective early warning system, the following four elements have to be considered because failure in any one part can Prior knowledge of the risks faced by communities Risks arise from both the hazards and the vulnerabilities that are present; therefore, we need to ask what the patterns and trends in these factors are. Several activities have to be undertaken to gather knowledge of the communities and localities at risk. exposure and vulnerability maps and hazard databases reevaluate community vulnerability and exposure use planning and strategies (no further development, redevelopment, open space uses such as parks and agriculture, keep development at a minimum level in hazard-prone areas) work such as reinforcement of buildings, dams, walls, drainage, - Slowing techniques involves creating friction that reduces the destructive power of waves. (Forests, ditches, - Steering techniques guide the force of tsunamis away from vulnerable structures and people by using angled walls and ditches - Blocking walls include compacted terraces and berms. Tsunami protection wall, Japan. concentration on critical facilities (fire stations, power substations, hospitals, sewage treatment plants) is needed. These key facilities should not be located in inundation zones. Relocation of these types of facilities out of inundation areas is necessary, however, if the location within the hazardous area is unavoidable than the buildings have to be designed or retrofitted to survive tsunami damage. building codes for buildings constructed in exposed/hazardous areas – ensure that these codes and standards address the full range of potential hazards (multi hazard approach). lines and planes have to be in place and horizontal evacuation have to be considered of hazard information into planning processes of comprehensive risk management policies monitoring and warning service for these risks It is essential to determine whether the right factors are being monitored and if accurate warnings could be, in fact, generated in a timely fashion. - Around the clock operational capability and well trained rapid response unites and location assessment studies of experiences when establishing a new early warning system equipment appropriate to local circumstances and well maintained staff to handle the instruments in a proper way of measurement instruments and facilities education for the staff to keep up-to-date and cooperation among relevant monitoring centres and communication channels/lines with the global seismic and communication channels/lines with tide gauge stations of understandable warnings to those at risk A critical issue regarding early warning systems is that the warnings reach those at risk and that the people at risk understand them. The warnings need to deliver proper information in order to enable proper responses. An additional requirement for an effective evacuation/warning system is continuous public education. Without education a system cannot be totally effective no matter how expensive or sophisticated it is. 
To design an effective and efficient system, all groups that take part in the notification process (emergency managers, media, etc) should be involved in the planning and implementation of the system. This is to ensure that all aspects of the systems are considered. Tsunami siren, Hawaii dissemination system has to be adjusted to incorporate factors such as the size of the area at risk (compact/spread out), its location (harbour/beach), people at risk (retirees/transit population/tourists), financial resources of the community and existing notification systems. at risk need to be equipped with the necessary instruments to receive and further disseminate the warnings. Some of these - Sirens: good coverage including isolated areas, controllable from a central point for rapid notification, low maintenance but high cost, vulnerable to sand and salt. The siren’s meaning may not be recognized or ignored. - Telephones: good coverage, audio component (better understanding of the purpose of the warning), tailored messaging but proximity to the instrument necessary, expensive if rarely used, cell phones beyond coverage area (??), direct dial only (motels, offices excluded) problems in serving large - Radio: easy to use, widespread, mobile, inexpensive, audio component, rapidly transmitted message, but coverage limited to those with radios, not all coastal areas are covered, high rate of false alarms, radio must be on at all times. Alert System (EAS): (Example from the US) wide coverage, can broadcast evacuation notice, inexpensive, consistent and ability to rapidly send message, but radio/television must be switched on, not all radio/TV stations are staffed 24 hours, does not work on satellite TV, power dependent (some area no coverage) - Pagers: inexpensive if used for specific audience but coverage limited - Billboards: simple, always available, multicultural but often a victim of - Aircraft: reaches remote areas, reach accessible areas which are not covered by other means, complements other systems but limited in number and coverage, slow response. systems in place for local and distant tsunamis. and preparedness to act Communities have to understand their risks, they need to respect the warning service, and they have to know how to react if a warning is - Implement effective information and education programmesMaintain the programmes over a long term building/ training within the communities - Curriculum for schools located in districts at risk - Training and regular drill of emergency situation - Plan for evacuation: horizontal evacuation (moving people to more distant locations or higher ground) vertical evacuation (moving people to higher floors in buildings) - Inventory of buildings (potential for vertical evacuation) - Identify specific buildings to serve as vertical shelters - Agreements with house owners - Maintain communities interest in natural hazards, annual events (tsunami week, integration into social life) For further reading please see: Designing for Tsunamis: Seven Principles for Planning and Designing for Tsunami Hazards, Richard Eisner and others, 2001. US National Tsunami Hazard Mitigation Program. http://www.prh.noaa.gov/itic/library/pubs/online_docs/Designing_for_Tsunamis.pdf Tsunami Warning Systems and Procedures: Guidance for Local Officials Oregon Emergency Management and Oregon Department of Geology and Mineral Industries, 2001. http://www.prh.noaa.gov/itic/library/pubs/online_docs/SP%2035%20Tsunami%20warning.pdf
http://www.unisdr.org/2006/ppew/tsunami/what-is-tsunami/backinfor-brief.htm
Pioneering the Space Frontier
The Report of the National Commission on Space

The Universe is the true home of humankind. Our Sun is only one star in the billions that comprise the Milky Way Galaxy, which in turn is only one of the billions of galaxies astronomers have already identified. Our beautiful planet is one of nine in our Solar System. Understanding our Universe is not just an intellectual and philosophical quest, but a requirement for continuing to live in, and care for, our tiny part of it, as our species expands outward into the Solar System.

Beginnings: The Big Bang, the Universe, and Galaxies
In the 1920s scientists concluded that the Universe is expanding from its origin in an enormous explosion—the "Big Bang"—10 to 20 billion years ago. In the future, it could either expand forever or slow down and then collapse under its own weight. Recent studies in particle physics suggest that the Universe will expand forever, but at an ever decreasing rate. For this to be true, there must be about ten times more matter in the Universe than has ever been observed; this "hidden matter" may be in the form of invisible particles that are predicted to exist by modern theory. Thus, in addition to normal galaxies there may be invisible "shadow galaxies" scattered throughout space. The Universe contains 100 billion or more galaxies, each containing billions of stars. Our Galaxy, the Milky Way, is the home of a trillion suns, many of which resemble our Sun. Each of these stars, when it is formed from an interstellar cloud, is endowed with hydrogen and helium—simple chemical elements that formed in the Big Bang—as well as with heavier elements that formed in previous stellar furnaces. Hydrogen is consumed by a thermonuclear fire in the star's core, producing heavier chemical elements that accumulate there before becoming fuel for new, higher temperature burning. In massive stars, the process continues until the element iron dominates the core. No further energy-producing nuclear reactions are then possible, so the core collapses suddenly under its own weight, producing vast amounts of energy in a stellar explosion known as a supernova. The temperature in a supernova is so high that virtually all of the chemical elements produced are flung into space, where they are available to become incorporated in later generations of stars. About once per century a supernova explosion occurs in each galaxy, leaving behind a compact object that may be a neutron star—as dense as an atomic nucleus and only a few miles in diameter—or a stellar black hole, in which space-time is so curved by gravity that no light can escape.

The Solar System
Our Solar System consists of the Sun, nine planets, their moons and rings, the asteroids, and comets. Comets spend most of their time in the "Oort cloud," located 20,000-100,000 astronomical units from the Sun (an astronomical unit is the distance from Earth to the Sun, 93 million miles). The Solar System formed 4.5 billion years ago near the edge of our Galaxy. Heir to millions of supernova explosions, it contains a full complement of heavy elements. Some of these, like silicon, iron, magnesium, and oxygen, form the bulk of the composition of Earth; the elements hydrogen, carbon, and nitrogen are also present, providing molecules essential for life.

A Grand Synthesis
The Universe has evolved from the Big Bang to the point we see it today, with hundreds of billions of galaxies and perhaps countless planets.
There is no evidence that the processes which govern the evolution from elementary particles to galaxies to stars to heavy elements to planets to life to intelligence differ significantly elsewhere in the Universe. By integrating the insights obtained from virtually every branch of science, from particle physics to anthropology, humanity may hope one day to approach a comprehensive understanding of our position in the cosmos.

A Nobel Prize-winning theory predicts the change in heat capacity of liquid helium at uniform pressure as it makes a transition to the superfluid state. Although equipment has been developed to hold samples at a steady temperature within one part in 10 billion, the variation of pressure through the sample due to gravity is so large that experiments have yielded far less accurate results than desired. Reducing gravity by a factor of 100,000, as is possible in the Space Station, can provide a high-quality test of the theory. Research is also proceeding on "fractal aggregates," structures that have the remarkable property that their mean density literally approaches zero the larger they become. Such structures are neither solid nor liquid, but represent an entirely new state of matter. So far, experiments on such structures are limited by the fact that the aggregates tend to collapse under their own weight as soon as they reach .0004 inches in size. In a microgravity environment, it should be possible to develop structures 100,000 times larger, or three feet across. Such sizes are essential if measurements of the physical properties of fractal structures are to be made. Research on many other processes, including fractal gels, dendritic crystallization (the process that produces snowflakes), and combustion of clouds of particles, will profit substantially from the microgravity environment of the Space Station. It is not unlikely that novel applications will develop from basic research in these areas, just as the transistor grew out of basic research on the behavior of electrons in solids.

An especially promising avenue of research in space is the pursuit of new tests of Einstein's theory of general relativity. It has long been recognized that because deviations from the Newtonian theory of gravitation within the Solar System are minute, extremely sensitive equipment is required to detect them. Many experiments require the ultraquiet conditions of space. Because Einstein's theory is fundamental to our understanding of the cosmos—in particular, to the physics of black holes and the expanding Universe—it is important that it be experimentally verified with the highest possible accuracy. Relativity predicts a small time delay of radio signals as they propagate in the Solar System; the accuracy in measuring this effect can be continuously improved by tracking future planetary probes. A Mercury orbiter would further improve the accuracy of measurement of changes in Newton's gravitational constant, already shown to be less than one part in 100 billion per year. An experiment in Earth orbit called Gravity Probe B will measure the precession of a gyroscope in Earth orbit with extreme precision, permitting verification of another relativistic effect. Einstein's theory also predicts that a new type of radiation should be produced by masses in motion.
There is great interest in detecting this so-called gravitational radiation, not only because it would test Einstein's theory in a fundamental way, but because it could open a new window through which astronomers could study phenomena in the Universe, particularly black holes. Gravitational radiation detectors are in operation, or are being built, on the ground, but they are sensitive only to wave periods less than 0.1 second because of Earth's seismic noise. The radiation predicted from astronomical objects would have much longer periods if it is due to orbiting double stars and black holes with masses greater than 10,000 Suns, such as are believed to exist in the nuclei of galaxies. An attempt will be made to detect such radiation by ranging to the Galileo spacecraft en route to Jupiter. A more powerful approach for the future is to use a large baseline detector based upon optical laser ranging between three spacecraft in orbit about the Sun; detecting minute changes in their separations would indicate the passage of a gravitational wave. Finally, instruments deployed for more general purposes can make measurements to test general relativity. For example, a 100-foot optical interferometer in Earth orbit designed for extremely accurate determination of stellar positions could measure the relativistic bending of light by the Sun with unprecedented precision. A spacecraft that plunges close to the Sun to study plasma in its vicinity could measure the gravitational red-shift of the Sun to high precision. In summary, a variety of space-based experiments on the shuttle and Space Station, in free flyers, and in orbit around the Sun and other planets have the capacity to test general relativity with a high degree of accuracy. Gravitational radiation from certain astronomical sources can be detected only in space. When that happens, astronomers will have an exciting new tool with which to study the Universe.

The objective of this field of study is to understand the physics of the Sun and the heliosphere, the vast region of space influenced by the Sun. Other regions of interest include the magnetospheres, ionospheres, and upper atmospheres of Earth, the planets, and other bodies of the Solar System. With this in mind, studies of the basic processes which generate solar energy of all kinds and transmit it to Earth should be emphasized, both because the physical mechanisms involved are of interest, and because there are potential benefits to life on Earth. There are a number of sub-goals within this discipline: to understand the processes that link the interior of the Sun to its corona; the transport of energy, momentum, plasma, and magnetic fields through interplanetary space by means of the solar wind; the acceleration of energetic particles on the Sun and in the heliosphere; Earth's upper atmosphere as a single, dynamic, radiating, and chemically active fluid; the effects of the solar cycle, solar activity, and solar-wind disturbances upon Earth; the interactions of the solar wind with Solar System bodies other than Earth; and magnetospheres in general. Without assuming a specific direct connection, the possible influence of solar-terrestrial interactions upon the weather and climate of Earth should be clarified. A number of near-term activities are essential to the advancement of solar and space physics.
Advanced solar observatories will study detailed energy production mechanisms in the solar atmosphere, while the European Space Agency's Ulysses spacecraft will make measurements of activity at the poles of the Sun. Spacecraft with sufficient velocity to leave the inner Solar System will make possible measurements in the outer heliosphere, including its transition to the interstellar medium of the Galaxy. The International Solar-Terrestrial Physics program, which will be carried out jointly by the United States, Japan, and Europe, will trace the flow of matter and energy from the solar wind through Earth's magnetosphere and into the upper atmosphere; investigate the entry, storage, and energization of plasma in Earth's neighborhood; and assess how time variations in the deposition of energy in the upper atmosphere affect the terrestrial environment. Interactions of solar plasma with other planets and with satellites and comets will be investigated by a number of planetary probes already in space or on the drawing boards. Up to now, information about Earth's magnetosphere has been based upon measurements made continuously as various spacecraft move through the plasma and magnetic field in that region. An instantaneous global image of the entire magnetosphere can be made using ultraviolet emissions from ionized helium in the magnetosphere. It may also be possible to form an image of energetic particles by observing energetic neutral atoms as they propagate from various regions, having exchanged charge with other atoms there. Innovative experiments will be conducted from the shuttle to investigate the effects of waves, plasma beams, and neutral gases injected into Earth's magnetosphere.

To date, our knowledge of the outer atmosphere of the Sun has been based upon remote sensing from the distance of Earth. In a new concept, a spacecraft would be sent on a trajectory coming to within 4 solar radii of the surface of the Sun, only 1/50th of Earth's distance. The spacecraft would carry instruments to measure the density, velocity, and composition of the solar-wind plasma, together with its embedded magnetic field, in an attempt to discover where the solar wind is accelerated to the high velocities observed near Earth. Possible trajectories include a Jupiter swingby or a hypersonic flyby in the upper atmosphere of Venus. Such a mission would yield precise data on the gravitational field of the Sun with which to study its interior, and would test general relativity with higher precision. If a thruster were fired at the closest approach to the Sun, the energy change would be so great that the spacecraft would leave the Solar System with high velocity, reaching 100 times the distance of Earth in only nine years. This would provide measurements where the solar wind makes a transition to the local interstellar medium. To acquire high-resolution information about the poles of the Sun over a long period, a solar polar orbiter should be flown. A network of four spacecraft at the distance of Earth, but positioned every 90 degrees around the Sun, would provide stereoscopic views of solar features which are otherwise difficult to locate in space, and would also monitor solar flare events over the whole Sun. Such a network would also give early warning to astronauts outside the protective shield of Earth's magnetic field. Finally, plasmas in space should be studied for their own sake.
Plasma is an inherently complex state of matter, involving many different modes of interaction among charged particles and their embedded magnetic fields. Our understanding of the plasma state is based upon theoretical research, numerical simulations, laboratory experiments, and observations of space plasmas. The synergy among these approaches should be developed and exploited. If neutral atoms and dust particles are present, as in planetary ring systems and in comets, novel interactions occur; they can be studied by injecting neutral gases and dust particles into space plasmas.

With the exception of the samples returned from the Moon by the Apollo astronauts and the Soviet robotic Luna spacecraft, and meteoritic materials that are believed to have fallen naturally on Earth from the asteroids, the Moon, and Mars, we have no samples of materials from bodies elsewhere in the Solar System for analysis in Earth-based laboratories. Decades of study of meteoritic materials and lunar samples have demonstrated that vast amounts of information can be learned about the origin, evolution, and nature of the bodies from which samples are derived, using laboratory techniques which have progressed to the point where precise conclusions can be drawn from an analysis of even a microscopic sample. The laboratory apparatus involved is heavy, complex, and requires the close involvement of people. Thus, given the substantial round-trip travel time for radio signals between Earth and these objects, it appears impractical to operate this equipment effectively under radio control on the bodies of greatest interest. The best method is to acquire and return samples, as was done by Apollo and Luna. Robot vehicles will be the most cost-effective approach to sample acquisition and return in the foreseeable future. Unlike meteoritic materials, the samples will be obtained from known sites, whose location in an area which has been studied by remote sensing makes it possible to generalize the results to the body as a whole. Because of the variations among different provinces, samples are required from several sites in order to develop an adequate understanding of a specific object. Considerable thought has been given to which targets are the most promising. They must be reachable, and samples must be able to be returned with technology that can be developed in the near future. Their surfaces must be hospitable enough so that collecting devices can survive on them, and they must be well-enough understood that a complex sample-return mission can be planned and successfully executed. For these reasons, as well as others noted in the text, we recommend that a sample return from Mars be accomplished as soon as possible.

Though at present no individual comet meets the criteria discussed above, comets in general are promising targets for sample return. A start on the study of comets has been made by the 1985 encounter with Comet Giacobini-Zinner, and the 1986 encounters with Comet Halley. The proposed mission to a comet and an asteroid in the Solar System Exploration Committee's core program will yield much more information. Comets are probably composed of ices of methane, ammonia, and water, together with silicate dust and organic residues. The evidence suggests that these materials accumulated very early in the history of the Solar System. Because comets are very small (a few miles in diameter), any heat generated by radioactivity readily escaped, so they never melted, unlike the larger bodies in the Solar System.
It is quite possible, therefore, that the primitive materials which accumulated to form the Sun, planets, moons, and asteroids are preserved in essentially their original form within comets. It is even possible that comets contain some dust particles identical to those astronomers have inferred to be present in interstellar clouds. If so, a comet could provide a sample of the interstellar matter that pervades our Galaxy. The Space Science Board has given high priority to determining the composition and physical state of a cometary nucleus. No mission short of a sample return will provide the range and detail of analyses needed to definitively characterize the composition and structure of a comet nucleus.

Beyond the asteroid belt lie four giant ringed planets (Jupiter, Saturn, Uranus, and Neptune), the curiously small world Pluto, more than 40 moons (two of which—Titan and Ganymede—are larger than the planet Mercury), and two planetary magnetospheres larger than the Sun itself. The center of gravity of our planetary system is here, since these worlds (chiefly Jupiter and Saturn) account for more than 99 percent of the mass in the Solar System outside of the Sun itself. The outer planets, especially Jupiter, can provide unique insights into the formation of the Solar System and the Universe. Because of their large masses, powerful gravitational fields, and low temperatures, these giant planets have retained the hydrogen and helium they collected from the primordial solar nebula. The giant worlds of the outer Solar System differ greatly from the smaller terrestrial planets, so it is not surprising that different strategies have been developed to study them. The long-term exploration goal for terrestrial planets and small bodies is the return of samples to laboratories on Earth, but the basic technique for studying the giant planets is the direct analysis of their atmospheres and oceans by means of probes. Atmospheric measurements, which will be undertaken for the first time by Galileo at Jupiter, provide the only compositional information that can be obtained from a body whose solid surface (if any) lies inaccessible under tens of thousands of miles of dense atmosphere. Atmospheric probe measurements, like measurements on returned samples, will provide critical information about cosmology and planetary evolution, and will permit fundamental distinctions to be made among the outer planets themselves. Exciting possible missions include: (1) deep atmospheric probes (to 500 bars) to reach the lower levels of the atmospheres of Jupiter and Saturn and measure the composition of these planets; (2) hard and soft landers for various moons, which could emplace a variety of seismic, heat-flow, and other instruments; (3) close-up equipment in low orbits; (4) detailed studies of Titan, carried out by balloons or surface landers; (5) on-site, long-term observations of Saturn's rings by a so-called "ring rover" spacecraft able to move within the ring system; and (6) a high-pressure oceanographic probe to image and study the newly-discovered Uranian ocean.

The size of the current generation of "great observatories" reflects the limitations on weight, size, and power of facilities that can be launched into low Earth orbit by the space shuttle. In the future, the permanently occupied Space Station will furnish a vitally important new capability for astronomical research—that of assembling and supporting facilities in space that are too large to be accommodated in a single shuttle launch.
Such large facilities will increase sensitivity by increasing the area over which radiation is collected, and will increase angular resolution using the principle of interferometry, in which the sharpness of the image is proportional to the largest physical dimension of the observing system. Though one or the other goal will usually drive the design of any particular instrument, it is possible to make improvements in both areas simultaneously. When we can construct very large observatories in space, these improvements will be achieved over the whole electromagnetic spectrum. Although the Moon will also offer advantages for astronomical facilities once a lunar base becomes available, we focus our remaining discussion upon facilities in low Earth orbit.

A large deployable reflector of 65 to 100 feet aperture for observations in the far infrared spectrum, that is, diffraction limited down to 30 microns wavelength (where it would produce images a fraction of an arc second across), will permit angular resolutions approaching or exceeding that of the largest ground-based optical telescopes. This project would yield high-resolution infrared images of planets, stars, and galaxies rivaling those routinely available in other wavelength ranges. Assembly in Earth orbit is the key to this observatory.

A large space telescope array composed of several 25-foot-diameter telescopes would operate in the ultraviolet, visible, and infrared. The combination of larger diameter telescopes with a large number of telescopes would make this instrument 100 times more sensitive than the Hubble Space Telescope. Because the image would be three times sharper, the limiting faintness for long exposures would increase more than 100 times. Such an instrument would enable detailed studies of the most distant galaxies, and of planets, with exquisite angular and spectral resolution.

A set of radio telescopes 100 feet or more in diameter could be constructed in Earth orbit by astronauts to provide a very long baseline array for observing radio sources, with the radio signals transmitted to a ground station. Such radio telescopes in space could greatly extend the power of the ground-based Very Long Baseline Array now under construction. The angular resolution of the latter, 0.3 milliarcseconds (the size of a person on the Moon as seen from Earth), could be improved 300-fold by putting telescopes in orbits ranging out as far as 600,000 miles. The resulting resolution of 1/1000th of a milliarcsecond—or one microarcsecond—would enable us to image activity in the center of our Galaxy—believed to be due to a black hole—very nearly down to the black hole itself. It would also provide images of larger, more massive black holes suspected to lie at the centers of several nearby galaxies.

A long-baseline optical space interferometer composed of two or more large telescopes separated by 300 miles would also provide resolution of 1 microarcsecond, although not complete information about the image. This resolution would permit us to detect a planet no larger than Earth in orbit around a nearby star (by means of its gravitational pull on the star) and to measure the gravitational deflection of light by the Sun as a high-precision test of general relativity.

A high-sensitivity x-ray facility, having about 100 times the collecting area of the planned Advanced X-ray Astrophysics Facility, could be assembled in orbit.
A space station-serviced x-ray observatory would make possible the detection of very faint objects, such as stellar explosions in distant galaxies, as well as high-spectral-resolution observations of brighter objects. This would make possible a study of the x-ray signatures of the composition, temperature, and motion of emitted gases. For example, the theory that most heavy elements are produced in supernovae can be tested by studying the gaseous ejecta in supernova remnants. A hard x-ray imaging facility with a large (1,000 square feet) aperture is needed to study x-rays with energies in the range from 10 keV to 2 MeV. Sources of such radiation are known, but are too faint for smaller-aperture instruments to analyze in detail. It is important to find out whether the known faint background radiation at these energies is coming from very distant objects, such as exploding galaxies, or from gas clouds heated by the stellar explosions that accompany galaxy formation. The future development of gamma-ray astronomy will depend upon the results of the planned gamma-ray observatory, but it is anticipated that larger collecting areas and higher spectral and angular resolution will be needed to sort out the sources and carry out detailed spectroscopy. Cosmic-ray studies will require a superconducting magnet in space with 1,000 square feet of detectors to determine the trajectories of individual particles and hence their energy and charge. The great observatories of the next century will push technology to its limits, including the capability to assemble large structures in orbit and on the Moon, the design of extremely rigid structures that can be tracked and moved with great precision, and the development of facilities on the Space Station for repairing and maintaining astronomical facilities in orbit. Because of the huge information rates anticipated from such observatories, great advances in computing will be required, especially massive data storage (up to 100 billion bits) accessible at high rates. The preliminary analysis of the data will be performed by supercomputers in orbit, transmitting only the results to the ground. The program will require a long-term commitment to education and support of young scientists, who will be the life blood of the program, as well as the implementation of high-priority precursor missions, including first-generation great observatories and moderate-scale projects.

The Evolution of Earth and Its Life Forms
Earth is the only one of our Solar System's nine planets that we know harbors life. Why is Earth different from the other planets? Life as we know it requires tepid liquid water, and Earth alone among the bodies of the Solar System has had that throughout most of its history. Biologists have long pursued the hypothesis that living species emerge very gradually, as subtle changes in the environment give decisive advantages to organisms undergoing genetic mutations. The recent discovery that the extinction of the dinosaurs (and many other species as well) some 65 million years ago appears to have coincided with the collision of Earth with a large object from outer space—such as a comet or asteroid—has led to new interest in "punctuated equilibrium." According to this concept, a drastic change in environment, in this case the pall cast upon Earth by the giant cloud of dust that resulted from the collision, can destroy some branches of the tree of life in a short span of time, and thereby open up new opportunities for organisms that were only marginally competitive before.
The story of the evolution of life on Earth—once the sole province of biology—thus depends in part upon astronomical studies of comets and asteroids which may collide with our planet, the physics of high-velocity impact, and the complex processes that govern the movement of dust in Earth's atmosphere. Atmospheric scientists are finding that within such short times as decades or centuries the character of life on Earth may depend upon materials originating in the interior of the planet (including dust and gases from volcanoes), chemical changes in the oceans and the atmosphere (including the increase in carbon dioxide due to agricultural and industrial activity), and specific radiations reaching us from the Sun (such as the ultraviolet rays which affect the chemical composition of Earth's atmosphere). Through mechanisms still not understood, changes in Earth's climate may in turn depend upon the evolution of life. It has become apparent that life on Earth exists in a complex and delicate balance not only with its own diverse elements, but with Earth itself, the Sun, and probably even comets and asteroids. Interactions among climatology, geophysics, geochemistry, ecology, astronomy, and solar physics are all important as we contemplate the future of our species; space techniques are playing an increasing role in these sciences. Space techniques are also valuable for studying Earth's geology. The concept of continental drift, according to which the continents change their relative positions as the dense rocks on which they rest slowly creep, is proving to be a key theory in unraveling the history of Earth as recorded in the layers of sediments laid down over millions of years.

The Possibility of Other Life in the Universe
Are we alone in the Universe? Virtually all stars are composed of the same chemical elements, and our current understanding of the process by which the Solar System formed suggests that all Sun-like stars are likely locales for planets. The search for life begins in our own Solar System, but based on the information we have gleaned from robotic excursions to Mercury, Venus, the Moon, Mars, Jupiter, Saturn, and Uranus, it now appears that Mars, and perhaps Titan, a moon of Saturn, are the most likely candidates for the existence of rudimentary life forms now or in the past. The existence of water on Mars in small quantities of surface ice and in atmospheric water vapor, and perhaps in larger quantities frozen beneath the surface, leaves open the possibility that conditions on Mars may once have been favorable enough to support life in some areas. Samples returned from regions where floods have occurred may provide new clues to the question of life on Mars. Titan has a thick atmosphere of nitrogen, along with methane and traces of hydrogen cyanide—one of the building blocks of biological molecules. Unfortunately, the oxygen atoms needed for other biological molecules are missing, apparently locked forever in the ice on Titan's surface. How do we search for planets beyond our Solar System? The 1983 Infrared Astronomy Satellite discovered that dozens of stars have clouds of particles surrounding them emitting infrared radiation; astrophysicists believe that such clouds represent an early stage in the formation of planets. Another technique is to track the position of a star over a number of years. Although planets are much less massive than stars, they nevertheless exert a significant gravitational force upon them, causing them to wobble slightly.
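To get a sense of the size of this wobble, the short sketch below estimates the angular displacement of a Sun-like star caused by a Jupiter-like companion as seen from a distance of 10 parsecs. The planet mass, orbital radius, and distance are illustrative assumptions, not figures taken from the report.

    # Rough size of the astrometric "wobble" a planet induces on its star.
    # Illustrative assumptions: a Jupiter-mass planet orbiting a solar-mass
    # star at 5.2 AU, viewed from 10 parsecs away.  The star circles the
    # common center of mass at a_star = a_planet * (m_planet / m_star), and
    # 1 AU seen from a distance of 1 parsec subtends 1 arcsecond.
    mass_ratio = 9.5e-4      # Jupiter mass / Sun mass
    a_planet_au = 5.2        # planet's orbital radius, AU
    distance_pc = 10.0       # distance to the star, parsecs

    a_star_au = a_planet_au * mass_ratio          # star's orbit about the barycenter, AU
    wobble_arcsec = a_star_au / distance_pc       # angular semi-amplitude, arcseconds
    print(f"wobble = {wobble_arcsec * 1e6:.0f} microarcseconds")   # about 500

Under these assumptions the star traces out a path roughly 500 microarcseconds across over one planetary orbit; an Earth-like planet at 1 AU would shift the same star by only a few tenths of a microarcsecond, which is why instruments with microarcsecond-class precision matter for this search.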
Through a principle called interferometry, which combines the outputs of two telescopes at some distance apart to yield very sharp images, it should be possible to detect planets—if they exist—by the perturbations they cause as they orbit nearby suns similar to our Sun. With sufficiently large arrays of telescopes in space we might obtain images of planets beyond the Solar System. By searching for evidence of water and atmospheric gases we might even detect the existence of life on those planets. If life originated by the evolution of large molecules in the oceans of newly-formed planets, then other planets scattered throughout our Galaxy could be inhabited by living species, some of which may possess intelligence. If intelligent life does exist beyond our Solar System, we might detect its messages. The Search for Extraterrestrial Intelligence, or SETI, is a rapidly advancing field. For several decades it has been technically possible to detect radio signals (if any) directed at Earth by alien civilizations on planets orbiting nearby stars. It is now possible to detect such signals from anywhere in our Galaxy, opening up the study of over 100 billion candidate stars. Such a detection, if it ever occurs, would have profound implications not only for the physical and biological sciences, but also for anthropology, political science, philosophy, and religion. Are we alone? We still do not know.

To lift payloads in Earth's gravitational field and place them in orbit, we must expend energy. We generate it first as the energy of motion—hence the great speeds our rockets must attain. As rockets coast upward after firing, their energy of motion converts, according to Newton's laws, to the energy of height. In graphic terms, to lift a payload entirely free of Earth's gravitational clutch, we must spend as much energy as if we were to haul that payload against the full force of gravity that we feel on Earth, to a height of 4,000 miles. To reach the nearer goal of low Earth orbit, where rockets and their payloads achieve a balancing act, skimming above Earth's atmosphere, we must spend about half as much energy—still equivalent to climbing a mountain 2,000 miles high. Once in "free space," the region far from planets and moons, we can travel many thousands of miles at small expenditure of energy.

A biosphere is an enclosed ecological system. It is a complex, evolving system within which flora and fauna support and maintain themselves and renew their species, consuming energy in the process. A biosphere is not necessarily stable; it may require intelligent tending to maintain species at the desired levels. Earth supports a biosphere; up to now we know of no other examples. To explore and settle the inner Solar System, we must develop biospheres of smaller size, and learn how to build and maintain them. In order to grow food crops and the entire range of plants that enrich and beautify our lives, we need certain chemical elements, the energy of sunlight, gravity, and protection from radiation. All can be provided in biospheres built on planetary surfaces, although normal gravity is available only on Earth. These essentials are also available in biospheres to be built in the space environment, where Earth-normal gravity can be provided by rotation. Both in space and on planetary surfaces, certain imported chemicals will be required as initial stocks for biospheres. Within the past two decades biospheres analogous to habitats in space or on planetary surfaces have been built both in the U.S.S.R.
and in the United States. An example of biosphere technologies can be seen at the "Land" pavilion at the EPCOT Center (Experimental Prototype Community of Tomorrow) near Orlando, Florida. Specialists in biospheres are now building, near Tucson, Arizona, a fully closed ecological system which will be a simulation of a living community in space. It is called Biosphere II. Its volume, three million cubic feet, is about the volume over a land area of four acres with a roof 20 feet above it. Biosphere II is much more than greenhouse agriculture. Within it will be small versions of a farm, an ocean, a savannah, a tropical jungle, and other examples of Earth's biosystems. Eight people will attempt to live in Biosphere II for two years. The builders of Biosphere II have several goals: to enhance greatly our understanding of Earth's biosphere; to develop pilot versions of biospheres, which could serve as refuges for endangered species; and to prepare for the building of biospheres, in space and on planetary surfaces, which would become the settlements of the space frontier.

In the early 1960s, the Government, through NASA, developed and launched the first weather satellites. When the operation of weather satellites matured, they were turned over to the Department of Commerce's Environmental Science Services Administration, which became part of the newly-established National Oceanic and Atmospheric Administration (NOAA) in 1970. Today, NOAA continues to operate and manage the U.S. civilian weather satellite system, comprised of two polar-orbiting and two geostationary satellites. The Landsat remote sensing system had similar origins. Developed initially by NASA, the first Landsat satellite was launched in 1972; the most recent spacecraft in the series, Landsat 5, was orbited in 1984. Although remote sensing data are provided by the United States at relatively low cost, many user nations have installed expensive equipment to directly receive Landsat data. Their investments in Landsat provide a strong indication of the data's value. Successive administrations and Congresses wrestled with the question of how best to deal with a successful experimental system that had, in fact, become operational. Following exhaustive governmental review, President Carter decided in 1979 that Landsat would be transferred to NOAA with the eventual goal of private sector operation after 7 to 10 years. Following several years of transition between NASA and NOAA, the latter formally assumed responsibility for Landsat 4 in 1983. By that time, the Reagan Administration had decided to accelerate the privatization of Landsat, but despite the rapid growth in the demand for these services, no viable commercial entity appeared ready to take it over without some sort of Government subsidy. In 1984, Congress passed the Land Remote Sensing Commercialization Act to facilitate the process. Seven qualified bidders responded to the Government's proposal to establish a commercial land remote sensing satellite system, and two were chosen by the Department of Commerce for final competition. One later withdrew after the Reagan Administration indicated that it would provide a considerably lower subsidy than anticipated. The remaining entrant, EOSAT, negotiated a contract that included a Government subsidy and required it to build at least two more satellites in the series. In the fall of 1985, EOSAT, a joint venture between RCA Astro-Electronics and Hughes Santa Barbara Aerospace, assumed responsibility for Landsat.
The Government's capital assistance to EOSAT is in limbo at this time because of the current budget situation, even though EOSAT was contractually targeted for such financial support. It is, therefore, too soon to say whether the Landsat privatization process will provide a successful model for the transfer of a Government-developed space enterprise to the private sector.

Factories that could replicate themselves would be attractive for application in space because the limited carrying capacity of our rocket vehicles and the high costs of space transport make it difficult otherwise to establish factories with large capacities. The concept of self-replicating factories was developed by the mathematician John von Neumann. Three components are needed for industrial establishment in space: a transporting machine, a plant to process raw material, and a "job shop" capable of making the heavy, simple parts of more transporting machines, process plants, and job shops. These three components would all be tele-operated from Earth, and would normally be maintained by robots. Intricate parts would be supplied from Earth, but would be only a small percentage of the total. Here is an example of how such components, once established, could grow from an initial "seed" exponentially, the same way that savings grow at compound interest, to become a large industrial establishment: Suppose each of the three seed components had a mass of 10 tons, so that it could be transported to the Moon in one piece. The initial seed on the Moon would then be 30 tons. A processing plant and job shop would also be located in space—20 tons more. After the first replication, the total industrial capacity in space and on the Moon would be doubled, and after six more doublings it would be 128 times the capacity of the initial seed. Those seven doublings would give us the industrial capacity to transport, process, and fabricate finished products from over 100,000 tons of lunar material each year from then onward. That would be more than 2,000 times the weight of the initial seed—a high payback from our initial investment.

In an electromagnetic accelerator, electric or magnetic fields are used to accelerate material to high speeds. The power source can be solar or nuclear. There are two types of accelerators for use in space: the "ion engine" and the "mass-driver." The ion engine uses electric fields to accelerate ions (charged atoms). Ion engines are compact, relatively light in weight, and well-suited to missions requiring low thrust sustained for a very long time. Mass-drivers are complementary to ion engines, developing much higher thrusts but not suited to extreme velocities. A mass-driver accelerates by magnetic rather than electric fields. It is a magnetic linear accelerator, designed for long service life, and able to launch payloads of any material at high efficiency. Mass-drivers should not be confused with "railguns," which are electromagnetic catapults now being designed for military applications. A mass-driver consists of three parts: the accelerator, the payload carrier, and the payload. For long lifetime, the system is designed to operate without physical contact between the payload carrier and the accelerator. The final portion of the machine operates as a decelerator, to slow down each payload carrier for its return and reuse. A key difference between the mass-driver and the ion engine is that the mass-driver can accelerate any solid or liquid material without regard to its atomic properties.
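The complementary nature of the two machines can be made concrete with a simple power balance. In the sketch below an idealized engine of fixed input power is assumed and losses are ignored; the 100-kilowatt power level and the two exhaust velocities are hypothetical values chosen only to illustrate the trade, not figures from the report.

    # For an idealized reaction engine at fixed power P (losses ignored):
    #     thrust F = mdot * v_e     and     P = 0.5 * mdot * v_e**2
    # so F = 2 * P / v_e: a lower exhaust velocity buys more thrust per watt.
    # This is the trade behind mass-drivers (high thrust, modest exhaust
    # velocity) versus ion engines (low thrust, very high exhaust velocity).
    POWER_W = 100_000.0                                        # assumed 100 kW supply

    for label, v_exhaust in (("mass-driver-like", 2_000.0),    # m/s, assumed
                             ("ion-engine-like", 30_000.0)):   # m/s, assumed
        thrust_newtons = 2.0 * POWER_W / v_exhaust
        print(f"{label:>16}: v_e = {v_exhaust:7.0f} m/s -> thrust = {thrust_newtons:6.1f} N")

At the same power the low-velocity machine delivers roughly fifteen times the thrust, while the high-velocity machine delivers far more impulse per pound of propellant expended.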
Used as a propulsion system, the mass-driver could use as propellant raw lunar soil, powdered material from surplus shuttle tankage in orbit, or any material found on asteroids. Its characteristics make it suitable for load-carrying missions within the inner Solar System. Another potential application for a mass-driver is to launch payloads from a fixed site. The application studied in the most depth at this time is the launch of raw material from the Moon to a collection point in space, for example, one of the lunar Lagrange points. A mass-driver with the acceleration of present laboratory models, but mounted on the lunar surface, would be able to accelerate payloads to lunar escape speed in a distance of only 170 yards. Its efficiency would be about 70 percent, about the same as that of a medium-size electric motor. Loads accelerated by a mass-driver could range from a pound to several tons, depending on the application and available power supply.

Technological advance across a broad spectrum is the key to fielding an aerospace plane. A highly innovative propulsion design can make possible horizontal takeoff and single-stage-to-orbit flight with high specific impulse (Isp). The aerospace plane would use a unique supersonic combustion ramjet (SCRAMJET) engine which would breathe air up to the outer reaches of the atmosphere. This approach virtually eliminates the need to carry liquid oxygen, thus reducing propellant and vehicle weight. A small amount of liquid oxygen would be carried to provide rocket thrust for orbital maneuvering and for cabin atmosphere. A ramjet, as its name implies, uses the ram air pressure resulting from the forward motion of the vehicle to provide compression. The normal ramjet inlet slows down incoming air while compressing it, then burns the fuel and air subsonically and exhausts the combustion products through a nozzle to produce thrust. To fly faster than Mach 6, the internal geometry of the engine must be varied in order to allow the air to remain at supersonic speeds through the combustor. This supersonic combustion ramjet could potentially attain a speed capability of Mach 12 or higher. Such a propulsion system must cover three different flight regimes: takeoff, hypersonic, and rocket. For takeoff and acceleration to Mach 4, it would utilize air-turbo-ramjets or cryojets. From Mach 4 to Mach 6, the engine would operate as a conventional subsonic combustion ramjet. From Mach 6 to maximum airbreathing speeds, the engine would employ supersonic combustion (SCRAMJET mode). At speeds of about Mach 12 and above, the SCRAMJET engine might have additional propellant added above the hydrogen flow rates needed for utilization of all air captured by the inlet. This additional flow would help cool the engine and provide additional thrust. Final orbital insertion could be achieved with an auxiliary rocket engine. Such a system of propulsion engines must be carefully integrated with the airframe. Proper integration of the airbreathing inlets into the airframe is a critical design problem, since the shape of the aircraft itself determines in large part the performance of the engine. During SCRAMJET operation, the wing and forward underbody of the vehicle would generate oblique shock waves which produce inlet air flow compression. The vehicle afterbody shape behind the engine would form a nozzle producing half the thrust near orbital speeds.
Second-generation supercomputers can now provide the computational capability needed to efficiently calculate the flow fields at these extremely high Mach numbers. These advanced design tools provide the critical bridge between wind tunnels and piloted flight in regimes of speed and altitude that are unattainable in ground-based facilities. In addition, supercomputers permit the usual aircraft design and development time to be significantly shortened, thus permitting earlier introduction of the aerospace plane into service. The potential performance of such an airframe-inlet-engine-nozzle combination is best described by a parameter known as the net "Isp," which measures the pounds of thrust, minus the drag from the engine, per pound of fuel flowing through the engine each second. The unit of measure is seconds; the larger the value, the more efficient the propulsion. For the aerospace plane over the speed range of Mach 0 to Mach 25, the engines should achieve an average Isp in excess of 1,200 seconds burning liquid hydrogen. This compares with an Isp of about 470 seconds for the best current hydrogen-oxygen rocket engines, such as the space shuttle main engine. It is the high Isp of an air-breathing engine capable of operating over the range from takeoff to orbit that could make possible a single-stage, horizontal takeoff and landing aerospace plane. For "airliner" or "Orient Express" cruise at Mach 4 to Mach 12, the average Isp is even larger, making the SCRAMJET attractive for future city-to-city transportation.

Another key technology is high strength-to-weight ratio materials capable of operating at very high temperatures while retaining the properties of reusability and long life. These can make possible low maintenance, rapid turnaround, reduced logistics, and low operational costs. Promising approaches to high-temperature materials include rapid-solidification-rate metals, carbon-carbon composites, and advanced metal matrix composites. In extremely hot areas, such as the nose, active cooling with liquid hydrogen or the use of liquid metals to rapidly remove heat will also be employed. The use of these materials and cooling technologies with innovative structural concepts results in important vehicle weight reductions, a key to single-stage-to-orbit flight.

The performance of rocket vehicles is primarily determined by the effective specific impulse of the propulsion system and the dry weight of the entire vehicle. Best performance can be attained by burning a hydrocarbon fuel at low altitudes, then switching to hydrogen for the rest of the flight to orbit. This assures high effective specific impulse, thus minimizing the volume and weight of the tankage required for propellants. Rocket engines have been studied which combine into one efficient design the ability to operate in a dual-fuel, combined-cycle mode. Lightweight versions of such engines are clearly possible, but will require technology demonstration and development. The greatest leverage for high performance can be obtained by reducing the inert weight of the tanks, airframe, and other components, since they are lifted all the way into orbit and thus displace payload on a pound-for-pound basis. This holds for the entire vehicle in a single-stage design, and for the final stage (and to a lesser amount for the initial stage) in a two-stage vehicle.
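A rough way to see the leverage of specific impulse and inert weight is the ideal rocket equation. The sketch below treats the vehicle as a single stage and, purely for comparison, applies the same equation to the 1,200-second average airbreathing Isp quoted above as if it were a rocket Isp; the 9,300-meter-per-second delta-v allowance is an assumed round number, and drag, gravity losses, and the varying thrust of an airbreathing engine are ignored.

    import math

    # Ideal rocket equation: delta_v = g0 * Isp * ln(m_initial / m_final).
    # Inverted, it gives the fraction of liftoff mass that must be propellant.
    G0 = 9.81           # m/s^2
    DELTA_V = 9300.0    # m/s, assumed allowance for reaching low Earth orbit

    def propellant_fraction(isp_seconds):
        """Propellant mass as a fraction of liftoff mass for a single stage."""
        mass_ratio = math.exp(DELTA_V / (G0 * isp_seconds))
        return 1.0 - 1.0 / mass_ratio

    for isp in (470.0, 1200.0):    # rocket engine vs. average airbreathing value
        print(f"Isp {isp:6.0f} s -> propellant fraction {propellant_fraction(isp):.0%}")

Under these assumptions the propellant fraction falls from roughly 87 percent at 470 seconds to roughly 55 percent at 1,200 seconds, and whatever is saved in propellant and tankage becomes available, pound for pound, for structure and payload.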
The use of new materials with very high strength-to-weight ratios at elevated temperatures could greatly reduce the weight of the tankage, primary structure, and thermal protection system. Thus, aluminum tankage and structure could be replaced with composite and metal matrix materials. Separate heat-insulating thermal protection layers could be replaced with heat rejection via radiation by allowing the skin to get very hot, and perhaps by providing active cooling of some substructure. Wing and control surface weight can be minimized by using a control-configured design and small control surfaces. Advances in these technologies, which should be feasible by the early 1990s, have the potential of reducing the vehicle dry weight dramatically, compared to designs for the same payload weight using shuttle technology. The performance of rocket vehicles using such technology would far exceed today's values. Depending on the dry weight reductions actually achieved, the best vehicle could have either a single-stage fully-reusable design, or a fully-reusable two-stage design. Attainment of low operating costs will depend most heavily on technology for handling and processing the launch vehicle and cargo in an automated, simple, and rapid manner. This includes self-checkout and launch from the vehicle's cockpit, high reliability and fault-tolerance in the avionics, adaptive controls, lightweight all-electric actuators and mechanisms, standardized mechanisms for modularized servicing of the vehicle, and automated flight planning.

What goes up must come down—even in Earth orbit! The difference in space is that it can take millions of years for objects to be pulled back to Earth by friction with Earth's atmosphere, depending on how close they are to Earth. An object 100 miles above Earth will return in a matter of days, while objects in geostationary orbit will take millions of years to reenter. Since the dawn of the Space Age, thousands of objects with a collective mass of millions of pounds have been deposited in space. While some satellites and pieces of debris are reentering, others are being launched, so the space debris population remains constant at approximately 5,000 pieces large enough to be tracked from Earth (thousands more are too small to be detected). This uncontrolled space population presents a growing hazard of reentering objects and in-space collisions. As objects reenter, they usually burn up through the heat of friction with Earth's atmosphere, but large pieces may reach the ground. This can constitute a danger to people and property, although there is no proof that anyone has ever been struck by a piece of space debris. There are numerous cases of such debris reaching the ground, however, including the reentry of the U.S. Skylab over Australia in 1979, and the unexpected reentry of two Soviet nuclear reactor powered satellites in 1978 and 1983. The hazard of in-space collisions is created both by multiple collisions between pieces of debris and by intentional or unintentional explosions or fragmentations of satellites. When space objects collide with each other or explode, thousands of smaller particles are created, increasing the probability of further collisions among themselves and with spacecraft. A spacecraft is now more likely to be damaged by space debris than by small micrometeorites.
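The danger posed by even tiny fragments follows directly from collision speeds in orbit. The fragment mass and closing speed in the sketch below are assumed for illustration only.

    # Kinetic energy of a small debris fragment at an assumed closing speed.
    mass_kg = 0.001            # a 1-gram fragment (assumed)
    closing_speed = 10_000.0   # m/s, a representative low-orbit collision speed (assumed)

    energy_kj = 0.5 * mass_kg * closing_speed ** 2 / 1000.0
    print(f"impact energy = {energy_kj:.0f} kJ")   # about 50 kJ

Fifty kilojoules is on the order of ten times the muzzle energy of a typical rifle bullet, delivered by an object too small to track from the ground, which is why the shielding and prevention measures described next are taken seriously.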
For large, long-life orbital facilities, such as space stations and spaceports, the collision probabilities will become serious by the year 2000, requiring bumper shields or other countermeasures, and more frequent maintenance. All spacefaring nations should adopt preventive measures to minimize the introduction of new uncontrolled and long-lived debris into orbit. Such countermeasures include making all pieces discarded from spacecraft captive, deorbiting spent spacecraft or stages, adjusting the orbits of transfer stages so that rapid reentry is assured due to natural disturbances, and designating long-life disposal orbits for high-altitude spacecraft. The increasing hazard of space debris must be halted and reversed.

In a purely physical sense, the Space Station will overshadow all preceding space facilities. Although often referred to as the "NASA" Space Station, it will actually be international in character; Europe, Canada, and Japan, in particular, plan to develop their own hardware components for the Station. As currently visualized, the initial Station will be a 350-foot by 300-foot structure containing four pressurized modules (two for living and two for working), assorted attached pallets for experiments and manufacturing, eight large solar panels for power, communications and propulsion systems, and a robotic manipulator system similar to the shuttle arm. When fully assembled, the initial Station will weigh about 300,000 pounds and carry a crew of six, with a replacement crew brought on board every 90 days. To deliver and assemble the Station's components, 12 shuttle flights will be required over an 18-month period. The pressurized modules used by the Station will be about 14 feet in diameter and 40 feet long to fit in the shuttle's cargo bay. The Station will circle Earth every 90 minutes at 250-mile altitude and 28.5-degree orbital inclination. Thus the Station will travel only between 28.5 degrees north and south latitude. Unoccupied associate platforms that can be serviced by crews will be in orbits similar to this, as well as in polar orbits circling Earth over the North and South Poles. Polar-orbiting platforms will carry instruments for systems that require a view of the entire globe. The Station will provide a versatile, multifunctional facility. In addition to providing housing, food, air, and water for its inhabitants, it will be a science laboratory performing scientific studies in astronomy, space plasma physics, Earth sciences (including the ocean and atmosphere), materials research and development, and life sciences. The Station will also be used to improve our space technology capability, including electrical power generation, robotics and automation, life support systems, Earth observation sensors, and communications. The Station will provide a transportation hub for shuttle missions to and from Earth. When the crew is rotated every 90 days, the shuttle will deliver food and water from Earth, as well as materials and equipment for the science laboratories and manufacturing facilities. Finished products and experiment results will be returned to Earth. The Station will be the originating point and destination for flights to nearby platforms and other Earth orbits. The orbital maneuvering vehicle used for these trips will be docked at the Station.

Space tethers have been known in principle for almost 100 years and were crudely tested in two Gemini flights in the 1960s.
They were first seriously proposed for high-atmosphere sampling from the shuttle by Italy's Giuseppe Colombo in 1976, which led to a cooperative program between NASA and Italy scheduled to fly in 1988. In the past few years NASA has systematically explored tethering principles in innovative ways for many applications, so the value of space tethers is now becoming clear, and they will be incorporated in several space facilities. Tethers in space capitalize on the fundamental dynamics of bodies moving through central gravity and magnetic fields. They can even provide a pseudo-force field in deep space where none exists. Energy and momentum can be transferred from a spacecraft being lowered on a tether below the orbital center of mass to another spacecraft being raised on a tether above it, by applying the principle of conservation of angular momentum of mass in orbit. Upon release, the lower spacecraft will fly to a lower perigee, since it is in a lower energy orbit, while the upper will fly to a higher apogee. Thus, for example, a shuttle departing from a space station can tether downward and then release, reentering the atmosphere without firing its engines, while transferring some energy and momentum to a transfer vehicle leaving the station upward bound for geostationary orbit or the Moon. The result is significant propellant savings for both. Since the process of transferring and storing energy and momentum is reversible, outgoing vehicles can be boosted by the slowing of incoming vehicles. This can be applied in Earth orbit, in a lunar transfer station, or even in a two-piece elevator system of tethers on Phobos and Deimos that could greatly reduce propellant requirements for Mars transportation.

The generation of artificial gravity via tethers offers another class of opportunities. Spacecraft in orbit tethered together will experience an artificial gravity proportional to their distance from their center of mass. Current materials such as Kevlar can support tether lengths of hundreds of miles, allowing controlled gravity fields up to about 0.2 g to be generated. By varying tether length, the forces can be set to any level between 0 and 0.2 g. This can be used for settling and storing propellants at a space station, for life science research, and for simplifying living and working in space. By deliberately spinning a habitat on a tether only 1,000 feet long, levels of 1 g or more can be generated at low revolutions per minute, with low Coriolis forces, to prevent nausea. Long tethers minimize the required mass of the structure, and the synthetic gravity can be altered by varying the spin rate or by reeling tethered counterbalancing masses in or out.

If a tether is made of conducting material in orbit about a planet with a magnetic field (like Earth), it will act as a new type of space electric power generator, obtaining energy directly from the orbital energy of the spacecraft, or from a chemical or ion propellant used to keep the orbit from decaying. If power is driven into the tether instead (from a solar array or other source) it will act as an electric motor, and the spacecraft will change altitude, the tether acting as propellantless propulsion with a specific impulse of above 300,000 seconds. This feature can also be exploited by a tethered spacecraft in Jupiter's strong magnetic field. Propulsion can be provided for maneuvers to visit the Jovian satellites, and very high power can be simultaneously generated for the spacecraft and its transmitters.
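A short calculation makes the artificial-gravity figures above concrete. For a rotating tether the acceleration at radius r is the square of the angular rate times r; the sketch below assumes the habitat swings at the full 1,000-foot length from the spin axis (a counterweight arrangement would change the effective radius).

    import math

    # Spin rate needed for 1 g on a tethered habitat: a = omega**2 * r.
    tether_length_m = 1000 * 0.3048    # 1,000 feet, in meters
    target_accel = 9.81                # 1 g, in m/s^2

    omega = math.sqrt(target_accel / tether_length_m)   # required angular rate, rad/s
    rpm = omega * 60.0 / (2.0 * math.pi)
    print(f"spin rate for 1 g: {rpm:.1f} rpm")           # about 1.7 rpm

At well under two revolutions per minute the Coriolis accelerations felt by a moving crew member are small, consistent with the report's point that slow spin on a long tether avoids nausea.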
As humans move out to settle space, the consequences of long-term exposure to less than Earth's gravity must be fully understood. In our deliberations, the Commission has found a serious lack of data regarding the effects on the health of humans living for long periods of time in low-gravity environments. NASA's experience suggests that the "space sickness" syndrome that afflicts as many as half the astronauts and cosmonauts is fortunately self-limiting. Of continuing concern to medical specialists, however, are the problems of cardiovascular deconditioning after months of exposure to microgravity, the demineralization of the skeleton, the loss of muscle mass and red blood cells, and impairment of the immune response. Space shuttle crews now routinely enter space for periods of seven to nine days and return with no recognized long-term health problems, but these short-term flights do not permit sufficiently detailed investigations of the potentially serious problems. For example, U.S. medical authorities report that Soviet cosmonauts who returned to Earth in 1984 after 237 days in space emerged from the flight with symptoms that mimic severe cerebellar disease, or cerebellar atrophy. The cerebellum is the part of the brain that coordinates and smooths out muscle movement, and helps create the proper muscle force for the movement intended. These pioneering cosmonauts apparently required 45 days of Earth gravity before muscle coordination allowed them to remaster simple children's games, such as playing catch, or tossing a ring at a vertical peg. As little as we know about human adaptation to microgravity, we have even less empirical knowledge of the long-term effects of the one-sixth gravity of the Moon, or the one-third gravity of Mars. We need a vigorous biomedical research program, geared to understanding the problems associated with long-term human spaceflight. Our recommended Variable-g Research Facility in Earth orbit will help the Nation accumulate the needed data to support protracted space voyages by humankind and life on worlds with different gravitational forces. We can also expect valuable new medical information useful for Earth-bound patients from this research.

Five U.N. treaties are currently in force regarding activities in space: the Treaty on Principles Governing the Activities of States in the Exploration and Use of Outer Space, Including the Moon and Other Celestial Bodies (1967); the Agreement on the Rescue of Astronauts, the Return of Astronauts, and the Return of Objects Launched into Outer Space (1968); the Convention on International Liability for Damage Caused by Space Objects (1972); the Convention on Registration of Objects Launched into Outer Space (1976); and the Treaty on Principles Governing Activities on the Moon and Other Celestial Bodies (1979). The major space nations, including the United States and Soviet Union, have ratified all but the last, which is more commonly referred to as the "Moon Treaty." Only five countries have signed and ratified that agreement. In addition to deliberations at the United Nations, there is an organization called the International Institute of Space Law, part of the International Astronautical Federation, that provides a forum for discussing space law at its annual meetings. A specific opportunity for global space cooperation will occur in 1992.
Called the International Space Year (ISY), it will take advantage of a confluence of anniversaries in 1992: the 500th anniversary of the discovery of America, the 75th anniversary of the founding of the Union of Soviet Socialist Republics, and the 35th anniversaries of the International Geophysical Year and the launch of the first artificial satellite, Sputnik 1. During this period, it is also expected that the International Geosphere/Biosphere Program will be in progress, setting the stage for other related space activities. In 1985, Congress approved the ISY concept in a bill that authorizes funding for NASA. The legislation calls on the President to endorse the ISY and consider the possibility of discussing it with other foreign leaders, including those of the Soviet Union. It directs NASA to work with the State Department and other Government agencies to initiate interagency and international discussions exploring opportunities for international missions and related research and educational activities. As stated by Senator Spark Matsunaga on the tenth anniversary of the historic Apollo-Soyuz Test Project, July 17, 1985, "An International Space Year won't change the world. But at the minimum, these activities help remind all peoples of their common humanity and their shared destiny aboard this beautiful spaceship we call Earth."
http://www.nss.org/resources/library/pioneering/sidebars.html
Version 1.89.J01 - 3 April 2012

Units is a program for computations on values expressed in terms of different measurement units. It is an advanced calculator that takes care of the units. You can try it here. Suppose you want to compute the mass, in pounds, of water that fills to the depth of 7 inches a rectangular area 5 yards by 4 feet 3 inches. You recall from somewhere that 1 liter of water has the mass of 1 kilogram. To obtain the answer, you multiply the water's volume by its specific mass. Enter this after You have above:

5 yards * (4 feet + 3 in) * 7 in * 1 kg/liter

then enter pounds after You want and hit the Enter key or press the Compute button. The following will appear in the result area:

5 yards * (4 feet + 3 in) * 7 in * 1 kg/liter = 2321.5398 pounds

You did not have to bother about conversions between yards, feet, inches, liters, kilograms, and pounds. The program did it all for you behind the scenes. Units supports complicated expressions and a number of mathematical functions, as well as units defined by linear, nonlinear, and piecewise-linear functions. See under Expressions for detailed specifications. Units has an extensive data base that, besides units from different domains, cultures, and periods, contains many constants of nature, such as:

pi - ratio of circumference to diameter
c - speed of light
e - charge on an electron
h - Planck's constant
force - acceleration of gravity

As an example of using these constants, suppose you want to find the wavelength, in meters, of a 144 MHz radio wave. It is obtained by dividing the speed of light by the frequency. The speed of light is 186282.39 miles/sec. But you do not need to know this exact number. Just press Clear and enter this after You have:

c / 144 MHz

Enter m after You want and hit the Enter key. You will get this result:

c / 144 MHz = 2.0818921 m

Sometimes you may want to express the result as a sum of different units, for example, to find what is 2 m in feet and inches. To try this, press Clear and enter 2 m after You have. Then enter ft;in after You want and hit Enter. You will get this result:

2 m = 6 ft + 6.7401575 in

Other examples of computations:

Feet and inches to metric: 6 ft + 7 in = 200.66 cm
Time in mixed units: 2 years = 17531 hours + 37 min + 31.949357 s
Angle in mixed units: radian = 57 deg + 17 ' + 44.806247 "
Fahrenheit to Celsius: tempF(97) = tempC(36.111111)
Electron flow: 5 mA = 3.1207548e16 e/sec
Energy of a photon: h * c / 5896 angstrom = 2.1028526 eV
Mass to energy: 1 g * c^2 = 21.480764 kilotons tnt
Baking: 2 cups flour_sifted = 226.79619 g
Weight as force: 5 force pounds = 22.241108 newton

You can explore the units data base with the help of the four buttons under the You have field. By entering any string in the You have field and pressing the Search button, you obtain a list of all unit names that contain that string as a substring. For example, if you enter year at You have and press Search, you get a list of about 25 different kinds of year, including marsyear and julianyear. Pressing Definition displays this in the result area:

year = tropicalyear = 365.242198781 day = 31556926 s

which tells you that year is defined as equal to tropicalyear, which is equal to 365.242198781 days or 31556926 seconds. If you now enter tropicalyear at You have and press the Source button, you open a browser on the unit data base at the place containing the definition of tropicalyear. You find there a long comment explaining that unit. You may then freely browse the data base to find other units and facts about them.
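As an aside on the opening water example: the computation is ordinary dimensional bookkeeping, and the following Python sketch (hand-coded conversion factors, not the Units program itself) reproduces the same answer:

    # Hand-coded conversion factors; the Units program looks these up in its data base.
    M_PER_YARD = 0.9144
    M_PER_FOOT = 0.3048
    M_PER_INCH = 0.0254
    KG_PER_POUND = 0.45359237

    length_m = 5 * M_PER_YARD                     # 5 yards
    width_m = 4 * M_PER_FOOT + 3 * M_PER_INCH     # 4 feet 3 inches
    depth_m = 7 * M_PER_INCH                      # 7 inches

    volume_litres = length_m * width_m * depth_m * 1000.0   # 1 m^3 = 1000 litres
    mass_kg = volume_litres * 1.0                            # 1 kg per litre of water
    print(mass_kg / KG_PER_POUND)                            # ~2321.54 pounds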
Pressing Conformable units will give you a list of all units for measuring the same property as tropicalyear, namely the length of a time interval. The list contains over 80 units. Instead of the applet shown above, you can use Units as a stand-alone application. As it is written in Java, you can use it under any operating system that supports Java Runtime Environment (JRE) release 1.5.0 or later. To install Units on your computer, download the Java archive (JAR) file that contains the executable Java classes. Save the JAR file in any directory, under any name of your choice, with extension .jar. If your system has an association of .jar files with javaw command (which is usually set up when you install JRE), just double-click on the JAR file icon. If this does not work, you can type java -jar jarfile at the command prompt, where jarfile is the name you gave to the JAR file. Each way should open the graphic interface of Units, similar to one at the beginning of this page. With Units installed on your computer, you can use it interactively from command line, or invoke it from scripts. It imitates then almost exactly the behavior of GNU Units from which it has evolved. See under Command interface for details. You also have a possibility to modify the file that contains unit definitions, or add your own definitions in separate file(s). (The applet can only use its own built-in file.) See under Adding own units for explanation how to do it. The complete package containing the JAR and the Java source can be downloaded as a gzipped tar file from the SourceForge project page. You use expressions to specify computations on physical and other quantities. A quantity is expressed as the product of a numerical value and a unit of measurement. Each quantity has a dimension that is either one of the basic dimensions such as length or mass, or a combination of those. For example, 7 mph is the product of number 7 and unit mile/hour; it has the dimension of length divided by time. For a deeper discussion, see articles on physical quantity and dimensional analysis. For each basic dimension, Units has one primitive unit: meter for length, gram for mass, second for time, etc.. The data base defines each non-primitive unit in such a way that it can be converted to a combination of primitive units. For example, mile is defined as equal to 1609.344 m and hour to 3600 s. Behind the scenes, Units replaces the units you specify by these values, so 7 mph becomes: 7 mph = 7 * mile/hour = 7 * (1609.344*m)/(3600*s) = 3.12928 m/s This is the quantity 7 mph reduced to primitive units. The result of a computation can, in particular, be reduced to a number, which can be regarded as a dimensionless quantity: 17 m / 120 in = 5.5774278 In your expressions, you can use any units named in the units data base. You find there all standard abbreviations, such as ft for foot, m for meter, or A for ampere. For readability, you may use plural form of unit names, thus writing, for example, seconds instead for second. If the string you specified does not appear in the data base, Units will try to ignore the suffix s or es. It will also try to remove the suffix ies and replace it by y. The data base contains also some irregular plurals such as feet. The data base defines all standard metric prefixes as numbers. Concatenating a prefix in front of a unit name means multiplication by that number. Thus, the data base does not contain definitions of units such as milligram or millimeter. 
Instead, it defines milli- and m- as prefixes that you can apply to gram, g, meter, or m, obtaining milligram, mm, etc. Only one prefix is permitted per unit, so micromicrofarad will fail. However, micro is a number, so micro microfarad will work and mean .000001 microfarad. Numbers are written using standard notation, with or without decimal point. They may be written with an exponent, for example 3.43e-8 to mean 3.43 times 10 to the power of -8. By writing a quantity as 1.2 meter or 1.2m, you really mean 1.2 multiplied by meter. This is multiplication denoted by juxtaposition. You can use juxtaposition, with or without space, to denote multiplication also in other contexts, whenever you find it convenient. In addition to that, you indicate multiplication in the usual way by an asterisk (*). Division is indicated by a slash (/) or per. Division of numbers can also be indicated by the vertical dash (|). Examples:

10cm 15cm 1m = 15 liters
7 * furlongs per fortnight = 0.0011641667 m/s
1|2 meter = 0.5 m

The multiplication operator * has the same precedence as / and per; these operators are evaluated from left to right. Multiplication using juxtaposition has higher precedence than * and division. Thus, m/s s/day does not mean (m/s)*(s/day) but m/(s*s)/day = m/(s*s*day), which has dimension of length per time cubed. Similarly, 1/2 meter means 1/(2 meter) = .5/meter, which is probably not what you would intend. The division operator | has precedence over both kinds of multiplication, so you can write 'half a meter' as 1|2 meter. This operator can only be applied to numbers. Sums are written with the plus (+) and minus (-). Examples:

2 hours + 23 minutes - 32 seconds = 8548 seconds
12 ft + 3 in = 373.38 cm
2 btu + 450 ft lbf = 2720.2298 J

The quantities which are added together must have identical dimensions. For example, 12 printerspoint + 4 heredium results in this message: Sum of non-conformable values: 0.0042175176 m 20186.726 m^2. Plus and minus can be used as unary operators. Minus as a unary operator negates the numerical value of its operand. Exponents are specified using the operator ^ or **. The exponent must be a number. As usual, x^(1/n) means the n-th root of x, and x^(-n) means 1/(x^n):

cm^3 = 0.00026417205 gallon
100 ft**3 = 2831.6847 liters
acre^(1/2) = 208.71074 feet
(400 W/m^2 / stefanboltzmann)^0.25 = 289.80881 K
2^-0.5 = 0.70710678

An exponent n or 1/n where n is not an integer can only be applied to a number. You can take the n-th root of a non-numeric quantity only if that quantity is an n-th power:

foot^pi = Non-numeric base, 0.3048 m, for exponent 3.1415927.
hectare**(1/3) = 10000 m^2 is not a cube.

An exponent like 2^3^2 is evaluated right to left. The operators ^ and ** have precedence over multiplication and division, so 100 ft**3 is 100 cubic feet, not (100 ft)**3. On the other hand, they have a lower priority than prefixing and |, so centimeter^3 means cubic centimeter, but centi meter^3 is 1/100 of a cubic meter. The square root of two thirds can be written as 2|3^1|2. Abbreviation. You may concatenate a one-digit exponent, 2 through 9, directly after a unit name. In this way you abbreviate foot^3 to foot3 and sec^2 to sec2. But beware: $ 2 means two dollars, but $2 means one dollar squared. Units provides a number of functions that you can use in your computation. You invoke a function in the usual way, by writing its name followed by the argument in parentheses. Some of them are built into the program, and some are defined in the units data base.
The built-in functions include sin, cos, tan, their inverses asin, acos, atan, and:

ln - natural logarithm
log - base-10 logarithm
log2 - base-2 logarithm
exp - exponential
sqrt - square root, sqrt(x) = x^(1/2)
cuberoot - cube root, cuberoot(x) = x^(1/3)

The argument of sin, cos, and tan must be a number or an angle. They return a number. The argument of asin, acos, atan, ln, log, log2, and exp must be a number. The first three return an angle and the remaining return a number. The argument of sqrt and cuberoot must be a number, or a quantity that is a square or a cube. The functions defined in the units data base include:

circlearea - area of circle with given radius
pH - converts pH value to moles per liter
tempF - converts temperature Fahrenheit to temperature Kelvin
wiregauge - converts wire gauge to wire thickness

Most of them are used to handle nonlinear scales, as explained under Nonlinear measures. By preceding a function's name with a tilde (~) you obtain an inverse of that function:

circlearea(5cm) = 78.539816 cm^2
~circlearea(78.539816 cm^2) = 5 cm
pH(8) = 1.0E-8 mol/liter
~pH(1.0E-8 mol/liter) = 8
tempF(97) = 309.26111 K
~tempF(309.26111 K) = 96.999998
wiregauge(11) = 2.3048468 mm
~wiregauge(2.3048468 mm) = 11

The following table summarizes all operators in the order of precedence:

prefix, concatenated exponent
number division | (left to right)
unary + -
exponent ^ ** (right to left)
multiplication by juxtaposition (left to right)
multiplication and division * / per (left to right)
sum + - (left to right)

A plus and minus is treated as unary only if it comes first in the expression or follows any of the operators ^, **, *, /, per, +, or -. Thus, 5 -2 is interpreted as '5 minus 2', and not as '5 times -2'. Parentheses can be applied in the usual way to indicate the order of evaluation. The syntax of expressions is defined as follows. Phrases and symbols in quotes represent themselves, | means 'or', ? means optional occurrence, and * zero or more occurrences.

expr = term (('+' | '-') term)* | ('/' | 'per') product
term = product (('*' | '/' | 'per') product)*
product = factor factor*
factor = unary (('^' | '**') unary)*
unary = ('+' | '-')? primary
primary = unitname | numexpr | bfunc '(' expr ')' | '~'? dfunc '(' expr ')' | '(' expr ')'
numexpr = number ('|' number)*
number = mantissa exponent?
mantissa = '.' digits | digits ('.' digits?)?
exponent = ('e' | 'E') sign? digits
unitname = unit name with optional prefix, suffix, and/or one-digit exponent
bfunc = built-in function name: sqrt, cuberoot, sin, cos, etc.
dfunc = defined function name

Names of syntactic elements shown above in italics may appear in error messages that you receive if you happen to enter an incorrect expression. For example:

You have: 1|m
After '1|': expected number.
You have: cm^per $
After 'cm^': expected unary.
You have: 3 m+*lbf
After '3 m+': expected term.

Spaces are in principle ignored, but they are often required in multiplication by juxtaposition. For example, writing newtonmeter will result in the message Unit 'newtonmeter' is unknown; you need a space in the product newton meter. To avoid ambiguity, a space is also required before a number that follows another number. Thus, an error will be indicated after 1.2 in 1.2.3. Multiplication by juxtaposition may also result in another ambiguity. As e is a small unit of charge, an expression like 3e+2C can be regarded as meaning (3e+2)*C or (3*e)+(2*C). This ambiguity is resolved by always including as much as possible in a number.
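As an aside on the tilde notation introduced above, a defined function and its inverse are just an ordinary function pair. Here is a minimal Python sketch of the circlearea pair, an illustration only, not the program's internal code:

    import math

    # Toy versions of a defined function and its inverse, mirroring the '~' notation.
    def circlearea(radius_m):
        """Area of a circle, in m^2, for a radius given in metres."""
        return math.pi * radius_m ** 2

    def inv_circlearea(area_m2):
        """Plays the role of ~circlearea: radius in metres for a given area."""
        return math.sqrt(area_m2 / math.pi)

    a = circlearea(0.05)          # 5 cm expressed in metres
    print(a)                      # ~0.0078539816 m^2, i.e. 78.539816 cm^2
    print(inv_circlearea(a))      # 0.05 m, back to the original 5 cm radius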
In the Overview, it was shown how you specify the result by entering a unit name at You want. In fact, you can enter there any expression specifying a quantity with the same dimension as the expression at You have:

You have: 10 gallons
You want: 20 cm * circlearea(5cm)
10 gallons = 24.09868 * 20 cm * circlearea(5cm)

This tells you that you can almost fit 10 gallons of liquid into 24 cans of diameter 10 cm and 20 cm tall. However:

You have: 10 gallons
You want: circlearea(5cm)
Conformability error
10 gallons = 0.037854118 m^3
circlearea(5cm) = 0.0078539816 m^2

Some units, like radian and steradian, are treated as dimensionless and equal to 1 if it is necessary for conversion. For example, power is equal to torque times angular velocity. The dimension of the expression at You have below is kg m^2 radian/s^3, and the dimension of watt is kg m^2/s^3. The computation is made possible by treating radian as dimensionless:

You have: (14 ft lbf) (12 radians/sec)
You want: watts
(14 ft lbf) (12 radians/sec) = 227.77742 watts

Note that dimensionless units are not treated as dimensionless in other contexts. They cannot be used as exponents, so, for example, meter^radian is not allowed. You can also enter at You want an expression with a dimension that is the inverse of that at You have:

You have: 8 liters per 100 km
You want: miles per gallon
reciprocal conversion
1 / 8 liters per 100 km = 29.401823 miles per gallon

Here, You have has the dimension of volume divided by length, while the dimension of You want is length divided by volume. This is indicated by the message reciprocal conversion, and by showing the result as equal to the inverse of You have. You may enter at You want the name of a function, without argument. This will apply the function's inverse to the quantity from You have:

You have: 30 cm^2
You want: circlearea
30 cm^2 = circlearea(0.030901936 m)

You have: 300 K
You want: tempF
300 K = tempF(80.33)

Of course, You have must specify the correct dimension:

You have: 30 cm
You want: circlearea
Argument 0.3 m of function ~circlearea is not conformable to 1 m^2.

If you leave the You want field empty, you obtain the quantity from You have reduced to primitive units:

You have: 7 mph
You want:
3.12928 m / s

You can also convert to a list of units, where each unit after the first accounts for the remainder left by the previous one:

You have: 2 m
You want: ft;in;1|8 in
2 m = 6 ft + 6 in + 5.9212598 * 1|8 in

Note that you are not limited to unit names, but can use expressions like 1|8 in above. The first unit is subtracted from the given value as many times as possible, then the second from the rest, and so on; finally the rest is converted exactly to the last unit in the list. Ending the unit list with ';' separates the integer and fractional parts of the last coefficient:

You have: 2 m
You want: ft;in;1|8 in;
2 m = 6 ft + 6 in + 5|8 in + 0.9212598 * 1|8 in

Ending the unit list with ';;' results in rounding the last coefficient to an integer:

You have: 2 m
You want: ft;in;1|8 in;;
2 m = 6 ft + 6 in + 6|8 in (rounded up to nearest 1|8 in)

Each unit on the list must be conformable with the first one on the list, and with the one you entered at You have:

You have: meter
You want: ft;kg
Invalid unit list. Conformability error:
ft = 0.3048 m
kg = 1 kg

You have: meter
You want: lb;oz
Conformability error
meter = m
lb = 0.45359237 kg

Of course you should list the units in decreasing order; otherwise, the result may not be very useful:

You have: 3 kg
You want: oz;lb
3 kg = 105 oz + 0.051367866 lb
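The 'subtract as many times as possible' rule for unit lists is easy to mimic. The following Python sketch, an illustration with hand-coded unit sizes rather than the program's own code, reproduces the 2 m = 6 ft + 6 in + 5.92... * 1|8 in result shown above:

    def to_unit_list(value_m, steps):
        """Decompose value_m (metres) over units given as (name, size_in_metres),
        largest first; the remainder is expressed exactly in the last unit."""
        parts = []
        for i, (name, size) in enumerate(steps):
            if i < len(steps) - 1:
                count = int(value_m // size)
                value_m -= count * size
                parts.append((count, name))
            else:
                parts.append((value_m / size, name))   # exact remainder in the last unit
        return parts

    steps = [("ft", 0.3048), ("in", 0.0254), ("1|8 in", 0.0254 / 8)]
    print(to_unit_list(2.0, steps))
    # [(6, 'ft'), (6, 'in'), (5.92..., '1|8 in')], matching 2 m = 6 ft + 6 in + 5.92... * 1|8 in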
A unit list such as cup;1|2 cup;1|3 cup;1|4 cup;tbsp;tsp;1|2 tsp;1|4 tsp can be tedious to enter. Units provides shorthand names for some common combinations:

hms - hours, minutes, seconds
dms - angle: degrees, minutes, seconds
time - years, days, hours, minutes and seconds
usvol - US cooking volume: cups and smaller

Using these shorthands, or unit list aliases, you can do the following conversions:

You have: anomalisticyear
You want: time
1 year + 25 min + 3.4653216 sec

You have: 1|6 cup
You want: usvol
2 tbsp + 2 tsp

You cannot combine a unit list alias with other units: it must appear alone at You want. Some measures cannot be expressed as the product of a number and a measurement unit. Such measures are called nonlinear. An example of a nonlinear measure is the pH value used to express the concentration of a certain substance in a solution. It is a negative logarithmic measure: a tenfold increase of concentration decreases the pH value by one. You convert between pH values and concentration using the function pH mentioned under Functions:

You have: pH(6)
You want: micromol/gallon
pH(6) = 3.7854118 micromol/gallon

For conversion in the opposite direction, you use the inverse of pH, as described under Specifying result:

You have: 0.17 micromol/cm^3
You want: pH
0.17 micromol/cm^3 = pH(3.7695511)

Other examples of nonlinear measures are the various "gauges". They express the thickness of a wire, plate, or screw by a number that is not obviously related to the familiar units of length. (Use the Search button on gauge to find them all.) Again, they are handled by functions that convert the gauge to units of length:

You have: wiregauge(11)
You want: inches
wiregauge(11) = 0.090742002 inches

You have: 1mm
You want: wiregauge
1mm = wiregauge(18.201919)

The most common example of a nonlinear measure is the temperature indicated by a thermometer, or absolute temperature: you cannot really say that it becomes two times warmer when the thermometer goes from 20°F to 40°F. Absolute temperature is expressed relative to an origin; such a measure is called affine. To handle absolute temperatures, Units provides functions such as tempC and tempF that convert them to degrees Kelvin. (Other temp functions can be found using the Search button.) The following shows how you use these functions to convert absolute temperatures:

You have: tempC(36)
You want: tempF
tempC(36) = tempF(96.8)

meaning that 36°C on a thermometer is the same as 96.8°F. You can think of pH(6), wiregauge(11), tempC(36), or tempF(96.8) not as functions but as readings on the scale pH, tempC, or tempF, used to measure some physical quantity. You can read the examples above as: 'what is 0.17 micromol/cm^3 on the pH scale?', or 'what is 1 mm on the wiregauge scale?', or 'what is the tempF reading corresponding to 36 on the tempC scale?' Note that absolute temperature is not the same as temperature difference, in spite of their units having the same names. The latter is a linear quantity. Degrees Celsius and degrees Fahrenheit for measuring temperature difference are defined as linear units degC and degF. They are converted to each other in the usual way:

You have: 36 degC
You want: degF
36 degC = 64.8 degF
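The affine temperature scales above can be pictured as conversions through kelvin. A minimal Python sketch, assuming the usual 273.15 K offset rather than the program's actual definitions, reproduces the 36°C = 96.8°F example:

    # Affine temperature scales expressed as conversions to kelvin,
    # mirroring the tempC/tempF functions described above.
    def tempC(c):                 # thermometer reading in degrees Celsius -> kelvin
        return c + 273.15

    def tempF(f):                 # thermometer reading in degrees Fahrenheit -> kelvin
        return (f - 32.0) * 5.0 / 9.0 + 273.15

    def inv_tempF(kelvin):        # plays the role of ~tempF
        return (kelvin - 273.15) * 9.0 / 5.0 + 32.0

    print(inv_tempF(tempC(36)))   # 96.8 -- a 36 degC reading corresponds to 96.8 degF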
Some units have different values in different locations. The localization feature accommodates this by allowing the units database to specify region-dependent definitions. In the database, the US units that differ from their British counterparts have names starting with us: uston, usgallon, etc. The corresponding British units are brton, brgallon, etc. When using Units, you can specify en_US or en_GB as the 'locale'. Each of them activates a portion of the database that defines short aliases for these names. Thus, specifying en_US as locale activates these aliases:

ton = uston
gallon = usgallon
etc.

while en_GB activates these:

ton = brton
gallon = brgallon
etc.

The US Survey foot, yard, and mile can be obtained by using the US prefix. These units differ slightly from the international length units. They were in general use until 1959, and are still used for geographic surveys. The acre is officially defined in terms of the US Survey foot. If you want an acre defined according to the international foot, use intacre. The difference between these units is about 4 parts per million. The British also used a slightly different length measure before 1959. These can be obtained with the prefix UK. The units data base is defined by a units data file. A default units file, called units.dat, is packaged in the JAR file together with the program. (You can extract it from there using the jar tool of Java.) If you want to add your own units, you can write your own units file. See how to do it under Writing units file. If you place that file in your home directory under the name units.dat, it will be read after the default units file. You may also supply one or more of your own unit files and access them using the property list or a command option. In each case, you specify the order in which Units will read them. If a unit with the same name is defined more than once, Units will use the last definition that it encounters. Note that adding your own unit files is possible only if you run Units from a downloaded JAR file. An applet can only use the default units.dat file. If you want, you may run Units from the command line. It then imitates almost exactly the behavior of GNU Units. (The differences are listed under What is different.) To use the command line interface, you need to download the Java archive (JAR) file that contains the executable classes and the data file. You can save the JAR in any directory of your choice, and give it any name compatible with your file system. The following assumes that you saved the JAR file under the name jarfile. It also assumes that you have a Java Runtime Environment (JRE) version 1.5.0 or later that is invoked by typing java at your shell prompt. To start an interactive session, type

java -jar jarfile -i

or

java -jar jarfile options

at your shell prompt. The program will print something like this:

2192 units, 71 prefixes, 32 nonlinear units
You have:

At the You have prompt, type the expression you want to evaluate. Next, Units will print You want. There you tell how you want your result, in the same way as in the graphical interface. See under Expressions and Specifying result. As an example, suppose you just want to convert ten meters to feet. Your dialog will look like this:

You have: 10 meters
You want: feet
* 32.8084
/ 0.03048

The answer is displayed in two ways. The first line, which is marked with a * to indicate multiplication, says that the quantity at You have is 32.8084 times the quantity at You want. The second line, marked with a / to indicate division, gives the inverse of that number. In this case, it tells you that 1 foot is equal to about 0.03 dekameters (dekameter = 10 meters). It also tells you that 1/32.8 is about .03. Units prints the inverse because sometimes it is a more convenient number.
For example, if you try to convert grains to pounds, you will see the following:

You have: grains
You want: pounds
* 0.00014285714
/ 7000

From the second line of the output you can immediately see that a grain is equal to a seven-thousandth of a pound. This is not so obvious from the first line of the output. If you find the output format confusing, try using the -v ('verbose') option, which gives:

You have: 10 meters
You want: feet
10 meters = 32.8084 feet
10 meters = (1 / 0.03048) feet

You can suppress printing of the inverse using the -1 ('one line') option. Using both -v and -1 produces the same output as the graphical interface:

You have: 5 yards * (4 feet + 3 in) * 7 in * 1 kg/liter
You want: pounds
5 yards * (4 feet + 3 in) * 7 in * 1 kg/liter = 2321.5398 pounds

If you request a conversion between units which measure reciprocal dimensions, Units will display the conversion results with an extra note indicating that reciprocal conversion has been done:

You have: 6 ohms
You want: siemens
reciprocal conversion
* 0.16666667
/ 6

Again, you may use the -v option to get more comprehensible output:

You have: 6 ohms
You want: siemens
reciprocal conversion
1 / 6 ohms = 0.16666667 siemens
1 / 6 ohms = (1 / 6) siemens

When you specify compact output with -c, you obtain only the conversion factors, without indentation:

You have: meter
You want: yard
1.0936133
0.9144

When you specify compact output and perform conversion to mixed units, you obtain only the conversion factors separated by semicolons. Note that unlike the case of regular output, zeros are included in this output list:

You have: meter
You want: yard;ft;in
1;0;3.3700787

If you only want to find the reduced form or definition of a unit, simply press return at the You want prompt. For example:

You have: 7 mph
You want:
3.12928 m/s

You have: jansky
You want:
Definition: jansky = fluxunit = 1e-26 W/m^2 Hz = 1e-26 kg / s^2

The definition is shown if you entered a unit name at the You have prompt. The example indicates that jansky is defined as equal to fluxunit, which in turn is defined to be a certain combination of watts, meters, and hertz. The fully reduced form appears on the far right. If you type ? at the You want prompt, the program will display a list of named units which are conformable with the unit that you entered at the You have prompt. Note that conformable unit combinations will not appear on this list. Typing help at either prompt displays a short help message. You can also type help followed by a unit name. This opens a window on the units file at the point where that unit is defined. You can read the definition and comments that may give more details or historical information about the unit. Typing search followed by some text at either prompt displays a list of all units whose names contain that text as a substring, along with their definitions. This may help in the case where you aren't sure of the right unit name. To end the session, you type quit at either prompt, or press the Enter (Return) key at the You have prompt. You can use Units to perform computations non-interactively from the command line. To do this, type

java -jar jarfile [options] you-have [you-want]

at your shell prompt. (You will usually need quotes to protect the expressions from interpretation by the shell.) For example, if you type

java -jar jarfile "2 liters" "quarts"

the program will print

* 2.1133764
/ 0.47317647

and then exit. If you omit you-want, Units will print out the definition of the specified unit.
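Because the command-line interface can run non-interactively, it is easy to drive from a script. A hedged Python sketch follows; the JAR name units.jar is a placeholder, and the printed text is only indicative of the -v -1 output format described above:

    # Hypothetical example of calling the Units command-line interface from a script.
    import subprocess

    result = subprocess.run(
        ["java", "-jar", "units.jar", "-v", "-1", "2 liters", "quarts"],
        capture_output=True, text=True, check=True,
    )
    print(result.stdout.strip())   # e.g. "2 liters = 2.1133764 quarts"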
The following options allow you to use alternative units file(s), check your units file, or change the output format: The Java imitation is not an exact port of the original GNU units. The following is a (most likely incomplete) list of differences. You can supply some parameters to Units by setting up a Property list. It is a file named units.opt, placed in the same directory as the JAR file. It may look like this: GUIFONT = Lucida ENCODING = Cp850 LOCALE = en_GB UNITSFILE = ; c:\\Java\\gnu\\units\\my.dat The options -e, -f, -g, and -l specified on the command line override settings from the Property list. You embed a Units applet in a Web page by means of this tag: <APPLET CODE="units.applet.class" ARCHIVE="http://units-in-java.sourceforge.net/Java-units.1.89.J01.jar" WIDTH=500 HEIGHT=400> <PARAM NAME="LOCALE" VALUE="locale"> <PARAM NAME="GUIFONT" VALUE="fontname"> </APPLET> Notice that because an applet cannot access any files on your system, you can use only the default units file packaged in the JAR file. You may view the source of this page for an example of Web page with an embedded Units applet. The units data base is defined by a units data file. A default units file, called units.dat, is packaged in the JAR file together with the program. This section tells you how to write your own units file that you can use together with, or instead of, the default file, as described under Adding own units. The file has to use the UTF-8 character encoding. Since the ASCII characters appear the same in all encodings, you do not need to worry about UTF-8 as long as your definitions use only these characters. Each definition occupies one line, possibly continued by the backslash character (\) that appears as the last character. Comments start with a # character, which can appear anywhere in a line. Following #, the comment extends to the end of the line. Empty lines are ignored. A unit is specified on a single line by giving its name followed by at least one blank, followed by the definition. A unit name must not contain any of the characters + - * / | ^ ( ) ; #. It cannot begin with a digit, underscore, tilde, decimal point, or comma. It cannot end with an underscore, decimal point, or comma. If a name ends in a digit other than zero or one, the digit must be preceded by a string beginning with an underscore, and afterwards consisting only of digits, decimal points, or commas. For example, NO_2, foo_2,1 or foo_3.14 would be valid names but foo2 or foo_a2 would be invalid. The definition is either an expression, defining the unit in terms of other units, or ! indicating a primitive unit, or !dimensionless indicating a dimensionless primitive unit. Be careful to define new units in terms of old ones so that a reduction leads to the primitive units. You can check this using the -C option. See under Checking your definitions. Here is an example of a short units file that defines some basic units: m ! # The meter is a primitive unit sec ! # The second is a primitive unit rad !dimensionless # A dimensionless primitive unit micro- 1e-6 # Define a prefix minute 60 sec # A minute is 60 seconds hour 60 min # An hour is 60 minutes inch 0.0254 m # Inch defined in terms of meters ft 12 inches # The foot defined in terms of inches mile 5280 ft # And the mile A unit which ends with a - character is a prefix. If a prefix definition contains any / characters, be sure they are protected by parentheses. If you define half- 1/2 then halfmeter would be equivalent to 1 / 2 meter. 
Here is an example of a function definition:

tempF(x) [1;K] (x+(-32)) degF + stdtemp ; (tempF+(-stdtemp))/degF + 32

The definition begins with the function name followed immediately (with no spaces) by the name of the parameter in parentheses. Both names must follow the same rules as unit names. Next, in brackets, is a specification of the units required as arguments by the function and its inverse. In the example above, the tempF function requires an input argument conformable with 1. The inverse function requires an input argument conformable with K. Note that this is also the dimension of the function's result. Next come the expressions to compute the function and its inverse, separated by a semicolon. In the example above, the tempF function is computed as

tempF(x) = (x+(-32)) degF + stdtemp

The inverse has the name of the function as its parameter. In our example, the inverse is

~tempF(tempF) = (tempF+(-stdtemp))/degF + 32

This inverse definition takes an absolute temperature as its argument and converts it to the Fahrenheit temperature. The inverse can be omitted by leaving out the ';' character, but then conversions to the unit will be impossible. If you wish to make synonyms for nonlinear units, you still need to define both the forward and inverse functions. So to create a synonym for tempF you could write

fahrenheit(x) [1;K] tempF(x); ~tempF(fahrenheit)

The example below is a function to compute the area of a circle. Note that this definition requires a length as input and produces an area as output, as indicated by the specification in brackets.

circlearea(r) [m;m^2] pi r^2 ; sqrt(circlearea/pi)

An empty or omitted argument specification means that Units will not check the dimension of the argument you supply. Anything compatible with the specified computation will work. For example:

square(x) x^2 ; sqrt(square)
square(5) = 25
square(2m) = 4 m^2

Some functions cannot be computed using an expression. You then have the possibility of defining such a function by a piecewise linear approximation. You provide a table that lists values of the function for selected values of the argument. The values for other arguments are computed by linear interpolation. An example of a piecewise linear function is:

zincgauge[in] 1 0.002, 10 0.02, 15 0.04, 19 0.06, 23 0.1

In this example, zincgauge is the name of the function. The unit in square brackets applies to the result. The argument is always a number. No spaces can appear before the ] character, so a definition like foo[kg meters] is illegal; instead write foo[kg*meters]. The definition is a list of pairs optionally separated by commas. Each pair defines the value of the function at one point. The first item in each pair is the function argument; the second item is the value of the function at that argument (in the units specified in brackets). In this example, you define zincgauge at five points. We have thus zincgauge(1) = 0.002 in. Definitions like this may be more readable if written using continuation characters as

zincgauge[in] \
     1 0.002 \
     10 0.02 \
     15 0.04 \
     19 0.06 \
     23 0.1

If you define a piecewise linear function that is not strictly monotone, the inverse will not be well defined. In such a case, Units will return the smallest inverse.
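A piecewise linear function is just a table lookup with interpolation between the listed points. A short Python sketch of the zincgauge table, an illustration rather than the program's implementation:

    # A toy piecewise linear lookup in the spirit of the zincgauge example.
    # Table pairs: (gauge number, thickness in inches); values between the
    # tabulated points are obtained by linear interpolation.
    ZINCGAUGE = [(1, 0.002), (10, 0.02), (15, 0.04), (19, 0.06), (23, 0.1)]

    def zincgauge(x):
        for (x0, y0), (x1, y1) in zip(ZINCGAUGE, ZINCGAUGE[1:]):
            if x0 <= x <= x1:
                return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
        raise ValueError("argument outside the tabulated range")

    print(zincgauge(10))     # 0.02 in, straight from the table
    print(zincgauge(12.5))   # 0.03 in, interpolated between gauges 10 and 15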
Unit list aliases are treated differently from unit definitions, because they are a data entry shorthand rather than a true definition for a new unit. A unit list alias definition begins with !unitlist and includes the alias and the definition; for example, the aliases included in the standard units data file are:

!unitlist hms hr;min;sec
!unitlist time year;day;hr;min;sec
!unitlist dms deg;arcmin;arcsec
!unitlist ftin ft;in;1|8 in
!unitlist usvol cup;3|4 cup;2|3 cup;1|2 cup;1|3 cup;1|4 cup;\
         tbsp;tsp;1|2 tsp;1|4 tsp;1|8 tsp

Unit list aliases are only for unit lists, so the definition must include a ';'. Unit list aliases can never be combined with units or other unit list aliases, so the definition of time shown above could not have been shortened to year;day;hms. As usual, be sure to run Units with option -C to ensure that the units listed in unit list aliases are conformable. A locale region in the units file begins with !locale followed by the name of the locale. The locale region is terminated by !endlocale. The following example shows how to define a couple of units in a locale:

!locale en_GB
ton brton
gallon brgallon
!endlocale

A file can be included by giving the command !include followed by the full path to the file. You are recommended to check the new or modified units file by invoking Units from the command line with option -C. Of course, the file must be made available to Units as described under Adding own units. The option will check that the definitions are correct, and that all units reduce to primitive ones. If you created a loop in the units definitions, Units will hang when invoked with the -C option. You will need to use the combined -Cv option, which prints out each unit as it checks them. The program will still hang, but the last unit printed will be the unit which caused the infinite loop. If the inverse of a function is omitted, the -C option will display a warning. It is up to you to calculate and enter the correct inverse function to obtain proper conversions. The -C option tests the inverse at one point and prints an error if it is not valid there, but this is not a guarantee that your inverse is correct. The -C option will print a warning if a non-monotone piecewise linear function is encountered. Units works internally with double-byte Unicode characters. The unit data files use the UTF-8 encoding. This enables you to use Unicode characters in unit names. However, you cannot always access them. The graphical interface of Units can display all characters available in its font. Those not available are shown as empty rectangles. The default font is Monospaced. It is a so-called logical font, or a font family, with different versions depending on the locale. It usually contains all the national characters and much more, but far from all of Unicode. You may specify another font by using the property GUIFONT, or an applet parameter, or the command option -g. You can enter into the Units window all characters available at your keyboard, but there is no facility to enter any other Unicode characters. The treatment of Unicode characters at the command interface depends on the operating system and the Java installation. The operating system may use a character encoding different from the default set up for the Java Virtual Machine (JVM). As a result, names such as ångström typed in the command window are not recognized as unit names. If you encounter this problem, and know the encoding used by the system, you can identify the encoding to Units with the help of the property ENCODING or the command option -e. (In Windows XP, you can find the encoding using the command chcp.
In one case investigated by the author, the encoding was Cp437, while the JVM default was Cp1252.) The units.dat file supplied with Units contains the commands !utf8 and !endutf8. This is so because it is taken unchanged from GNU units. The commands enclose the portions of the file that use non-ASCII characters so they can be skipped in environments that do not support UTF-8. Because Java always supports the UTF-8 encoding for input files, the commands are ignored in Units. The program documented here is a Java development of GNU Units 1.89e, a program written in C by Adrian Mariano ([email protected]). The file units.dat containing the units data base was created by Adrian Mariano, and is maintained by him. The package contains the latest version obtained from the GNU Units repository. GNU Units copyright © 1996, 1997, 1999, 2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2010, 2011 by Free Software Foundation, Inc. Java version copyright © 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010, 2011, 2012 by Roman Redziejowski. The program is free software: you can redistribute it and/or modify it under the terms of the GNU Library General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. The program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. This Web page copyright © 2012 by Roman Redziejowski. The author gives unlimited permission to copy, translate and/or distribute this document, with or without modifications, as long as this notice is preserved, and information is provided about any changes. Substantial parts of this text have been taken, directly or modified, from the manual Unit Conversion, edition 1.89g, written by Adrian Mariano, copyright © 1996, 1997, 1999, 2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2011 by Free Software Foundation, Inc., under a written permission contained in that document.
http://units-in-java.sourceforge.net/
For most people, inflation refers to how much money they need to purchase basic goods and to the interest rates on their loans when their government decides to pay its bills by printing more money. The average worker might earn twice as much but discovers he can purchase only about two-thirds as much in the way of goods and services. Hyperinflation occurred in Germany in 1923 (see figure 1). At one point a small glass of beer and one pound of meat cost 4 and 36 billion German marks, respectively.

Figure 1: Hyperinflation. In 1923, German banknotes lost so much value due to inflation that many Germans found them to be an inexpensive substitute for wallpaper. Photo credit: German Government Archives.

The hyperinflation of the German mark is nothing, however, compared to what astronomers believe occurred to the size of the universe between 10^-35 and 10^-32 seconds after the cosmic creation event. Astronomers conclude that during this extremely brief instant the universe grew in size from one hundred million trillion times smaller than the diameter of a proton to about the size of a grapefruit. That is, the volume of the universe expanded by a factor of 10^102 in less than 10^-32 seconds! In big bang cosmology, nothing less than such hyperinflation shortly after the universe's birth would allow it ever to support life. Without the hyperinflation episode the universe would have lacked the uniformity and homogeneity life demands. (For example, the stars and planets needed to support life can form only in an extremely uniform and homogeneous universe.) These necessary characteristics, in turn, demand that light (or heat) everywhere in the universe be thermally connected to light (or heat) everywhere else in the universe. Yet without inflation, even in a 14-billion-year-old universe there is not enough time for light to travel the necessary distances to explain such thermal connections. Inflation is the last scientific prediction of the biblical big bang creation model1 yet to be proven beyond any reasonable shadow of doubt. Young-earth creationist leaders often use doubt about inflation as a tool to deflect criticism of their own creation models, particularly light-travel times in the universe. Critics of the young-earth view point out that if the universe is less than 50,000 years old, then light lacks the time needed to travel from distant galaxies to astronomers' telescopes. In reply, young-earth proponents argue that in the absence of inflation, light in the universe cannot be causally connected if the universe is less than 15 billion years old. (It is for this reason that I sometimes refer to myself as a middle-age universe creationist, since I hold that the universe is billions of years old, not thousands or quadrillions.) In a recently published article in the Astrophysical Journal, a team of twenty-eight astronomers from the USA, Britain, France, and Chile provided yet more evidence that the universe experienced a brief period of hyperinflation during its infancy.2 The group presented their analysis of two years' worth of data from the Background Imaging of Cosmic Extragalactic Polarization (BICEP) instrument, a bolometric polarimeter located at the South Pole (see figure 2). The BICEP instrument's location allows it to take advantage of the extremely cold conditions of Antarctica, which guarantee that insufficient water remains in the atmosphere to disturb the quality of the observations.
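The expansion factor quoted earlier in this section can be checked with rough order-of-magnitude arithmetic. The Python sketch below uses assumed sizes that do not appear in the article (a proton diameter of about 10^-15 m and a grapefruit of about 0.1 m):

    import math

    # Assumed sizes (not from the article): proton diameter ~1e-15 m, grapefruit ~0.1 m.
    proton_diameter = 1e-15
    initial_size = proton_diameter / 1e20    # "one hundred million trillion times smaller"
    final_size = 0.1                         # roughly grapefruit-sized, in metres

    linear_factor = final_size / initial_size     # ~1e34
    volume_factor = linear_factor ** 3            # ~1e102

    print(f"linear growth ~10^{math.log10(linear_factor):.0f}, "
          f"volume growth ~10^{math.log10(volume_factor):.0f}")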
Figure 2: The BICEP Instrument. The BICEP bolometric polarimeter is the smaller of the two telescopes in this photograph. It is located on the left side of the roof of the two-story building. Photo credit: National Science Foundation.

The BICEP experiment's objective is to measure at a high precision level the polarization of the cosmic background radiation left over from the cosmic creation event. Cosmic inflation and the theory of general relativity predict a primordial background of gravitational waves. These gravitational waves, in turn, predict a highly specified signature in the E and B polarization modes of the cosmic background radiation. The E polarization mode in this radiation was previously detected in the analysis of the 5-year data stream from the Wilkinson Microwave Anisotropy Probe (WMAP) satellite.3 The analysis showed the polarization data from WMAP was consistent with both the inflationary hot big bang creation model and with the universe being dominated primarily by dark energy and secondarily by cold dark matter (the LCDM model). The analysis of the 2-year data stream from the BICEP instrument measured the E-mode to unprecedented precision. It accurately measured the E-mode angular power spectrum from multipole values 21 to 335. It detected for the first time the peak at multipole value = 140 that all inflationary models predicted must exist. In addition to generating E-mode polarization, gravitational waves produced by the cosmic inflationary episode would induce a B-mode polarization in the cosmic background radiation. Unlike the E-mode, a B-mode detection would yield unambiguous proof for a cosmic inflation event. While an E-mode detection is a necessary requirement for the inflationary hot big bang creation event, there are exotic cosmic models capable of explaining the observed E-mode without inflation. However, only an inflation event could explain a B-mode detection. The problem for astronomers, though, is that detecting the B-mode polarization in the cosmic background radiation is much more challenging than the E-mode. The WMAP, for example, lacked the sensitivity to even place a meaningful limit on the B-mode polarization. While the analysis of the 2-year data stream from the BICEP observations failed to detect the B-mode, it had sufficient sensitivity to place for the first time a meaningful constraint on the inflationary gravitational wave background. It eliminated several of the more exotic inflationary big bang models. B- and E-mode polarization in the cosmic background radiation is just one of several tests for cosmic inflation. Two others, for which observational confirmation already exists, are in the values for the geometry of the universe and for what cosmologists term the "scalar spectral index," a parameter that describes the nature of primordial density perturbations. All inflationary hot big bang creation models predict the geometry of the universe must be flat or very nearly flat. Perfect geometric flatness is where the space-time surface of the universe exhibits zero curvature (see figure 3). Two meaningful measurements of the universe's curvature parameter, Ωk, exist. Analysis of the 5-year database from WMAP establishes that -0.0170 < Ωk < 0.0068.4 Weak gravitational lensing of distant quasars by intervening galaxies places -0.031 < Ωk < 0.009.5 Both measurements confirm the universe indeed manifests zero or very close to zero geometric curvature and, hence, provide strong evidence for cosmic inflation.
Figure 3: Evidence for the Flat Geometry Predicted by Cosmic Inflation. If the geometry of the space-time surface of the universe is closed, the angular sizes of the hot and cold spots in maps of the temperature fluctuations in the cosmic background radiation will be large. If the cosmic geometry is open, the angular sizes of the hot and cold spots will be small. A flat-geometry universe will exhibit angular sizes in between. Consequently, high-sensitivity measurements of the temperature fluctuations in the cosmic background radiation, such as those produced by the WMAP satellite, can determine whether the flat, or nearly flat, cosmic geometry predicted by inflationary hot big bang creation models is correct.

Models of the universe that exclude inflation predict the scalar spectral index must take on a value greater than 1.0. For the simplest cosmic inflation model the scalar spectral index = 0.95. For models invoking a more complex, but not wildly exotic, version of cosmic inflation, the scalar spectral index would fall between 0.96 and 0.97. The latest and best WMAP determination, in combination with the best results from the Sloan Digital Sky Survey and the Supernova Cosmology Project, yielded a scalar spectral index measure of 0.960 ± 0.013.6 Clearly, strong evidence for cosmic inflation already exists. The goal of the BICEP research team, however, is to continue their observational program at the South Pole. In due time they will possess a sensitive enough measurement of the B-mode polarization in the cosmic background radiation to determine exactly what kind of cosmic inflation is responsible for the present-day universe. Such an achievement will definitively establish both cosmic inflation and the theory of general relativity. We hope this kind of hard scientific proof will help many people set aside their philosophical and theological objections to the biblically predicted big bang creation model.7 Hopefully, too, it will help proponents of young-earth creationism recognize that scientific research on God's second book of revelation to humanity, namely the record of nature, is the friend and not the enemy of the Christian faith.8

1. Hugh Ross, The Creator and the Cosmos, 3rd ed. (Colorado Springs: NavPress, 2001), 23–29.
2. H. C. Chiang et al., "Measurement of Cosmic Microwave Background Polarization Power Spectra from Two Years of BICEP Data," Astrophysical Journal 711 (March 10, 2010): 1123–40.
3. E. Komatsu et al., "Five-Year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Cosmological Interpretation," Astrophysical Journal Supplement Series 180 (February 2009): 330–76.
5. S. H. Suyu et al., "Dissecting the Gravitational Lens B1608+656. II. Precision Measurements of the Hubble Constant, Spatial Curvature, and the Dark Energy Equation of State," Astrophysical Journal 711 (March 1, 2010): 201–21.
6. E. Komatsu et al.
7. Hugh Ross, The Creator and the Cosmos, 23–29.
8. Psalm 19:1–4; 50:6; 97:6; Romans 1:18–22; Hugh Ross, A Matter of Days (Colorado Springs: NavPress, 2004).
http://www.reasons.org/articles/did-the-universe-hyperinflate
Methane is emitted to the atmosphere primarily by anaerobic microorganisms that obtain their metabolic energy from carbohydrates by fermentation, yielding methane as a waste product. Such organisms are found in wet soils, especially in swamps and flood-irrigated fields (e.g. rice paddies). They also live in the guts of termites and other wood-eating insects, and in the intestines of most herbivorous animals, especially cattle and sheep, where they assist with the digestion of cellulose. These sources are anthropogenic to the extent that humans alter natural patterns, for example by deforestation to facilitate cattle grazing, by draining wetlands, or by irrigation of drylands. However, because the methane production associated with various land uses and animal husbandry practices is not yet well understood, we have not quantified these changes in methane emissions for the United States over the past century. Our focus here is on methane emissions from fossil-fuel production and use, that is, coal-mining, coking, oil and gas drilling, and natural gas distribution. We discuss these four sources in this order.

Coal is a porous carbonaceous material in which substantial quantities of gas (mainly methane) are typically found in cracks or adsorbed on the surfaces. The coal gas contains upward of 90 per cent methane - a typical figure would be 95 per cent. As the coal is broken up in the drilling and mining process and later crushed for use, most of this adsorbed methane is released into the atmosphere. This is the source of methane seepage into coal mines, which has resulted in many explosions and fires. There is enormous variability between coal seams with regard to adsorbed methane, but for a given rank of coal the major variable is depth. Deeper coals contain more adsorbed gas than surface coals, probably owing to the higher pressure. The US Bureau of Mines has developed an empirical equation relating gas volume (in litres/kg or m³/tonne) to depth:

V = k0 (0.096 h)^n0

where h is depth in metres and k0 and n0 are parameters depending on the rank of the coal. Both are related to the ratio of fixed carbon to volatile matter, as shown in figure 6. Adsorptive capacity v. depth is plotted in figure 7. From figure 7 we can estimate that the gas content of anthracite (from underground mines) ranges between 15 and 20 litres/kg (or cubic metres per metric ton). Appalachian bituminous coals average 10-15 litres/kg. Data for a number of mines in the Pittsburgh area are shown in table 6. Midwestern and Western strip-mined medium or low volatile coals would cluster generally in the range 5-10 litres/kg. The national average for all bituminous and anthracite coals has been estimated to be 6.25 or 7.0 litres/kg (SAI, 1980). Emissions from sub-bituminous coals and lignite are smaller (2.8 litres/kg and 1.4 litres/kg, respectively) and can probably be neglected for our purposes, since the quantities of these fuels mined in the United States are small.

Fig. 6 Constants k0 and n0 as a function of fixed carbon/volatile matter ratio (Source: US Bureau of Mines)
Fig. 7 Adsorptive capacity of coal as a function of rank and depth (Source: US Bureau of Mines)

Table 6 Comparison of estimated and direct determination of methane content of coal
|Peach Mountain #18||213||699||22||14-19||+3|
|Redstone||225||747||4||9-11||-5|
Source: US Bureau of Mines.
a. 1 cm³/g = 1 litre/kg = 1 m³/tonne.
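To make these figures concrete, the following Python sketch applies the depth relation with placeholder parameters (the real k0 and n0 must be read from figure 6) and converts adsorbed-gas values in litres/kg into emission coefficients in tonnes of CH4 per tonne of coal, using the 0.714 kg/m³ methane density assumed for table 7 below:

    # Sketch only: k0 and n0 below are placeholders, not values read from figure 6.
    def adsorbed_gas(depth_m, k0=1.0, n0=0.6):
        """Bureau of Mines relation V = k0 * (0.096*h)**n0, in litres CH4 per kg coal."""
        return k0 * (0.096 * depth_m) ** n0

    # Converting litres/kg to an emission coefficient (t CH4 per t coal) using the
    # methane density assumed in table 7 (0.714 kg/m^3 = 0.000714 kg/litre).
    CH4_KG_PER_LITRE = 0.714 / 1000.0

    def emission_coefficient(litres_per_kg):
        return litres_per_kg * CH4_KG_PER_LITRE

    for depth in (100, 300, 600):
        print(depth, "m:", round(adsorbed_gas(depth), 1), "l/kg")   # deeper -> more gas

    print(emission_coefficient(7.0))    # ~0.005 t/t, the average US bituminous figure
    print(emission_coefficient(10.0))   # ~0.007 t/t for anthracite/Appalachian coals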
The earliest coal mines in the United States were in Virginia. However, Pennsylvania and West Virginia soon became dominant sources and remained so until the last quarter of the nineteenth century, when the Illinois coal fields were opened. Western coals were extensively exploited even later. Utah, Wyoming, and Montana have large deposits of low-sulphur, high-ash coals relatively near the surface and accessible to large-scale strip-mining techniques. Meanwhile, some of the older Eastern mines have been largely exhausted (especially in the so-called Pittsburgh seam) and Eastern coals are increasingly from deeper mines. Nevertheless, coking coal used in the steel industry is still obtained mainly from the Appalachian mines. Two long-term trends are apparent:

1. Eastern coal mines have become continuously deeper, on average, over time, with a corresponding gradual increase in associated methane release per ton of coal output to the present level.
2. Most of the increased total production since the late nineteenth century is due to the opening of shallower mines - mostly strip mines - in the Midwest and Far West. Western coals yield less methane per ton than Appalachian coal, but have increased as a fraction of total output.

Taking these two change factors into account, we assume a slight increase in methane emissions from Eastern coals but a constant average for the United States as a whole. These contrary trends result in relatively constant average emission rates, as reflected in our historical emission coefficient estimates (table 7) at the end of this chapter.

Coke is the solid residue produced from the carbonization of bituminous coal after the volatile matter has been distilled off. The main object of coking is to free the bituminous coal from impurities - water, hydrocarbons, and volatilizable sulphur - leaving in the coke fixed carbon, ash, and the non-volatilizable sulphur. The suitability of a coal for conversion to coke is determined by its "coking" properties. Coke is used primarily as a reducing agent for metal oxides in metallurgical processes and secondarily as a fuel. It was first used in the iron-making process in Great Britain in the early 1750s as a replacement for charcoal. The primary factor driving the change was cost: the cost of producing charcoal pig iron greatly increased in the latter half of the eighteenth century while the cost of coke pig iron fell sharply. By the end of the century, coke pig iron provided some 90 per cent of the total iron industry production in Great Britain (Hyde, 1977).

Table 7 Methane emissions coefficients (metric tons CH4/metric ton of fuel)
|Bituminous, US average|-|-|0.005|0.005|0.005|0.005|
a. Emissions coefficients for coal are calculated on the basis of an assumed density of 0.714 kg/m³ for methane, and gas adsorption of 10 litres/kg for anthracite and Appalachian bituminous coals, and 7 litres/kg for average US bituminous coals.
b. Based on coal used for coking.
c. Based on unaccounted potential production of associated gas.
d. Based on gas marketed.

This substitution took place about a century later in the United States because of the greater availability of wood for charcoal manufacturing in the eastern US, as discussed previously. Before the 1830s, almost all pig iron was made with charcoal. In the 1830s, ironmakers began using mineral fuel in the iron-making process, but it was primarily anthracite rather than bituminous coal.
By 1854, the first year for which aggregate statistics are available, pig iron made with anthracite constituted 45 per cent of the total pig iron produced in the country, while that made with bituminous coal only furnished 7.5 per cent. By 1880, however, the percentage of pig iron made with bituminous coal and coke had reached 45 per cent, mixed anthracite and coke provided the fuel for 42 per cent, while the remaining 13 per cent was made with charcoal. One state alone, Pennsylvania, provided 84.2 per cent of US coke production in that year. By 1911, bituminous coal and coke provided the reducing agent for 98 per cent of the pig iron manufactured (Temin, 1964; Warren, 1973).

The first method of making coke was copied from that used to prepare charcoal. Coal was heaped in piles or rows on level ground and covered with a layer of coal dust to minimize airflow. Once the process had been started (with the help of wood), the heat drove off the volatile gases, consisting of methane and ethane plus some ammonia and hydrogen sulphide (H2S). These gases burned at the surface of the pile, which provided heat to keep the process going. When the gaseous matter had been used up, the heap was smothered with a coating of dust or duff, then cooled by wetting, leaving a silvery white residue high in carbon. If a higher, drier heat was applied, the hydrogen sulphide gas was driven off but the sulphur remained and combined with the carbon. No attempt was made to capture any of the escaping gases. The time necessary for coking a heap was usually between five and eight days. The coke yield was approximately 59 per cent of the original mass (Binder, 1974). However, no information is available concerning the total amount of coke produced by this crude process.

By the late nineteenth century, coke was produced mainly in the so-called beehive coke oven. Beehive coke was supposedly first made in western Pennsylvania in 1817; coke iron was produced for the first time in 1837, also in western Pennsylvania, from high-quality coking coal from the Connellsville seam (Warren, 1973). Extensive use of coke in the iron-making process, however, did not begin until after the Civil War. In 1855, there were 106 beehive ovens in the country; by 1880 there were 12,372 ovens in 186 plants, and by 1909 the maximum was reached with 103,982 ovens in 579 plants. By 1939, the number of beehive ovens had shrunk to 10,934 in 76 plants. In terms of their distribution, initially almost all of the beehive ovens were in the so-called Pittsburgh coal bed area located in western Pennsylvania and northern West Virginia. As late as 1918, over half the ovens in the country were still in this region (Eavenson, 1942).

Beehive ovens were arranged in single stacks or in banks of single or double rows. Most late-nineteenth-century ovens were charged with coal delivered by a car running on tracks above the ovens. Before charging, the ovens were preheated by a wood and coal fire. After the coal had been charged, the front opening was bricked up, with a 2- or 3-inch opening left at the top. The coking process proceeded from the top downward, with the required heat for the coking process produced by the burning of the volatile by-products escaping from the coal. When no more volatile matter was escaping, the coking process was complete. The brickwork was removed from the door, and the coke was cooled with a water spray and then removed from the oven by either hand or mechanical means (Wagner, 1916).
The yield of high-class Connellsville coal coked in beehive ovens in 1875 was 63 per cent (Platt, 1875); in 1911 the US Geological Survey reported that the average yield nationally for beehive ovens was 64.7 per cent (Wagner, 1916). Still, no attempt was made to capture and utilize the valuable by-products resulting from the beehive coking process. The maximum tonnage of coal utilized in the beehive process was 53 million tons in 1910. If it is assumed that 10,000 cubic feet of gas can be produced from each ton of bituminous coal, then a potential of 530 billion cubic feet of gas that could have been utilized for various heating and lighting processes was theoretically available from the beehive ovens. (Only a fraction of this was needed to provide heat for the process.) In addition, it is estimated that 400 million gallons of coal tar, nearly 150 million gallons of light oils, and 600,000 tons of ammonium sulphate - an important fertilizer - were also wasted. Of course, capturing these by-products depended on the availability of a feasible and economical technology as well as on markets for the products (Schurr and Netschert, 1960).

There were some limited attempts to recapture by-products in the years before the Civil War. The so-called Belgian or retort oven resulted in the recovery of some by-products and was utilized primarily for low-volatile or "dry" coals. It had been pioneered by Belgian, German, and French engineers and the technology was gradually applied in the American coal fields. Retort ovens generated a higher coke yield per ton of coal than the beehive ovens (average 70 per cent), produced valuable by-products, and provided for more rapid coking. However, the process was much more expensive than the beehive oven since the coals used had to be crushed, sorted, and cleaned before coking. A number of retort ovens were used by the Cambria Iron and Steel Works at Johnstown, Pennsylvania, in the 1870s. But the extensive adoption of the by-product oven in the United States did not occur until after 1900 (Warren, 1973).

By-product coke ovens constructed during the first decades of the twentieth century were of two types: the horizontal flue construction of Simon-Carves or the vertical flue of Coppee. In both cases the coking chamber consisted of long, narrow, retort-shaped structures built of firebrick and located side by side in order to form a battery of ovens. The retorts were usually about 33 feet long, from 17 to 22 inches wide, and about 6 1/2 feet high. The oven ends were closed by fire-brick-lined iron doors luted with clay to form a complete seal. The heat required for distillation was supplied by burning a portion of the gas evolving from the coal in flues which surrounded the oven. Some types of ovens constructed at the beginning of the century employed the recuperative principle for preheating the air for combustion, but most used a regenerative chamber to conserve heat better. The yield of by-products was determined by the quality and quantity of coke desired (Wagner, 1916).

The use of the by-product oven was originally limited by the lack of developed markets for by-products that could offset the higher cost of the capital equipment used by the process. Thus, beehive coke produced from high-quality Connellsville coal long maintained a cost advantage for local iron smelters. Utilization of some of the by-products, especially the high-calorie coke-oven gas, to provide supplementary heat (e.g. for "soaking" pits or rolling mills) in the integrated iron and steel works themselves finally reduced the costs associated with the coking process below that required to produce beehive coke (Meissner, 1913).
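The scale of the loss described above is easy to verify from the figures quoted for 1910 (53 million tons of coal coked in beehive ovens, at an assumed 10,000 cubic feet of gas per ton); a minimal check:

```python
# Arithmetic check on the beehive-oven waste estimate quoted above: 53 million tons
# of coal coked in 1910, at an assumed 10,000 cubic feet of gas driven off per ton.
coal_coked_tons = 53e6
gas_per_ton_cf = 10_000

potential_gas_cf = coal_coked_tons * gas_per_ton_cf
print(f"Potential gas from beehive ovens, 1910: {potential_gas_cf / 1e9:.0f} billion cubic feet")
# -> 530 billion cubic feet, only a fraction of which was needed as process heat
```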
for "soaking" pits or rolling mills) in the integrated iron and steel works themselves finally reduced the costs associated with the coking process below that required to produce beehive coke (Meissner, 1913). The result was a large expansion of the use of byproduct ovens located at or near integrated steel mills, especially after the First World War (Warren, 1973). Fractional by-product recovery from all coking operations in the United States is shown in figure 8. Emissions of methane to the atmosphere from by-product ovens can be assumed to be rather low. Emissions from beehive ovens can be presumed to correspond more or less to the methane content of coke-oven gas that was recovered from by-product ovens. The methane content of coking coal (based on average recovery from by-product ovens) can be taken as 27 per cent of the weight of the original coal. Oil and gas drilling Natural gas is found in distinct gas reservoirs or is present with crude oil in reservoirs. Two types of gas can be distinguished on the basis of their production characteristics: (1) non-associated gas, which is free gas not in contact with crude oil, or where the production of the gas is not affected by the crude oil production; (2) associated gas (also called "casinghead" gas), which is free gas in contact with crude oil that is significantly affected by the crude oil production; it occurs either as a "gas cap" overlying crude oil in an underground reservoir or as gas dissolved in the oil, held in solution by reservoir pressure (Schurr and Netschert, 1960). Natural gas was encountered in the United States early in the nineteenth century in the drilling of water wells and brine wells. It was not put to any practical use until 1824, when it was utilized for illumination and other purposes in Fredonia, New York. Systematic exploitation of the resource, however, whether for domestic or industrial purposes, did not occur until after the middle of the century and primarily in connection with drilling for oil (Stockton et al., 1952). Oil was first discovered in sizeable quantities in 1859 in western Pennsylvania, and the oilfields of the Appalachian area in the states of New York, Ohio, Pennsylvania, Indiana, Kentucky, and West Virginia were the first to be developed. Natural gas was "associated" with these oilfields, but "non-associated" gas wells were also regularly discovered in these areas (Henry, 1970). Other nineteenth-century discoveries of oil and natural gas were made in California, Kansas, Arkansas, Louisiana, Texas, and Wyoming. Fig. 8 By-product recovery from coke in the US (Source: US Bureau of Mines ) Statistics on sources of gas (gas wells and oil wells) have been kept in the United States only since 1935. The fraction attributable to gas wells producing no oil (non-associated gas) has been rising almost continuously since the statistics have been kept. Fitting a logistic curve to the data and extrapolating backward in time suggests that the "non-associated" fraction might have been about 10 per cent in 1860 when oil production began (see figure 9). It is clear from anecdotal evidence that some wells at least were producing gas alone as early as 1870 (Henry, 1970). Oilfield gas was first put to use in the areas around the wells, where it lighted the oil derricks and raised steam for the well-pumping engines. This utilization was dependent on the invention of separators or gas traps to separate the oil from the gas. 
These were first developed in about 1865, with the subsequent invention of a variety of separators (Kiessling et al., 1939). As late as 1950, the largest single class of natural gas consumption was in gas and oilfield operations (18.9 per cent or 1,187 billion cubic feet) (Stockton et al., 1952).

Fig. 9 Natural gas production from oil and gas wells (Source: Nakicenovic and Gruebler, 1987)

After field use, another important industrial use of natural gas that did not require transmission over long distances was the manufacture of carbon black. Carbon black plants were located near the wells, where they took advantage of large volumes of cheap gas not usually tapped by transmission lines. Carbon black manufacture (for ink) began in Cumberland, Md., in 1870 (Henry, 1970). It was widespread in the Appalachian fields during the late nineteenth century, although most plants were gone by 1929 (Thoenen, 1964). In 1950, about 93 per cent of the industry was located in the south-western states of Texas, Louisiana, New Mexico, and Oklahoma. An upsurge in demand for carbon black occurred after 1915, when it was found that adding carbon black to natural latex greatly improved the structure and durability of rubber products such as tyres (Stockton et al., 1952).

Where municipal markets were relatively close, the gas was piped to them. As early as 1873, for instance, several towns in the oil region of New York State, Pennsylvania, Ohio, and West Virginia - including Buffalo, N.Y., and Erie, Pa. - were furnished with natural gas from nearby wells through pipelines (Henry, 1970). The gas was used to light the streets with large, flaring torches, and to light homes and provide fuel for cooking stoves. Even with this use, an excess usually had to be vented from a safety valve (Henry, 1970; Pearce, 1875). Industrial uses also occurred where firms were relatively close to sources, especially in Erie, Pa. The lack of available markets and limitations on transmission capabilities meant that huge amounts of gas were wasted (Henry, 1970; Schurr and Netschert, 1960).

Waste took place in the form of either venting or flaring. Venting is defined as the escape of gas without burning, while flaring is defined as escape with burning. In many cases, gas that was accidentally ignited burned for some time before being extinguished. In Pennsylvania, for instance, a gas well struck in 1869 burned for about a year (Pearce, 1875); the Cumberland well burned for two years before being utilized for carbon black (Henry, 1970). It was estimated that losses and waste of gas in oilfields in the early part of the twentieth century were as high as 90 per cent of all gas associated with oil production. Many gas wells were left to "blow," especially because of the expectation that oil would flow when the gas head had gone (Stockton et al., 1952; Williamson and Daum, 1959). West Virginia was an important oil and gas producer at the turn of the century, and in 1903 it was estimated that during the previous five years 500 million cubic feet of gas had been "allowed to escape into the air" each day from the state's wells (Thoenen, 1964). In Illinois, in 1939, it was estimated that 95 per cent (134 billion cubic feet) of the gas associated with the new Salem oilfield in the state was flared. From 1937 to 1942 it was estimated that 416 billion cubic feet of gas were flared in Illinois (Murphy, 1948).

In other cases, discovery wells in gas fields were capped or plugged and "forgotten."
In the case of the early fields, many wells were inadequately plugged (Thoenen, 1964; Prindle, 1981). Losses from such wells cannot be estimated with any accuracy, although the quantity lost was probably quite small by later standards.

The best-known example of waste was in the Texas fields in the 1930s. When the natural gas in an unassociated well is allowed to expand rapidly or is cooled, somewhat less than 10 per cent of the gas condenses into a liquid (natural gasoline) suitable for use in vehicles. The phenomenon had first been observed around the turn of the century in the Appalachian fields, where so-called "drip" or casinghead gasoline was often considered a nuisance. The invention of the internal combustion engine, however, provided a market for such "natural" gasoline, and a number of small gasoline plants were established starting in 1910 in the producing fields. In West Virginia, the utilization of natural gas in the making of casinghead gasoline was viewed as a "great force in the conservation of natural gas" (Thoenen, 1964). In Texas in the 1920s and 1930s, when markets for natural gas were still quite limited, natural gasoline became a most valuable product. The process used in Texas to produce gasoline from natural gas wells was known as stripping gas, and numerous companies engaged in the practice of marketing the stripped condensate and then venting or releasing the remaining 90 per cent to the atmosphere or flaring it (Prindle, 1981). One historian estimated that in 1934 approximately a billion cubic feet of unassociated gas was stripped and released or flared daily in the Texas Panhandle alone (Prindle, 1981).

The possibilities of recovering and marketing even a small part of the associated gas were small in cases where the rate of production could not be controlled. The gas from these wells was almost invariably flared. Waste was probably most severe in the East Texas oilfields. During the early 1940s it was estimated that one-and-a-half billion cubic feet of casinghead gas was flared each day from the larger fields. Motorists could supposedly drive for hours at night near the Texas fields without ever having to turn on their automobile lights because of the illumination from the casinghead flares (Prindle, 1981).

State legislation was an obvious approach to the conservation of natural gas. At one time or another almost all states involved in petroleum and natural gas production passed conservation laws (Murphy, 1948). Pennsylvania had the first legislation, passed in 1868, and West Virginia had a law in 1891. The West Virginia law, for instance, applied to all wells producing petroleum, natural gas, salt water, or mineral water. In regard to natural gas, owners were required to "shut in and confine" the gas and to plug the well after exhaustion. There was no provision, however, to prevent venting or flaring (Thoenen, 1964). In Texas, an 1899 law prohibited the flaring of unassociated gas. A 1925 law, however, specifically permitted the flaring of associated gas from an oil well (Prindle, 1981). But since it was often difficult to define clearly the difference between a gas and an oil well, enforcement was difficult. It was not until the middle 1930s that the Texas law was successfully enforced through the reclassification of hundreds of oil wells as gas wells and the prohibition of flaring (Prindle, 1981). Still, a few states that had major oil fields, such as Illinois, had no gas and oil conservation legislation as late as the Second World War (Murphy, 1948).
Considerable amounts of natural gas were conserved and utilized by technological developments. Methods of capturing associated natural gas, for instance, were developed and the gas used to run pumps and lights at the works. Other developments, such as the Starke Gas Trap to purify wet gas, occurred over the years, and also led to conservation (White, 1951). An important development was the application of high-pressure compressors to the extraction of gasoline from casinghead natural gas. These compressors made it possible for field operators to develop small gasoline plants on their producing fields. After 1913 the absorption process increasingly replaced the compressor-condensation system as a means of extracting gasoline from both dry and wet natural gases. Between 1911 and 1929 (the peak year), the volume of natural gas liquids produced increased from 3,660,000 gallons to 72,994,000 gallons (Thoenen, 1964).

The most important factor reducing the waste of natural gas has been the development of long-distance pipelines to available markets. The first major attempt was by the Bloomfield & Rochester Gas Light Co. in 1870, which organized the piping of gas 40 km from a well in Bloomfield, N.Y., to the city of Rochester. The gas was pronounced inferior by consumers, however, resulting in the failure of the company. The first successful cast-iron gas transmission pipeline, in 1872, linked a well in Newton, Pa., with nearby Titusville (about 9 km). In 1875, natural gas was piped 27 km from a well near Butler, Pa., to ironworks at Sharpsburg and Allegheny, near Pittsburgh (Pearce, 1875). In 1883, the Chartiers Valley Gas Co. was formed to supply the city of Pittsburgh with gas from wells near Murraysville, about 25 km away. By the following year 500 km of pipelines were in place, supplying natural gas to the city. The field, however, was exhausted in little more than a decade and natural gas was replaced by coal in the city's growing steel industry (Tarr and Lamperes, 1981). High-pressure technology was first used in 1891 by the Indiana Natural Gas and Oil Company to bring gas 120 miles from northern Indiana gas fields to Chicago.

By the 1920s, integrated companies that combined production, transmission, distribution, and storage facilities had been developed in the Appalachian area, in the Midwest, and in California, Oklahoma, and Texas. By 1925, pipelines as much as 300 miles in length had been constructed and were serving 3,500,000 customers in 23 states (Stockton et al., 1952). Most of the interstate movement of natural gas took place in the north-eastern United States, where densely populated urban areas were located near the Appalachian fields. In 1921, 150 billion cubic feet of gas moved interstate, of which approximately 65 per cent was produced in West Virginia and flowed mostly into Pennsylvania and Ohio. Less than 2 per cent of the total interstate movement of gas originated in Texas (Sanders, 1981).

During the late 1920s, important metallurgical advances as well as improvements in welding and compression methods resulted in the possibility of constructing much longer and bigger pipelines. Most critical was the development of continuous butt-welding and of seamless tubes made of steel with greater tensile strength. Also important were improvements in methods of compression, which made it possible to move higher volumes of gas without recompression.
By 1934, approximately 150,000 miles of field, transmission, and distribution lines existed in 32 US states, with some transmission lines as long as 1,200 miles (Sanders, 1981; Stockton et al., 1952; Schurr and Netschert, 1960). The post-Second World War period saw a great expansion of long-distance pipelines, with 20,657 miles of natural gas lines constructed between 1946 and 1950. Probably most significant was the conversion to natural gas transmission of two long-distance pipelines (the "Big Inch" and "Little Inch"), built during the war by the government to transport petroleum. These pipelines were the first connecting the East Texas field through the Midwest to Appalachia and the Atlantic seaboard. By 1949, gas from the Southwest was supplying 60 per cent of the Columbia Gas Company's 2 million customers in Pennsylvania, Ohio, and West Virginia. Markets for gas clearly meant the reduction of waste and increased resource utilization. At the end of 1946, 39 per cent of Texas gas wells were shut because of the lack of pipelines, but by 1951 this number had been reduced to 25 per cent (Sanders, 1981; Stockton et al., 1952; Schurr and Netschert, 1960).

An important method of dealing with the problem of seasonal peaks in natural gas utilization has been the development of storage facilities. The first successful storage facility was developed in Ontario in 1915 and the technique was applied in a field near Buffalo, New York. Large-scale underground storage was initially developed in the Menifee field of eastern Kentucky and its use has spread since then. In 1940, there were only 19 underground storage pools in operation, but by the mid-1950s the number had grown to nearly 200. To a large extent, storage took place in consumer rather than producer states. Storage was especially important for states that had developed a dependence on natural gas through local supplies that had later become depleted. In 1949, for instance, Pennsylvania, Ohio, Michigan, and West Virginia were the leading states in terms of the amount of gas stored and withdrawn from storage (Stockton et al., 1952; Schurr and Netschert, 1960).

With regard to methane emissions, two questions now arise:

1. How much natural gas was potentially recoverable from oil (or gas) wells that were opened prior to the development of significant markets for the gas?
2. How much of this gas was vented or flared? (As already explained, flaring converts most methane to harmless carbon dioxide and water vapour.)

As noted previously, the discovery of natural gas was a by-product of petroleum exploration. Gas was not sought independently until 20 years or so ago. Although the gas content of petroleum varies widely from field to field, it is likely that the potential gas output of petroleum wells is, on average, proportional to the petroleum output. The "proportionality" hypothesis above implies that the gas/oil recovery ratio should, on average, have gradually increased over time, approaching a limit as gas increased enough in commercial value to justify its complete recovery. One would also expect the relative quantity of natural gas recovered to increase, relative to oil, as markets for gas developed. A gas pipeline distribution system was an essential precondition for an increasing demand for gas. In actuality, the gas/oil output ratio has increased, on average, since 1882 - when recovery began - but has done so quite unevenly (fig. 10).

Fig. 10 Gas/oil production ratios, US and Northeast US
In the Northeast (mainly Pennsylvania), the gas/oil ratio rose gradually to about 2:1 in the early 1920s, then moved down to a trough in the 1930s, followed by a second, higher peak in the late 1950s and a still higher peak in 1980 of nearly 5:1. In the case of the United States as a whole, the initial peak recovery rate was earlier (c. 1900) and lower (around 0.4), and was followed by a trough in the 1930s and 1940s.

The troughs between the first and second peaks are difficult to explain in terms of the proportionality hypothesis. It is hard to believe that the troughs are accidental or that a physical phenomenon (such as declining pressure) could be responsible. Instead, an economic explanation for the troughs seems to be most plausible. The demand for oil outstripped the demand for gas in the 1920s and early 1930s for two reasons. Demand for petroleum products rose sharply because of the fuel needs of a fast-growing fleet of automobiles and trucks. On the other hand, demand for gas was limited by its lack of availability in urbanized areas, especially in the north-eastern part of the country. During that period the technology of large-scale gas distribution pipelines was still being developed. Moreover, finance was a problem: long-term financing could only be obtained (e.g. from life insurance companies) on the assurance of long-term supply contracts at fixed prices. But gas supplies were regulated (by the Texas Railroad Commission) in order to conserve oil, not to sell gas. This conflict of interest required a number of years to resolve. Until the interstate gas pipeline system was created, only relatively local areas near the wells could be supplied with natural gas. This probably accounts for the lag in gas demand growth.

Figure 11 is constructed by using the US data for associated gas after 1935. For earlier years total (gross) gas production is multiplied by the imputed fraction associated with oil wells taken from figure 9, statistically smoothed to eliminate some of the scatter in early decades. It suggests two things. First, it implies that most of the increase in the gas/oil extraction ratio (after 1935 at least) is attributable to non-associated gas wells. Second, it implies that the ratio of associated gas to oil production peaked around 1900 (at about 0.35, plus or minus 0.10). The dip in the apparent associated gas/oil ratio after 1960 coincides with the period of gas scarcity due to low (regulated) prices. A modified "proportionality" hypothesis seems to fit the facts best, that is, that the (average) potential production of associated gas is roughly constant over time for a given area.

The difference between the (imputed) output of associated gas and the "potential" output of associated gas is unaccounted for. This must have been used on site (e.g. for carbon black), vented, or flared. Actually, use of gas to manufacture carbon black is roughly equivalent to flaring, in the sense that combustion is deliberately inefficient to maximize soot (unburned carbon) production. We can assume that unaccounted-for gas was mostly (90 per cent) flared for safety and/or economic reasons or used on site, but that flaring (including gas used in carbon black production) was only 90 per cent efficient in terms of methane oxidation. This suggests a total emission factor of 20 per cent, although this estimate must be regarded as somewhat uncertain.
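The arithmetic behind that figure follows directly from the two assumptions just stated (10 per cent of the unaccounted-for gas vented outright; 90 per cent flared or used on site, with only 90 per cent of the methane actually oxidized):

```python
# Check of the ~20 per cent emission factor derived above for unaccounted-for
# associated gas: 10% assumed vented outright, 90% flared (or burned for carbon
# black) with only 90% of the methane actually oxidized.
vented_fraction = 0.10
flared_fraction = 0.90
flaring_efficiency = 0.90  # fraction of CH4 oxidized to CO2 when flared

emitted_fraction = vented_fraction + flared_fraction * (1.0 - flaring_efficiency)
print(f"CH4 emitted as a fraction of unaccounted-for gas: {emitted_fraction:.2f}")
# -> 0.19, i.e. roughly 20 per cent
```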
The 20 per cent (give or take 10 per cent) of associated gas that is assumed to be vented comes from three sources: (1) blowouts and "gas volcanoes" (occasionally large) and "gushers"; (2) leaks; and (3) small "stripper" wells, for which gas recovery is uneconomic and flaring is unnecessary or impossible.

Fig. 11 Ratio of associated gas to crude oil, US: raw data compared to model (Sources: (a) data on gas marketed 1882-1889 based on estimates of coal replacement, originally by USGS, cited in Schurr and Netschert, 1960; (b) data 1890-1904 cited by Schurr and Netschert, 1960, and attributed to Minerals Yearbook, but disagreeing with figures in Historical Statistics, also attributed to Minerals Yearbook.)

Natural gas distribution

Methane losses also occur in gas distribution, mostly at the local level. In this connection, we have to point out that the difference between "net" production (after gas used for oil well repressurization) and "marketed" gas is not a loss. In fact, most of this statistical difference is attributable to gas used as fuel to operate compressors in the pipeline system. The actual loss rate is probably of the order of 1 per cent of the quantity marketed, and is almost certainly less than 3 per cent. This might be the biggest loss mechanism for natural gas in the United States at present. However, in past years, venting/flaring losses were certainly dominant, as they are today in most of the rest of the world, for instance in Russia.

Methane emission coefficients

Based on the foregoing data and analysis, our "projected" emission coefficients for methane are summarized in table 7. The coefficients for gas venting, assuming 20 per cent venting of associated gas, as argued above, should be regarded as a plausible value. Actual losses of methane from this source could be as much as 50 per cent greater, or perhaps as little as half of that. It is, unfortunately, not possible to improve the estimate on the basis of the historical data which is available to us and which is summarized in this chapter. A better estimate of the venting/flaring factor would require a more intensive search of archival sources, supplemented by historical research on the technology of oil/gas drilling, pumping, separation, and flaring.
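To illustrate how coefficients of this kind would be applied to a historical fuel series, here is a minimal sketch. The production figures are hypothetical placeholders rather than data from this chapter; only the 0.005 t CH4/t coefficient for average US bituminous coal is taken from table 7.

```python
# Sketch of applying an emission coefficient to a production series.
# The production numbers below are hypothetical placeholders; only the
# coefficient (0.005 t CH4 per t of average US bituminous coal, table 7)
# comes from this chapter.
CH4_PER_TONNE_COAL = 0.005  # metric tons CH4 / metric ton coal mined

hypothetical_coal_production_mt = {1900: 100.0, 1950: 500.0}  # million metric tons (illustrative)

for year, production_mt in sorted(hypothetical_coal_production_mt.items()):
    emissions_mt = production_mt * CH4_PER_TONNE_COAL
    print(f"{year}: {production_mt:.0f} Mt coal -> ~{emissions_mt:.1f} Mt CH4 from coal mining")
```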
Deaf and Hard of Hearing

Hearing loss is common, and its incidence increases with age. For the most part hearing loss is mild and has no serious effect on the development of spoken language and communication. The number of children for whom hearing loss has implications for the classroom and for general communication is relatively small, and profound hearing loss, or deafness, constitutes a low-incidence condition. In general, two categories are used to describe hearing loss: hard of hearing and deaf, with no clear demarcation between the two. Roughly, hard of hearing children are characterized as having incomplete or limited access to the spoken word, either with or without augmented hearing. Deaf children have no functional access to the spoken word, either with or without augmented hearing. The hard of hearing category may be subdivided into two categories, mild and moderate, and deafness into another two, severe and profound. Numbers decrease with the severity of the hearing loss, with one school-age child in 2,000 exhibiting a profound hearing loss (Gallaudet Research Institute, 2005). Approximately 80,000 deaf and hard of hearing children in the United States have been identified as receiving education services (Mitchell, 2004).

The most significant development in assessment of hearing has been the spread of neonatal hearing screening. Close to 95% of newborn children are screened in the hospital. Until the implementation of neonatal screening, the average age of identification of hearing loss was 2 1/2 years. The two most common types of tests used are Automatic Auditory Brainstem Response (AABR) and Transient Evoked Otoacoustic Emissions (TEOAE) (Wrightstone, 2007). In the AABR test, electrodes are placed on the forehead, mastoid, and nape of the neck of the infant, and the infant is fitted with a disposable earphone. The stimulus is a click or series of clicks. In the TEOAE test, a microphone is placed in the external ear and a series of clicks tests the infant's response. In both cases, in order to reduce the number of false positives, a follow-up test is recommended for infants who do not pass the first screening.

Hearing testing, or audiometric assessment, can be accomplished in a variety of ways. Most frequently it is done by a trained audiologist using an audiometer, a device that emits tones at various frequencies and at different levels of loudness. The testing usually is conducted in a soundproof room. The person being tested wears a set of headphones or a headband and each ear is tested separately. The results are shown on an audiogram, a graph that represents hearing levels from low to high frequencies. The hearing is measured in units called decibels. Normal speech patterns are around 30 to 50 decibels. Hard of hearing individuals would have difficulty with much of spoken language and deaf individuals would have no access to it through audition.

Deaf and hard of hearing children, with some exceptions, reflect the general American school-age population. The exceptions are due to factors such as heredity, etiology, extent of hearing loss, and age of onset of a hearing loss. For approximately half of American deaf and hard of hearing children, the hearing loss is caused by genetic factors (Moores, 2001). Predominantly, genetic hearing loss is of a recessive nature, typically meaning that both parents, although they are able to hear, carry a gene for hearing loss.
In a smaller number of cases, one parent may be deaf or hard of hearing and pass the gene along to the child in 50% of the cases. There are a small number of incidences of sex-linked hearing loss in which the hearing mother may pass the gene along to male children. This is possibly a partial explanation of why males constitute slightly more than half of the school age deaf and hard of hearing population. With a few exceptions, children with a genetic etiology do not possess disabilities; they are normal intellectually, physically, and emotionally. Non-genetic factors play a decreasing but still important role. For generations, and perhaps for hundreds of years, worldwide maternal Rubella would double the number of deaf children born in particular periods of time. For large numbers of children, the results would include hearing loss, visual impairment, heart conditions, neurological disorders and physical frailness. The development of a Rubella vaccine has eliminated Rubella as a cause of hearing loss. Mother-child blood incompatibility is another cause of hearing loss that has been brought under control, at least in developed countries. Childhood meningitis, however, presents a somewhat different picture. In the past a very young child who contracted meningitis might die, whereas an older child might survive but with profound hearing loss. In the 21st century it is more likely that the older child would be cured without any hearing loss and the younger child survive but with multiple disabilities. Hearing loss may also occur as a consequence of premature birth, with the possibility of additional disabilities. Again, medical advances can lead to survival, but with concomitant conditions. Miller (2006) has argued that there are essentially two distinguishable categories of children with hearing loss: those who are normal intellectually and physically and those with overlays of disability. There are some differences in the racial/ethnic make up of the school age deaf and hard of hearing population as compared to the hearing population. According to information compiled by the Gallaudet Research Institute (2005), 50% of the deaf and hard of hearing school-age population is classified as White, 15% as Black/African American, and 25% as Hispanic/Latino, with the remaining 10% being classified as Asian/Pacific, American Indian, Other, or Multi-ethnic. The disparity is with the Hispanic/Latino category. A larger percentage of Hispanic/Latino children are in programs for deaf and hard of hearing children than in the general school population. The reasons for this are not clear. Historically, there have been three interrelated major points of contention concerning the education of deaf and hard of hearing children and educational practices: Where should the children be taught, what should the children be taught, and how should the children be taught? (Moores & Martin, 2006). Complex sets of demographic changes, medical developments, societal expectations, and federal legislation have had major impacts on each of these questions. Traditionally, deaf and hard of hearing children were taught either in residential school or in separate day-school programs in large cities. The situation began to change after World War II, due to the baby boom and population explosion. 
State legislatures were not willing to commit extra money for the construction of new residential facilities, and increasing numbers of deaf and hard of hearing children attended public schools, often in separate classes in schools with a majority of hearing children. The trend continued with the last major Rubella epidemic in the 1960s, at a time when the baby boom was over and there were empty rooms in the schools.

The passage in 1975 of the Education for All Handicapped Children Act, which has undergone extensive amendment over time, including the Individuals with Disabilities Education Improvement Act (IDEA) of 2004, was the impetus for the acceleration of the movement of special education children to mainstream or integrated settings. The law requires a Free Appropriate Public Education (FAPE) for all disabled children, with each child receiving an Individualized Education Plan (IEP). Children are to be educated in the Least Restrictive Environment (LRE), and placement with non-disabled children is viewed as desirable. The concept of integration has been replaced with that of inclusion, by which modifications of instruction are expected to adapt to the needs of the child rather than placing the onus on the child. This is the expectation in theory, but it is not necessarily realized.

Amendments to IDEA progressively lowered the age at which disabled children could be served, so that services are now available from birth. For very young children there is an emphasis on serving the family as a whole system; instead of an individual education plan for the child, a family plan is developed. Coupled with universal neonatal screening, there has been an increase in programs for children from birth to 3 years of age. Unfortunately, many states have not developed effective systems for follow-up once a hearing loss is identified, so there is often a lack of appropriate response. Approximately half of deaf and hard of hearing children are placed in a regular classroom setting with hearing children, and may be served, depending on the IEP, by an itinerant teacher of the deaf or other professional. An estimated 40% of deaf and hard of hearing children in regular class settings receive sign interpreting services (Gallaudet Research Institute, 2005).

The question of what deaf and hard of hearing children should be taught has been influenced by changes in school placement and by federal legislation. The curricula for deaf and hard of hearing children used to emphasize speech training, speech recognition, and English, with relatively little attention devoted to content areas such as math, science, and social studies. However, as more and more children were educated in regular classrooms, the regular education curriculum of the particular school district took precedence. The enactment of the No Child Left Behind (NCLB) legislation in 2001 brought education of deaf and hard of hearing children into even closer alignment with regular education. Among other things, the law mandates that each state must establish high standards of learning for each grade and that the states develop rigorous, grade-level assessments to document student progress. Results are reported at school, school district, and state levels. Results are also reported at each level for all students and disaggregated by race/ethnicity, speakers of languages other than English, poverty (as demonstrated by free or reduced lunch eligibility), and disability.
Annual goals are established for each category, and schools, school districts, and states must show adequate yearly progress (AYP). Any achievement gaps among racial/ethnic, income, language, or disability groups must be closed so that by 2014 100% of American children will demonstrate academic proficiency, as measured by grade-level standardized state-administered tests. Only a small number of profoundly impaired children are exempt. All others, including deaf and hard of hearing students, must take the tests and, by 2014, have a 100% pass rate. This poses an enormous challenge.

Deaf and hard of hearing children in regular classrooms are already being exposed to the curricula of their home school districts, and most day and residential programs have adopted or adapted general education curricula. However, many, if not most, deaf and hard of hearing children start school, even after early intervention and preschool experiences, without the English skills and word knowledge that most children have acquired before the start of formal schooling. They are therefore unable to use English fluency as a tool to acquire academic knowledge and skills. For many deaf and hard of hearing children English is a barrier to learning that must be overcome. The curriculum must be modified to help deaf and hard of hearing children develop some of the skills that hearing children already have at the start of their education. Only a small percentage of high school-age deaf children achieve at the same grade level as their hearing peers (Moores & Martin, 2006), and the goal of 100 percent success in demonstrated academic proficiency will not be achieved by 2014.

The third issue, how to teach deaf children, deals with the oral-manual controversy, which has been raging for more than a century. In the first American schools for the deaf, which enrolled a substantial number of late-deafened and hard of hearing students, instruction was through either a natural sign language, the precursor of American Sign Language (ASL), or a system of signs modified by means of the American manual alphabet to represent English and presented in English word order. This method predated English-based signed systems that may be presented in coordination with spoken English. Oral-only education was introduced in the last third of the nineteenth century and quickly became dominant in the large city day schools and in some private residential schools. In the state residential schools a system evolved in which children up to around age 12 were taught orally and then tracked into either oral or manual classes (Winzer, 1993).

The situation began to change in the 1960s because of dissatisfaction with the results of oral-only early intervention programs. A philosophy called Total Communication quickly grew in popularity. Theoretically, it involved the use of any means of communication to meet individual needs: speech, ASL, English-based signing, writing, gesture, or speech-reading. In reality it usually involved English-based signing in coordination with speech, and was known as simultaneous communication (Sim Com). During the 1990s a movement developed to employ ASL as the main mode of classroom instruction, with an additional concentration on written English. The approach was labeled Bi-Bi, or bilingual-bicultural.
Early 21st-century data indicate that 50% of deaf and hard of hearing children are taught through oral-only communication, 40% through sign and speech, and 10% through sign-only communication (Gallaudet Research Institute, 2005).

There has been growing interest in multichannel cochlear implants for deaf and hard of hearing children. The surgical procedure involves removing part of the mastoid bone and inserting a permanent electrode array into the cochlea. The procedure has been more common in Australia (Lloyd & Uniake, 2007) and Western European countries such as Sweden (Preisler, 2007), where as many as 80% of young deaf and hard of hearing children have received implants. Implantation has increased in the United States and in some areas approaches 50% of the deaf and hard of hearing population. Cochlear implants are designed to bring a clearer representation of the spoken word to deaf and hard of hearing children. There have been anecdotal reports of dramatic improvements in hearing in some children, but systematic reports of the extent to which implants help children with different characteristics are not available.

It is important to keep in mind that the purposes of assessment for each child can include facilitating educational placement decisions, evaluating progress, determining educational approaches to be used, improving educational and intervention strategies, and monitoring progress for individualized education plans or family education plans (Miller, 2006). Although the situation is improving, attempts to provide meaningful and valid assessments for deaf and hard of hearing children have met with limited success. Miller points out that there is difficulty getting agreement in the field about which tests require "deaf norms" and which tests should use "hearing norms." When separate norms are desired there is the additional question of who should be included in the normative sample—all deaf children, deaf children without disabilities, deaf and hard of hearing children, children with disabilities, and so forth.

Miller states that in establishing an assessment framework, assessment specialists must keep in mind the normal course of development of a typical deaf or hard of hearing child who has no disabilities and who has been in a linguistically enriched environment from birth. Other deaf and hard of hearing children should be compared to this ideal prototype or model of development and measured in ways that can estimate their similarities to or differences from the ideal. The template of developmental norms for deaf and hard of hearing children should be the model for all deaf and hard of hearing children even if such children comprise a minority of the population under consideration. Some of the key variables to be considered include age of onset of the hearing loss, age of identification and beginning of educational services, home and school linguistic environment, presence or absence of disabling conditions, use of and benefit from auditory amplification, and consistency of the communication approach over the years. Considering these variables, the assessment specialist should be able to identify those children with hearing loss only, those with disabilities who have received excellent programming from an early age, and those who have received little or inappropriate educational services.
The need is for assessment specialists to redouble efforts to develop meaningful, relevant, and linguistically and culturally appropriate assessment batteries that make the most sense in practical terms for deaf and hard of hearing children and their families.

See also: Special Education

Gallaudet Research Institute. (2005). Report of data from the 2004–2005 annual survey of deaf and hard of hearing children and youth. Washington, DC: Gallaudet University.

Lloyd, K., & Uniake, M. (2007). Deaf Australians and the cochlear implant. In L. Komesaroff (Ed.), Surgical consent: Bioethics and cochlear implantation (pp. 174–194). Washington, DC: Gallaudet University Press.

Miller, M. S. (2006). Individual assessment and educational planning. In D. F. Moores & D. S. Martin (Eds.), Deaf learners: Developments in curriculum and instruction (pp. 161–176). Washington, DC: Gallaudet University Press.

Mitchell, R. E. (2004). National profile of deaf and hard of hearing students in special education from weighted results. American Annals of the Deaf, 149, 336–349.

Moores, D. F., & Martin, D. S. (Eds.). (2006). Deaf learners: Developments in curriculum and instruction. Washington, DC: Gallaudet University Press.

Preisler, G. (2007). The psychosocial development of deaf children with cochlear implants. In L. Komesaroff (Ed.), Surgical consent: Bioethics and cochlear implantation (pp. 120–136). Washington, DC: Gallaudet University Press.

Winzer, M. (1993). The history of special education: From isolation to integration. Washington, DC: Gallaudet University Press.

Wrightstone, A. S. (2007). Universal newborn hearing screening. American Family Physician, 76, 1349–1356.
Christopher Columbus discovered the island of Cuba on October 27, 1492, during his initial voyage to find a westerly route to the Orient. As gathered from his chronicles, the exotic beauty of the island left him absolutely spellbound. In his account of the discovery of Cuba, he passionately describes it as "the most beautiful land human eyes have ever seen."

Before the Europeans arrived, Cuba was inhabited by three different cultures: the Ciboneyes, the Guanahatabeyes and the Tainos. The Tainos were the most advanced. They were fishermen, hunters, and agriculturists. They grew maize (corn), yams, beans, squash, yucca, cotton and tobacco. They were skilled in woodwork and pottery. By the time the Spanish arrived, about 100,000 native Indians lived peacefully on the island.

Cuba's size and diversity of landscape no doubt convinced Columbus that he had indeed found Asia. It wasn't until 1508 that Sebastián de Ocampo, another Spanish navigator and explorer, circumnavigated Cuba, proving that it was an island. In 1511, Diego Velázquez de Cuéllar set out with three ships and an army of 300 men from La Española (Hispaniola), east of Cuba, with orders from Spain to conquer the island. The new settlers were greeted with stiff resistance from the local Taíno population under the leadership of Hatuey, a cacique (chieftain) who had fled to Cuba from La Española to escape the brutalities of Spanish rule on that island. After a prolonged resistance, Hatuey was captured and burnt alive, and within three years the Spanish had gained control of the island. Diego Velázquez established seven main settlements in the new colony: Baracoa, Bayamo, Santiago, Puerto Príncipe, Trinidad, Sancti Spíritus and Havana.

Due to its favorable geographic location at the mouth of the Gulf of Mexico, Cuba served as a transit point for Spanish treasure fleets carrying the wealth of the colonies to Spain. Havana's superb harbor, with easy access to the Gulf Stream, soon turned the village into the capital of the New World. These riches attracted the attention of pirates such as the Frenchman Jacques de Sores, who attacked Havana in 1555.

By the middle of the 16th century, slavery, malnutrition, disease, suicide and overwork had drastically reduced the native population of Cuba. This caused the Spanish to rely on African slaves. Economically, there was little gold in Cuba, but agriculture more than made up for it. Cattle raising and tobacco became the most important industries. Sugar cane was introduced to Cuba on Columbus' second voyage, but the expansion of sugar cultivation was limited by a lack of slaves.

In the second half of the 18th century, Spain became involved in the Seven Years' War between Britain and France. In August 1762 the British captured Havana and held the island for eleven months. In July 1763 Cuba was returned to Spain in exchange for Florida. The British occupation and the temporary lifting of Spanish restrictions showed the local landowning class the economic potential of trading their commodities with England and North America.

Towards the end of the 18th century, Cuba began its transformation into a slave plantation society. After the French Revolution, there were slave uprisings in the nearby colony of Haiti. French planters fled what had been the most profitable colony in the Caribbean and settled across the water in Cuba, bringing their expertise with them. They set up coffee plantations and modernized the Cuban sugar industry.
Cuba soon became a major sugar exporter and, after 1793, slaves were imported in huge numbers to work the plantations. The island was under absolute military control, with a colonial elite that made its money principally from sugar. By the middle of the 19th century, Cuba was producing about a third of the world's sugar and was heavily dependent on African slaves to do so. Diplomatic pressure from Britain forced Spain to agree to halt the slave trade, but the import of African slaves continued. It is estimated that almost 400,000 Africans were brought to the island between 1835 and 1864. Slavery was not abolished until 1886.

The Fight for Independence

The first large-scale war for Cuban independence began on October 10, 1868 with a historic proclamation known as the Grito de Yara (the Cry of Yara). The rebellion was led by landowner Carlos Manuel de Céspedes. At his sugar plantation La Demajagua in the eastern province of Oriente, Céspedes freed his slaves and declared war against Spain. This was the beginning of the so-called Ten Years' War. The uprising was supported by other local landowners and continued to spread throughout the eastern region of Cuba. By the end of October, the rebel army had grown to 12,000 men. The first important city captured by the rebels was Bayamo. On October 20, ten days after the beginning of the war, Bayamo was proclaimed capital of the Republic in Arms and Cuba's National Anthem was sung there for the first time.

Within the first year of the war a young man named Antonio Maceo rose to the unprecedented rank of lieutenant colonel of the Liberating Army and captured the admiration and imagination of black and white Cubans alike. Máximo Gómez, a former cavalry officer for the Spanish Army in the Dominican Republic, with his extraordinary military skills taught the Cuban forces what would be their most lethal tactic: the machete charge. Gómez became one of the most important leaders of the independence movement. In 1878, the Convention of Zanjón brought the war to an end. The agreement granted freedom to all slaves who fought in the war, but slavery was not abolished and Cuba remained under Spanish rule.

During the next 17 years, tension between the people of Cuba and the Spanish government continued. It was during this period that U.S. capital began flowing into the island, mostly into the sugar and tobacco industries and mining. By 1895 investments reached 50 million U.S. dollars. Although Cuba remained Spanish politically, economically it started to depend on the U.S.

On February 24, 1895, a new independence war began, led by the young poet and revolutionary José Martí. He was joined by Antonio Maceo, Máximo Gómez and Calixto García, all veterans of the previous war. In May of the same year, Martí was shot and killed in a brief encounter with the Spanish army. He later became Cuba's National Hero. Unwilling to repeat the mistakes of the first war of independence, Gómez and Maceo began an invasion of the western provinces. In ninety days and 78 marches, the invading army went from Baraguá (at the eastern tip of the island) to Mantua (at the western end), traveling a total of 1,696 kilometers and fighting 27 battles.

The USA was now concerned for its investments in Cuba and was considering its strategic interests within the region. In January 1898 the US battleship Maine was sent to Havana to protect US citizens living on the island.
When the ship mysteriously exploded in Havana harbor on 15 February 1898, killing 266 American sailors, this was made a pretext for declaring war on Spain. The American cry of the hour became “Remember the Maine!” Hostilities started hours after the declaration of war, when a US contingent under Admiral William T. Sampson blockaded several Cuban ports. The Americans decided to invade Cuba and to start in Oriente, where the Cubans had almost absolute control and were able to co-operate. The first US objective was to capture the city of Santiago de Cuba. Future US president Theodore Roosevelt personally led the celebrated charge of the ‘Rough Riders’ up San Juan Hill and claimed a great victory. The port of Santiago became the main target of naval operations. The Battle of Santiago de Cuba, on 3 July 1898, was the largest naval engagement of the Spanish-American War, resulting in the destruction of the Spanish Caribbean Squadron. In December 1898 a peace treaty was signed in Paris by the Spanish and the Americans. The Cubans were excluded. The Spanish troops left the island in December 1898 and an American military government was immediately proclaimed in Cuba. After many years of struggle, the Cuban people had gained independence from Spain but found themselves under US military occupation for the next four years. The Republic of Cuba was proclaimed on 20 May 1902 and the Government was handed over to its first president, Tomás Estrada Palma. Although the U.S. forces withdrew from Cuba, the Americans retained almost total control over the island. As a precondition to Cuba’s independence, the US had demanded that the Platt Amendment be approved fully and without changes by the Cuban Constituent Assembly as an appendix to the new constitution. Under this amendment the US kept the right to intervene in Cuban domestic affairs "to preserve its independence". The amendment also allowed the United States to establish a naval base at the mouth of Guantánamo Bay, which it occupies to this day. By the 1920s US companies owned two thirds of Cuba’s farmland and most of its mines. A series of weak, corrupt, dependent governments ruled Cuba during the next decades. In 1925 Gerardo Machado was elected president of Cuba on a wave of popularity. However, a drastic fall in sugar prices in the late 1920s led to protests, which he forcefully repressed. In 1928, through bribes and threats, he ‘persuaded’ Congress to grant him a second term of office, which was greeted with strikes and protests from students, the middle classes and labor unions. Machado's police forces arrested students and opposition leaders, whom they tortured or killed. The United States, attempting to find a peaceful solution to Cuba's political situation, sent special envoy Sumner Welles to mediate between government and opposition. Welles's efforts finally led to a general strike and an army revolt which forced Machado to leave the country on 12 August 1933. Carlos M. Céspedes, the son of Cuba's legendary leader, took over as provisional president. Shortly afterwards, on 5 September 1933, a revolt of non-commissioned officers, including sergeant Fulgencio Batista, deposed the government and installed a five-member committee with Ramón Grau San Martín as president. One of the appointed ministers of the new government was Antonio Guiteras, who implemented important radical changes in the country.
He set up an 8-hour working day, established a Department of Labor, granted peasants the right to own the land they were farming, reduced electricity rates by 40 percent, and nationalized the American-owned Electric Company. US Ambassador Sumner Welles described these reforms as "communistic" and "irresponsible". The Grau-Guiteras government lasted only 100 days. Fulgencio Batista, by then a colonel, staged a coup and held power through presidential puppets until he was elected president himself in 1940. In the same year a new Constitution was passed, which included universal suffrage and benefits for workers such as a minimum wage, pensions, social insurance and an eight-hour day. In 1944 Ramón Grau San Martín, representing the Partido Autentico (Authentic Party), was elected president. His administration coincided with the end of World War II, and he inherited an economic boom as sugar production and prices rose. He inaugurated a program of public works and school construction. Social security benefits were increased, and economic development and agricultural production were encouraged. But increased prosperity brought increased corruption. Grau was followed into the presidency by Carlos Prío Socarrás, who held office from 1948 to 1952, a term which was even more corrupt and depraved. Eduardo Chibás was at the time the leader of the Partido Ortodoxo (Orthodox Party), a liberal democratic group, and was widely expected to win the 1952 election on an anticorruption platform. On 10 March 1952, just three months before the scheduled election date, Batista staged a military coup and quickly established a brutal and repressive dictatorship. By the 1950s over half of Cuba’s land, industry and essential services were in foreign hands. Since the late 1930s, American mobsters had been involved in Cuban gaming. In 1946, Lucky Luciano gathered America’s top gangsters – as well as honored guest Frank Sinatra – at Havana’s Hotel Nacional for his infamous Mafia Summit. In the 1950s more than twenty new hotels were built and the number of gambling establishments grew. In 1952, Meyer Lansky became Batista’s official adviser on gambling reform. In that capacity Lansky controlled the majority of casino gambling on the island, along with Santo Trafficante. The Riviera and the Hotel Capri opened their casinos in 1957. Other casinos were installed in Havana at the Nacional, Plaza, Seville-Biltmore, Deauville, and Comodoro hotels; in Cienfuegos at the Jagua motel; in Varadero at the Internacional; and on the Isle of Pines at the Colony hotel. Newspapers observed that Havana was bidding for the title of the Las Vegas of Latin America. In reaction to Batista's oppressive and corrupt government, new revolutionary movements began to spring up across the island. These were formed by student and labor organizations, intellectuals, the middle class, farmers and peasants. On 26 July 1953 a young lawyer by the name of Fidel Castro led a historic attack on the Moncada Barracks in Santiago de Cuba, the second most important military base in the country at that time. The attack failed and about 55 rebels were captured, tortured and murdered. Although Fidel and his brother Raúl escaped, they were later captured and put on trial. Fidel used the occasion to make an impassioned speech, denouncing the crimes and illegitimacy of Batista’s government and the need for radical economic and social changes in Cuba.
The speech became known in Cuban history for its final phrase: “History will absolve me”. Fidel was sentenced to 15 years, but was released as part of an amnesty in 1955. He departed for Mexico, where he met Ernesto "Che" Guevara. While in Mexico, he organized the 26th of July Movement with the goal of overthrowing Batista. By the end of November of the following year, Fidel — along with Che Guevara and 81 other rebels — set sail in the yacht Granma in another attempt to overthrow Batista. On landing on December 2, the rebels were surprised by an ambush. The 15 survivors — including Fidel, Raul and Che — fled in three separate groups into the impenetrable forests of the Sierra Maestra, where they regrouped, reorganized and launched guerrilla attacks that soon gained the support of the vast majority of Cuba’s farmers, urban workers and students. Just over two years later, the Rebel Army defeated Batista’s forces. On January 1, 1959, the dictator fled the island. By then United States companies owned about 40 percent of the Cuban sugar lands, almost all the cattle ranches, 90 percent of the mines and mineral concessions, 80 percent of the utilities and practically all the oil industry, and supplied two-thirds of Cuba's imports. The Rebel Army entered Havana on 8 January 1959. Shortly afterward, a liberal lawyer, Dr Manuel Urrutia Lleó, became president. Disagreements within the government culminated in Urrutia's resignation in July 1959. He was replaced by Osvaldo Dorticós Torrado, who served as president until 1976. Fidel Castro became prime minister in February 1959, succeeding José Miró in that post. Among the first acts of the revolutionary government were cuts in rents and in electricity and telephone rates, and the closing of the mafia-controlled gambling industry. State education was immeasurably expanded and a national literacy campaign was launched. In May 1959, the First Agrarian Reform Law was enacted, which redistributed land mostly owned by US companies to small farmers and landless rural workers and banned land ownership by foreigners. American displeasure with these measures was clear and the reaction of the U.S. government was swift. Sugar purchases from Cuba were stopped, accompanied by other actions aimed at undermining the Revolutionary government's programs. In response, Cuba nationalized American-owned industries, mostly sugar mills. When the U.S. petroleum companies threatened to cut off oil supplies and paralyze the country, Cuba started purchasing oil from the Soviet Union, which the U.S.-owned refineries refused to process. As a result the Texaco, Esso and Shell oil refineries were nationalized. All foreign banks were also nationalized, including the First National City Bank of New York, the First National Bank of Boston and the Chase Manhattan Bank. As the U.S. increased pressure on Cuba, the government of the Revolution sought, and found, new allies in the Soviet Union. By 1960 the USSR had become the main purchaser of Cuban sugar and its most important supplier of petroleum products. On October 19 of the same year the U.S. placed a partial trade embargo on Cuba, and at the beginning of 1961 it ended diplomatic relations with the neighboring island. As the newly established Cuban Revolution drifted towards a Marxist-Leninist political system, upper-class and professional Cubans were leaving the country in droves. In December 1959 Cubans began to send their children to the U.S., afraid of "losing them to communism." Over 14,000 Cuban children who went to the U.S.
under this program are known as the Peter Pan kids. At the same time, various newspapers in the U.S., Mexico, and Latin America were running articles warning of an imminent U.S. attack on Cuba. The recently elected Kennedy administration denied emphatically that an attack on Cuba was planned, but in 1961 an attack took place at the Bay of Pigs. The invasion began on April 14, when some 1,400 Cuban émigrés trained by the CIA in Florida and Guatemala set sail in six ships from Puerto Cabezas, Nicaragua. On 15 April, planes from Nicaragua bombed several Cuban airfields in an attempt to wipe out the air force. Seven Cuban airmen were killed in the raid, and at their funeral the next day Fidel Castro addressed a mass rally in Havana and proclaimed the socialist nature of the Cuban revolution for the first time. On 17 April the invasion flotilla landed at Playa Girón and Playa Larga in the Bahía de Cochinos (Bay of Pigs), but the men were stranded on the beaches when the Cuban air force attacked their supply ships. Two hundred were killed and the rest surrendered within three days. A total of 1,197 men were captured and eventually returned to the USA in exchange for US$53 million in food and medicine. The US reaction was to isolate Cuba, with a full trade embargo and heavy pressure on other American countries to sever diplomatic relations. Cuba was expelled from the Organization of American States (OAS) and the OAS imposed economic sanctions. However, Canada and Mexico refused to bow to American pressure and maintained relations. In April 1962, the Soviet leader Nikita Khrushchev decided to install missiles in Cuba, which would be capable of striking anywhere in the USA. In October, President John F. Kennedy ordered Soviet ships heading for Cuba to be stopped and searched for missiles in international waters. This led to the Cuban Missile Crisis, which brought the world to the brink of nuclear war. Kennedy demanded the withdrawal of Soviet troops and arms from Cuba and imposed a naval blockade. Without consulting Castro and without his knowledge, Khrushchev eventually agreed to have the missiles dismantled and withdrawn on condition that the West would guarantee a policy of non-aggression towards Cuba. In November, Kennedy suspended the naval blockade but reiterated that the US would maintain political and economic pressure on Cuba. The early 60s also witnessed the creation of several organizations and institutions, such as the Federation of Cuban Women (FMC), the Committees for the Defense of the Revolution (CDR), the Union of Cuban Pioneers (OPJM) and the Young Communist League (UJC), geared in part to deepening the roots of the Revolution among the people and throughout the country. From the very beginning the Cuban Revolution defined itself as internationalist. Although still a third-world country itself, Cuba supported African, Central American and Asian countries in the fields of military assistance, health and education. During the second decade of the Revolution, Cuba became increasingly dependent on Soviet markets and military aid. In 1972 the island became a member of the Soviet Union's trade association, the Council for Mutual Economic Assistance, and its economic and political institutions were increasingly modeled on those of the Soviet Union. By the mid-1980s the inefficiencies of the economic system had become obvious and Cuba began a process known as the ‘rectification of errors’, which attempted to reduce bureaucracy and allow more decision-making at local levels.
With the 1989 collapse of the centrally planned economies of Eastern Europe and the 1991 dissolution of the Soviet Union, Cuba lost both its major markets and its primary source of foreign assistance. As a result, the Cuban economy collapsed, and the full effect of the U.S. embargo became evident. The loss of cheap Soviet oil also triggered a Cuban energy crisis. Cuban foreign trade fell 75 percent, and economic output fell 50 percent. The Cuban Government responded to this economic crisis with a major program of reforms. Initiating market-oriented reforms, allowing foreign investment, and promoting a diversified export program set the stage for Cuba’s economic recovery. In 1990, Cuba announced a “Special Period in Peacetime” economic austerity program to counter the loss of Soviet support. The program rationed food, fuel, and electricity and gave priority to domestic food production, the development of tourism, and biotechnology. In 1993, the Cuban Government established a new form of cooperative, the Basic Unit of Cooperative Production (UBPC), initiating the process of breaking up large state farms. While land title remains with the state, these cooperatives have the right to use the land and make production and resource decisions. In 1994, the Government established farmers’ markets, where producers’ surplus production can be sold at free-market prices. Cuba also fostered the establishment of foreign “economic associations” (joint ventures, international contracts) to allow increased foreign investment in the tourism, mining, telecommunications, manufacturing, and construction sectors of the Cuban economy. Since the initiation of reforms, GDP growth, consumption, and production have shown signs of recovery. Major growth areas in the Cuban economy are tourism, nickel and ore production, fisheries, manufacturing, tobacco, and vegetables. In February 2008, Fidel Castro announced his resignation as President of Cuba and on 24 February his brother Raúl was elected as the new President.
27 Oct 1492 - Christopher Columbus lands in Cuba and claims it for Spain.
1511 - Spanish conquest begins under the leadership of Diego de Velazquez.
1522 - After the decimation of the indigenous population, colonial landowners bring in African slaves to work the fields. The first of the slave ships arrives, with many more to follow.
1762 - Havana is captured by a British force led by Admiral George Pocock and Lord Albemarle.
1763 - Havana is returned to Spain in exchange for Florida.
1853 - José Martí, Cuba’s national hero, is born.
10 Oct 1868 - The Ten Years’ War over independence from Spain starts.
1878 - End of the Ten Years’ War. Cuba remains under Spanish rule.
1886 - Slavery is abolished.
24 Feb 1895 - José Martí leads a second war of independence.
19 May 1895 - José Martí is killed at Dos Ríos in eastern Cuba. He is 42 years old.
15 Feb 1898 - The USS Maine explodes in Havana's harbor. The U.S. blames Spain, and so begins the Spanish-American War.
10 Dec 1898 - The Treaty of Paris is signed. Spain loses its colonies of Guam, Puerto Rico and the Philippines to the United States. Cuba gains independence from Spain. The United States maintains military control over the island.
1901 - The Platt Amendment is added to the Cuban Constitution. The United States is allowed military bases on the island, and may intervene militarily whenever it deems necessary.
20 May 1902 - Cuba becomes an independent republic with Tomas Estrada Palma as its president; however, the Platt Amendment keeps the island under US protection and gives the US the right to intervene in Cuban affairs.
1906-09 - Estrada Palma resigns and the US occupies Cuba following a rebellion led by Jose Miguel Gomez.
1909 - Jose Miguel Gomez becomes president following elections supervised by the US, but is soon tarred by corruption.
1912 - US forces return to Cuba to help put down black protests against discrimination.
1925 - Gerardo Machado becomes president. His regime is one of the most corrupt and exploitative in the island's political history.
13 Aug 1926 - Fidel Castro is born.
14 Jun 1928 - Ernesto "Che" Guevara is born in Argentina.
1933 - Machado's government is overthrown.
1940 - Fulgencio Batista is elected president. He remains president for four years.
Oct 1945 - Fidel Castro enters law school at the University of Havana.
10 Mar 1952 - Carlos Prío's government is overthrown by Fulgencio Batista's military coup. Batista becomes dictator. His government is openly supported by the United States.
26 July 1953 - Fidel Castro leads an unsuccessful revolt against the Batista regime.
2 December 1956 - Castro lands in eastern Cuba from Mexico and takes to the Sierra Maestra mountains where, aided by Ernesto "Che" Guevara, he wages a guerrilla war.
1 Jan 1959 - Triumph of the Cuban Revolution. Fidel Castro overthrows Batista after months of guerrilla warfare and sets up a provisional government. Batista flees the country.
17 Apr 1961 - The Bay of Pigs invasion. Cuban exiles, backed by the US government, attempt to invade the island and overthrow Castro's regime. The Cuban army easily defeats the rebels, many of whom are killed in the fighting.
January 1962 - Under US pressure, Cuba is expelled from the Organization of American States (OAS). Canada and Mexico decide to maintain relations.
October 1962 - The Cuban Missile Crisis ignites when, fearing a US invasion, Castro agrees to allow the USSR to deploy nuclear missiles on the island. The crisis is subsequently resolved when the USSR agrees to remove the missiles in return for the withdrawal of US nuclear missiles from Turkey.
9 Oct 1967 - Che Guevara is executed by government troops in Bolivia.
1972 - Cuba becomes a full member of the Soviet-based Council for Mutual Economic Assistance.
1976 - Canadian Prime Minister Pierre Trudeau visits Cuba.
1990 - Cuba endures a massive recession due to the collapse of the U.S.S.R.
1994 - Cuba signs an agreement with the US according to which the US agrees to admit 20,000 Cubans a year in return for Cuba halting the exodus of refugees.
1998 - Pope John Paul II visits Cuba.
November 1999 - Cuban child Elian Gonzalez is picked up off the Florida coast after the boat in which his mother, stepfather and others had tried to escape to the US capsized. A huge campaign by Miami-based Cuban exiles begins with the aim of preventing Elian from rejoining his father in Cuba and of making him stay with relatives in Miami.
June 2000 - Elian is allowed to rejoin his father in Cuba after prolonged court battles.
October 2000 - The US House of Representatives approves the sale of food and medicines to Cuba.
January 2002 - Prisoners taken during US-led action in Afghanistan are flown into Guantanamo Bay for interrogation as al-Qaeda suspects.
May 2002 - Former US president Jimmy Carter makes a goodwill visit which includes a tour of scientific centres, in response to US allegations about biological weapons. Carter is the first former or serving US president to visit Cuba since the 1959 revolution.
June 2002 - The National Assembly amends the constitution to make the socialist system of government permanent and untouchable.
July 2006 - President Fidel Castro undergoes gastric surgery and temporarily hands over control of the government to his brother, Raul.
February 2008 - Raul Castro takes over as president, days after Fidel announces his retirement.
March 2009 - The US Congress votes to lift Bush Administration restrictions on Cuban-Americans visiting Havana and sending back money.
June 2009 - The Organization of American States (OAS) votes to lift the ban on Cuban membership imposed in 1962. Cuba welcomes the decision, but says it has no plans to rejoin.
July 2009 - Cuba signs an agreement with Russia allowing oil exploration in Cuban waters of the Gulf of Mexico.
http://www.authenticubatours.com/cuba-travel-resources/history-cuba-travel.htm
Difference Between Supply and Demand
Supply vs Demand
Supply and demand are basic economic concepts that are usually applied in a market environment where manufacturing firms and consumers are present. Both are also components of an economic model which is an instrument for determining the price and quantity of a particular product at a given time or place. “Supply” is defined as “the amount of goods or services that can be provided by a company to its consumers or clients in an open market,” while “demand” is “the willingness of the consumers or clients to buy or receive products or services from a firm in the same open market.” These concepts are present in every economic activity, in business and anywhere else economic exchange takes place. In economics, both concepts also adhere to their own respective laws. Each law describes the relationship between a particular concept and the price. The law of supply states that supply and price are directly related: if the price increases, the quantity supplied increases as well, because owners expand production in the expectation of higher profits; if the price goes down, there is no incentive to increase production. The law of demand, on the other hand, conveys the inverse relationship between price and demand: as the price of a product rises, the quantity demanded falls, and as the price falls, the quantity demanded rises. Both laws apply only as long as no factors other than price and quantity are considered. “Supply” is determined by marginal costs and assumes that the company is a perfect competitor. On the other side, marginal utility characterizes “demand,” which assumes that the consumer is the perfect competitor. To observe changes in demand and supply, they are illustrated in a graph. The price sits on the vertical axis, while the horizontal axis is where the demand or supply is placed. Plotting the relationship of supply or demand with price results in a curve. The curve that illustrates supply is the supply curve, which has an upward slope. Meanwhile, the curve for demand is called the demand curve, which runs in the opposite direction, with a downward slope. Aside from the demand curve and the supply curve, there are also two types of curves that can exist in the graph: the individual demand or supply curve and the market demand or supply curve. The individual curve is a micro-level representation of a particular consumer's or firm's demand or supply, while the market curve is a macro-level picture of a market's demand or supply. Supply and demand have different determinants. Supply considers the following as its factors: the production cost of the product or cost of the service, technology, the price of similar products or services, the company's expectations for the future, and the number of suppliers or employees. On the same note, demand also has determinants, which often reflect on the consumers: income, tastes, preferences, and the price of substitute products or services. The balance or combination of supply and demand is called equilibrium. This occurs when there is enough supply and demand for a product or service, that is, when the quantity supplied equals the quantity demanded. Equilibrium seldom happens, since information is vital to the event; if information is withheld from either side, it does not happen. Equilibrium can occur at both the individual and the market level.
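As a rough numerical illustration of the equilibrium idea (an added sketch, not from the original article), the following Python snippet defines hypothetical linear supply and demand curves (the intercepts and slopes are invented purely for the example) and searches a grid of candidate prices for the one at which quantity supplied and quantity demanded are closest to equal.

# A minimal sketch of market equilibrium using made-up linear curves.
# Demand slopes downward (quantity demanded falls as the price rises);
# supply slopes upward (quantity supplied rises with the price).

def quantity_demanded(price, intercept=100.0, slope=2.0):
    # Qd = intercept - slope * price (downward-sloping demand curve)
    return intercept - slope * price

def quantity_supplied(price, intercept=10.0, slope=1.0):
    # Qs = intercept + slope * price (upward-sloping supply curve)
    return intercept + slope * price

def find_equilibrium(prices):
    # Return the candidate price at which excess demand (Qd - Qs) is closest to zero.
    return min(prices, key=lambda p: abs(quantity_demanded(p) - quantity_supplied(p)))

candidate_prices = [p / 10 for p in range(0, 1000)]  # prices from 0.0 to 99.9
p_star = find_equilibrium(candidate_prices)
print(f"Equilibrium price: {p_star:.1f}, quantity: {quantity_demanded(p_star):.1f}")

With these made-up numbers the curves cross at a price of 30 and a quantity of 40; changing either curve's intercept or slope shifts the equilibrium accordingly.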
1. Supply and demand are elementary economic concepts that exist in any economic activity as long as there is a product or service with a price.
2. Supply and demand respond to price in opposite directions: the quantity supplied rises as the price rises, while the quantity demanded falls.
3. Both supply and demand have their own laws regarding price, and each has its own curve when illustrated in a graph. Supply has a direct relationship with price, giving an upward-sloping supply curve. Demand has an inverse relationship with price, and the demand curve is drawn with a downward slope.
4. Both concepts have their own determinants. Supply's determinants reflect the firm, while the determinants of demand reflect the consumers.
http://www.differencebetween.net/business/economics-business/difference-between-supply-and-demand/
A triple bond in chemistry is a chemical bond between two atoms involving six bonding electrons instead of the usual two in a covalent single bond. The most common triple bond, that between two carbon atoms, can be found in alkynes. Triple bonds also occur with phosphorus atoms. Other functional groups containing a triple bond are cyanides and isocyanides. Some diatomic molecules, such as dinitrogen and carbon monoxide, are also triple bonded. In skeletal formulae the triple bond is drawn as three parallel lines (≡) between the two connected atoms; in typography this is rendered with the identity sign. Triple bonds are stronger than single bonds or double bonds, and they are also shorter. The bond order is three. The type of bonding can be explained in terms of orbital hybridization. In the case of acetylene each carbon atom has two sp orbitals and two p-orbitals. The two sp orbitals are linear, with 180° angles, and occupy the x-axis (Cartesian coordinate system). The p-orbitals are perpendicular to them, along the y-axis and the z-axis. When the carbon atoms approach each other, the sp orbitals overlap to form an sp-sp sigma bond. At the same time the pz-orbitals approach and together they form a pz-pz pi-bond. Likewise, the other pair of py-orbitals form a py-py pi-bond. The result is the formation of one sigma bond and two pi bonds. In bent bond theory the triple bond can also be formed by the overlapping of three sp3 lobes without the need to invoke a pi-bond.
http://en.wikipedia.org/wiki/Triple_bond
In this glossary you will find only a very short description of each concept. If you need a more detailed explanation, please follow one of the external links. - Analysis of variance - Research data are often based on samples drawn from a larger population of cases. This is true of most types of questionnaire surveys, among other examples. When, on the basis of the analysed results of such sample surveys, we conclude that the same results are valid for the population from which the sample has been drawn, we are making a generalisation with some degree of uncertainty attached. Analysis of variance is a method that allows us to determine whether differences between the means of groups of cases in a sample are too great to be due to random sampling errors. This can, for instance, help us to determine whether observed differences between the income of men and women (in a sample) are great enough to conclude that these differences are also present in the population from which the sample has been drawn. In other words, the method tells us whether the variable Gender has any impact on the variable Income, or more generally, whether a non-metric variable (which divides the cases into groups) is a factor in deciding the values that the cases have on a separate and metric variable. Analysis of variance looks at the total variance of the metric variable and tries to determine how much of this variance is due to, or can be explained by, the non-metric grouping variable. The method consists of the following building blocks: 1) total variance, 2) within-group variance and 3) between-group variance. Total variance is the sum of the squared differences between the data values of each of the units and the mean value of all the units. This total variance can be broken down into within-group variance, which is the sum of the squared differences between the data values of each of the units and the mean of the group to which the unit belongs, and between-group variance, which is the sum of the squared differences between each of the group means and the mean of all the units. - Asymmetrical distribution - If you split the distribution in half at its mean, then the distribution of the two sides of this central point would not be the same (i.e., not symmetrical) and the distribution would be considered skewed. In a symmetrical distribution, the two sides of this central point would be the same (i.e., symmetrical). - b - A parameter estimate of a regression equation that measures the increase or decrease in the dependent variable for a one-unit difference in the independent variable. In other words, b shows how sensitive the dependent variable is to changes in the independent variable. - Beta - Standardised b shows by how many standard deviations the dependent variable changes when the independent variable increases by 1 standard deviation. The beta coefficient should be used when comparing the relative explanatory power of several independent variables. - Birth cohort - A birth cohort is normally defined as consisting of all those who were born in the region or country of interest in a certain calendar year. In this document, however, we redefine a birth cohort so that it consists of all those who were born in a certain calendar year and were living in the country or countries of interest at the time of the second ESS interview round. - Box-and-whisker plot - The distribution can also be shown by means of a box-and-whisker plot.
This form of presentation is based on a group of statistical measures which are known as median measures. All such measures are based on a sorted distribution, i.e. a distribution in which the cases have been sorted from the lowest to the highest value. The first quartile is the data value of the case where 25 % of the cases have lower values and 75 % of the cases have greater values. The third quartile is the data value of the case where 75 % of the cases have lower values and 25 % of the cases have greater values. The inter-quartile range is the distance from the first to the third quartile, i.e. the variation area of the half of the cases that lies at the centre of the distribution. The box-and-whisker plot consists of a rectangular box divided by one vertical line, with one horizontal line (whisker) extending from either end. The left and right ends of the box mark the first and third quartiles respectively. The dividing line at the centre represents the median. The length of the box is equal to the inter-quartile range and contains the middle half of the cases included in the sorted distribution. The end points of the two extending lines are determined by the data values of the most extreme cases, but do not extend more than one inter-quartile range from either end of the box (i.e. from the first and third quartiles). The maximum length of each of these lines, therefore, is equal to the length of the box itself. If there are no cases this far from either quartile, the lines will be shorter. If the maximum or minimum values of the entire distribution are beyond the end points of the line, a cross will indicate their position. - By cases we mean the objects about which a data set contains information. If we are working with opinion polls or other forms of interview data, the cases will be the individual respondents. If data have been collected about the municipalities of a county or about the countries of the world, the cases will be the geographical areas, i.e. the municipalities or countries. - Central tendency - The central tendency is a number summarising the average value of a set of scores. The mode, the median and the mean are the commonly used central tendency statistics. - Chi-square distribution - A family of distributions, each of which has different degrees of freedom, on which the chi-square test statistic is based. - Chi-square test - A test of statistical significance based on a comparison of the observed cell frequencies of a joint contingency table with frequencies that would be expected under the null hypothesis of no relationship. - Compute is used for creating new variables on the basis of variables already present in the data matrix. It is possible to do calculations with existing variables, such as calculating the sums of the values on several variables, percentaging, multiplying variables by a constant, etc. The results of such arithmetic operations are saved as new variables, which can then be used just like the other variables in the data set. - Conditional mean - A conditional mean value is the mean value of a variable for a group of respondents whose members have a particular combination of values on other variables, whereas an overall mean is the mean value of a variable for all respondents. - Confidence interval - Research data are very often based on samples drawn from a larger population of cases. This applies to most types of questionnaire surveys, for instance. 
When we assume, on the basis of results from the analysis of such sample surveys, that the same results apply to the population from which the samples are drawn, we are making generalisations with some degree of uncertainty attached. The confidence interval is a method of estimating the uncertainty associated with computing mean values in sample data. We normally use a confidence interval with a confidence level of 95 %. This means that there is a 5 % chance of being wrong if we assume that a mean value for a sample lies within the confidence interval. - Constants in linear functions - Some of you may remember that a line on a plane can be expressed as an equation. Assume first that there are two variables, one that is measured along the plane’s vertical axis and whose values are symbolised by the letter y, and one that is measured along the horizontal axis and whose values are symbolised by the letter x. The function that represents a linear association between these two variable values can be expressed as follows: y = a + b∙x. Here, a and b symbolise constants (fixed numbers). The number b indicates how much the variable value y increases or decreases as x changes. When x increases by 1 unit, y increases by b units. To see this, assume, for instance, that x has the initial value 5. Insert this value into the equation. You get: y = a + b∙5. Then, let the value x increase by one unit to x = 5 + 1 and insert this in the equation instead of the former value. Now the equation can be written y = a + b∙5 + b∙1. Thus, by letting x increase by 1 unit, we have made y increase by b∙1 = b units. This implies that if b is equal to, say, 2, y will increase by 2 units whenever x increases by 1 unit, or if b is negative and equal to, say, -0.5, y will decrease by 0.5 units whenever x increases by 1 unit.
Figure A. Graphic presentation of a linear function
The latter case is illustrated in Figure A, where we have drawn a line in accordance with the function y = 7 - 0.5∙x (where we have inserted the randomly chosen value 7 for a). As explained above, x and y are variables, which means that they can take on a whole range of different values, while 7 and -0.5 are constants, i.e. values that determine the position of the line on the plane and which cannot change without causing the position of the line to change. In Figure A, the x-values range between 0 and 10, while the y-values range between 2 and 7. Note that for each value x takes on between 0 and 10, the equation and the corresponding line assign a unique numerical value to y, as shown in the figure. Thus, if x = 0, y takes the value of 7, which is the value we have given to the constant a. Check this by inserting 0 in place of x in the equation y = 7 - 0.5∙x and then computing the value of the expression on the right-hand side of the equals sign. (You should get y = 7.) This illustrates the important point that the constant a in the equation y = a + b∙x is identical to the value that y takes on when x = 0. From a graphical point of view (see Figure A), a can be interpreted as the distance between two points on the vertical axis, namely the distance between its zero point (y = 0) and the point where this axis and the line given by the equation meet each other (assuming that the vertical axis crosses the horizontal axis at the latter’s zero point). Thus, the constant a is often called the intercept.
Now for the graphical interpretation of the constant b: we repeat the exercise from the numerical example presented above by starting from the point on the line in Figure A where x = 5 (as marked by vertical line k). We then increase the x-value by 1 unit to x = 6 as we move downwards along the line (the new x-value is marked by vertical line l). This change in x makes the corresponding value of y decrease from 4.5 (marked by horizontal line m) to 4 (marked by horizontal line n), i.e. it ‘increases’ by -0.5 units (decreases by 0.5 units). Thus, Figure A confirms what we just saw in our numerical example: b is the change in y that takes place when x increases by 1 unit, provided that the changes occur along the line that is determined by the function y = a + b∙x. If b is positive, y increases whenever x increases, and if b has a negative numerical value, y decreases when x increases. Note also that b can be interpreted as a measure of the steepness of the line. The more y changes when x is increased by 1 unit, the steeper the line gets. - Correlation is another word for association between variables. There are many measures of correlation between different types of variables, but most often the word is used to designate linear association between metric variables. This type of correlation between two variables is measured by the Pearson correlation coefficient, which varies between -1 and 1. A coefficient value of 0 means no correlation. A coefficient of -1 or 1 means that if we plot the observations on a plane with one variable measured along each of the two axes, all observations would lie on a straight line. In regression terminology this corresponds to a situation where all observations lie on the (linear) regression line and all residuals have the value 0. - Dependent and independent variables - The idea behind these concepts is that the values of some variables may be affected by the values of other variables, and that this relation makes the former dependent on the latter, which, from the perspective of this particular relationship, are therefore called independent. In practice, the analysts are the ones who determine which variables shall be treated as dependent and which shall be treated as independent. - Descriptive statistics - Descriptive statistics is a branch of statistics that denotes any of the many techniques used to summarize a set of data. The techniques are commonly classified as: 1) graphical description (graphs), 2) tabular description (frequency and cross tables) and 3) parametric description (central tendency, statistical variability). - Dichotomies are variables with only two values, e.g. the variable Gender with the two values Male and Female. - Eigenvalue - Factor analysis is used to uncover the latent structure (dimensions) of a set of variables. It reduces attribute space from a larger number of variables to a smaller number of factors. The eigenvalue for a given factor reflects the variance in all the variables that is accounted for by that factor. A factor's eigenvalue may be computed as the sum of its squared factor loadings for all the variables. The ratio of eigenvalues is the ratio of explanatory importance of the factors with respect to the variables. If a factor has a low eigenvalue, then it is contributing little to the explanation of variances in the variables and may be ignored. Note that the eigenvalues associated with the unrotated and rotated solution will differ, though their total will be the same.
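As a rough illustration of the eigenvalue idea (an added sketch, not part of the original glossary), the following Python snippet simulates four variables that share one underlying dimension, computes their correlation matrix, and prints each eigenvalue together with the share of total variance it accounts for. The simulated data, the number of variables and the noise level are all invented for the example.

# Added sketch: eigenvalues of a correlation matrix and the share of variance
# each accounts for (eigenvalue divided by the number of variables).
import numpy as np

rng = np.random.default_rng(0)
latent = rng.normal(size=200)                     # one underlying dimension
data = np.column_stack([latent + rng.normal(scale=0.5, size=200) for _ in range(4)])

corr = np.corrcoef(data, rowvar=False)            # 4 x 4 correlation matrix
eigenvalues = np.linalg.eigvalsh(corr)[::-1]      # sorted, largest first

for i, ev in enumerate(eigenvalues, start=1):
    print(f"Factor {i}: eigenvalue = {ev:.2f}, share of variance = {ev / len(eigenvalues):.0%}")

Because the four simulated variables reflect a single latent dimension, the first eigenvalue is large and the remaining ones are small, which is exactly the pattern that leads analysts to keep one factor and ignore the rest.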
- Factor analysis - The objective of this technique is to explain most of the variability among a number of observable random variables in terms of a smaller number of unobservable random variables called factors. The observable random variables are modeled as linear combinations of the factors, plus 'error' terms. The main applications of factor analytic techniques are: 1) to reduce the number of variables and 2) to detect structure in the relationships between variables. - Factor loadings - Factor analysis is used to uncover the latent structure (dimensions) of a set of variables. It reduces attribute space from a larger number of variables to a smaller number of factors. The factor loadings are the correlation coefficients between the variables and the factors. Factor loadings are the basis for imputing a label to different factors. Analogous to Pearson's r, the squared factor loading is the percentage of variance in the variable explained by a factor. The sum of the squared factor loadings for all factors for a given variable is the variance in that variable accounted for by all the factors, and this is called the communality. In complete principal components analysis, with no factors dropped, the communality is equal to 1.0, or 100% of the variance of the given variable. - Frequency is a method of describing how the cases are distributed over the different data values on a particular variable. The frequency table gives an overview of the number of cases that have each of the values on a variable. Frequency tables are most suitable for variables with few data values. - A function is a mathematical equation in which the values of one dependent variable are seen as uniquely determined by the values of one or more other independent variables. The function y = a + b∙x, for instance, expresses the dependent variable y as a linear function of the independent variable x. - Identification number - Unique number given to each member of a survey sample. The identification numbers of the ESS survey sample members are stored as the variable ‘idno’. - An index variable is a variable that in one way or another summarises information about several other variables. Index variables are most frequently used where the data set includes several measures of the same basic phenomenon, e.g. political participation, status, etc. We can combine these measures or indicators in one index to create a variable which gives an overall impression of the basic phenomenon. But note that many authors use the word scale (e.g. a summated scale) rather than the word index to denote variables that are composed of several measures of the same basic phenomenon. - Kurtosis - This statistic shows how the distribution of a variable deviates from the normal distribution. The normal distribution of a variable is a bell-shaped symmetric curve where approximately 2/3 of the cases are within 1 standard deviation on either side of the mean value and approximately 95 % of the cases fall within 2 standard deviations. Kurtosis is a measure of the degree to which a variable meets this condition - whether it has a more concentrated (more peaked) or more even (flat) distribution. A positive kurtosis tells us that the distribution of the variable is more peaked than the normal distribution. A negative kurtosis tells us that the distribution is less peaked than the normal distribution. (Skewness tells us whether the variable meets the normal distribution curve's symmetry requirement.)
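To make the kurtosis entry above (and the skewness entry later in the glossary) more concrete, here is a small added Python sketch, not part of the original glossary, that computes skewness and excess kurtosis directly from their moment definitions on simulated data; the excess kurtosis subtracts 3 so that a normal distribution scores approximately 0.

# Added sketch: skewness and excess kurtosis from their moment definitions.
# Values near 0 indicate a shape close to the normal distribution; positive
# excess kurtosis indicates a more peaked distribution, negative a flatter one.
import random
import statistics

random.seed(1)
values = [random.gauss(0, 1) for _ in range(10_000)]   # simulated normal data

mean = statistics.fmean(values)
sd = statistics.pstdev(values)
z = [(v - mean) / sd for v in values]                  # standardised scores

skewness = statistics.fmean(x ** 3 for x in z)         # third standardised moment
excess_kurtosis = statistics.fmean(x ** 4 for x in z) - 3   # normal distribution -> 0

print(f"skewness = {skewness:.3f}, excess kurtosis = {excess_kurtosis:.3f}")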
- Likert scale - When respondents answer a Likert questionnaire item, they normally specify their level of agreement with a statement on a five-point scale which ranges from ‘strongly disagree’ to ‘strongly agree’ through ‘disagree’, ‘neither agree nor disagree’, and ‘agree’. - Listwise deletion is a method used to exclude cases with missing values on the specified variable(s). The cases used in the analysis are cases without missing values on the variable(s) specified. - Logical operators - Use these relational operators in If statements in SPSS commands:
EQ or =  Equal to
NE or ~= or <>  Not equal to
LT or <  Less than
LE or <=  Less than or equal to
GT or >  Greater than
GE or >=  Greater than or equal to
Two or more relations can be logically joined using the logical operators AND and OR. Logical operators combine relations according to the following rules:
- The ampersand (&) symbol is a valid substitute for the logical operator AND. The vertical bar ( | ) is a valid substitute for the logical operator OR.
- Only one logical operator can be used to combine two relations. However, multiple relations can be combined into a complex logical expression.
- Regardless of the number of relations and logical operators used to build a logical expression, the result is either true, false or indeterminate because of missing values.
- Operators or expressions cannot be implied. For example, X EQ 1 OR 2 is illegal; you must specify X EQ 1 OR X EQ 2.
- The ANY and RANGE functions can be used to simplify complex expressions.
AND: both relations must be true for the complex expression to be true. OR: if either relation is true, the complex expression is true. The outcomes for AND and OR combinations are as follows.
Logical outcomes for AND: true AND true = true; true AND false = false; false AND false = false; true AND missing = missing; false AND missing = false; missing AND missing = missing.
Logical outcomes for OR: true OR true = true; true OR false = true; false OR false = false; true OR missing = true; false OR missing = missing; missing OR missing = missing.
- Data matrix - When preparing data for statistical analysis, we structure the material in a data matrix. A data matrix has one row for each case and a fixed column for each variable. The cases are distributed over the values of each variable, so that the values are shown in the cells of the matrix. - Maximum likelihood - A statistical method for estimating population parameters (such as the mean and variance) from sample data that selects as estimates those parameter values maximizing the probability of obtaining the observed data. - Mean - The arithmetical mean is a measure of the central tendency for metric variables. The arithmetical mean is the sum of all the cases’ variable values divided by the number of cases. - Median - The median is a measure of the central tendency for ordinal or metric variables. The median is the value that divides a sorted distribution into two equal parts, i.e. the value of the case with 50 % of the cases above it and 50 % below. For example: if you have measured the variable Height for 25 persons and sorted the values in ascending order, the median will be the height of person no. 13 in the sorted sample, i.e. the person that divides the sample into two with an equal number of cases above and below him/her. - Metric variable - A variable is metric if we can measure the size of the difference between any two variable values.
Age measured in years is metric because the size of the difference between the ages of two persons can be measured quantitatively in years. Other examples of metric variables are length of education measured in years, and income measured in monetary units. Thus, we can use linear regression to assess the association between these two variables. - Missing Values SPSS acknowledges two types of missing values: System-missing and User-missing. If a case has not been (or cannot automatically be) assigned a value on a variable, that case’s value on that variable is automatically set to ‘System missing’ and will appear as a . (dot) in the data matrix. Cases with System-missing values on a variable are not used in computations which include that variable. If a case has been assigned a value code on a variable, the user may define that code as User-missing. By default, User-missing values are treated in the same way as System-missing values. In the ESS dataset, refusals to answer and ‘don’t know’ answers etc. have been preset as User-missing to prevent you from making unwarranted use of them in numeric calculations. If you need to use these values to create dummy variables or for other purposes, you must first redefine them as non-missing. One way to achieve this is to open the ‘Variable View’ in the data editor, find the row of the variable whose missing values you want to redefine, go right to the ‘Missing’ column, click the right-hand side of the cell, and tick ‘No missing values’ in the dialogue box that pops up. You can also use the MISSING VALUES syntax command (see SPSS’s help function for instructions). Cases with System-missing values can be assigned valid values using the ‘Recode into different variables’ feature in the ‘Transform’ menu. Be careful when you use this option, that you do not overwrite value assignments that you would have preferred to keep as they are. Moreover, if you need to define more values as User-missing, you can use the syntax command MISSING VALUES or the relevant variable’s cell in the ‘Missing’ column in the ‘Variable View’. - The mode is a measure of the central tendency. The mode of a sample is the value which occurs most frequently in the sample. - Nominal variable - Nominal variables measure whether any two observations have equal or different values but not whether one value is larger or smaller than another. Occupation and nationality are examples of such variables. - Normal distribution - Normal distribution is a theoretical distribution which many given empirical variable distributions resemble. If a variable has a normal distribution (i.e. resembles the theoretical normal distribution), the highest frequencies are concentrated round the variable's mean value. The distribution curve is symmetric round the mean and shaped like a bell. Approximately 2/3 of all cases will fall within 1 standard deviation to either side of the mean value. Approximately 95 % of the cases fall within 2 such deviations. - Operationalisation is the process of converting concepts into specific observable behaviours or attitudes. For example, highest education completed could be an operationalisation of the concept academic skill. - Ordinal variable - Ordinal variables measure whether one observation (case / individual) has a larger or smaller value than another but not the exact size of the difference. Measurements of opinions with values such as excellent, very good, good etc. 
are examples of ordinal variables because we know little about the size of the difference between ‘very good’ and ‘good’ etc. - Overall mean - The overall mean of a variable is the mean of all participating individuals (cases / observations) irrespective of their values on other variables. - SPSS distinguishes between pairwise and listwise analyses. In a pairwise analysis the correlations between each pair of variables are determined on the basis of all cases with valid values on those two variables. This takes place regardless of the values of these cases on other specified variables. - Policy cycle - This is the technical term used to refer to the process of policy development from the identification of need, through assessment and piloting, to implementation and evaluation. - Recode reassigns the values of existing variables or collapses ranges of existing values into new values. For example, you could collapse income into income range categories. - Regression is a method of estimating some conditional aspect of a dependent variable’s value distribution given the values of some other variable or variables. The most common regression method is linear regression, by means of which a variable’s conditional mean is estimated as a linear function of one or more other variables. The objective is to explain or predict variations in the dependent variable by means of the independent variables. - In psychometrics, reliability is the accuracy of the scores of a measure. The most common internal consistency measure is Cronbach's alpha. Reliability does not imply validity. A reliable measure is measuring something consistently. A valid measure is measuring what it is supposed to measure. A Rolex may be a very reliable instrument for measuring the time, but if it is wrong, it does not give a valid measure of what the time really is. - A residual is the difference between the observed and the predicted dependent variable value of a particular person (case / observation). - Response bias - A response bias is a systematic bias towards a certain type of response (e.g. low, or extreme responses) that masks true levels of the construct that one is attempting to measure. - The scattergram is a graphic method of presentation, in which the different cases are plotted as points along two (three) axes defined by the two (three) variables included in the analysis. - Select cases - Select the cases (persons / observations) you want to use in your analysis by clicking ‘Data’ and ‘Select Cases’ on the SPSS menu bar. Next, select ‘If condition is satisfied’ and click ‘If’. A new dialogue box opens. Type an expression where you use variable names, logical operators and value codes to delineate the cases you want to retain in the analysis from those that you want to exclude. - Structural equation modelling - Structural equation modelling (SEM) is a very general statistical modelling technique. Factor analysis, path analysis and regression all represent special cases of SEM. SEM is a largely confirmatory, rather than exploratory, technique. In SEM, interest usually focuses on latent constructs, for example well-being, rather than on the manifest variables used to measure aspects of well-being. Measurement is recognized as difficult and error-prone. By explicitly modelling measurement error, SEM users seek to derive unbiased estimates for the relations between latent constructs. To this end, SEM allows multiple measures to be associated with a single latent construct. 
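The Regression and Residual entries above can be illustrated with a short added Python sketch (not part of the original glossary) that fits a straight line to made-up data by ordinary least squares and then computes one residual per observation as the difference between the observed and predicted values.

# Added sketch: linear regression on made-up data, plus the residuals.
import numpy as np

rng = np.random.default_rng(42)
x = rng.uniform(0, 10, size=50)                       # independent variable
y = 3.0 + 0.5 * x + rng.normal(scale=1.0, size=50)    # true a = 3, b = 0.5, plus noise

b, a = np.polyfit(x, y, deg=1)                        # estimated slope and intercept
predicted = a + b * x
residuals = y - predicted                             # observed minus predicted

print(f"estimated intercept a = {a:.2f}, estimated slope b = {b:.2f}")
print(f"mean of the residuals (close to 0 by construction): {residuals.mean():.4f}")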
- Statistical significance - In statistics, a result is called statistically significant if it is unlikely to have occurred by chance. ‘A statistically significant difference’ simply means there is statistical evidence that there is a difference; it does not mean that the difference is necessarily large, important, or significant in the common meaning of the word. - Skewness is a measure that shows how the variable distribution deviates from the normal distribution. A variable with a normal distribution has a bell-shaped symmetric distribution round the mean value of the variable. This means that the highest frequencies are in the vicinity of the mean value and that there is an equal number of cases on either side of the mean. Skewness measures deviation from this symmetry. Positive skewness tells us that the value of the majority of the cases is below the mean and hence there is a predominance of cases with positive extreme values. Negative skewness tells us that the majority of the cases are greater than the mean while there is a predominance of negative extreme values. - Squared value - A squared value (for instance a squared distance) is that value (distance) multiplied by itself. - Square root - A value’s square root is a number that, multiplied by itself, produces that value. Thus, a is the square root of b if a∙a = b. - Standard deviation - A variable’s standard deviation is the square root of its variance. Standard deviation is a measure of statistical dispersion. - Standard error - The standard error of a parameter (e.g. a regression coefficient) is the standard deviation of that parameter’s sampling distribution. - Standardised values - A standardised variable value is the value you get if you take the difference between that value and the variable’s mean value and divide that difference by the variable’s standard deviation. If this is done to all the observed values of a variable, we get a standardised variable. Standardised variables have a mean value of 0 and a standard deviation of 1. - Statistics is the science and practice of developing human knowledge through the use of empirical data. It is based soundly on statistical theory, which is a branch of applied mathematics. - Syntax - An SPSS syntax is a text command or a combination of text commands used to instruct SPSS to perform operations or calculations on a data set. Such text commands are written and stored in syntax files, which are characterised by the extension .sps. In order to run the syntax commands that have been provided with this course pack, we suggest that you first open an SPSS syntax file. Either create a new syntax file (click ‘New’ and ‘Syntax’ on the SPSS menu bar’s ‘File’ menu) or open an old one that already contains commands that you want to combine with the new commands (click ‘Open File’ and ‘Syntax’ in the ‘File’ menu, and select the appropriate file). Then find, select and copy the relevant syntax from this course pack’s website and paste it into the open syntax file window. While doing exercises, you may have to make partial changes to the commands by editing the text. Run commands from syntax files by selecting them with the cursor (or the shift/arrow key combination) before you click the blue arrow on the syntax window’s tool bar. If you use the menu system, you can create syntaxes by clicking ‘Paste’ instead of ‘OK’ before exiting the dialogue boxes. This causes SPSS to write the commands you have prepared to a new or to an open syntax file without executing them.
Use this option to store your commands in a file so that you can run them again without having to click your way through a series of menus each time. New commands can be created from old ones by copying old syntaxes and editing the copies. This saves time. - This is a measure of the covariance of independent variables (normally called colinearity). The tolerance value shows how much of the variance of each independent variable is shared by other independent variables. The value can range from 0 (all variance is shared by other variables) to 1 (all variance is unique to the variable in question). If the tolerance value approaches 0, the results of the analysis may be unreliable. In such cases it is also difficult to determine which of the independent variables explain the variance of the dependant variable. - After an estimation of a coefficient, the t-statistic for that coefficient is the ratio of the coefficient to its standard error. That can be tested against a t distribution to determine how probable it is that the true value of the coefficient is really zero. - T test - The t test is used to determine whether a difference between a sample parameter value (e.g. a mean or a regression coefficient) and a null hypothesis value (or the difference between two parameters) is sufficiently great for us to conclude that the difference is not due to sampling errors. This method can, for instance, help us to decide whether the observed differences in income between men and women (in a sample) are great enough for us to be able to conclude that they are also present in the population from which the sample has been drawn. - Type I error - In statistical hypothesis testing, a type I error involves rejecting a null hypothesis (there is no connection) that is true. In other words, finding a result to be significant when this in fact happened by chance. - Type II error - In statistical hypothesis testing, a type II error consists of failing to reject an invalid null hypothesis (i.e. falsely accepting an invalid hypothesis. As the likelihood of type II error decreases, the likelihood of type I error increases. - Validity refers to the degree to which a study accurately reflects or assesses the specific concept that the researcher is attempting to measure. While reliability is concerned with the accuracy of the actual measuring instrument or procedure, validity is concerned with the study's success at measuring what the researchers set out to measure. - Value codes and value labels A case's (a person's) value on a variable must be given a code in order for it to be recognised by SPSS. Codes can be of various types, for instance dates, numbers or strings of letters. A variable's codes must consist of numbers if you want to use it in mathematical computations. Value codes may have explanatory labels that tell us what the codes stand for. One way to access these explanations is to open the data file and keep the ‘Variable view’ of the SPSS 'Data editor' window open. (You can toggle between 'Variable view' and 'Data view' by clicking the buttons in the lower part of the window.) In the 'Data view', each variable has its own row. Find the cell where the row of the variable you are interested in meets the column called 'Values'. Click the right end of the cell. A dialogue box that displays codes and corresponding explanatory labels appears. These dialogue boxes can be used to assign labels to the codes of variables that you have created yourself (recommended). 
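Several of the entries above (standard deviation, standardised values, skewness and the t-statistic) can be computed by hand in a few lines. The sketch below is a rough illustration in plain Python with numpy, using made-up numbers; the skewness shown is the simple third standardised moment, which may differ slightly from the adjusted estimator reported by packages such as SPSS.

    import numpy as np

    x = np.array([2.0, 3.0, 3.0, 4.0, 5.0, 9.0])   # made-up sample values

    mean = x.mean()
    sd = x.std(ddof=1)          # sample standard deviation (n - 1 in the denominator)

    # Standardised (z) values: subtract the mean, divide by the standard deviation.
    z = (x - mean) / sd
    print("standardised values:", np.round(z, 2))   # mean 0, standard deviation 1

    # Skewness as the mean cubed standardised value (third standardised moment).
    skewness = np.mean(((x - mean) / x.std(ddof=0)) ** 3)
    print("skewness:", round(skewness, 3))          # positive here: a few extreme high values

    # t-statistic for an estimated coefficient: coefficient divided by its standard error.
    coefficient, standard_error = 1.8, 0.6          # hypothetical regression output
    print("t statistic:", coefficient / standard_error)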
- By variables we mean characteristics or facts about cases about which the data contain information, e.g. the individual questions on a questionnaire. For instance, if the cases are countries or other geographical areas, we may have population figures, etc. There are several types of variables, and the variable type determines which methods and forms of presentation should be used. Where the individual groups or values have no obvious order or ranking, the variable is called a nominal variable (e.g. gender). The next type is called an ordinal variable: in addition to the actual classification, there is a natural principle ranking the different values in a particular order. It can obviously be claimed that the response "Very interested in politics" is evidence of a stronger political interest than "Quite interested". The values of ordinal variables have a natural order, but the intervals between the values cannot be measured. The third type is the metric variable. These are variables that in some way measure parameters, quantities, percentages etc., using a scale based on the numerical system. The numerical values of these variables have a direct and intuitive meaning; they are not codes used as surrogates for the real responses, as in the case of the nominal and ordinal variables. It follows that these variables also have the arithmetic properties of numbers. The values have a natural order, and the intervals between them can be measured. We can say that one person is three times the age of another without violating any logical or mathematical rules, and it is also possible to compute the average age of a group of people.
- A variable’s variance is the sum of all the squared differences between its observed values and its overall mean value, divided by the number of observations. (Subtract 1 from the number of observations if you are computing an estimate of a population’s variance by means of sample data.) The variance is a measure of statistical dispersion; it is the mean squared deviation of a distribution from its mean.
- The variation is a number indicating the dispersion in a distribution: how typical is the central tendency of the other sample observations? For continuous variables, the variance and the standard deviation are the most commonly used measures of dispersion.
- Varimax rotation - Factor analysis is used to uncover the latent structure (dimensions) of a set of variables. It reduces attribute space from a larger number of variables to a smaller number of factors. Varimax rotation seeks to maximize the variances of the squared normalized factor loadings across variables for each factor. This is equivalent to maximizing the variances in the columns of the matrix of the squared normalized factor loadings. The goal of rotation is to obtain a clear pattern of loadings, i.e. factors that are clearly marked by high loadings for some variables and low loadings for others. This general pattern is called ‘simple structure’.
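The parenthetical in the variance entry (divide by the number of observations, or by one less when estimating a population variance from a sample) corresponds to the ddof argument in numpy. The sketch below, with invented numbers, shows both versions and the standard deviation as the square root of the variance.

    import numpy as np

    x = np.array([4.0, 6.0, 7.0, 9.0, 14.0])   # made-up observations

    var_description = np.var(x, ddof=0)   # divide by n: describes the data at hand
    var_estimate = np.var(x, ddof=1)      # divide by n - 1: estimates a population variance

    sd = np.sqrt(var_estimate)            # standard deviation = square root of the variance

    print(var_description, var_estimate, sd)
    print(np.isclose(sd, np.std(x, ddof=1)))   # True: matches numpy's own standard deviation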
- Weighting allows you to assign a different weight to the different cases in the analysis file. In SPSS, a weight variable can be used to assign different weights to different cases in calculations. If case A has value 4 and weight 0.5, while case B has value 6 and weight 1.5, their weighted mean is (4 ∙ 0.5 + 6 ∙ 1.5)/(0.5 + 1.5) = 5.5, whereas their unweighted mean is (4 + 6)/2 = 5. Use the ‘Weight Cases’ procedure in the ‘Data’ menu to make SPSS perform weighting. Information about how and why you may want to use weights when analysing ESS data can be found on NSD’s web pages (follow the link to the reference site). Weighting is usually used to correct skewness in a sample that is meant to represent a particular population. It can also be used to "blow up" sample data so that the analysis results are shown in figures that are in accordance with the size of the population.
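The weighted-mean arithmetic in the entry above is easy to check in code. The short sketch below uses numpy (not SPSS) to reproduce the 5.5 versus 5 example, with the case values and weights taken from the entry.

    import numpy as np

    values = np.array([4.0, 6.0])    # case A and case B
    weights = np.array([0.5, 1.5])   # their respective weights

    unweighted = values.mean()                       # (4 + 6) / 2 = 5.0
    weighted = np.average(values, weights=weights)   # (4*0.5 + 6*1.5) / (0.5 + 1.5) = 5.5

    print(unweighted, weighted)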
http://essedunet.nsd.uib.no/cms/glossary/index.html
The Achievements and Challenges of Mali

Learning objectives for students
This unit is intended to focus on some of those aspects of Malian life and history that are of great significance to understanding the people of Mali today and their situation. By using or adapting the core lessons and activities, your students will learn about the following:
II. Ancient Civilizations - Where have all the Kingdoms gone?
III. Slave Trades
IV. Indigenous Cultures
V. Environmental Issues - Desertification
VI. Economics - Trade
VII. Art and Architecture
Students will be encouraged to develop a critical stance toward information. They will learn to evaluate evidence, consider sources, and study a variety of differing viewpoints. The goal for these lessons in the Teacher Zone is for students to develop an understanding of the culture of Mali and to develop critical thinking skills such as analysis, synthesis and evaluation.

Lessons and activities for students
As students begin to use the Internet, they often encounter a wide variety of contradictory and confusing information. The purpose of this introductory lesson is to give students a tool that they can use to evaluate the credibility of different websites.
1. The first activity that the students will pursue is a search for information about the country of Mali. Students will work in pairs for this activity. The information may be obtained through the use of the Internet, books, or other available resources.
2. The student pairs will report back to the whole class on what they have learned, and discuss the sources of their information.
3. The class will be asked to develop a numerical rank-order rating for all the sources they have obtained. They will discuss how to determine the credibility of a source.
4. The class will construct a rubric (a rating table; see example) that will consist of their decisions on how to determine a source's credibility.
5. The class will use the following web sites as beginning sources for information:

Overview: The kingdoms of Ghana, Mali, and Songhay flourished from A.D. 500 to 1700. Trading of gold, salt and slaves brought untold wealth to the kingdoms. Mansa Musa, King of Mali, traveled in style to Egypt in 1324. A caravan of one hundred camels, each carrying three hundred pounds of gold, followed him on his journey. How did Mali go from being an extraordinarily wealthy kingdom to one of the poorest countries in the world today?

Procedures:
1. Begin the lesson by asking students for reasons why empires decline. Write students' responses on the board. (Examples: invading forces, internal disputes, war, drought, natural disasters, disease, overpopulation, economics.)
2. Ask students to predict what life might be like in the United States in the 25th century.
3. Have students research the following empires. (Students may work in groups, pairs or alone for this assignment.) Ask students to concentrate on how and why the empires flourished and disintegrated.
4. Students will write several paragraphs explaining the rise and fall of the different empires.
5. Tell the students that they are living in the 25th century. The United States has become one of the poorest countries in the world. What could have caused this? Have students write an article about the rise and fall of the United States of America from the perspective of a person living in the 25th century.
6. Students will share their articles with the class. Students will be evaluated on the quality of their written work.
Overview: Africans were not strangers to the slave trade, or to the keeping of slaves. Slavery had existed in West Africa for centuries. When Leo Africanus traveled to West Africa in the 1500's, he recorded that, "slaves are the next highest commodity in the marketplace. There is a place where they sell countless slaves on market days." Criminals and prisoners of war, as well as political prisoners, were often sold in the marketplaces in Gao, Jenne, and Timbuktu. African slaves were in Europe as early as the eleventh century. America's demand for laborers for the sugar, tobacco and cotton fields brought the first cargo of black slaves from West Africa to the Americas in the early 1500's. European, Arab and African merchants were now selling humans as well as gold, ivory, and spices. Al-Siddiq was born to a well-educated and influential family from Timbuktu. In the early 1800's he was captured and sold to an English slaver. In 1823 he was keeping records in Arabic for his owner's store in Jamaica. Encouraged by his owner's friend, Al-Siddiq was given his freedom. Al-Siddiq joined the Englishman John Davidson on an expedition to Timbuktu in 1836. On his return to the African continent, Al-Siddiq learned that one of his relatives had become the sheik of Timbuktu. The expedition was attacked and all lives were lost with the exception of Al-Siddiq.
1. Learn about the slave trade in West Africa on the following web sites:
2. Discuss what the students have learned about slavery in West Africa. Slavery has long been a part of human history, and the Trans-Atlantic slave trade was not the first instance of slavery. But what set the Middle Passage apart from the others? How did the Trans-Atlantic slave trade affect the relationship between Africa and the Americas? Does slavery still exist today? Where? Do you think this was fair? Think of similar instances where money and education create special treatment. Do you think Diallo, or Job as he was later called, changed his opinions about slavery after his return to Africa? Students will be evaluated on the quality of their written journal entry.

While researching his book Roots, Alex Haley traveled to West Africa, where he met with a griot (GREE-oh) who had memorized the story of the village of Juffure, where his ancestors had lived. This griot historian drew on vast stores of oral history to tell him his family's story. The griots of Mali are the storytellers, historians and entertainers of West Africa. The griots have preserved much of the history of their country by passing stories from one generation to another.

Procedure:
1. In this activity students will gather information about griots and the importance of oral traditions. The following web sites provide a place to start:
2. After students have gathered information, have the entire class share what they have learned about the griots and oral traditions.
3. Have students create an oral account of an event, person, place or time period in Mali's history. (This could take the form of a song. Storytelling is often a participatory experience; lines may be created to involve other class members.) Explain to students that they are not to write a single word for this assignment. The griots (in earlier years) did not have a formal written language. The following sites may provide background information for the creation of the oral stories:
4. Students will share their stories with the class.
5. Discuss how creating an oral story differs from creating a written story.
Students will be evaluated on the quality of their presentation.

The Tuareg, known as the lords of the desert, have wandered the Sahara Desert with their camels and cattle for centuries. Seen as a fierce people, they were noted for their bravery in war and for their raids on towns and caravans, and they even maintained their independence against French domination. The Tuareg played a critical role in the trans-Saharan trade, which helped the Ghana, Songhay and Mali empires flourish. Even today, Tuareg camels can still be seen carrying salt for hundreds of miles on their backs. However, two decades of drought and conflict with national governments have greatly influenced the Tuareg's lifestyle. Some Tuareg have chosen to settle in one place, while circumstances have forced others into a sedentary lifestyle.
3. Tell the students to take notes, as they gather information, on the details that they might want to include in their letter.
4. Students should be sure to include some of the following information in their letters (history, climate, diet, clothing, daily life). Students will be evaluated on the quality of their written letters.

Overview: In this lesson, the students will come to understand the issues surrounding desertification in Africa. The term desertification was first coined by the French scientist and explorer Louis Lavauden in 1927. Desertification is assumed to be caused by a complex relationship involving human impact on arid, semi-arid and dry sub-humid areas (drylands), but excluding hyper-arid deserts. Characterized by the degradation of soil and vegetative cover, desertification can occur in any dry area, not just on the fringes of natural deserts. According to the International Development Information Center:
1. The class will view the following websites to see images of desertification:
2. Students will be divided into four groups. Each group will visit a different website to answer the following questions: A) What is desertification?
3. The groups will compare their answers to each of these questions. The class will discuss differences and similarities found, and suggest reasons for them.
Assessment: The class will be evaluated on the quality of their participation in group activities.

Overview: In this lesson, the students will come to understand elements of trade and trade routes in different regions of Asia, Europe and Africa. This will be done by developing an appreciation of economic and cultural exchanges in today's world as well as in the past.
1. The class will brainstorm answers to the following questions: What are some examples of cultural exchange? Why are projects like the Odyssey important?
2. The class will be divided into four groups. Each will visit one of the following sites: Each group will be asked to analyze its site in terms of what the group has learned about trade in Africa.
3. The class will then complete the following assignment: Imagine that you are writing a letter to a young person living in an isolated village of the world today. What would you want them to know about the world as you see it? What do you perceive as having value in terms of economic and cultural exchange from your own society?

1. The teacher will share the following quote from Kim Naylor's Mali, which describes the great Sahelian kingdoms of some two thousand years ago:
2. The class will be asked to substitute the word "businesses" for "kingdoms" and "web sites" for "lands", so that the quote will read:
3. The class will be divided into small groups to discuss these quotes, focusing on the similarities and differences they represent in terms of trade, and on how trade is critical to cultural and economic exchange.

1. The class will be given two quotes. The first is from Kim Naylor's book Mali, which recounts an 18th-century Scottish explorer's description of salt: The second is from Philip Koslow's Mali: Crossroads of Africa:
2. The class will be given the following assignment:
3. The class will discuss what makes something valuable in terms of trade.
Assessment: The class will be evaluated on the quality of their presentations and participation in class discussions.

Overview: Students are often exposed to world cultures through a narrow lens. The purpose of this lesson is to broaden students' understanding of the diversity and richness of art and architecture in Mali. Students will also explore the concept of cultural heritage. They will visit the website of the Mali Interactive Project, which provides accounts of archaeological excavations and information on the people and culture of Jenné, one of the earliest known urban settlements south of the Sahara.
1. The students will work in pairs to explore different aspects of Malian art and architecture. The following sites are suggested as a beginning:
2. The students will explore the website of the Mali Interactive Project at:
3. The student pairs will create a report in the form of a news broadcast based on what they have found in their explorations of these sites. They will present this to the whole class.

1. The class will read the following excerpt from Philip Koslow's Mali: Crossroads of Africa: "The colonialist view of African history held sway until well into the 20th century. But as the century progressed, more enlightened scholars began to take a fresh look at Africa's past. As archaeologists (scientists who studied the physical remains of past societies) explored the sites of former African cities, they found that Africans had enjoyed a high level of civilization hundreds of years before the arrival of Europeans. In many respects the kingdoms and cities of Africa had been equal to or more advanced than European societies during the same time period."
2. The class will brainstorm reasons why this Eurocentric viewpoint appears to prevail.
3. The class will break into small groups and come up with five suggestions as to how we can increase awareness of the importance of all world cultures.
4. The class will break into small groups and visit the following website, which focuses on World Heritage. Each group will be asked to bring back to the class something that it has learned.
5. The class will discuss the concept of cultural heritage as it is defined on this website.
6. The class will be asked to write a letter to fourth grade students describing what cultural heritage is, and why it is important.
Assessment: Students will be evaluated on the quality of their written journal entry.
©1999 The Odyssey: World Trek for Service and Education. All rights reserved.
http://www.worldtrek.org/odyssey/teachers/malilessons.html
Following World War I, the Weimar Republic emerged from the German Revolution of November 1918. In 1919 a national assembly convened in the city of Weimar, where a new constitution for the German Reich was written and adopted on August 11. This attempt to re-establish Germany as a liberal democracy failed with the ascent of Adolf Hitler and the Nazi Party in 1933. Although technically the 1919 Weimar constitution was not invalidated until after World War II, the legal measures taken by the Nazi government in February and March 1933, commonly known as Gleichschaltung, destroyed the mechanisms of a true democracy. Therefore 1933 is usually seen as the end of the Weimar Republic and as the beginning of Hitler's so-called "Third Reich". The name Weimar Republic was never used officially during its existence. Despite its political form, the new republic was still known as Deutsches Reich in German. This phrase is commonly translated into English as German Empire, although the German word Reich has a broader range of connotations than the English Empire. As a compromise, the name is often half-translated as the German Reich in English. The common short form remained Germany.

The plan to transform Germany into a constitutional monarchy similar to Britain quickly became obsolete as the country slid into a state of near-total chaos. Germany was flooded with soldiers returning from the front, many of them wounded physically and psychologically. Violence was rampant, as the forces of the political right and left fought not only each other but among themselves. Rebellion broke out when, on October 29, the military command, without consulting the government, ordered the German High Seas Fleet to sortie. This was not only entirely hopeless from a military standpoint, but was also certain to bring the peace negotiations to a halt. The crews of two ships in Wilhelmshaven mutinied. When the military arrested about 1,000 seamen and had them transported to Kiel, the Wilhelmshaven mutiny turned into a general rebellion that quickly swept over most of Germany. Other seamen, soldiers and workers, in solidarity with those arrested, began electing worker and soldier councils (Arbeiter- und Soldatenräte) modelled after the soviets of the Russian Revolution of 1917, and seized military and civil powers in many cities. By November 7, the revolution had reached Munich, causing King Ludwig III of Bavaria to flee. In contrast to Russia one year earlier, the councils were not controlled by communists; most of their members were social democrats. Still, with the emergence of Soviet Russia, the rebellion caused great fear in the establishment, down to the middle classes. The country seemed to be on the verge of a communist revolution. At the time, the traditional political representation of the working class, the Social Democratic Party, was divided: a faction that called for immediate peace negotiations and leaned towards a socialist system had founded the Independent Social Democratic Party (USPD) in 1917. In order not to lose their influence, the remaining Majority Social Democrats (MSPD), who supported the war effort and a parliamentary system, decided to put themselves at the front of the movement and, on November 7, demanded that Kaiser Wilhelm II abdicate. When he refused, Prince Max of Baden simply announced that he had done so and frantically attempted to establish a regency under another member of the House of Hohenzollern.
On November 9, 1918, the German Republic was proclaimed by MSPD member Philipp Scheidemann at the Reichstag building in Berlin, to the fury of Friedrich Ebert, the leader of the MSPD, who still hoped to preserve the monarchy. Two hours later a Free Socialist Republic was proclaimed, 2 kilometers away, at the Berliner Stadtschloss. The proclamation was issued by Karl Liebknecht, co-leader (with Rosa Luxemburg) of the communist Spartacist League, which had allied itself with the USPD in 1917. On November 9, in a legally questionable act, Reichskanzler Prince Max of Baden transferred his powers to Friedrich Ebert, who, shattered by the monarchy's fall, reluctantly accepted. It was apparent, however, that this act would not be sufficient to satisfy Liebknecht and his followers, so a day later a coalition government called the "Council of People's Commissioners" (Rat der Volksbeauftragten) was established, consisting of three MSPD and three USPD members. Led by Ebert for the MSPD and Hugo Haase for the USPD, it was intended to act as a collective head of state. Although the new government was confirmed by the Berlin worker and soldier council, it was opposed by the Spartacist League. Ebert called for a National Congress of Councils, which took place from December 16 to December 20, 1918, and in which the MSPD had the majority. Ebert thus managed to enforce quick elections for a National Assembly to produce a constitution for a parliamentary system, marginalizing the movement that called for a socialist republic (see below).

On 11 November an armistice was signed at Compiègne by German representatives. It effectively ended military operations between the Allies and Germany. It amounted to German demilitarization, without any concessions by the Allies; the naval blockade would continue until complete peace terms were agreed. From November 1918 through January 1919, Germany was governed by the Council of People's Commissioners. It was extraordinarily active, and issued a large number of decrees. At the same time, its main activities were confined to certain spheres: the eight-hour workday, domestic labour reform, agricultural labour reform, the right of civil-service associations, local municipal social welfare relief (split between Reich and states), important national health insurance, the re-instatement of demobilised workers, protection from arbitrary dismissal with appeal as a right, regulated wage agreements, and universal suffrage from 20 years of age in all classes of election, local and national. Occasionally the name "Die Deutsche Sozialdemokratische Republik" (The German Social-Democratic Republic) appeared in leaflets and on posters from this era, although this was never the official name of the country.

This period also marked one of several steps that caused the permanent split of the working class's political representation into the SPD and the Communists. The eventual fate of the Weimar Republic derived significantly from the general political incapacity of the German labour movement. The several strands within the central mass of the socialist movement adhered more to sentimental loyalty to alliances arising from chance than to any recognition of political necessity. Combined action on the part of the socialists was impossible without action from the millions of workers who stood midway between the parliamentarians and the ultra-leftists who supported the workers' councils. Confusion made acute the danger that the extreme right and the extreme left would engage in virulent conflict.
The split became final after Ebert called upon the OHL for troops to put down another Berlin army mutiny on November 23, 1918, in which soldiers had captured the city's garrison commander and closed off the Reichskanzlei where the Council of People's Commissioners was situated. The ensuing street fighting was brutal, with several dead and injured on both sides. This caused the left wing to call for a split with the MSPD which, in their view, had joined with the anti-Communist military to suppress the revolution. The USPD thus left the Council of People's Commissioners after only seven weeks. In December, the split deepened when the Kommunistische Partei Deutschlands (KPD) was formed out of a number of radical left-wing groups, including the radical left wing of the USPD and the Spartacist League. In January, further armed attempts at establishing communism, known as the Spartacist uprising, by the Spartacist League and others in the streets of Berlin were put down by paramilitary Freikorps units consisting of volunteer soldiers. Bloody street fights culminated in the beating and shooting deaths of Rosa Luxemburg and Karl Liebknecht after their arrests on January 15. With Ebert's affirmation, those responsible were tried not before a civilian court but before a court martial, leading to lenient sentences, which made Ebert unpopular amongst radical leftists.

The National Assembly elections took place on January 19, 1919. At this time, the radical left-wing parties, including the USPD and KPD, were barely able to get themselves organized, leading to a solid majority of seats for the moderate MSPD forces. To avoid the ongoing fights in Berlin, the National Assembly convened in the city of Weimar, giving the future Republic its unofficial name. The Weimar Constitution created a republic under a semi-presidential system with the Reichstag elected by proportional representation. The socialist and (non-socialist) democratic parties obtained a solid 80 per cent of the vote. During the debates in Weimar, fighting continued. A Soviet republic was declared in Munich, but was quickly put down by Freikorps units and remnants of the regular army. The fall of the Munich Soviet Republic to these units, many of which were situated on the extreme right, resulted in the growth of far-right movements and organizations in Bavaria, including the Nazis, Organisation Consul, and societies of exiled Russian monarchists. Sporadic fighting continued to flare up around the country. In the eastern provinces, forces loyal to Germany's fallen monarchy fought the republic, while militias of Polish nationalists fought for independence: the Great Poland Uprising in Provinz Posen and three Silesian Uprisings in Upper Silesia.

The carefully thought-out social and political legislation introduced during the revolution was generally unappreciated by the German working classes. The two goals sought by the government, democratization and social protection of the working class, were never achieved. This has been attributed to a lack of pre-war political experience on the part of the Social Democrats. The government had little success in confronting the twin economic crises following the war. The permanent economic crisis was a result of lost pre-war industrial exports, the loss of supplies of raw materials and foodstuffs from Alsace-Lorraine, the Polish districts and the colonies, along with worsening debt balances and reparations payments. Military-industrial activity had almost ceased, although controlled demobilisation kept unemployment at around one million.
The fact that the Allies continued to blockade Germany until after the Treaty of Versailles did not help matters either. The Allies permitted only low import levels of goods that most Germans could not afford. After four years of war and famine, many German workers were exhausted, physically impaired and discouraged. Millions were disenchanted with capitalism and hoping for a new era. Meanwhile, the currency was steadily losing value. The German peace delegation in France signed the Treaty of Versailles, accepting mass reductions of the German military, unrealistically heavy war reparations payments, and the controversial "War Guilt Clause". Adolf Hitler later blamed the republic and its democracy for the oppressive terms of this treaty, though most current historians disregard the "stab-in-the-back" myth Hitler advocated for his own personal political gain.

The Republic was under great pressure from both left- and right-wing extremists. The radical left accused the ruling Social Democrats of having betrayed the ideals of the workers' movement by preventing a communist revolution. Right-wing extremists were opposed to any democratic system, preferring an authoritarian state like the 1871 Empire. To further undermine the Republic's credibility, the extremists of the right (especially certain members of the former officer corps) also blamed an alleged conspiracy of Socialists and Jews for Germany's defeat in World War I (see Dolchstoßlegende). For the next five years Germany's large cities suffered political violence between left-wing and right-wing groups, both of which committed violence and murder against innocent civilians and against each other, resulting in many deaths. The worst of the violence was between right-wing paramilitaries called the Freikorps and pro-Communist militias called the Red Guards, both of which admitted ex-soldiers into their ranks. The first challenge to the Weimar Republic came when a group of communists and anarchists took over the Bavarian government in Munich and declared the creation of the Bavarian Soviet Republic. The communist rebel state was put down one month later when Freikorps units were brought in to battle the leftist rebels.

The Kapp Putsch took place on March 13, 1920, involving a group of 5,000 Freikorps troops who gained control of Berlin and installed Wolfgang Kapp (a right-wing journalist) as chancellor. The national government fled to Stuttgart and called for a general strike. While Kapp's vacillating nature did not help matters, the strike crippled Germany's ravaged economy and the Kapp government collapsed after only four days, on March 17. Inspired by the general strikes, a communist uprising began in the Ruhr region when 50,000 people formed a "Red Army" and took control of the province. The regular army and the Freikorps ended the uprising on their own authority. Other communist rebellions were put down in March 1921 in Saxony and Hamburg.

In 1922, Germany signed a treaty - the Treaty of Rapallo - with Russia, and disarmament was brought to a halt. Under the Treaty of Versailles, Germany was limited to 100,000 soldiers with no conscription, naval forces of 15,000 men, 12 destroyers, 6 battleships and 6 cruisers, and no submarines or aircraft. The treaty with Russia was accompanied by secret cooperation: Germany was able to train military personnel on Russian soil, and Russia gained the benefits of German military technology.
This was against the Treaty of Versailles, but Russia had pulled out of World War I against the Germans because of the 1917 Russian Revolution and was looked down on by the League of Nations, so Germany seized the chance to make an ally.

By 1923, the Republic claimed it could no longer afford the reparations payments required by the Versailles treaty, and the government defaulted on some payments. In response, French and Belgian troops occupied the Ruhr region, Germany's most productive industrial region at the time, taking control of most mining and manufacturing companies in January 1923. Strikes were called, and passive resistance was encouraged. These strikes lasted eight months, further damaging the economy and increasing the expense of imports. The strike meant no goods were being produced, which infuriated the French, who began to kill and exile protestors in the region. Since striking workers were paid benefits by the state, much additional currency was printed, fueling a period of hyperinflation. The 1920s German inflation started when Germany had no goods with which to trade. The government printed money to deal with the crisis; this allowed Germany to pay war loans and reparations with worthless marks and helped formerly great industrialists to pay back their own loans. It also led to pay rises for workers and for businessmen who wanted to profit from it. Circulation of money rocketed, and soon the Germans discovered their money was worthless. The value of the Papiermark had declined from 4.2 per US dollar at the outbreak of World War I to 1 million per dollar by August 1923. This gave the Republic's opponents something else to criticise it for. On 15 November 1923, a new currency, the Rentenmark, was introduced at the rate of 1 trillion (1,000,000,000,000) Papiermark for 1 Rentenmark. At that time, 1 U.S. dollar was equal to 4.2 Rentenmark. Reparation payments resumed and the Ruhr was returned to German control; the Locarno Pact later confirmed the border between Germany, France and Belgium.

Further pressure from the right came in 1923 with the Beer Hall Putsch, also called the Munich Putsch, staged by Adolf Hitler in Munich. In 1920, the German Workers' Party had become the National Socialist German Workers' Party (NSDAP), nicknamed the Nazi Party, which would become a driving force in the collapse of Weimar. Hitler was named chairman of the party in July 1921. On November 8, 1923, the Kampfbund, in a pact with Erich Ludendorff, took over a meeting held by Bavarian prime minister Gustav von Kahr at a beer hall in Munich. Ludendorff and Hitler declared a new government, planning to take control of Munich the following day. The 3,000 rebels were thwarted by 100 policemen. Hitler was arrested and sentenced to five years in prison, the minimum sentence for the charge, and he served less than eight months, in a comfortable cell, before his release. Following the failure of the Beer Hall Putsch, his imprisonment and subsequent release, Hitler focused on legal methods of gaining power.

As chancellor, Stresemann had to restore law and order in certain towns in Germany such as Spandau and Küstrin, where the 'Black Reichswehr' (a section of the Freikorps) staged a mutiny. Saxony and Thuringia allowed KPD members into their governments, and a new nationalist leader in Bavaria called for Bavarian independence and told his army to disobey orders from Berlin. Stresemann persuaded Ebert to invoke Article 48 to resolve the situation and brought in troops to settle it.
However, the use of force against these political movements led the SPD (Social Democratic Party) to withdraw from his coalition, which finally brought his chancellorship to an end.

One of Stresemann's first moves was to issue a new currency, the Rentenmark, to halt the extreme hyperinflation crippling German society and the economy. It was successful because Stresemann refused to issue more currency, the cause of the inflationary spiral. In addition, the currency was based on land, which restored confidence in the economy. With this achieved, a permanent currency, the Reichsmark, was introduced in 1924. Hans Luther was appointed Finance Minister and helped balance the budget by dismissing 700,000 public employees. In 1924 the Dawes Plan was created, an agreement between American banks and the German government in which the American banks lent money to Germany to help it pay reparations. Other foreign-policy achievements were the evacuation of the Ruhr in 1925 and the 1926 Treaty of Berlin, which reinforced the 1922 Treaty of Rapallo and improved relations between the USSR and Germany. In 1926 Germany was also admitted to the League of Nations, which gave it good international standing and, after Stresemann's insistence on entering as a permanent member, the ability to veto legislation. Agreements were also made over Germany's western border, though nothing was fixed for the eastern borders. However, this progress was funded by overseas loans, increasing the nation's debts, while overall trade decreased and unemployment rose. Stresemann's reforms did not relieve the underlying weaknesses of Weimar but gave the appearance of a stable democracy.

The 1920s saw a massive cultural revival in Germany; it was arguably the country's most innovative period of cultural change. Innovative street theatre brought plays to the public, and the cabaret scene and promiscuity became very popular. Women were Americanised, wearing makeup and short hair, smoking, and breaking with tradition. Music was created with a practical purpose, such as Schoenberg's 'atonality', and a new type of architecture was taught at the 'Bauhaus' schools. Art reflected the new ideas of the time, with artists such as Grosz being fined for defaming the military and for blasphemy. There was a great deal of opposition to this Weimar culture shock, especially from conservatives. For instance, in 1930 Wilhelm Frick banned jazz performances and removed modern art from museums, and a new law was introduced to prevent teenagers from buying pulp fiction or pornography.

Despite the progress during these years, Stresemann was criticized by opponents for his policy of "fulfilment", or compliance with the terms of the Versailles Treaty, and by the German people after the occupation of the Ruhr, when he agreed to pay the reparations set by the treaty in order for the French troops to evacuate. In 1929, Stresemann's death marked the end of the "Golden Era" of the Weimar Republic. He died at the age of 51, three years after receiving the 1926 Nobel Peace Prize.

The last years of the Weimar Republic were marked by even more political instability than the previous years, and the administrations of Chancellors Brüning, Papen, Schleicher and Hitler (from 30 January to 23 March 1933) were all presidentially appointed dictatorships. This meant they used the President's power to rule without consulting the Reichstag (the German parliament).
On March 29, 1930, the finance expert Heinrich Brüning was appointed successor to Chancellor Müller by Reichspräsident Paul von Hindenburg after months of political lobbying by General Kurt von Schleicher on behalf of the military. The new government was expected to lead a political shift towards conservatism, based on the emergency powers granted to the Reichspräsident by the constitution, since it had no majority support in the Reichstag. After an unpopular bill to reform the Reich's finances was left unsupported by the Reichstag, Hindenburg established the bill as an emergency decree based on Article 48 of the constitution. On July 18, 1930, the bill was again invalidated by a slim majority in the Reichstag with the support of the SPD, KPD, the (then small) NSDAP and the DNVP. Immediately afterwards, Brüning submitted to the Reichstag the president's decree dissolving it. The Reichstag general elections on September 14, 1930 resulted in an enormous political shift: 18.3% of the vote went to the Nazis, five times their share in 1928. This increased legislative representation of the NSDAP had devastating consequences for the Republic. There was no longer a moderate majority in the Reichstag, even for a Great Coalition of moderate parties, and this encouraged the supporters of the Nazis to force their claim to power with increasing violence and terror. After 1930, the Republic slid more and more into a state of potential civil war.

From 1930 to 1932, Brüning attempted to reform the devastated state without a majority in Parliament, governing with the help of the President's emergency decrees. During that time, the Great Depression reached its low point. In line with the liberal economic theory that less public spending would spur economic growth, Brüning drastically cut state expenditure, including in the social sector. He expected and accepted that the economic crisis would, for a while, worsen before things improved. Among other measures, the Reich completely halted all public grants to the obligatory unemployment insurance (which had been introduced only in 1927), resulting in higher contributions by the workers and fewer benefits for the unemployed - an understandably unpopular move on his part. The economic downturn lasted until the second half of 1932, when there were the first indications of a rebound. By this time, though, the Weimar Republic had lost all credibility with the majority of Germans. While scholars greatly disagree about how Brüning's policy should be evaluated, it can safely be said that it contributed to the decline of the Republic. Whether there were alternatives at the time remains the subject of much debate.

The bulk of German capitalists and landowners originally supported the conservative experiment, not from any personal liking for Brüning, but believing the conservatives would best serve their interests. As the mass of the working class and also of the middle classes turned against Brüning, however, more of the great capitalists and landowners declared themselves in favour of his opponents, Hitler and Hugenberg. By late 1931 conservatism as a movement was dead, and the time was coming when Hindenburg and the Reichswehr would drop Brüning and come to terms with Hugenberg and Hitler. Hindenburg himself was no less a supporter of the anti-democratic counter-revolution represented by Hugenberg and Hitler. On May 30, 1932, Brüning resigned after no longer having Hindenburg's support.
Five weeks earlier, Hindenburg had been re-elected Reichspräsident with Brüning's active support, running against Hitler (the president was directly elected by the people, while the Reichskanzler was not). Von Papen was closely associated with the industrialist and land-owning classes and pursued an extreme conservative policy along Hindenburg's lines. He appointed Kurt von Schleicher as Reichswehr Minister, and all of the members of the new cabinet were of the same political opinion as Hindenburg. This government was expected to assure itself of the co-operation of Hitler. Since the Republicans and Socialists were not yet ready to take action and the Conservatives had shot their political bolt, Hitler and Hindenburg were certain to achieve power.

The elections of July 1932 raised the question of what part the now immense Nazi Party would play in the government of the country. The Nazi Party owed its huge increase to an influx of workers, the unemployed, despairing peasants, and middle-class people. The millions of radical adherents at first forced the Party towards the left. They wanted a renewed Germany and a new organisation of German society. The left of the Nazi Party strove desperately against any drift into the train of such capitalist and feudal reactionaries. Therefore Hitler refused a ministry under Papen and demanded the chancellorship for himself, but was rejected by Hindenburg on August 13, 1932. There was still no majority in the Reichstag for any government; as a result, the Reichstag was dissolved and elections took place once more in the hope that a stable majority would result.

In this brief presidential-dictatorship entr'acte, Schleicher took the role of 'Socialist General' and entered into relations with the Christian trade unions, the left Nazis, and even with the Social Democrats. Schleicher's plan was for a sort of labour government under his generalship. It was an utterly unworkable idea, as the Reichswehr officers were hardly prepared to follow Schleicher on this path, and the working class had a natural distrust of their future allies. Equally, Schleicher aroused hatred amongst the great capitalists and landowners by these plans. The SPD and KPD could have achieved success building on a Berlin transport strike.

Hitler learned from von Papen that the general had no authority to abolish the Reichstag parliament, whereas any majority of seats did. The cabinet (under a previous interpretation of Article 48) ruled without a sitting Reichstag, which could vote only for its own dissolution. Hitler also learned that all past crippling Nazi debts were to be relieved by German big business. On January 22, Hitler's efforts to persuade Oskar von Hindenburg (the President's son) included threats to bring criminal charges over estate-taxation irregularities at the President's Neudeck estate (although 5,000 extra acres were soon allotted to Hindenburg's property). Outmaneuvered by von Papen and Hitler on plans for the new cabinet, and having lost Hindenburg's confidence, Schleicher asked for new elections. On January 28 von Papen described Hitler to Paul von Hindenburg as only a minority part of an alternative, von Papen-arranged government. The four great political movements, the SPD, KPD, Centre, and the Nazis, were in opposition. On January 29 Hitler and von Papen thwarted a last-minute threat of an officially sanctioned Reichswehr takeover, and on January 30, 1933 Hindenburg accepted the new Papen-Nationalist-Hitler coalition, with the Nazis holding only three of eleven Cabinet seats.
Later that day, the first cabinet meeting was attended by only two political parties, representing a minority in the Reichstag: the Nazis and the DNVP led by Alfred Hugenberg (196 + 52 seats). Eyeing the Catholic Centre Party's 70 (+ 20 BVP) seats, Hitler refused their leader's demands for constitutional "concessions" (amounting to protection) and planned for the dissolution of the Reichstag. Hindenburg, despite his misgivings about the Nazis' goals and about Hitler as a person, reluctantly agreed to Papen's theory that, with Nazi popular support on the wane, Hitler could now be controlled as chancellor. The date, dubbed Machtergreifung (seizure of power) by Nazi propaganda, is commonly seen as the beginning of Nazi Germany.

On 15 March a further cabinet meeting was attended by the two coalition parties. According to the Nuremberg Trials, this meeting's first order of business was how at last to achieve the complete counter-revolution by means of the constitutionally allowed Enabling Act, which required a two-thirds parliamentary majority. This Act would, and did, bring Hitler and the NSDAP unfettered dictatorial powers. At the last internal Centre meeting prior to the debate on the Enabling Act, Kaas expressed no preference or suggestion on the vote, but as a way of mollifying opposition by Centre members to the granting of further powers to Hitler, Kaas somehow arranged for a letter of constitutional guarantee from Hitler himself prior to his voting with the Centre en bloc in favour of the Enabling Act. This guarantee was ultimately not given. Kaas, the party's chairman since 1928, had strong connections to the Vatican Secretary of State, later Pope Pius XII. In return for pledging his support for the Act, Kaas would use his connections with the Vatican to set in train and draft the Holy See's long-desired Reichskonkordat with Germany (only possible with the co-operation of the Nazis).

In the debate prior to the vote on the Enabling Act, Hitler orchestrated the full political menace of his paramilitary forces, like the storm troopers in the streets, to intimidate reluctant Reichstag deputies into approving it. The Communists' 81 seats had been empty since the Reichstag Fire Decree and other lesser-known procedural measures, thus excluding their anticipated "No" votes from the balloting. Otto Wels, the leader of the Social Democrats, whose seats were similarly depleted from 120 to below 100, was the only speaker to defend democracy; in a futile but brave effort to deny Hitler the two-thirds majority, he made a speech critical of the abandonment of democracy to dictatorship. At this Hitler could no longer restrain his wrath. In his retort to Wels, Hitler abandoned his earlier pretence at calm statesmanship and delivered a characteristic screaming diatribe, promising to exterminate all Communists in Germany and threatening Wels' Social Democrats as well. Meanwhile, Kaas was assured that Hitler's promised written guarantee was being typed up, and on that assurance he was persuaded to silently deliver the Centre bloc's votes for the Enabling Act anyway.

The NSDAP movement had rapidly passed beyond the power of the majority Nationalist ministers to control it. Unchecked by the police, the SA indulged in acts of terrorism throughout Germany. Communists, Social Democrats, and the Centre were ousted from public life everywhere.
The violent persecution of Jews began, and by the summer of 1933 the NSDAP felt itself so invincible that it did away with all the other parties, as well as the trade unions. The Nationalist Party was among those suppressed. The NSDAP ruled alone in Germany. The Reichswehr had, however, remained completely untouched by all these occurrences. It was still the same state within a state that it had been in the Weimar Republic. Similarly, the private property of wealthy industrialists and landowners was untouched, while the administrative and judicial machinery was only very slightly tampered with.

No single reason can explain the failure of the Weimar Republic. The most commonly asserted causes can be grouped into three categories: economic problems, institutional problems and the roles of specific individuals. The Weimar Republic was severely affected by the Great Depression triggered by the Wall Street Crash of 1929. The crash and subsequent economic stagnation led to increased demands on Germany to repay the debts owed to the United States. As the Weimar Republic was very fragile throughout its existence, the depression proved devastating and played a major role in the NSDAP's takeover. The Treaty of Versailles was considered by most Germans to be a punishing and degrading document because it forced them to surrender resource-rich areas and pay massive amounts of compensation. These punitive reparations caused consternation and resentment, although the actual economic damage resulting from the Treaty of Versailles is difficult to determine. While the official reparations were considerable, Germany ended up paying only a fraction of them. However, the reparations did damage Germany's economy by discouraging market loans, which forced the Weimar government to finance its deficit by printing more money, causing rampant hyperinflation. In addition, the rapid disintegration of Germany in 1919, due to the return of a disillusioned army, the rapid change from possible victory in 1918 to defeat in 1919, and the political chaos, may have left a psychological imprint on Germans that could feed the extreme nationalism later exploited by Hitler.

Most historians agree that many industrial leaders identified the Weimar Republic with labour unions and with the Social Democrats, who had established the Versailles concessions of 1918/1919. Although some did see Hitler as a means to abolish the latter, the Republic was already unstable before any industry leaders were supporting Hitler. Even those who supported Hitler's appointment often did not want Nazism in its entirety and considered Hitler a temporary solution in their efforts to abolish the Republic. Industry support alone cannot explain Hitler's enthusiastic support by large segments of the population, including many workers who had turned away from the left. Brüning's economic policy from 1930 to 1932 has been the subject of much debate. It caused many Germans to identify the Republic with cuts in social spending and extremely liberal economics. Whether there were alternatives to this policy during the Great Depression is an open question.

Paul von Hindenburg became Reichspräsident in 1925. He represented the older authoritarian 1871 Empire, and it is hard to label him a democrat in support of the 1919 Republic, but he was never a Nazi. During his later years (at well over 80 years old), he was also senile.
A president with solid democratic beliefs might not have allowed the Reichstag to be circumvented with the use of Article 48 decrees and might have avoided signing the Reichstag Fire Decree. Hindenburg waited one and a half days before he appointed Hitler as Reichskanzler on January 30, 1933, which indicates some hesitance. Some claim Nazism would have lost much public support if Hitler had not been named chancellor.

The Republic's constituent states were gradually abolished de facto under the Nazi regime via the Gleichschaltung process, as the states were largely re-organised into Gaue. However, the city-state of Lübeck was formally incorporated into Prussia in 1937 following the Greater Hamburg Act, apparently motivated by Hitler's personal dislike for the city. Most of the remaining states were formally dissolved by the Allies at the end of World War II and ultimately re-organised into the modern states of Germany.
http://www.reference.com/browse/peace+talk
Germanic Tribes were not the first peoples to occupy the eastern Alpine-Danubian region, but the history and culture of these tribes, especially the Bavarians and Swabians, are the foundation of Austria's modern identity. Austria thus shares in the broader history and culture of the Germanic peoples of Europe. The territories that constitute modern Austria were, for most of their history, constituent parts of the German nation and were linked to one another only insofar as they were all feudal possessions of one of the leading dynasties in Europe, the Habsburgs. Surrounded by German, Hungarian, Slavic, Italian, and Turkish nations, the German lands of the Habsburgs became the core of their empire, reaching across German national and cultural borders. This multicultural empire was held together by the Habsburgs' dynastic claims and by the cultural and religious values of the Roman Catholic Counter-Reformation that the Habsburgs cultivated to provide a unifying identity to the region. But this cultural-religious identity was ultimately unable to compete with the rising importance of nationalism in European politics, and the nineteenth century saw growing ethnic conflict within the Habsburg Empire. The German population of the Habsburg Empire directed its nationalist aspirations toward the German nation, over which the Habsburgs had long enjoyed titular leadership. Prussia's successful bid for power in Germany in the nineteenth century--culminating in the formation in 1871 of a German empire under Prussian leadership that excluded the Habsburgs' German lands--was thus a severe political shock to the German population of the Habsburg Empire. When the Habsburg Empire collapsed in 1918 at the end of World War I, its territories that were dominated by non-German ethnic groups established their own independent nation-states. The German-speaking lands of the empire sought to become part of the new German republic, but European fears of an enlarged Germany forced them to form an independent Austrian state. The new country's economic weakness and lack of national consciousness contributed to political instability and polarization throughout the 1920s and 1930s and facilitated the annexation (Anschluss) of Austria by Nazi Germany in 1938. The Austro-Hungarian Empire played a decisive role in central European history. It occupied strategic territory containing the southeastern routes to Western Europe and the north-south routes between Germany and Italy. Although present-day Austria is only a tiny remnant of the old empire, it retains this unique position. Soon after the Republic of Austria was created at the end of World War I, it faced the strains of catastrophic inflation and of redesigning a government meant to rule a great empire into one that would govern only 6 million citizens. In the early 1930s, worldwide depression and unemployment added to these strains and shattered traditional Austrian society. Resultant economic and political conditions led in 1933 to a dictatorship under Engelbert Dollfuss. In February 1934, civil war broke out, and the Socialist Party was outlawed. In July, a coup d'etat by the National Socialists failed, but Nazis assassinated Dollfuss. In March 1938, Austria was incorporated into the German Reich, a development commonly known as the "Anschluss" (annexation). At the Moscow conference in 1943, the Allies declared their intention to liberate Austria and reconstitute it as a free and independent state. 
In April 1945, both Eastern- and Western-front Allied forces liberated the country. Subsequently, Austria was divided into zones of occupation similar to those in Germany. Under the 1945 Potsdam agreements, the Soviets took control of German assets in their zone of occupation. These included 7% of Austria's manufacturing plants, 95% of its oil resources, and about 80% of its refinery capacity. The properties were returned to Austria under the Austrian State Treaty. This treaty, signed in Vienna on May 15, 1955, came into effect on July 27, and, under its provisions, all occupation forces were withdrawn by October 25, 1955. Austria became free and independent for the first time since 1938. The 1955 Austrian State Treaty ended the four-power occupation and recognized Austria as an independent and sovereign state. In October 1955, the Federal Assembly passed a constitutional law in which "Austria declares of her own free will her perpetual neutrality." The second section of this law stated, "In all future times Austria will not join any military alliances and will not permit the establishment of any foreign military bases on her territory." Since then, Austria has shaped its foreign policy on the basis of neutrality. In recent years, however, Austria has begun to reassess its definition of neutrality, granting overflight rights for the UN-sanctioned action against Iraq in 1991 and, since 1995, contemplating participation in the EU's evolving security structure. Also in 1995, it joined the Partnership for Peace, and subsequently participated in peacekeeping missions in Bosnia. Discussion of possible Austrian NATO membership intensified during 1996. The OVP and FPO aim at moving closer to NATO or a European defense arrangement. The SPO, in turn, maintains that continued neutrality is the cornerstone of Austria's foreign policy, and a majority of the population generally supports this stance.

Austria has a well-developed social market economy with a high standard of living in which the government has played an important role. Many of the country's largest firms were nationalized in the early post-war period to protect them from Soviet takeover as war reparations. For many years, the government and its conglomerate of state-owned industries played a very important role in the Austrian economy. However, starting in the early 1990s, the group was broken apart, state-owned firms started to operate largely as private businesses, and a great number of these firms were wholly or partially privatized. Although the government's privatization work in past years has been very successful, it still operates some firms, state monopolies, utilities, and services. The new government has presented an ambitious privatization program, which, if implemented, would considerably reduce government participation in the economy. Austria enjoys well-developed industry, banking, transportation, services, and commercial facilities. Although some industries, such as several iron and steel works and chemical plants, are large industrial enterprises employing thousands of people, most industrial and commercial enterprises in Austria are relatively small on an international scale.

Austria has a strong labor movement. The Austrian Trade Union Federation (OGB) comprises constituent unions with a total membership of about 1.5 million--more than half the country's wage and salary earners.
Since 1945, the OGB has pursued a moderate, consensus-oriented wage policy, cooperating with industry, agriculture, and the government on a broad range of social and economic issues in what is known as Austria's "social partnership." The OGB has announced tough opposition against the new government's program for budget consolidation, social reform, and improving the business climate, and indications are rising that Austria's peaceful social climate could become more confrontational. Austrian farms, like those of other west European mountainous countries, are small and fragmented, and production is relatively expensive. Since Austria's becoming a member of the EU in 1995, the Austrian agricultural sector has been undergoing substantial reform under the EU's common agricultural policy (CAP). Although Austrian farmers provide about 80% of domestic food requirements, the agricultural contribution to gross domestic product (GDP) has declined since 1950 to about 2%. Austria has achieved sustained economic growth. During the second half of the 1970s, the annual average growth rate was 3% in real terms, though it averaged only about 1.5% through the first half of the 1980s before rebounding to an average of 3.2% in the second half of the 1980s. At 2%, growth was weaker again in the first half of the 1990s, but averaged 2.5% again in the period 1997 to 2001. After real GDP growth of only 0.7% in 2002, the economy is predicted to grow 1.7% in 2003, 2.3% in 2004, 2.5% in 2005, and 2.3% in 2006, for an average rate of 1.9% in the period 2002 to 2006. Austria became a member of the EU on January 1, 1995. Membership brought economic benefits and challenges and has drawn an influx of foreign investors attracted by Austria's access to the single European market. Austria also has made progress in generally increasing its international competitiveness. As a member of the Economic and Monetary Union (EMU), Austria's economy is closely integrated with other EU member countries, especially with Germany. On January 1, 1999, Austria introduced the new Euro currency for accounting purposes. In January 2002, Euro notes and coins were introduced and substituted for the Austrian schilling. Economists agree that the economic effects in Austria of using a common currency with the rest of the members of the Euro-zone have been positive. Trade with other EU countries accounts for about 63% of Austrian imports and exports. Expanding trade and investment in the emerging markets of central and eastern Europe is a major element of Austrian economic activity. Trade with these countries accounts for almost 15% of Austrian imports and exports, and Austrian firms have sizable investments in and continue to move labor-intensive, low-tech production to these countries. Although the big investment boom has waned, Austria still has the potential to attract EU firms seeking convenient access to these developing markets. Total trade with the United States in 2001 reached $7.7 billion. Imports from the United States amounted to $4.0 billion, constituting a U.S. market share in Austria of 5.3%. Austrian exports to the United States in 2001 were $3.7 billion or 5.3% of total Austrian exports. During the Roman Catholic Counter-Reformation of the sixteenth and seventeenth centuries, the Habsburgs were the leading political representatives of Roman Catholicism in its conflict with the Protestantism of the Protestant Reformation in Central Europe, and ever since then, Austria has been a predominantly Roman Catholic country. 
Because of its multinational heritage, however, the Habsburg Empire was religiously heterodox and included the ancestors of many of Austria's contemporary smaller denominational groups. The empire's tradition of religious tolerance derived from the enlightened absolutism of the late eighteenth century. Religious freedom was later anchored in Austria-Hungary's constitution of 1867. After the eighteenth century, twelve religious communities came to be officially recognized by the state in Austria: Roman Catholic; Protestant (Lutheran and Calvin); Greek, Serbian, Romanian, Russian, and Bulgarian Orthodox; Jewish; Muslim; Old Catholic; and, more recently, Methodist and Mormon. The presence of other communities within the empire did not prevent the relationship between the Austrian imperial state and the Roman Catholic Church--or the "throne and the altar"--from being particularly close before 1918. Because of this closeness, the representatives of secular ideologies--liberals and socialists--sought to reduce the influence of the Roman Catholic Church in such public areas as education. The Constitution provides for freedom of religion, and the Government generally respects this right in practice. Religious organizations may be divided into three different legal categories (listed in descending order of status): Officially recognized religious societies, religious confessional communities, and associations. Religious recognition under the law has wide-ranging implications, such as the authority to participate in the mandatory church contributions programs, which can be legally enforced; to engage in religious education; and to bring in religious workers to act as ministers, missionaries, or teachers. Under the law, religious societies have "public corporation" status. This status permits religious societies to engage in a number of public or quasi-public activities that are denied to other religious organizations. The Roman Catholic Church is the predominant church in the country. Approximately 78 percent of the population belonged to this church. Small Lutheran minorities are located mainly in Vienna, Carinthia, and Burgenland. The law also allows nonrecognized religious groups to seek official status as confessional communities without the fiscal and educational privileges available to recognized religions. Confessional communities must have at least 300 members, and once they are recognized officially as such by the Government, they have juridical standing, which permits them to engage in such activities as purchasing real estate in their own names and contracting for goods and services. Early criminal codes merely listed crimes--their definitions were considered self-evident or unnecessary--and provided for the extreme punishments characteristic of the Middle Ages. The codes did not presume to list all possible crimes, and a judge was authorized to determine the criminality of other acts and to fix sentences at his discretion. The first unified crime code was enacted in 1768, during the reign of Empress Maria Theresa. Investigation, prosecution, and defense were all in the hands of a judge. The code contained illustrated directions for the application of "painful interrogation," that is, torture, if the judge entertained suspicions regarding a defendant. Torture was outlawed a few years later, however. The Josephine Code of 1787, enacted by Joseph II, declared that there was "no crime without a law," thus, an act not defined as a crime was not a crime. 
Although it was a humanitarian document, the code had shortcomings that were remedied to a considerable extent by the codes of 1803 and 1852. A modern code of criminal procedure adopted in 1873 provided that ordinary court proceedings had to be oral and open. Capital punishment, which was prohibited for a time after 1783, was reinstituted and remained a possible punishment until 1950. Imprisonment in chains and corporal punishment were abolished in the mid-1800s. The Austrian criminal code and code of criminal procedure were riddled with Nazi amendments between 1938 and 1945 after the Anschluss, but each code was restored to its 1938 status when the country regained independence. Revisions of the criminal code in the mid-1960s, based on ten years of work by a legal commission, give strong emphasis to the principle of government by law and allow unusual latitude in determining appropriate punishment and its implementation. Austria attempts to distinguish among lawbreakers whose crimes are committed on impulse, those who are susceptible to rehabilitation, and those who are addicted to crime and are incorrigible. Further reforms of the criminal code in 1974 emphasized the importance of avoiding jail sentences whenever possible because of the potentially antisocial effects of even a short prison term. Vagrancy, begging, and prostitution are specifically decriminalized. In large communities, prostitution is regulated by health authorities, and prostitutes and brothels are registered. Individual local jurisdictions retain the authority to prohibit prostitution, however. Provisions in the 1974 law modified the punishment for business theft and shoplifting and restricted the definitions of riotous assembly and insurrection.

INCIDENCE OF CRIME

The crime rate in Austria is low compared to other industrialized countries. An analysis was done using INTERPOL data for Austria. For purposes of comparison, data were drawn for the seven offenses used to compute the United States FBI's index of crime. Index offenses include murder, forcible rape, robbery, aggravated assault, burglary, larceny, and motor vehicle theft. The combined total of these offenses constitutes the Index used for trend calculation purposes. Austria will be compared with Japan (a country with a low crime rate) and the USA (a country with a high crime rate). According to the INTERPOL data, for murder, the rate in 2001 was 1.96 per 100,000 population for Austria, 1.10 for Japan, and 5.61 for the USA. For rape, the rate in 2001 was 7.12 for Austria, compared with 1.78 for Japan and 31.77 for the USA. For robbery, the rate in 2001 was 85.84 for Austria, 4.08 for Japan, and 148.50 for the USA. For aggravated assault, the rate in 2001 was 2.59 for Austria, 23.78 for Japan, and 318.55 for the USA. For burglary, the rate in 2001 was 1,035.60 for Austria, 233.60 for Japan, and 740.80 for the USA. The rate of larceny for 2001 was 1,944.50 for Austria, 1,401.26 for Japan, and 2,484.64 for the USA. The rate for motor vehicle theft in 2001 was 48.71 for Austria, compared with 44.28 for Japan and 430.64 for the USA. The rate for all index offenses combined was 3,126.32 for Austria, compared with 1,709.88 for Japan and 4,160.51 for the USA. (Note that the Japan data are for the year 2000.)

TRENDS IN CRIME

Between 1995 and 2001, according to INTERPOL data, the rate of murder decreased from 2.19 to 1.96 per 100,000 population, a decrease of 10.5%. The rate for rape increased from 6.40 to 7.12, an increase of 11.2%. The rate of robbery increased from 50.72 to 85.84, an increase of 69.2%.
The rate for aggravated assault decreased from 2.60 to 2.59, a decrease of 0.4%. The rate for burglary decreased from 1,067.47 to 1,035.60, a decrease of 3.0%. The rate of larceny increased from 1,494.37 to 1,944.50, an increase of 30.1%. The rate of motor vehicle theft increased from 27.70 to 48.71, an increase of 75.8%. The rate of total index offenses increased from 2,651.45 to 3,126.32, an increase of 17.9%.

The earliest urban police force was Vienna's City Guard of 1569, consisting of 150 men. By the beginning of the Thirty Years' War (1618-48), the City Guard consisted of 1,000 men organized as a regiment, individual companies of which took part in military campaigns. The soldiers of the guard were subject to the authority of the Imperial War Council, and the city was required to pay for their services. In 1646 the city set up its own Public Order Watch; serious frictions between the two bodies resulted in their replacement by a new service under a commissioner of police in 1776. Its personnel were still made up of soldiers, either volunteers or assigned, but they failed to meet the city's needs because of a lack of training and continuity of service. Police functions were organized in a similar form in other large cities of the empire. It was not until a series of reforms between 1850 and 1869 that military influence over the police force was finally ended with the introduction of an independent command structure, a permanent corps of police professionals, training of officers in police skills, and distinctive uniforms and symbols of rank. The Gendarmerie was created by Emperor Franz Joseph I in 1850 after the disorder and looting that accompanied the uprising of 1848. Initially composed of eighteen regiments and forming part of the army, the Gendarmerie had its operational command transferred to the Ministry for Interior in 1860 and was wholly severed from the armed forces in 1867. Nevertheless, training, uniforms, ranks, and even pay remained patterned after the army. A special Alpine branch was formed in 1906, mainly to protect the part of Tirol that bordered Italy. Alpine rescue operations and border patrols have remained an important Gendarmerie function.

As of 1993, the more important law enforcement and security agencies were organized under the General Directorate for Public Security of the federal Ministry for Interior. The directorate is divided into five units: the Federal Police; the Gendarmerie central command; the State Police (secret service); the Criminal Investigation Service; and the Administrative Police. Security directorates in each of the nine provinces are also under the supervision of the General Directorate for Public Security. Each of these is organized into a headquarters division, a state police division, a criminal investigation division, and an administrative police division. Contingents of the Federal Police (Bundespolizei) are stationed in Vienna and thirteen of the larger cities. As of 1990, approximately one-third of the population of Austria lived in areas receiving Federal Police protection. The Gendarmerie accounts for nearly all of the remaining areas. A few small Austrian localities still have their own police forces separate from the Federal Police or the Gendarmerie.
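Returning to the INTERPOL statistics quoted above, both the combined index and the 1995-2001 trend percentages follow directly from the per-offense rates. The short Python sketch below is purely illustrative; it uses only the per-100,000 figures quoted in the text and recomputes the 2001 Austrian index total and the percentage changes.

```python
# Recompute Austria's 2001 crime index and the 1995-2001 percentage changes
# from the per-100,000 rates quoted in the text (INTERPOL data).
rates_1995 = {"murder": 2.19, "rape": 6.40, "robbery": 50.72,
              "aggravated assault": 2.60, "burglary": 1067.47,
              "larceny": 1494.37, "motor vehicle theft": 27.70}
rates_2001 = {"murder": 1.96, "rape": 7.12, "robbery": 85.84,
              "aggravated assault": 2.59, "burglary": 1035.60,
              "larceny": 1944.50, "motor vehicle theft": 48.71}

# The combined index is simply the sum of the seven offense rates.
print(f"2001 index total: {sum(rates_2001.values()):.2f}")  # 3126.32

# Percentage change in each offense rate between 1995 and 2001.
for offense, rate_1995 in rates_1995.items():
    change = (rates_2001[offense] - rate_1995) / rate_1995 * 100
    print(f"{offense}: {change:+.1f}%")
```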
The Federal Police are responsible for maintaining peace, order, and security; controlling weapons and explosives; protecting constitutional rights of free expression and assembly; controlling traffic; enforcing environmental and commercial regulations; enforcing building safety and fire prevention rules; policing public events; and preventing crime. A mobile commando group is organized in each city directorate, in addition to a four-platoon "alarm group" in Vienna and a special force to maintain security at the international airport. In early 1992, it was announced that 150 officials would be assigned to special units reporting directly to the Ministry for Interior to fight organized crime. As of 1990, the Federal Police had a personnel complement of 10,000 in the regular uniformed service (Sicherheitswache--Security Watch) and 2,400 plainclothes police in the Criminal Investigation Service. Federal Police contingents are armed with Glock 17 9mm pistols and truncheons. These can be supplemented with the standard army weapon, the Steyr 5.56mm automatic rifle, as well as various kinds of riot-control equipment. A separate women's police corps serves in the cities, principally to oversee school crossings and to assist with traffic control. As of 1990, about twenty-four women served in the Gendarmerie and sixty-six in the Federal Police, mostly to deal with cases involving women, youth, and children. The secret service branch of the Federal Police, the State Police (Staatspolizei; commonly known as Stapo), specializes in counterterrorism and counterintelligence. It also pursues right-wing extremism, drug trafficking, illicit arms dealing, and illegal technology transfers. It performs security investigations for other government agencies and is responsible for measures to protect national leaders and prominent visiting officials. Members of the State Police are chosen from volunteers who have served for at least three years in one of the other security agencies.

Numbering 11,600 in 1990, the Gendarmerie has responsibilities similar to those of the Federal Police but operates in rural areas and in towns without a contingent of Federal Police or local police. There is one member of the Gendarmerie for every 397 inhabitants in the areas subject to its jurisdiction; there is one member of the Federal Police for every 316 residents in the cities it patrols. The Gendarmerie is organized into eight provincial commands (every province except Vienna), ninety district commands, and 1,077 posts. A post can have from as few as three to as many as thirty gendarmes; most have fewer than ten. The provincial headquarters is composed of a staff department, criminal investigation department, training department, and area departments comprising two or three district commands. Basic Gendarmerie training is the responsibility of the individual provincial commands, each of which has a school for new recruits. Leadership and specialized courses are given at the central Gendarmerie school in Mödling near Vienna. The basic course for NCOs is one year; that for Gendarmerie officers lasts two years. The Gendarmerie has its own commando unit, nicknamed Kobra, as do the separate provincial commands employing gendarmes with previous experience in Kobra. Alpine posts and high Alpine posts are served by 750 Gendarmerie Alpinists and guides. In 1988 more than 1,300 rescue missions were conducted, many with the aid of Agusta-Bell helicopters in the Gendarmerie inventory.
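As a rough consistency check on the staffing figures quoted above, the two coverage ratios imply the approximate populations served by each force. The sketch below is illustrative only; the strengths and ratios are the 1990 figures from the text, while the comparison population of about 7.7 million is a general-knowledge figure for Austria around 1990, not one taken from the text.

```python
# Rough consistency check of the 1990 police coverage figures quoted above.
gendarmerie_strength = 11_600       # Gendarmerie members, 1990
federal_police_uniformed = 10_000   # uniformed Federal Police (Sicherheitswache), 1990

inhabitants_per_gendarme = 397      # one gendarme per 397 inhabitants in its areas
residents_per_officer = 316         # one Federal Police member per 316 residents in its cities

gendarmerie_population = gendarmerie_strength * inhabitants_per_gendarme      # ~4.6 million
federal_police_population = federal_police_uniformed * residents_per_officer  # ~3.2 million

# The two implied populations sum to roughly 7.8 million, close to Austria's
# population around 1990 (~7.7 million, an assumed outside figure).
total = gendarmerie_population + federal_police_population
print(f"Implied population covered by the two forces: ~{total / 1e6:.1f} million")
```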
Members of the Gendarmerie are armed with 9mm Browning-type semiautomatic pistols. They also have available American M-1 carbines and Uzi machine pistols. The Administrative Police, in addition to maintaining the bulk of routine police records and statistics, work on import-export violations, illegal shipments of such items as firearms and pornographic materials, and alien and refugee affairs. Customs officials are ordinarily in uniform; other Administrative Police dress according to the needs of their assignments. The late 1980s witnessed a growing incidence of complaints alleging police misconduct and unnecessary use of force. The minister for interior reported that there had been 2,622 allegations of ill-treatment by the police between 1984 and 1989, of which 1,142 resulted in criminal complaints leading to thirty-three convictions against police officers. In addition, 120 disciplinary investigations were carried out, and disciplinary measures were taken against twenty-six police officers. However, victims of police misbehavior were liable to be deterred from pressing their complaints because of the risk of being charged with slander by the accused officers. A new police law that went into effect in May 1993 stipulates more clearly the limitations on police conduct and imposes restrictions on holding persons on charges of aggressive behavior without an appearance before a magistrate. In addition, leaflets are to be given to detained or arrested persons setting out their rights, including the right to call a lawyer and to have their own doctors if medical examinations are required. In 1990 it was disclosed that the State Police had extensively monitored the activities of private citizens without sufficient justification. Security checks had been carried out for private companies on request. Of some 11,000 citizens who inquired whether they had been monitored, some 20 percent were found to have State Police files. These actions appeared to be in violation of laws protecting personal data collected by the government, public institutions, and private entities, as well as constitutional protection of the secrecy of the mail and telephone. These revelations gave rise to a restructuring of the State Police, including the reduction of its staff from 800 to 440. The new police law that came into effect in 1993 also introduces parliamentary control over the State Police and the military secret police, with oversight to be exercised by separate parliamentary subcommittees.

The police are subject to the effective control of the executive and judicial authorities. The national police maintain internal security, and the army is responsible for external security. The police are well trained and disciplined; however, there were reports that police committed some human rights abuses. The Constitution prohibits arbitrary arrest and detention, and the Government generally observes these prohibitions. In criminal cases, the law provides for investigative or pretrial detention for up to 48 hours; an investigative judge may decide within that period to grant a prosecution request for detention of up to 2 years pending completion of an investigation. The grounds required for such investigative detention are specified in the law, as are conditions for bail. The investigative judge is required to periodically evaluate an investigative detention. There is a system of bail. The law prohibits forced exile, and the Government does not employ it.
The judicial system is independent of the executive and legislative branches. The constitution establishes that judges are independent when acting in their judicial function. They cannot be bound by instructions from a higher court (except in cases of appeal) or by another agency. In administrative matters, judges are subordinate to the Ministry for Justice. A judge can be transferred or dismissed only for specific reasons established by law and only after formal court action has been taken. The Austrian judiciary functions only at the federal level, and thus there is no separate court system at the provincial level. The Constitutional Court decides the legality of treaties and the constitutionality of laws and decrees passed at the federal, provincial, and local levels. Cases involving courts and administrative agencies or the Administrative Court and the Constitutional Court are heard in the Constitutional Court. Monetary claims against the state, provinces, administrative districts, or local communities that cannot be settled by a regular court or an administrative agency are brought to the Constitutional Court, as are claims regarding disputed elections. The court also decides questions of impeachment and hears cases charging the president with breaking a constitutional law or cases charging members of federal or provincial governments with breaking a law. The Administrative Court, located in Vienna, is the court of final appeal for cases involving administrative agencies. The court's specific purpose is to determine whether an individual's rights have been violated by an administrative action or omission. Individuals can also appeal to this court if an administrative agency fails to grant a decision in a case. The Administrative Court may not rule on matters that come under the competence of the Constitutional Court. Cases outside the jurisdiction of these courts are heard in special courts. For example, labor courts decide civil cases concerning employment. Employers and employees are represented in labor court hearings. Cases involving the Stock and Commodity Exchange and the Exchange for Agricultural Products are decided by the Court of Arbitration, which is composed of members of the exchanges. The Patent Court decides appeals of patent cases. The highest courts of Austria's independent judiciary are the Constitutional Court; the Administrative Court, which handles bureaucratic disputes; and the Supreme Court, for civil and criminal cases. While the Supreme Court is the court of highest instance for the judiciary, the Administrative Court acts as the supervisory body over the administrative branch, and the Constitutional Court presides over constitutional issues. The Supreme Court in Vienna heads the system of ordinary courts. This court is the court of final instance for most civil and criminal cases. It can also hear cases involving commercial, labor, or patent decisions, but constitutional or administrative decisions are outside its purview. Justices hear cases in five-person panels. Four superior courts, which are appellate courts, are located in Vienna, Graz, Linz, and Innsbruck. They are usually courts of second instance for civil and criminal cases and are the final appellate courts for district court cases. On a lower level are seventeen regional courts having jurisdiction over provincial and district matters. Boundaries of judicial districts may or may not coincide with those of administrative districts. 
Regional courts serve as courts of first instance for civil and criminal cases carrying penalties of up to ten years' imprisonment and as appellate courts for some cases from district courts. Vienna and Graz have separate courts for civil, criminal, and juvenile cases, and Vienna also has a separate commercial court. At the lowest level are about 200 district or local courts, which decide minor civil and criminal cases, that is, those involving small monetary value or minor misdemeanors. Questions involving such issues as guardianship, adoption, legitimacy, probate, registry of lands, and boundary disputes are also settled at this level. Depending on the population of the area, the number of judges varies, but one judge can decide a case. Civil and criminal matters are heard in separate courts in Vienna and Graz. Vienna further divides civil courts into one for commercial matters and one for other civil cases. Ordinary court judges are chosen by the federal president or, if the president so decides, by the minister for justice on the basis of cabinet recommendations. The judiciary retains a potential voice in naming judges, inasmuch as it must submit the names of two candidates for each vacancy on the courts. The suggested candidates, however, need not be chosen by the cabinet. Lay people have an important role in the judicial system in cases involving crimes carrying severe penalties, political felonies, and misdemeanors. The public can participate in court proceedings as lay assessors or as jurors. Certain criminal cases are subject to a hearing by two lay assessors and two judges. The lay assessors and judges decide the guilt or innocence and punishment of a defendant. If a jury, usually eight lay people, is used, the jury decides the guilt of the defendant. Then jury and judges together determine the punishment. The Constitution provides for an independent judiciary, and the Government generally respects this provision in practice. The Constitution provides that judges are independent in the exercise of their judicial office. Judges cannot be removed from office or transferred against their will. There are local, regional, and higher regional courts, as well as the Supreme Court as the court of highest instance. While the Supreme Court is the court of highest instance for the judiciary, the Administrative Court acts as the supervisory body over the administrative branch, and the Constitutional Court presides over constitutional issues. The Constitution provides for the right to a fair trial and an independent judiciary generally enforces this right. The system of judicial review provides for extensive possibilities for appeal. Trials have to be public, and have to be conducted orally. Persons charged with criminal offenses are considered innocent until proven guilty. There were no reports of political prisoners in year 2001. All prisons from local jails to maximum security institutions are regulated by the Ministry for Interior. Revisions to penal statutes adopted in 1967 emphasize rehabilitation, education, work, prison wages, and assistance to prisoners on their return to society. Programs stress the humane treatment and rehabilitation of inmates, but program implementation is often inhibited by restricted prison budgets and lack of facilities. Regulations stipulate that all able-bodied prisoners will be put to useful work. If proceeds from an individual's work exceed the cost to the state of his maintenance, the prisoner is paid a wage. 
Part can be used for pocket money, and the remainder is paid to the offender after release. Where facilities are inadequate or the situation justifies work or education beyond what is available on the prison grounds, those not considered dangerous or likely to attempt to escape can work or attend classes in the nearby area. The penal system in Austria includes seven penitentiaries (Garsten, Graz, Hertenberg, Schwarzau, Stein, Suben, and Vienna-Simmering); three institutions of justice; two special institutions; and eighteen jails at the seats of courts of first instance. In spite of the rising crime rate, the prison population fell steadily from 7,795 in 1987 to 5,975 at the end of 1989. The average prison population of 6,318 in 1988 was composed of 6,054 males and 264 females. The rate of incarceration was seventy-seven per 100,000 population, typical for Europe as a whole but higher than that of some Scandinavian countries. Those on supervised probation numbered 4,930--2,762 adults and 2,168 juveniles. The number held in investigative detention also declined, from 1,666 in 1987 to 1,466 in late 1989. This reduction was attributed to implementation in 1988 of the law easing the requirements for conditional release. According to Austrian authorities, the number of detainees had been reduced to a level corresponding to the European average. Prison conditions generally meet international standards. Male and female prisoners are held separately, as are adults and juveniles. Pretrial detainees are held separately from convicted criminals. The Government permits prison visits by independent human rights monitors. In individual cases, prison directors or judges have jurisdiction over questions of access to the defendant.

As of 2002, violence against women remained a problem. There are no accurate statistics available on the number of women abused annually, but it is believed to be a widespread problem. Police and judges enforce laws against violence; however, it is estimated that less than 10 percent of abused women file complaints. The Association of Houses for Battered Women has estimated that one-fifth of the country's 1.5 million adult women have suffered from violence in a relationship. In 1999 legislators passed an amendment to the 1997 Law on the Protection Against Violence in the Family, extending the period during which police can expel abusive family members from family homes. In 2000 an injunction to prevent abusive family members from returning home was applied in 3,354 cases. The Government also sponsors shelters and help lines for women. Trafficking in women was a problem in 2002. While prostitution is legal, trafficking for the purposes of prostitution is illegal. Of the 850 cases brought to the Ombudsmen for Equal Opportunity in 2000, 142 were complaints of sexual harassment. The Federal Equality Commission, as well as the Labor Court, can order employers to compensate victims of sexual harassment. The Government's coalition agreement contained a detailed section advocating equal rights and opportunities for women. Most legal restrictions on women's rights have been abolished. A Federal Equality Commission and a Federal Commissioner for Equal Treatment oversee laws prescribing equal treatment of men and women. In October 2000, the FPO replaced Social Security and Generations Minister Elisabeth Sickl with FPO Member of Parliament Herbert Haupt. Haupt has been criticized widely for devoting Ministry resources to a new department dealing with discrimination faced by men.
The Government has received extensive criticism for replacing the head of this ministry, which oversees women's affairs, with a man. In 1994 the European Court of Justice ruled that the country's law prohibiting women from working nights was not permissible and gave the Government until the end of the year to adapt its legislation to gender-neutral EU regulations. In January 1998, legislation went into effect that required collective bargaining units to take action by the end of the year to eliminate restrictions on nighttime work for women, and on December 31, the legislation banning nighttime work for women expired. EU legislation is expected to take effect in 2002. An estimated 60 percent of women between the ages of 15 and 60 are in the labor force; however, a report published by the European Commission in July found that women in the country on average earn 31 percent less than men. Women are more likely than men to hold temporary positions and also are disproportionately represented among those unemployed for extended periods of time. In September 2000, the U.N. Committee on the Elimination of Discrimination Against Women released a report criticizing the Government's treatment of women, including its decision to abolish the Federal Women's Affairs Ministry and fold its portfolio into the Ministry of Social Affairs and Generations. The Committee was particularly concerned about immigrant women's access to employment. Although labor laws provide for equal treatment for women in the civil service, women remain underrepresented. To remedy this circumstance, the law requires hiring women of equivalent qualifications ahead of men in civil service areas in which less than 40 percent of the employees are women; however, there are no penalties for failing to attain the 40 percent target. Female employees in the private sector can invoke equality laws prohibiting discrimination against women; the Federal Equality Commission may award compensation of up to 4 months' salary if women are discriminated against in promotions because of their sex. The Commission also may order legal recompense for women who are denied a post despite having equal qualifications. Women are allowed to serve in the military voluntarily. At year's end, there were a total of 147 women--out of a standing force of approximately 51,000--serving in the military, including 7 officers. There are no restrictions on the type or location of assignments given to women. Women's rights organizations are partly politically affiliated and partly autonomous groups. They usually receive wide public attention when voicing their concerns. Despite the fears of women's rights groups, the Government continued to provide subsidies to them.

The law provides for the protection of children's rights. Each provincial government and the federal Ministry for Youth and Family Affairs has an "Ombudsperson for Children and Adolescents" whose main function is to resolve complaints about violations of children's rights. While 9 years of education are mandatory for all children beginning at age 6, the Government also provides free education through secondary school and subsidizes technical, vocational, or university education. The majority of school-age children attend school. Educational opportunity is equal for girls and boys. Comprehensive, government-financed medical care is available for all children without regard to gender.
There is no societal pattern of abuse against children, although heightened awareness of child abuse has led the Government to continue its efforts to monitor the issue and prosecute offenders. The growing number of reported incidents of child abuse is considered a result of increased public awareness of the problem. In June the OVP and FPO reached a compromise agreement requiring doctors to report to the police suspected cases of child abuse and molestation. An exception may be made if the suspected abuser is a parent or sibling, in which case the report is not disclosed until an investigation is completed by the police. According to the Penal Code, sexual intercourse between an adult and a child (under 14 years of age) is punishable with a prison sentence of up to 10 years; in the case of pregnancy of the victim, the sentence can be extended to up to 15 years. Sex between a male aged 14 to 18 and an adult male is punishable with sentences ranging from 6 months to 5 years. In 2000 the Ministry of Justice reported 819 cases of child abuse, most involving intercourse with a minor. Of these cases, 249 resulted in convictions. Under the law, any citizen engaging in child pornography in a foreign country becomes punishable under Austrian law even if the actions are not punishable in the country where this violation was committed. The law also contains strict provisions on the possession, trading, and private viewing of pornographic materials. For example, exchanging pornographic videos is illegal even if done privately rather than as a business transaction.

TRAFFICKING IN PERSONS

There is no single law covering all forms of trafficking in persons, although several laws contain provisions that can be used to prosecute traffickers; however, trafficking in women for prostitution and domestic service was a problem. Austria is a transit and final destination country for women trafficked from Bulgaria, Romania, Ukraine, the Czech Republic, Slovakia, Hungary, and the Balkans; the women are trafficked into Austria and other western European countries, primarily for the purpose of sexual exploitation. Women also were trafficked from Asia and Latin America to Austria for domestic labor. Most women were brought to Austria with promises of unskilled jobs as nannies or waitresses. Upon arrival they were coerced or forced into prostitution. There also were cases of women who came to Austria explicitly to work as prostitutes but who then were forced into states of dependency akin to slavery. Most victims were in Austria illegally and feared being turned in to the authorities and deported. Traffickers usually retained victims' official documents, including passports, to maintain control over the victims. Victims of trafficking have reported being subjected to threats and physical violence. A major deterrent to victim cooperation is widespread fear of retribution, both in Austria and in the victims' countries of origin. There are no accurate statistics on trafficked persons specifically; however, the number of intercepted illegal immigrants, of whom some were trafficking victims, continued to increase. Police estimated that one-fourth of trafficking in women in the country is controlled by organized crime. Austria is particularly attractive to traffickers due to its geographic location and to the fact that citizens of the Czech Republic, Slovakia, and Hungary do not require visas to enter the country. The Interior Ministry works at the national and international level to raise awareness of human trafficking.
Federal police units addressing organized crime and sex crimes also focused on this issue. Although prostitution is legal, trafficking for the purpose of prostitution is illegal and can result in jail sentences of up to 10 years for convicted traffickers. In July 2000, the Government passed legislation implementing stronger penalties for alien smuggling, including trafficking. The maximum penalty for the most serious offenses increased from 5 to 10 years' imprisonment. In 2000 the Interior Ministry, which is the primary government agency involved in antitrafficking efforts, reported that 125 complaints were filed under the law against trafficking for prostitution, of which 10 resulted in convictions. The Ministry of Interior estimated that most traffickers are prosecuted under criminal law provisions on alien smuggling. In October, in a high-profile case, the Government convicted the Carinthian "Porno King", Hellmuth Suessenbacher, and 10 others for trafficking in persons and other related offenses. Charges resulted from the trafficking of 50 Romanian women who were initially hired as dancers and subsequently forced into prostitution. Suessenbacher was sentenced to 2½ years' imprisonment. The other defendants received sentences ranging from fines to up to 4 years' imprisonment. Suessenbacher appealed the sentence. Some NGOs have called for an expansion of the legal definition of trafficking to include exploitation for domestic labor and coerced marriages. In March, in response to a marked increase in illegal border crossings at Austria's eastern borders in the first half of the year, the Government set up a special task force to address trafficking. The Government provides temporary residence to victims of trafficking who are prepared to testify or intend to raise civil law claims; however, victims still rarely agree to testify, due to fear of retribution. The temporary residency status allows victims to stay in the country only during a trial; no provisions are made for them to stay in the country following their testimony. Virtually all victims of trafficking are deported. The Government funds research on the problem of trafficking as well as NGO prevention efforts, including antitrafficking brochures and law enforcement workshops. The Government also provides funding for intervention centers that provide emergency housing and psychological, legal, and health-related assistance to victims. There is one NGO center that provides comprehensive counseling, educational services, and emergency housing to victims of trafficking. The Government also is active in U.N. and Organization for Security and Cooperation in Europe international efforts to combat trafficking.

Austria is primarily a transit country for drug trafficking from the Balkans to western European markets. Illegal drug consumption is not a severe problem in Austria, and there is no significant production or cultivation of illegal substances. Organized drug trafficking is performed largely by non-Austrian criminal groups. New investigative tools were legislated in 1997 and 1998 to counter the growth of this crime. The passage of a new narcotics law in 1997 allowed Austria to ratify the 1971 and 1988 UN Drug Conventions in 1997. A November 1998 government report expresses concern over the continued rise of organized crime in Austria, which it attributes predominantly to mafia gangs from NIS countries.
While not considered an important financial center, offshore tax haven, or banking center, Austria remains an attractive site for drug-related money laundering. The government continues to implement measures to narrow avenues for money launderers and facilitate asset seizure and forfeiture. While cooperation with U.S. authorities is excellent, Austrian law enforcement authorities charged with combating narcotics complain of chronic underfunding and a lack of personnel. Although not a significant producer of illicit drugs, Austria remains a transit country for drug-related organized crime along the major European drug routes. Foreign-based drug crime by organized groups continued to grow in Austria in 1998. A November 1998 government report maintains that up to 30 percent of serious crimes in Austria in 1998 can be traced to organized crime groups, most of which originate in the NIS. Despite the limited appeal of anonymous passbook savings accounts for criminal purposes, possibilities for money laundering through other, unmonitored transactions remain. The new Narcotic Substances Act, which went into effect on January 1, 1998, focuses on therapy for drug users while maintaining severe penalties for drug dealers. While drug dealers may face up to 20 years in prison, first-time users of cannabis may avoid criminal proceedings if they agree to therapy. New legislation went into effect in July 1998, allowing technical surveillance of persons "strongly suspected" of having committed crimes punishable by up to 10 years' imprisonment. The new regulations facilitate surveillance of persons about whom there is "substantial suspicion" that they belong to a criminal organization. A new police powers law submitted to parliament in November 1998 seeks to authorize investigators to obtain personal data from private telephone companies in clearly defined situations (including suspicion of organized crime). A proposal to allow police to collect and analyze information about likely extremist/terrorist groups without judicial approval and prior to the establishment of "substantiated suspicion" was postponed until 1999.

During its EU presidency (July-December 1998), Austria sought to advance work on a second "EU Drug Action Plan" for the period 2000-2005 as well as to intensify cooperation with eastern European accession countries, including in the area of demand reduction. Austria actively participated in the "European Awareness Week on Drug Prevention" November 5-12, 1998. Under the auspices of the European Commission and the Vienna-based UNDCP, the city of Vienna hosted the "European Drug Conference 1998," which focused on "drug prevention and drug policy." Throughout 1998, Austrian experts participated in a series of anti-drug projects within the framework of the EU's Monitoring Center for Drugs and Drug Addiction (EMCDDA). The overall number of drug-related criminal offenses in Austria in 1997 increased by 10.3 percent to 17,868. Serious drug-related crime rose by 25.4 percent to 2,712 over the same period. Authorities believe that one out of two criminal offenses is drug-related. In 1997, the number of seizures rose by 7.4 percent to 7,117. An October 1998 penal-code amendment tightened regulations against bribery and corruption and allowed the GOA to join the OECD Anti-Bribery Convention. Ratification is expected in early 1999. The GOA has public-corruption laws that recognize and punish the abuse of power by a public official.
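Because the 1997 drug-crime totals above are reported together with their year-on-year percentage increases, the approximate 1996 baselines can be backed out directly. The hypothetical helper below is illustrative arithmetic only, using just the figures quoted in the text.

```python
# Back out the approximate 1996 baselines implied by the 1997 drug-crime totals
# and percentage increases quoted above (illustrative arithmetic only).
def implied_baseline(current: float, pct_increase: float) -> float:
    """Prior-year value implied by `current` after a rise of `pct_increase` percent."""
    return current / (1 + pct_increase / 100)

print(round(implied_baseline(17_868, 10.3)))  # drug-related offenses: about 16,200
print(round(implied_baseline(2_712, 25.4)))   # serious drug-related crime: about 2,160
print(round(implied_baseline(7_117, 7.4)))    # drug seizures: about 6,630
```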
The USG is not aware of any high-level Austrian government officials' involvement in drug-related corruption. The US-Austrian Mutual Legal Assistance Treaty, signed in 1995, was ratified by the Austrian parliament in June 1998. Exchange of the instruments of ratification is still pending. A new US-Austrian Extradition Treaty was ratified by Austria in mid-1998, approved by the U.S. Senate in October 1998, and is pending ratification. Austria ratified the 1988 UN Drug Convention as well as the 1971 UN Convention on Psychotropic Substances and the Council of Europe Convention in 1997. Austria ratified the Europol Convention in early 1998. Austria is a party to the 1961 Single Convention on Narcotic Drugs and its 1972 Protocol. Vienna is the seat of UNDCP, and Austria is a UNDCP major donor. Austria participates in the World Health Organization, the Dublin Group within the EU, the Financial Action Task Force on money laundering (FATF) and the Council of Europe's "Pompidou Group." The USG is not aware of any significant cultivation or production of illicit drugs in Austria. Traditionally, the routes of the Balkan drug path have been the major venues for illegal import/transit of southwest Asian heroin through Austria. The illicit trade is dominated by Turkish groups, followed by traffickers from countries of the former Yugoslavia, by Romanian and Bulgarian nationals, as well as by Macedonian and Albanian dealers, who continue to use nearby Bratislava, Slovakia as a temporary depository for heroin. Cocaine is imported by couriers of South American drug cartels who increasingly rely on Eastern European airports. Austrian authorities view drug addiction as a disease rather than a crime, a fact reflected in recent drug legislation (1997) and related court decisions. Demand reduction puts emphasis on primary prevention, drug treatment and counseling, as well as on "harm reduction." The use of heroin for therapeutic purposes is generally not allowed. Primary intervention extends from preschool to secondary-school levels and relies on "educational campaigns" inside and outside school fora. Austria has syringe exchange programs in place for HIV prevention. AIDS cases declined in 1998, while the spread of Hepatitis B and C represents a new problem. Substitution programs, such as methadone, have been in place for over a decade. Throughout Austria's EU presidency (July-December 1998), the U.S. sought to increase cooperation with the EU on narcotics issues. Under Austria's presidency, the EU pledged $31 million toward the Peru Donors Conference. The Austrian EU presidency also encouraged exploring the possibility of using the Caribbean Drug Initiative as a model of US-EU cooperation for fighting the illicit narcotics trade in other regions. Finally, the Austrian EU presidency has advocated that the EU pursue more ambitious counternarcotics efforts in Central Asia and has requested and appreciated information on U.S. counternarcotics assistance to the Central Asian republics. Although Austria has no specific bilateral narcotics agreement with the US, Austrian cooperation with U.S. investigative efforts is excellent. In July 1998, Austrian authorities' cooperation with U.S. officials resulted in the seizure of 102,000 tablets of ecstasy. The U.S. has requested that all three individuals arrested in this case be extradited to the US. The U.S. 
Office of National Drug Control Policy Director, Barry McCaffrey, used his July 1998 Vienna visit to explain the goals of the "National Drug Control Strategy 1998" to Austrian and UN officials. The U.S. will continue to support Austrian efforts to create more effective tools for law enforcement, as well as to work with Austria within the context of US-EU initiatives. Promoting a better understanding of U.S. drug policy among Austrian officials will remain a priority. Internet research assisted by Terry Wesley and Rita Zois
http://www-rohan.sdsu.edu/faculty/rwinslow/europe/austria.html
Learn about key events in history and their connections to today. On March 14, 1900, Congress ratified the Gold Standard Act, which officially ended the use of silver as a standard of United States currency and established gold as the only standard. The New York Times reported that President William McKinley “used a new gold pen and holder” to sign the bill.

The gold standard is a system under which a country ties the value of its currency to gold, setting a fixed price at which gold can be bought or sold by the government. The United States had since its early history used both gold and silver to value its currency, a system known as bimetallism. Under bimetallism, fluctuations in the value of gold and silver would cause one metal to be virtually removed from circulation for a time. A rise in the value of gold in the early 19th century made gold more valuable as a metal than as currency, prompting many Americans to have their gold coins melted down. As a result, the U.S. was virtually taken off the gold standard until 1834, when the government changed the gold-to-silver ratio to about 16 to 1 from 15 to 1. The use of silver currency declined following the California Gold Rush of the 1850s, which reduced the value of gold in relation to silver. The U.S. effectively ended its use of the silver standard in 1873 with the passage of the Fourth Coinage Act, under which the government would no longer produce silver dollars for domestic use or exchange silver at a fixed price. The Coinage Act, along with the dropping of the silver standard by European countries, contributed to the Panic of 1873, an economic depression in the U.S. and Europe. The act reduced the money supply, causing a rise in interest rates and making it difficult for banks to raise capital or pay off debts. The act also hurt silver mining companies in the West, which called the act the “Crime of ’73.”

In the 1880s and ’90s, crop prices decreased, creating great hardship for farmers. Pro-silver advocates and Western populists argued for an increase in the money supply through the reintroduction of silver currency. The issue was at the center of the 1896 presidential election. The Democrats, who later split over the issue, nominated William Jennings Bryan, a populist, after his famous “cross of gold” speech at the convention. Bryan decried the gold standard, proclaiming, “You shall not crucify mankind upon a cross of gold.” Eastern bankers who wanted to protect against inflation favored the gold standard. Running on a pro-gold standard platform, McKinley defeated Bryan, clearing the way for the passage of the Gold Standard Act more than three years later. The United States remained on the gold standard until 1933, when President Franklin D. Roosevelt suspended it to combat the deflation of the Great Depression. The nation continued to exchange gold internationally at a fixed price until 1971, when President Richard Nixon ended the practice.

Connect to Today: Today, no country uses the gold standard. However, there is rising support among Americans for its reintroduction in the hope of controlling inflation. The Republican presidential candidate Ron Paul has made the gold standard a central platform of his campaign, and fellow candidate Newt Gingrich has pledged to study the issue. In November 2010, The Times’s Room for Debate blog asked whether moving to a modified gold standard makes sense in the modern global economic climate or if it would make recovery from recession more difficult.
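The bimetallic mechanism described above, in which a gap between the legal mint ratio and the market gold-to-silver ratio drives one metal out of circulation, comes down to asking which metal is worth more as bullion than as coin. The sketch below is a hypothetical illustration of that comparison: the 15-to-1 and 16-to-1 mint ratios are the figures from the article, while the 15.7-to-1 market ratio is an assumed example value rather than a historical quote.

```python
# Which metal is worth more melted down than coined, given a legal mint ratio
# and a market ratio (both expressed as ounces of silver per ounce of gold)?
def undervalued_metal(mint_ratio: float, market_ratio: float) -> str:
    if market_ratio > mint_ratio:
        # Gold bullion buys more silver on the market than the mint credits it for,
        # so gold coins are melted or exported and gold leaves circulation.
        return "gold"
    if market_ratio < mint_ratio:
        # The reverse case: silver is undervalued at the mint and disappears instead.
        return "silver"
    return "neither"

ASSUMED_MARKET_RATIO = 15.7  # example value for illustration, not a historical quote

print(undervalued_metal(15, ASSUMED_MARKET_RATIO))  # "gold": the pre-1834 situation
print(undervalued_metal(16, ASSUMED_MARKET_RATIO))  # "silver": after the 1834 ratio change
```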
What are the arguments for and against a return to the gold standard? Do you think it makes sense for the U.S. to connect its circulated currency to its gold reserve? Why or why not?
http://learning.blogs.nytimes.com/2012/03/14/march-14-1900-u-s-officially-adopts-gold-standard/?ref=education
Basic Physics of Nuclear Medicine (print version). Note: the current version of this book can be found at http://en.wikibooks.org/wiki/Basic_Physics_of_Nuclear_Medicine

Atomic & Nuclear Structure

You will have encountered much of what we will cover here in your high school physics. We are going to review this material again below so as to set the context for subsequent chapters. This chapter will also provide you with an opportunity to check your understanding of this topic. The chapter covers atomic structure, nuclear structure, the classification of nuclei, binding energy and nuclear stability.

Atomic Structure

The atom is considered to be the basic building block of all matter. Simple atomic theory tells us that it consists of two components: a nucleus surrounded by an electron cloud. The situation can be considered as being similar in some respects to planets orbiting the sun. From an electrical point of view, the nucleus is said to be positively charged and the electrons negatively charged. From a size point of view, the radius of an atom is about 10^-10 m while the radius of a nucleus is about 10^-14 m, i.e. about ten thousand times smaller. The situation could be viewed as something like a cricket ball, representing the nucleus, in the middle of a sporting arena with the electrons orbiting somewhere around where the spectators would sit. This perspective tells us that the atom should be composed mainly of empty space. However, the situation is far more complex than this simple picture portrays in that we must also take into account the physical forces which bind the atom together.

The Nucleus

The nucleus itself is composed of two types of particle: protons and neutrons. From a mass point of view the mass of a proton is roughly equal to the mass of a neutron and each of these is about 2,000 times the mass of an electron. So most of the mass of an atom is concentrated in the small region at its core. From an electrical point of view the proton is positively charged and the neutron has no charge. An atom all on its own (if that were possible to achieve!) is electrically neutral. The number of protons in the nucleus of such an atom must therefore equal the number of electrons orbiting that atom.

Classification of Nuclei

The term Atomic Number is defined in nuclear physics as the number of protons in a nucleus and is given the symbol Z. From your chemistry you will remember that this number also defines the position of an element in the Periodic Table of Elements. The term Mass Number is defined as the number of nucleons in a nucleus, that is the number of protons plus the number of neutrons, and is given the symbol A. Note that the symbols here are a bit odd, in that it would prevent some confusion if the Atomic Number were given the symbol A, and the Mass Number were given another symbol, such as M, but it's not a simple world! It is possible for nuclei of a given element to have the same number of protons but differing numbers of neutrons, that is to have the same Atomic Number but different Mass Numbers. Such nuclei are referred to as Isotopes. All elements have isotopes and the number ranges from three for hydrogen to over 30 for elements such as caesium and barium. Chemistry has a relatively simple way of classifying the different elements by the use of symbols such as H for hydrogen, He for helium and so on.
The classification scheme used to identify different isotopes is based on this approach with the use of a superscript before the chemical symbol to denote the Mass Number along with a subscript before the chemical symbol to denote the Atomic Number. In other words an isotope is identified as: where X is the chemical symbol of the element; A is the "Mass Number," (protons+ neutrons); Z is the "Atomic Number," (number identifying the element on the periodic chart). Let us take the case of hydrogen as an example. It has three isotopes: - the most common one consisting of a single proton orbited by one electron, - a second isotope consisting of a nucleus containing a proton and a neutron orbited by one electron, - a third whose nucleus consists of one proton and two neutrons, again orbited by a single electron. A simple illustration of these isotopes is shown below. Remember though that this is a simplified illustration given what we noted earlier about the size of a nucleus compared with that of an atom. But the illustration is nevertheless useful for showing how isotopes are classified. The first isotope commonly called hydrogen has a Mass Number of 1, an Atomic Number of 1 and hence is identified as: The second isotope commonly called deuterium has a Mass Number of 2, an Atomic Number of 1 and is identified as: The third isotope commonly called tritium is identified as: The same classification scheme is used for all isotopes. For example, you should now be able to figure out that the uranium isotope, , contains 92 protons and 144 neutrons. A final point on classification is that we can also refer to individual isotopes by giving the name of the element followed by the Mass Number. For example, we can refer to deuterium as hydrogen-2 and we can refer to as uranium-236. Before we leave this classification scheme let us further consider the difference between chemistry and nuclear physics. You will remember that the water molecule is made up of two hydrogen atoms bonded with an oxygen atom. Theoretically if we were to combine atoms of hydrogen and oxygen in this manner many, many of billions of times we could make a glass of water. We could also make our glass of water using deuterium instead of hydrogen. This second glass of water would theoretically be very similar from a chemical perspective. However, from a physics perspective our second glass would be heavier than the first since each deuterium nucleus is about twice the mass of each hydrogen nucleus. Indeed water made in this fashion is called heavy water. Atomic Mass Unit The conventional unit of mass, the kilogram, is rather large for use in describing characteristics of nuclei. For this reason, a special unit called the Atomic Mass Unit (amu) is often used. This unit is sometimes defined as 1/12th of the mass of the stable most commonly occurring isotope of carbon, i.e. 12C. In terms of grams, 1 amu is equal to 1.66 x 10-24 g, that is, just over one million, million, million millionth of a gram. The masses of the proton, mp and neutron, mn on this basis are: while that of the electron is just 0.00055 amu. Binding Energy We are now in a position to consider the subject of nuclear stability. From what we have covered so far, we have seen that the nucleus is a tiny region in the centre of an atom and that it is composed of neutrally and positively charged particles. So, in a large nucleus such as that of uranium (Z=92) we have a large number of positively charged protons concentrated into a tiny region in the centre of the atom. 
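As a brief aside, the A and Z bookkeeping described above can be captured in a few lines of Python. This is an illustrative sketch rather than anything from the wikibook: the neutron count is simply A minus Z, the atomic numbers used are standard values, and the uranium-236 case matches the 92 protons and 144 neutrons quoted in the text.

```python
# Minimal sketch of isotope bookkeeping: an isotope is written with Mass Number A
# (protons + neutrons) and Atomic Number Z (protons), so neutrons = A - Z.

def neutron_count(mass_number: int, atomic_number: int) -> int:
    """Number of neutrons N = A - Z."""
    return mass_number - atomic_number

# Hydrogen's three isotopes and the uranium example from the text:
examples = {
    "hydrogen-1":  (1, 1),
    "hydrogen-2 (deuterium)": (2, 1),
    "hydrogen-3 (tritium)":   (3, 1),
    "uranium-236": (236, 92),
}

for name, (A, Z) in examples.items():
    print(f"{name}: A={A}, Z={Z}, neutrons={neutron_count(A, Z)}")
```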
An obvious question which arises is that with all these positive charges in close proximity, why doesn't the nucleus fly apart? How can a nucleus remain as an entity with such electrostatic repulsion between the components? Should the orbiting negatively-charged electrons not attract the protons away from the atoms centre? Let us take the case of the helium-4 nucleus as an example. This nucleus contains two protons and two neutrons so that in terms of amu we can figure out from what we covered earlier that the Therefore we would expect the total mass of the nucleus to be 4.03298 amu. The experimentally determined mass of a helium-4 nucleus is a bit less - just 4.00260 amu. In other words there is a difference of 0.03038 amu between what we might expect as the mass of this nucleus and what we actually measure. You might think of this difference as very small at just 0.75%. But remember that since the mass of one electron is 0.00055 amu the difference is actually equivalent to the mass of about 55 electrons. Therefore it is significant enough to wonder about. It is possible to consider that this missing mass is converted to energy which is used to hold the nucleus together; it is converted to a form of energy called Binding Energy. You could say, as with all relationships, energy must be expended in order to maintain them! Like the gram in terms of the mass of nuclei, the common unit of energy, the joule is rather cumbersome when we consider the energy needed to bind a nucleus together. The unit used to express energies on the atomic scale is the electron volt, symbol: eV. One electron volt is defined as the amount of energy gained by an electron as it falls through a potential difference of one volt. This definition on its own is not of great help to us here and it is stated purely for the sake of completeness. So do not worry about it for the time being. Just appreciate that it is a unit representing a tiny amount of energy which is useful on the atomic scale. It is a bit too small in the case of binding energies however and the mega-electron volt (MeV) is often used. Albert Einstein introduced us to the equivalence of mass, m, and energy, E, at the atomic level using the following equation: where c is the velocity of light. It is possible to show that 1 amu is equivalent to 931.48 MeV. Therefore, the mass difference we discussed earlier between the expected and measured mass of the helium-4 nucleus of 0.03038 amu is equivalent to about 28 MeV. This represents about 7 MeV for each of the four nucleons contained in the nucleus. Nuclear Stability In most stable isotopes the binding energy per nucleon lies between 7 and 9 MeV. There are two competing forces in the nuclei, electrostatic repulsion between protons and the attractive nuclear force between nucleons (protons and neutrons). The electrostatic force is a long range force that becomes more difficult to compensate for as more protons are added to the nucleus. The nuclear force, which arises as the residual strong force (the strong force binds the quarks together within a nucleon), is a short range force that only operates on a very short distance scale (~ 1.5 fm) as it arises from a Yukawa potential. (Electromagnetism is a long range force as the force carrier, the photon, is massless; the nuclear force is a short range force as the force carrier, the pion, is massive). 
Therefore, larger nuclei tend to be less stable, and require a larger ratio of neutrons to protons (which contribute to the attractive strong force, but not the long-range electrostatic repulsion). For the low Z nuclides the ratio of neutrons to protons is approximately 1, though it gradually increases to about 1.5 for the higher Z nuclides as shown below on the Nuclear Stability Curve. In other words to combat the effect of the increase in electrostatic repulsion when the number of protons increases the number of neutrons must increase more rapidly to contribute sufficient energy to bind the nucleus together. As we noted earlier there are a number of isotopes for each element of the Periodic Table. It has been found that the most stable isotope for each element has a specific number of neutrons in its nucleus. Plotting a graph of the number of protons against the number of neutrons for these stable isotopes generates what is called the Nuclear Stability Curve: Note that the number of protons equals the number of neutrons for small nuclei. But notice also that the number of neutrons increases more rapidly than the number of protons as the size of the nucleus gets bigger so as to maintain the stability of the nucleus. In other words more neutrons need to be there to contribute to the binding energy used to counteract the electrostatic repulsion between the protons. There are about 2,450 known isotopes of the approximately one hundred elements in the Periodic Table. You can imagine the size of a table of isotopes relative to that of the Periodic Table! The unstable isotopes lie above or below the Nuclear Stability Curve. These unstable isotopes attempt to reach the stability curve by splitting into fragments, in a process called Fission, or by emitting particles and/or energy in the form of radiation. This latter process is called Radioactivity. It is useful to dwell for a few moments on the term radioactivity. For example what has nuclear stability to do with radio? From a historical perspective remember that when these radiations were discovered about 100 years ago we did not know exactly what we were dealing with. When people like Henri Becquerel and Marie Curie were working initially on these strange emanations from certain natural materials it was thought that the radiations were somehow related to another phenomenon which also was not well understood at the time - that of radio communication. It seems reasonable on this basis to appreciate that some people considered that the two phenomena were somehow related and hence that the materials which emitted radiation were termed radio-active. We know today that the two phenomena are not directly related but we nevertheless hold onto the term radioactivity for historical purposes. But it should be quite clear to you having reached this stage of this chapter that the term radioactive refers to the emission of particles and/or energy from unstable isotopes. Unstable isotopes for instance those that have too many protons to remain a stable entity are called radioactive isotopes - and called radioisotopes for short. The term radionuclide is also sometimes used. Finally about 300 of the 2,450-odd isotopes mentioned above are found in nature. The rest are man-made, that is they are produced artificially. These 2,150 or so artificial isotopes have been made during the last 100 years or so with most having been made since the second world war. 
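Looking back at the helium-4 example, the mass-defect arithmetic can be sketched in a few lines of Python. The proton and neutron masses below (about 1.00782 amu and 1.00867 amu) are inferred from the expected total of 4.03298 amu quoted in the text, since the original figures were lost from this copy, and 1 amu is taken as 931.48 MeV as stated above. The result, roughly 7 MeV per nucleon, sits at the lower end of the 7 to 9 MeV range just mentioned for stable isotopes.

```python
# Minimal sketch of the helium-4 mass-defect / binding-energy calculation.

AMU_TO_MEV = 931.48           # energy equivalent of 1 amu, as quoted in the text
m_p = 1.00782                 # proton mass in amu (assumed, see lead-in)
m_n = 1.00867                 # neutron mass in amu (assumed, see lead-in)
m_he4_measured = 4.00260      # measured helium-4 mass in amu (from the text)

expected_mass = 2 * m_p + 2 * m_n              # 4.03298 amu
mass_defect = expected_mass - m_he4_measured   # ~0.03038 amu
binding_energy = mass_defect * AMU_TO_MEV      # ~28 MeV
per_nucleon = binding_energy / 4               # ~7 MeV per nucleon

print(f"expected mass  : {expected_mass:.5f} amu")
print(f"mass defect    : {mass_defect:.5f} amu")
print(f"binding energy : {binding_energy:.1f} MeV")
print(f"per nucleon    : {per_nucleon:.1f} MeV")
```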
We will return to the production of radioisotopes in a later chapter of this wikibook and will proceed for the time being with a description of the types of radiation emitted by radioisotopes. Multiple Choice Questions Click here to access multiple choice questions on atomic and nuclear structure. External Links - Novel Periodic Table - an interactive table providing information about each element. - Marie and Pierre Curie and the Discovery of Polonium and Radium - an historical essay from The Nobel Foundation. - Natural Radioactivity - an overview of radioactivity in nature - includes sections on primordial radionuclides, cosmic radiation, human produced radionuclides, as well as natural radioactivity in soil, in the ocean, in the human body and in building materials - from the University of Michigan Student Chapter of the Health Physics Society. - The Particle Adventure - an interactive tour of the inner workings of the atom which explains the modern tools physicists use to probe nuclear and sub-nuclear matter and how physicists measure the results of their experiments using detectors - from the Particle Data Group at the Lawrence Berkeley National Lab, USA and mirrored at CERN, Geneva. - WebElements - an excellent web-based Periodic Table of the Elements which includes a vast array of data about each element - originally from Mark Winter at the University of Sheffield, England. Radioactive Decay We saw in the last chapter that radioactivity is a process used by unstable nuclei to achieve a more stable situation. It is said that such nuclei decay in an attempt to achieve stability. So, an alternative title for this chapter is Nuclear Decay Processes. We also saw in the previous chapter that we can use the Nuclear Stability Curve as a means of describing what is going on. So a second alternative title for this chapter is Methods of Getting onto the Nuclear Stability Curve. We are going to follow a descriptive or phenomenological approach to the topic here by describing in a fairly simple fashion what is known about each of the major decay mechanisms. Once again you may have already covered this material in high school physics. But bear with us because the treatment here will help us set the scene for subsequent chapters. Methods of Radioactive Decay Rather than considering what happens to individual nuclei it is perhaps easier to consider a hypothetical nucleus that can undergo many of the major forms of radioactive decay. This hypothetical nucleus is shown below: Firstly we can see two protons and two neutrons being emitted together in a process called alpha-decay. Secondly, we can see that a proton can release a positron in a process called beta-plus decay, and that a neutron can emit an electron in a process called beta-minus decay. We can also see an electron being captured by a proton. Thirdly we can see some energy (a photon) being emitted which results from a process called gamma-decay as well as an electron being attracted into the nucleus and being ejected again. Finally there is the rather catastrophic process where the nucleus cracks in half called spontaneous fission. We will now describe each of these decay processes in turn. Spontaneous Fission This is a very destructive process which occurs in some heavy nuclei which split into 2 or 3 fragments plus some neutrons. These fragments form new nuclei which are usually radioactive. Nuclear reactors exploit this phenomenon for the production of radioisotopes. Its also used for nuclear power generation and in nuclear weaponry. 
The process is not of great interest to us here and we will say no more about it for the time being. Alpha Decay In this decay process two protons and two neutrons leave the nucleus together in an assembly known as an alpha particle. Note that an alpha particle is really a helium-4 nucleus. So why not call it a helium nucleus? Why give it another name? The answer to this question lies in the history of the discovery of radioactivity. At the time when these radiations were discovered we didn't know what they really were. We found out that one type of these radiations had a double positive charge and it was not until sometime later that we learned that they were in fact nuclei of helium-4. In the initial period of their discovery this form of radiation was given the name alpha rays (and the other two were called beta and gamma rays), these terms being the first three letters of the Greek alphabet. We still call this form of radiation by the name alpha particle for historical purposes. Calling it by this name also contributes to the specific jargon of the field and leads outsiders to think that the subject is quite specialized! But notice that the radiation really consists of a helium-4 nucleus emitted from an unstable larger nucleus. There is nothing strange about helium since it is quite an abundant element on our planet. So why is this radiation dangerous to humans? The answer to this question lies with the energy with which they are emitted and the fact that they are quite massive and have a double positive charge. So when they interact with living matter they can cause substantial destruction to molecules which they encounter in their attempt to slow down and to attract two electrons to become a neutral helium atom. An example of this form of decay occurs in the uranium-238 nucleus. The equation which represents what occurs is: Here the uranium-238 nucleus emits a helium-4 nucleus (the alpha particle) and the parent nucleus becomes thorium-234. Note that the Mass Number of the parent nucleus has been reduced by 4 and the Atomic Number is reduced by 2 which is a characteristic of alpha decay for any nucleus in which it occurs. Beta Decay There are three common forms of beta decay: (a) Electron Emission - Certain nuclei which have an excess of neutrons may attempt to reach stability by converting a neutron into a proton with the emission of an electron. The electron is called a beta-minus particle - the minus indicating that the particle is negatively charged. - We can represent what occurs as follows: - where a neutron converts into a proton and an electron. Notice that the total electrical charge is the same on both sides of this equation. We say that the electric charge is conserved. - We can consider that the electron cannot exist inside the nucleus and therefore is ejected. - Once again there is nothing strange or mysterious about an electron. What is important though from a radiation safety point of view is the energy with which it is emitted and the chemical damage it can cause when it interacts with living matter. - An example of this type of decay occurs in the iodine-131 nucleus which decays into xenon-131 with the emission of an electron, that is - The electron is what is called a beta-minus particle. Note that the Mass Number in the above equation remains the same and that the Atomic Number increases by 1 which is characteristic of this type of decay. 
- You may be wondering how an electron can be produced inside a nucleus given that the simple atomic description we gave in the previous chapter indicated that the nucleus consists of protons and neutrons only. This is one of the limitations of the simple treatment presented so far and can be explained by considering that the two particles which we call protons and neutrons are themselves formed of smaller particles called quarks. We are not going to consider these in any way here other than to note that some combinations of different types of quark produce protons and another combination produces neutrons. The message here is to appreciate that a simple picture is the best way to start in an introductory text such as this and that the real situation is a lot more complex than what has been described. The same can be said about the treatment of beta-decay given above as we will see in subsequent chapters. (b) Positron Emission - When the number of protons in a nucleus is too large for the nucleus to be stable it may attempt to reach stability by converting a proton into a neutron with the emission of a positively-charged electron. - That is not a typographical error! An electron with a positive charge also called a positron is emitted. The positron is the beta-plus particle. - The history here is quite interesting. A brilliant Italian physicist, Enrico Fermi developed a theory of beta decay and his theory predicted that positively-charged as well as negatively-charged electrons could be emitted by unstable nuclei. These particles could be called pieces of anti-matter and they were subsequently discovered by experiment. They do not exist for very long as they quickly combine with a normal electron and the subsequent reaction called annihilation gives rise to the emission of two gamma rays. - Science fiction writers had a great time following the discovery of anti-matter and speculated along with many scientists that parts of our universe may contain negatively-charged protons forming nuclei which are orbited by positively-charged electrons. But this is taking us too far away from the topic at hand! - The reaction in our unstable nucleus which contains one too many protons can be represented as follows: - Notice, once again, that electric charge is conserved on each side of this equation. - An example of this type of decay occurs in sodium-22 which decays into neon-22 with the emission of a positron: - Note that the Mass Number remains the same and that the Atomic Number decreases by 1. (c) Electron Capture - In this third form of beta decay an inner orbiting electron is attracted into an unstable nucleus where it combines with a proton to form a neutron. The reaction can be represented as: - This process is also known as K-capture since the electron is often attracted from the K-shell of the atom. - How do we know that a process like this occurs given that no radiation is emitted? In other words the event occurs within the atom itself and no information about it leaves the atom. Or does it? The signature of this type of decay can be obtained from effects in the electron cloud surrounding the nucleus when the vacant site left in the K-shell is filled by an electron from an outer shell. The filling of the vacancy is associated with the emission of an X-ray from the electron cloud and it is this X-ray which provides a signature for this type of beta decay. - This form of decay can also be recognised by the emission of gamma-rays from the new nucleus. 
- An example of this type of radioactive decay occurs in iron-55 which decays into manganese-55 following the capture of an electron. The reaction can be represented as follows: - Note that the Mass Number once again is unchanged in this form of decay and that the Atomic Number is decreased by 1. Gamma Decay Gamma decay involves the emission of energy from an unstable nucleus in the form of electromagnetic radiation. You should remember from your high school physics that electromagnetic radiation is the biggest physical phenomenon we have so far discovered. The radiation can be characterised in terms of its frequency, its wavelength and its energy. Thinking about it in terms of the energy of the radiation we have very low energy electromagnetic radiation called radio waves, infra-red radiation at a slightly higher energy, visible light at a higher energy still, then ultra-violet radiation and the higher energy forms of this radiation are called X-rays and gamma-rays. You should also remember that these radiations form what is called the Electromagnetic Spectrum. Before proceeding it is useful to pause for a moment to consider the difference between X-rays and gamma-rays. These two forms of radiation are high energy electromagnetic rays and are therefore virtually the same. The difference between them is not what they consist of but where they come from. In general we can say that if the radiation emerges from a nucleus it is called a gamma-ray and if it emerges from outside the nucleus from the electron cloud for example, it is called an X-ray. One final point is of relevance before we consider the different forms of gamma-decay and that is what such a high energy ray really is. It has been found in experiments that gamma-rays (and X-rays for that matter!) sometimes manifest themselves as waves and other times as particles. This wave-particle duality can be explained using the equivalence of mass and energy at the atomic level. When we describe a gamma ray as a wave it has been found useful to use terms such as frequency and wavelength just like any other wave. In addition when we describe a gamma ray as a particle we use terms such as mass and electric charge. Furthermore the term electromagnetic photon is used for these particles. The interesting feature about these photons however is that they have neither mass nor charge! There are two common forms of gamma decay: (a) Isomeric Transition - A nucleus in an excited state may reach its ground or unexcited state by the emission of a gamma-ray. - An example of this type of decay is that of technetium-99m - which by the way is the most common radioisotope used for diagnostic purposes today in medicine. The reaction can be expressed as: - Here a nucleus of technetium-99 is in an excited state, that is, it has excess energy. The excited state in this case is called a metastable state and the nucleus is therefore called technetium-99m (m for metastable). The excited nucleus looses its excess energy by emitting a gamma-ray to become technetium-99. (b) Internal Conversion - Here the excess energy of an excited nucleus is given to an atomic electron, e.g. a K-shell electron. Decay Schemes Decay schemes are widely used to give a visual representation of radioactive decay. A scheme for a relatively straight-forward decay is shown below: This scheme is for hydrogen-3 which decays to helium-3 with a half-life of 12.3 years through the emission of a beta-minus particle with an energy of 0.0057 MeV. 
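Before moving on to a more complicated scheme, the way each decay mode changes the Mass Number and Atomic Number can be summarised in a short Python sketch. This is an illustration of the rules stated above, not part of the original text; the atomic numbers used for the worked examples are standard values.

```python
# Minimal bookkeeping sketch of how each decay mode changes (A, Z).

DELTAS = {                        # mode: (change in A, change in Z)
    "alpha":               (-4, -2),
    "beta-minus":          ( 0, +1),
    "beta-plus":           ( 0, -1),
    "electron capture":    ( 0, -1),
    "isomeric transition": ( 0,  0),   # gamma emission only
}

def daughter(A, Z, mode):
    dA, dZ = DELTAS[mode]
    return A + dA, Z + dZ

examples = [
    ("uranium-238",    (238, 92), "alpha"),               # -> thorium-234
    ("iodine-131",     (131, 53), "beta-minus"),          # -> xenon-131
    ("sodium-22",      ( 22, 11), "beta-plus"),           # -> neon-22
    ("iron-55",        ( 55, 26), "electron capture"),    # -> manganese-55
    ("technetium-99m", ( 99, 43), "isomeric transition"), # -> technetium-99
]

for name, (A, Z), mode in examples:
    print(f"{name:15s} (A={A}, Z={Z}) --{mode}--> {daughter(A, Z, mode)}")
```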
A scheme for a more complicated decay is that of caesium-137. This isotope can decay through through two beta-minus processes. In one which occurs in 5% of disintegrations a beta-minus particle is emitted with an energy of 1.17 MeV to produce barium-137. In the second which occurs more frequently (in the remaining 95% of disintegrations) a beta-minus particle of energy 0.51 MeV is emitted to produce barium-137m - in other words a barium-137 nucleus in a metastable state. The barium-137m then decays via isomeric transition with the emission of a gamma-ray of energy 0.662 MeV. The general method used for decay schemes is illustrated in the diagram on the right. The energy is plotted on the vertical axis and atomic number on the horizontal axis - although these axes are rarely displayed in actual schemes. The isotope from which the scheme originates is displayed at the top - X in the case above. This isotope is referred to as the parent. The parent looses energy when it decays and hence the products of the decay referred to as daughters are plotted at a lower energy level. The diagram illustrates the situation for common forms of radioactive decay. Alpha-decay is illustrated on the left where the mass number is reduced by 4 and the atomic number is reduced by 2 to produce daughter A. To its right the scheme for beta-plus decay is shown to produce daughter B. The situation for beta-minus decay followed by gamma-decay is shown on the right side of the diagram where daughters C and D respectively are produced. Multiple Choice Questions Click here to access multiple choice questions on radioactive decay. External Links - Basics about Radiation - overview of the different types of ionising radiation from the Radiation Effects Research Foundation - a cooperative Japan-United States Research Organization which conducts research for peaceful purposes. - Radiation and Life - from the World Nuclear Association website. - Radiation and Radioactivity - a self-paced lesson developed by the University of Michigan's Student Chapter of the Health Physics Society, with sections on radiation, radioactivity, the atom, alpha radiation, beta radiation and gamma radiation. The Radioactive Decay Law We covered radioactive decay from a phenomenological perspective in the last chapter. In this chapter we consider the topic from a more general analytical perspective. The reason for doing this is so that we can develop a form of thinking which will help us to understand what is going on in a quantitative, mathematical sense. We will be introduced to concepts such as the Decay Constant and the Half Life as well as units used for the measurement of radioactivity. You will also have a chance to develop your understanding by being brought through three questions on this subject. The usual starting point in most forms of analysis in physics is to make some assumptions which simplify the situation. By simplifying the situation we can dispose of irrelevant effects which tend to complicate matters but in doing so we sometimes make the situation so simple that it becomes a bit too abstract and apparently hard to understand. For this reason we will try here to relate the subject of radioactive decay to a more common situation which we will use as an analogy and hopefully we will be able to overcome the abstract feature of the subject matter. The analogy we will use here is that of making popcorn. So think about putting some oil in a pot, adding the corn, heating the pot on the cooker and watching what happens. 
You might also like to try this out while considering the situation! For our radioactive decay situation we first of all consider that we have a sample containing a large number of radioactive nuclei all of the same kind. This is our unpopped corn in the pot for example. Secondly we assume that all of the radioactive nuclei decay by the same process be it alpha, beta or gamma-decay. In other words our unpopped corn goes pop at some stage during the heating process. Thirdly take a few moments to ponder on the fact that we can only really consider what is going on from a statistical perspective. If you look at an individual piece of corn, can you figure out when it is going to pop? Not really. You can however figure out that a large number of them will have popped after a period of time. But its rather more difficult to figure out the situation for an individual piece of corn. So instead of dealing with individual entities we consider what happens on a larger scale and this is where statistics comes in. We can say that the radioactive decay is a statistical one-shot process, that is when a nucleus has decayed it cannot repeat the process again. In other words when a piece of corn has popped it cannot repeat the process. Simple! In addition as long as a radioactive nucleus has not decayed the probability for it doing so in the next moment remains the same. In other words if a piece of corn has not popped at a certain time the chance of it popping in the next second is the same as in the previous second. The bets are even! Let us not push this popcorn analogy too far though in that we know that we can control the rate of popping by the heat we apply to the pot for example. However as far as our radioactive nuclei are concerned there is nothing we can do to control what is going on. The rate at which nuclei go pop (or decay, in other words) cannot be influenced by heating up the sample. Nor by cooling it for that matter or by putting it under greater pressures, by changing the gravitational environment by taking it out into space for instance, or by changing any other aspect of its physical environment. The only thing that determines whether an individual nucleus will decay seems to be the nucleus itself. But on the average we can say that it will decay at some stage. The Radioactive Decay Law Let us now use some symbols to reduce the amount of writing we have to do to describe what is going on and to avail ourselves of some mathematical techniques to simplify the situation even further than we have been able to do so far. Let us say that in the sample of radioactive material there are N nuclei which have not decayed at a certain time, t. So what happens in the next brief period of time? Some nuclei will decay for sure. But how many? On the basis of our reasoning above we can say that the number which will decay will depend on overall number of nuclei, N, and also on the length of the brief period of time. In other words the more nuclei there are the more will decay and the longer the time period the more nuclei will decay. Let us denote the number which will have decayed as dN and the small time interval as dt. So we have reasoned that the number of radioactive nuclei which will decay during the time interval from t to t+dt must be proportional to N and to dt. In symbols therefore: the minus sign indicating that N is decreasing. Turning the proportionality in this equation into an equality we can write: where the constant of proportionality, λ, is called the Decay Constant. 
Dividing across by N we can rewrite this equation as: So this equation describes the situation for any brief time interval, dt. To find out what happens for all periods of time we simply add up what happens in each brief time interval. In other words we integrate the above equation. Expressing this more formally we can say that for the period of time from t = 0 to any later time t, the number of radioactive nuclei will decrease from N0 to Nt, so that: This final expression is known as the Radioactive Decay Law. It tells us that the number of radioactive nuclei will decrease in an exponential fashion with time with the rate of decrease being controlled by the Decay Constant. Before looking at this expression in further detail let us review the mathematics which we used above. First of all we used integral calculus to figure out what was happening over a period of time by integrating what we knew would occur in a brief interval of time. Secondly we used a calculus relationship that the where ln x represents the natural logarithm of x. And thirdly we used the definition of logarithms that when Now, to return to the Radioactive Decay Law. The Law tells us that the number of radioactive nuclei will decrease with time in an exponential fashion with the rate of decrease being controlled by the Decay Constant. The Law is shown in graphical form in the figure below: The graph plots the number of radioactive nuclei at any time, Nt, against time, t. We can see that the number of radioactive nuclei decreases from N0 that is the number at t = 0 in a rapid fashion initially and then more slowly in the classic exponential manner. The influence of the Decay Constant can be seen in the following figure: All three curves here are exponential in nature, only the Decay Constant is different. Notice that when the Decay Constant has a low value the curve decreases relatively slowly and when the Decay Constant is large the curve decreases very quickly. The Decay Constant is characteristic of individual radionuclides. Some like uranium-238 have a small value and the material therefore decays quite slowly over a long period of time. Other nuclei such as technetium-99m have a relatively large Decay Constant and they decay far more quickly. It is also possible to consider the Radioactive Decay Law from another perspective by plotting the logarithm of Nt against time. In other words from our analysis above by plotting the expression: in the form Notice that this expression is simply an equation of the form y = mx + c where m = -l and c = ln N0. As a result it is the equation of a straight line of slope -l as shown in the following figure. Such a plot is sometimes useful when we wish to consider a situation without the complication of the direct exponential behaviour. Most of us have not been taught to think instinctively in terms of logarithmic or exponential terms even though many natural phenomena display exponential behaviours. Most of the forms of thinking which we have been taught in school are based on linear changes and as a result it is rather difficult for us to grasp the Radioactive Decay Law intuitively. For this reason an indicator is usually derived from the law which helps us think more clearly about what is going on. This indicator is called the Half Life and it expresses the length of time it takes for the radioactivity of a radioisotope to decrease by a factor of two. 
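Before considering the Half Life further, here is a minimal numerical sketch, written purely for illustration, that ties the statistical picture above (each surviving nucleus decays with the same probability λ·dt in every brief interval) to the exponential law just derived. The values of N0, λ, dt and the end time are arbitrary choices, not data from the text.

```python
# Minimal sketch: step-by-step decay with probability lam*dt per interval,
# compared with the Radioactive Decay Law N_t = N_0 * exp(-lam * t).

import math
import random

def simulate(N0, lam, dt, t_end, seed=7):
    """Each surviving nucleus decays with probability lam*dt in every interval dt."""
    rng = random.Random(seed)
    N, t = N0, 0.0
    while t < t_end - 1e-9:
        decays = sum(1 for _ in range(N) if rng.random() < lam * dt)
        N -= decays
        t += dt
    return N

N0, lam, t_end = 20_000, 0.5, 4.0
simulated = simulate(N0, lam, dt=0.01, t_end=t_end)
predicted = N0 * math.exp(-lam * t_end)           # the Radioactive Decay Law

print(f"simulated survivors : {simulated}")
print(f"decay-law prediction: {predicted:.0f}")
# The straight-line (semi-log) form: ln(N_t) = ln(N_0) - lam * t
print(f"ln(N_t) predicted   : {math.log(N0) - lam * t_end:.3f}")
```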
From a graphical point of view we can say that when: the time taken is the Half Life: Note that the half-life does not express how long a material will remain radioactive but simply the length of time for its radioactivity to halve. Examples of the half lives of some radioisotopes are given in the following table. Notice that some of these have a relatively short half life. These tend to be the ones used for medical diagnostic purposes because they do not remain radioactive for very long following administration to a patient and hence result in a relatively low radiation dose. |Radioisotope||Half Life (approx.)| |238U||4.51 x 109 years| But they do present a logistical problem when we wish to use them when there may not be a radioisotope production facility nearby. For example suppose we wish to use 99mTc for a patient study and the nearest nuclear facility for making this isotope is 5,000 km away. The production facility could be in Sydney and the patient could be in Perth for instance. After making the isotope at the nuclear plant it would be decaying with a half life of 6 hours. So we put the material on a truck and drive it to Sydney airport. The isotope would be decaying as the truck sits in Sydney traffic then decaying still more as it waits for a plane to take it to Perth. Then decaying more as it is flown across to Perth and so on. By the time it gets to our patient it will have substantially reduced in radioactivity possibly to the point of being useless for the patient's investigation. And what about the problem if we needed to use 81mKr instead of 99mTc for our patient? You will see in another chapter of this book that logistical challenges such as this have given rise to quite innovative solutions. More about that later! You can appreciate from the table above that other isotopes have a very long half lives. For example 226Ra has a half life of over 1,500 years. This isotope has been used in the past for therapeutic applications in medicine. Think about the logistical problems here. They obviously do not relate to transporting the material from the point of production to the point of use. But they relate to how the material is kept following its arrival at the point of use. We must have a storage facility so that the material can be kept safely for a long period of time. But for how long? A general rule of thumb for the quantities of radioactivity used in medicine is that the radioactivity will remain significant for about 10 half lives. So we would have to have a safe environment for storage of the 226Ra for about 16,000 years! This storage facility would have to be secure from many unforeseeable events such as earthquakes, bombing etc. and be kept in a manner which our children's, children's children can understand. A very serious undertaking indeed! Relationship between the Decay Constant and the Half Life On the basis of the above you should be able to appreciate that there is a relationship between the Decay Constant and the Half Life. For example when the Decay Constant is small the Half Life should be long and correspondingly when the Decay Constant is large the Half Life should be short. But what exactly is the nature of this relationship? We can easily answer this question by using the definition of Half Life and applying it to the Radioactive Decay Law. 
Once again the law tells us that at any time, t: and the definition of Half Life tells us that: We can therefore re-write the Radioactive Decay Law by substituting for Nt and t as follows: These last two equations express the relationship between the Decay Constant and the Half Life. They are very useful as you will see when solving numerical questions relating to radioactivity and usually form the first step in solving a numerical problem. Units of Radioactivity The SI or metric unit of radioactivity is named after Henri Becquerel, in honour of his discovery of radioactivity, and is called the becquerel with the symbol Bq. The becquerel is defined as the quantity of radioactive substance that gives rise to a decay rate of 1 decay per second. In medical diagnostic work 1 Bq is a rather small amount of radioactivity. Indeed it is easy to remember its definition if you think of it as a buggerall amount of radioactivity. For this reason the kilobecquerel (kBq) and megabecquerel (MBq) are more frequently used. The traditional unit of radioactivity is named after Marie Curie and is called the curie, with the symbol Ci. The curie is defined as the amount of radioactive substance which gives rise to a decay rate of 3.7 x 1010 decays per second. In other words 37 thousand, million decays per second which as you might appreciate is a substantial amount of radioactivity. For medical diagnostic work the millicurie (mCi) and the microcurie (µCi) are therefore more frequently used. Why two units? It in essence like all other units of measurement depends on what part of the world you are in. For example the kilometer is widely used in Europe and Australia as a unit of distance and the mile is used in the USA. So if you are reading an American textbook you are likely to find the curie used as the unit of radioactivity, if you are reading an Australian book it will most likely refer to becquerels and both units might be used if you are reading a European book. You will therefore find it necessary to know and understand both units. Multiple Choice Questions Click here to access an MCQ on the Radioactive Decay Law. Three questions are given below to help you develop your understanding of the material presented in this chapter. The first one is relatively straight-forward and will exercise your application of the Radioactive Decay Law as well as your understanding of the concept of Half Life. The second question is a lot more challenging and will help you relate the Radioactive Decay Law to the number of radioactive nuclei which are decaying in a sample of radioactive material. The third question will help you understand the approach used in the second question by asking a similar question from a slightly different perspective. (a) The half-life of 99mTc is 6 hours. After how much time will 1/16th of the radioisotope remain? (b) Verify your answer by another means. - (a) Starting with the relationship we established earlier between the Decay Constant and the Half Life we can calculate the Decay Constant as follows: - Now applying the Radioactive Decay Law, - we can re-write it in the form: - The question tells us that N0 has reduced to 1/16th of its value, that is: - which we need to solve for t. One way of doing this is as follows: - So it will take 24 hours until 1/16th of the radioactivity remains. - (b) A way in which this answer can be verified is by using the definition of Half Life. We are told that the Half Life of 99mTc is 6 hours. Therefore after six hours half of the radioactivity remains. 
- Therefore after 12 hours a quarter remains; after 18 hours an eighth remains and after 24 hours one sixteenth remains. And we arrive at the same answer as in part (a). So we must be right! - Note that this second approach is useful if we are dealing with relatively simple situations where the radioactivity is halved, quartered and so on. But supposing the question asked how long would it take for the radioactivity to decrease to a tenth of its initial value. Deduction from the definition of half life is rather more difficult in this case and the mathematical approach used for part (a) above will yield the answer more readily. Find the radioactivity of a 1 g sample of 226Ra given that t1/2: 1620 years and Avogadro's Number: 6.023 x 1023. - We can start the answer like we did with Question 1(a) by calculating the Decay Constant from the Half Life using the following equation: - Note that the length of a year used in converting from 'per year' to 'per second' above is 365.25 days to account for leap years. In addition the reason for converting to units of 'per second' is because the unit of radioactivity is expressed as the number of nuclei decaying per second. - Secondly we can calculate that 1 g of 226Ra contains: - Thirdly we need to express the Radioactive Decay Law in terms of the number of nuclei decaying per unit time. We can do this by differentiating the equation as follows: - The reason for expressing the result above in absolute terms is to remove the minus sign in that we already know that the number is decreasing. - We can now enter the data we derived above for λ and N: - So the radioactivity of our 1 g sample of radium-226 is approximately 1 Ci. - This is not a surprising answer since the definition of the curie was originally conceived as the radioactivity of 1 g of radium-226! What is the minimum mass of 99mTc that can have a radioactivity of 1 MBq? Assume the half-life is 6 hours and that Avogadro's Number is 6.023 x 1023. - Starting again with the relationship between the Decay Constant and the Half Life: - Secondly the question tells us that the radioactivity is 1 MBq. Therefore since 1 MBq = 1 x 106 decays per second, - Finally the mass of these nuclei can be calculated as follows: - In other words a mass of just over five picograms of 99mTc can emit one million gamma-rays per second. The result reinforces an important point that you will learn about radiation protection which is that you should treat radioactive materials just like you would handle pathogenic bacteria! Units of Radiation Measurement A Typical Radiation Situation A typical radiation set-up is shown in the figure below. Firstly there is a source of radiation, secondly a radiation beam and thirdly some material which absorbs the radiation. So the quantities which can be measured are associated with the source, the radiation beam and the absorber. This type of environment could be one where the radiation from the source is used to irradiate a patient (that is the absorber) for diagnostic purposes where we would place a device behind the patient for producing an image or for therapeutic purposes where the radiation is intended to cause damage to a specific region of a patient. It is also a situation where we as an absorber may be working with a source of radiation. The Radiation Source When the radiation source is a radioactive one the quantity that is typically measured is the radioactivity of the source. 
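Returning briefly to the three worked questions above, the following minimal Python sketch (an illustration, not part of the original text) reproduces their arithmetic: the decay constant from the half-life, the time for 99mTc to fall to 1/16th, the activity of 1 g of 226Ra, and the mass of 99mTc needed for 1 MBq. Avogadro's number 6.023 x 10^23, the 1620-year half-life and the 365.25-day year are the values used in the worked answers.

```python
import math

AVOGADRO = 6.023e23
SECONDS_PER_YEAR = 365.25 * 24 * 3600
CURIE = 3.7e10                                    # decays per second

# Question 1: time for 99mTc (half-life 6 hours) to fall to 1/16th of its activity.
lam_tc_per_hour = math.log(2) / 6.0               # decay constant, per hour
t = -math.log(1 / 16) / lam_tc_per_hour
print(f"Q1: 1/16th remains after {t:.0f} hours")  # 24 hours, i.e. four half-lives

# Question 2: activity of 1 g of 226Ra (half-life 1620 years), using A = lambda * N.
lam_ra = math.log(2) / (1620 * SECONDS_PER_YEAR)  # per second
N_ra = (1.0 / 226) * AVOGADRO                     # nuclei in 1 g
activity_ra = lam_ra * N_ra
print(f"Q2: 1 g of 226Ra -> {activity_ra:.2e} Bq = {activity_ra / CURIE:.2f} Ci")

# Question 3: mass of 99mTc (half-life 6 hours) giving an activity of 1 MBq.
lam_tc = math.log(2) / (6 * 3600)                 # per second
N_tc = 1e6 / lam_tc                               # nuclei needed, from A = lambda * N
mass_tc = (N_tc / AVOGADRO) * 99                  # grams, taking ~99 g per mole
print(f"Q3: 1 MBq of 99mTc is about {mass_tc * 1e12:.1f} picograms")
```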
We saw in the previous chapter that the units used to express radioactivity are the becquerel (SI unit) and the curie (traditional unit). The Radiation Beam The characteristic of a radiation beam that is typically measured is called the Radiation Exposure. This quantity expresses how much ionisation the beam causes in the air through which it travels. We will see in the following chapter that one of the major things that happens when radiation encounters matter is that ions are formed - air being the form of matter it encounters in this case. So the radiation exposure produced by a radiation beam is expressed in terms of the amount of ionisation which occurs in air. A straight-forward way of measuring such ionisation is to determine the amount of electric charge which is produced. You will remember from your high school physics that the SI unit of electric charge is the coulomb. The SI unit of radiation exposure is the coulomb per kilogram - and is given the symbol C kg-1. It is defined as the quantity of X- or gamma-rays such that the associated electrons emitted per kilogram of air at standard temperature and pressure (STP) produce ions carrying 1 coulomb of electric charge. The traditional unit of radiation exposure is the roentgen, named in honour of Wilhelm Roentgen (who discovered X-rays) and is given the symbol R. The roentgen is defined as the quantity of X- or gamma-rays such that the associated electrons emitted per kilogram of air at STP produce ions carrying 2.58 x 10-4 coulombs of electric charge. So 1 R is a small exposure relative to 1 C kg-1 - in fact it is 3,876 times smaller. Note that this unit is confined to radiation beams consisting of X-rays or gamma-rays. Often it is not simply the exposure that is of interest but the exposure rate, that is the exposure per unit time. The units which tend to be used in this case are the C kg-1 s-1 and the R hr-1. The Absorber Energy is deposited in the absorber when radiation interacts with it. It is usually quite a small amount of energy but energy nonetheless. The quantity that is measured is called the Absorbed Dose and it is of relevance to all types of radiation be they X- or gamma-rays, alpha- or beta-particles. The SI unit of absorbed dose is called the gray, named after a famous radiobiologist, LH Gray, and is given the symbol Gy. The gray is defined as the absorption of 1 joule of radiation energy per kilogram of material. So when 1 joule of radiation energy is absorbed by a kilogram of the absorber material we say that the absorbed dose is 1 Gy. The traditional unit of absorbed dose is called the rad, which supposedly stands for Radiation Absorbed Dose. It is defined as the absorption of 10-2 joules of radiation energy per kilogram of material. As you can figure out 1 Gy is equal to 100 rad. There are other quantities derived from the gray and the rad which express the biological effects of such absorbed radiation energy when the absorber is living matter - human tissue for example. These quantities include the Equivalent Dose, H, and the Effective Dose, E. The Equivalent Dose is based on estimates of the ionization capability of the different types of radiation which are called Radiation Weighting Factors, wR, such that where D is the absorbed dose. The Effective Dose includes wR as well as estimates of the sensitivity of different tissues called Tissue Weighting Factors, wT, such that where the summation, Σ, is over all the tissue types involved. 
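To make these dose quantities a little more concrete, here is a small illustrative Python sketch. The conversion factors (1 R = 2.58 x 10^-4 C/kg and 1 Gy = 100 rad) are from the text; the radiation and tissue weighting factors, the exposure and the absorbed dose values are purely hypothetical placeholders, not figures from the book.

```python
# Minimal sketch of exposure, absorbed dose, equivalent dose and effective dose.

R_TO_C_PER_KG = 2.58e-4      # 1 roentgen expressed in C/kg (from the text)
GY_TO_RAD = 100              # 1 gray = 100 rad (from the text)

def equivalent_dose(absorbed_dose_gy, w_r):
    """H = wR * D, in sieverts when D is in gray."""
    return w_r * absorbed_dose_gy

def effective_dose(equivalent_doses_sv, w_t):
    """E = sum over tissues of wT * H, in sieverts."""
    return sum(w_t[tissue] * h for tissue, h in equivalent_doses_sv.items())

exposure_r = 0.5                                   # hypothetical exposure in roentgen
print(f"{exposure_r} R = {exposure_r * R_TO_C_PER_KG:.2e} C/kg")
print(f"1 Gy = {GY_TO_RAD} rad")

D = 0.002                                          # hypothetical absorbed dose in gray
H = equivalent_dose(D, w_r=1.0)                    # wR = 1 assumed for gamma-rays here
E = effective_dose({"lung": H, "skin": H}, {"lung": 0.12, "skin": 0.01})
print(f"H = {H} Sv, E = {E:.6f} Sv")
```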
Both the Equivalent Dose and the Effective Dose are measured in derived SI units called sieverts (Sv). Let us pause here for a bit to ponder on the use of the term dose. It usually has a medical connotation in that we can say that someone had a dose of the 'flu, or that the doctor prescribed a certain dose of a drug. What has it to do with the deposition of energy by a beam of radiation in an absorber? It could have something to do with the initial applications of radiation in the early part of the 20th century when it was used to treat numerous diseases. As a result we can speculate that the term has stayed in the vernacular of the field. It would be much easier to use a term like absorbed radiation energy since we are talking about the deposition of energy in an absorber. But this might make the subject just a little too simple! Specific Gamma Ray Constant A final quantity is worth mentioning with regard to radiation units. This is the Specific Gamma-Ray Constant for a radioisotope. This quantity is an amalgam of the quantities we have already covered and expresses the exposure rate produced by the gamma-rays emitted from a radioisotope. It is quite a useful quantity from a practical viewpoint when we are dealing with a radioactive source which emits gamma-rays. Supposing you are using a gamma-emitting radioactive source (for example 99mTc or 137Cs) and you will be standing at a certain distance from this source while you are working. You most likely will be interested in the exposure rate produced by the source from a radiation safety point of view. This is where the Specific Gamma-Ray Constant comes in. It is defined as the exposure rate per unit activity at a certain distance from a source. The SI unit is therefore the and the traditional unit is the These units of measurement are quite cumbersome and a bit of a mouthful. It might have been better if they were named after some famous scientist so that we could call the SI unit 1 smith and the traditional unit 1 jones for example. But again things are not that simple! The Inverse Square Law Before we finish this chapter we are going to consider what happens as we move our absorber away from the radiation source. In other words we are going to think about the influence of distance on the intensity of the radiation beam. You will find that a useful result emerges from this that has a very important impact on radiation safety. The radiation produced in a radioactive source is emitted in all directions. We can consider that spheres of equal radiation intensity exist around the source with the number of photons/particles spreading out as we move away from the source. Consider an area on the surface of one of these spheres and assume that there are a certain number of photons/particles passing though it. If we now consider a sphere at a greater distance from the source the same number of photons/particles will now be spread out over a bigger area. Following this line of thought it is easy to appreciate that the radiation intensity, I will decrease with the square of the distance, r from the source, i.e. This effect is known as the Inverse Square Law. As a result if we double the distance from a source, we reduce the intensity by a factor of two squared, that is 4. If we triple the distance the intensity is reduced by a factor of 9, that is three squared, and so on. This is a very useful piece of information if you are working with a source of radiation and are interested in minimising the dose of radiation you will receive. 
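As a quick numerical illustration of the Inverse Square Law (an added sketch, not from the original text), the following few lines of Python show the factor-of-4 and factor-of-9 reductions mentioned above; the reference intensity of 100 arbitrary units at 1 m is made up for the example.

```python
# Minimal sketch of the Inverse Square Law: intensity falls off as 1/r^2.

def intensity(r, I_ref=100.0, r_ref=1.0):
    """Intensity at distance r, given intensity I_ref at reference distance r_ref."""
    return I_ref * (r_ref / r) ** 2

for r in (1, 2, 3, 4):
    print(f"r = {r} m -> intensity = {intensity(r):6.2f} (reduced by a factor of {r * r})")
```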
External Links - Radiation and Risk - covers the effect of radiation, how risks are determined, comparison of radiation with other risks and radiation doses. - Radiation Effects Overview - results of studies of victims of nuclear bombs including early effects on survivors, effects on the in utero exposed, and late effects on the survivors - from the Radiation Effects Research Foundation, a cooperative Japan-United States Research Organization. - The Radiation and Health Physics Home Page - all you ever wanted to know about radiation but were afraid to ask....with hundreds of WWW links - from the Student Chapter of the Health Physics Society, University of Michigan containing sections on general information, regulatory Information, professional organizations and societies, radiation specialties, health physics research and education. - What You Need to Know about Radiation - to protect yourself to protect your family to make reasonable social and political choices - covers sources of radiation and radiation protection - by Lauriston S. Taylor. Interaction of Radiation with Matter We have focussed in previous chapters on the source of radiation and the types of radiation. We are now in a position to consider what happens when this radiation interacts with matter. Our main reason for doing this is to find out what happens to the radiation as it passes through matter and also to set ourselves up for considering how it interacts with living tissue and how to detect radiation. Since all radiation detectors are made from some form of matter it is useful to first of all know how radiation interacts so that we can exploit the effects in the design of such detectors in subsequent chapters of this wikibook. Before we do this let us first remind ourselves of the physical characteristics of the major types of radiation. We have covered this information in some detail earlier and it is summarised in the table below for convenience. We will now consider the passage of each type of radiation through matter with most attention given to gamma-rays because they are the most common type used in nuclear medicine. One of the main effects that you will notice irrespective of the type of radiation is that ions are produced when radiation interacts with matter. It is for this reason that it is called ionizing radiation. Before we start though you might find an analogy useful to help you with your thinking. This analogy works on the basis of thinking about matter as an enormous mass of atoms (that is nuclei with orbiting electrons) and that the radiation is a particle/photon passing through this type of environment. So the analogy to think about is a spaceship passing through a meteor storm like you might see in a science-fiction movie where the spaceship represents the radiation and the meteors represent the atoms of the material through which the radiation is passing. One added feature to bring on board however is that our spaceship sometimes has an electric charge depending on the type of radiation it represents. Alpha Particles We can see from the table above that alpha-particles have a double positive charge and we can therefore easily appreciate that they will exert considerable electrostatic attraction on the outer orbital electrons of atoms near which they pass. The result is that some electrons will be attracted away from their parent atoms and that ions will be produced. In other words ionizations occur. 
We can also appreciate from the table that alpha-particles are quite massive relative to the other types of radiation and also to the electrons of atoms of the material through which they are passing. As a result they travel in straight lines through matter except for rare direct collisions with nuclei of atoms along their path. A third feature of relevance here is the energy with which they are emitted. This energy in the case of alpha-particles is always distinct. For example 221Ra emits an alpha-particle with an energy of 6.71 MeV. Every alpha-particle emitted from this radionuclide has this energy. Another example is 230U which emits three alpha-particles with energies of 5.66, 5.82, 5.89 MeV. Finally it is useful to note that alpha-particles are very damaging biologically and this is one reason why they are not used for in-vivo diagnostic studies. We will therefore not be considering them in any great detail in this wikibook. Beta Particles We can see from the table that beta-particles have a negative electric charge. Notice that positrons are not considered here since as we noted in chapter 2 these particles do not last for very long in matter before they are annihilated. Beta-minus particles last considerably longer and are therefore the focus of our attention here. Because of their negative charge they are attracted by nuclei and repelled by electron clouds as they pass through matter. The result once again without going into great detail is ionization. The path of beta-particles in matter is often described as being tortuous, since they tend to ricochet from atom to atom. A final and important point to note is that the energy of beta-particles is never found to be distinct in contrast to the alpha-particles above. The energies of the beta-particles from a radioactive source forms a spectrum up to a maximum energy - see figure below. Notice from the figure that a range of energies is present and features such as the mean energy, Emean, or the maximum energy, Emax, are quoted. The question we will consider here is: why should a spectrum of energies be seen? Surely if a beta-particle is produced inside a nucleus when a neutron is converted into a proton, a single distinct energy should result. The answer lies in the fact that two particles are actually produced in beta-decay. We did not cover this in our treatment in chapter 2 for fear of complicating things too much at that stage of this wikibook. But we will cover it here briefly for the sake of completeness. The second particle produced in beta-decay is called a neutrino and was named by Enrico Fermi. It is quite a mysterious particle possessing virtually no mass and carrying no charge, though we are still researching its properties today. The difficulty with them is that they are very hard to detect and this has greatly limited our knowledge about them so far. The beta-particle energy spectrum can be explained by considering that the energy produced when a neutron is converted to a proton is shared between the beta-particle and the anti-neutrino. Sometimes all the energy is given to the beta-particle and it receives the maximum energy, Emax. But more often the energy is shared between them so that for example the beta-particle has the mean energy, Emean and the neutrino has the remainder of the energy. Finally it is useful to note that beta-particles are quite damaging biologically and this is one reason why they are not used for in-vivo diagnostic studies. We will therefore not consider them in any great detail in this wikibook. 
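A minimal Python sketch of the energy-sharing argument above: the maximum energy used here is a made-up illustrative value, and whatever energy the beta-particle does not carry away is taken by the (anti)neutrino.

```python
E_MAX_KEV = 600.0  # hypothetical maximum beta energy (Emax), for illustration only

def antineutrino_energy(e_beta_kev, e_max_kev=E_MAX_KEV):
    """Energy carried by the antineutrino when the decay energy is shared
    between it and the beta-particle."""
    if not 0.0 <= e_beta_kev <= e_max_kev:
        raise ValueError("beta energy must lie between 0 and Emax")
    return e_max_kev - e_beta_kev

# At one extreme the beta-particle takes all the energy (Emax); more often
# the energy is shared between the two particles.
for e_beta in (E_MAX_KEV, 200.0, 0.0):
    print(f"beta: {e_beta:5.1f} keV -> antineutrino: {antineutrino_energy(e_beta):5.1f} keV")
```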
Gamma Rays

Since we have been talking about energies above, let us first note that the energies of gamma-rays emitted from a radioactive source are always distinct. For example 99mTc emits gamma-rays which all have an energy of 140 keV and 51Cr emits gamma-rays which have an energy of 320 keV.

Gamma-rays have many modes of interaction with matter. Those which have little or no relevance to nuclear medicine imaging will not be described here. Those which are very important to nuclear medicine imaging are the Photoelectric Effect and the Compton Effect. We will consider each of these in turn below. Note that the effects described here are also of relevance to the interaction of X-rays with matter since as we have noted before X-rays and gamma-rays are essentially the same entities. So the treatment below is also of relevance to radiography.

Photoelectric Effect
- When a gamma-ray collides with an orbital electron of an atom of the material through which it is passing it can transfer all its energy to the electron and cease to exist - see figure below. On the basis of the Principle of Conservation of Energy we can deduce that the electron will leave the atom with a kinetic energy equal to the energy of the gamma-ray less that of the orbital binding energy. This electron is called a photoelectron.
- Note that an ion results when the photoelectron leaves the atom. Also note that the gamma-ray energy is totally absorbed in the process.
- Two further points should also be noted. Firstly the photoelectron can cause ionisations along its track in a similar manner to a beta-particle. Secondly X-ray emission can occur when the vacancy left by the photoelectron is filled by an electron from an outer shell of the atom. Remember that we came across this type of feature before when we dealt with Electron Capture in chapter 2.

Compton Effect
- This type of effect is somewhat akin to a cue ball hitting a coloured ball on a pool table. Here a gamma-ray transfers only part of its energy to a valence electron which is essentially free - see figure below. Notice that the electron leaves the atom and may act like a beta-particle and that the gamma-ray deflects off in a different direction to that with which it approached the atom. This deflected or scattered gamma-ray can undergo further Compton Effects within the material.
- Note that this effect is sometimes called Compton Scattering.

The two effects we have just described give rise to both absorption and scattering of the radiation beam. The overall effect is referred to as attenuation of gamma-rays. We will investigate this feature from an analytical perspective in the following chapter. Before we do so, we'll briefly consider the interaction of radiation with living matter.

Radiation Biology

It is well known that exposure to ionizing radiation can result in damage to living tissue. We've already described the initial atomic interactions. What's important in radiation biology is that these interactions may trigger complex chains of biomolecular events and consequent biological damage.

We've seen above that the primary means by which ionizing radiations lose their energy in matter is by ejection of orbital electrons. The loss of orbital electrons from the atom leaves it positively charged. Other interaction processes lead to excitation of the atom rather than ionization.
Here, an outer valence electron receives sufficient energy to overcome the binding energy of its shell and moves further away from the nucleus to an orbit that is not normally occupied. This type of effect alters the chemical force that binds atoms into molecules and a regrouping of the affected atoms into different molecular structures can result. That is, excitation is an indirect method of inducing chemical change through the modification of individual atomic bonds. Ionizations and excitations can give rise to unstable chemical species called free radicals. These are atoms and molecules in which there are unpaired electrons. They are chemically very reactive and seek stability by bonding with other atoms and molecules. Changes to nearby molecules can arise because of their production. But, let's go back to the interactions themselves for the moment..... In the case of X- and gamma-ray interactions, the energy of the photons is usually transferred by collisions with orbital electrons, e.g. via photoelectric and Compton effects. These radiations are capable of penetrating deeply into tissue since their interactions depend on chance collisions with electrons. Indeed, nuclear medicine imaging is only possible when the energy of the gamma-rays is sufficient for complete emission from the body, but low enough to be detected. The interaction of charged particles (e.g. alpha and beta particles), on the other hand, can be by collisions with atomic electrons and also via attractive and repulsive electrostatic forces. The rate at which energy is lost along the track of a charged particle depends therefore on the square of the charge on that particle. That is, the greater the particle charge, the greater the probability of it generating ion pairs along its track. In addition, a longer period of time is available for electrostatic forces to act when a charged particle is moving slowly and the ionization probability is therefore increased as a result. The situation is illustrated in the following figure where tracks of charged particles in water are depicted. Notice that the track of the relatively massive α-particle is a straight line, as we've discussed earlier in this chapter, with a large number of interactions (indicated by the asterisks) per unit length. Notice also that the tracks for electrons are tortuous, as we've also discussed earlier, and that the number of interactions per unit length is considerably less. The Linear Energy Transfer (LET) is defined as the energy released per unit length of the track of an ionizing particle. A slowly moving, highly charged particle therefore has a substantially higher LET than a fast, singly charged particle. An alpha particle of 5 MeV energy and an electron of 1 MeV energy have LETs, for instance, of 95 and 0.25 keV/μm, respectively. The ionization density and hence the energy deposition pattern associated with the heavier charged particle is very much greater than that arising from electrons, as illustrated in the figure above. The energy transferred along the track of a charged particle will vary because the velocity of the particle is likely to be continuously decreasing. Each interaction removes a small amount of energy from the particle so that the LET gradually increases along a particle track with a dramatic increase (called the Bragg Peak) occurring just before the particle comes to rest. 
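To put the LET figures just quoted into perspective, here is a small Python sketch comparing the energy deposited by the two particles over a short stretch of track; the 10 μm track segment is an arbitrary choice made purely for illustration.

```python
# LET values quoted above, in keV per micrometre of track.
LET_ALPHA = 95.0      # 5 MeV alpha particle
LET_ELECTRON = 0.25   # 1 MeV electron

segment_um = 10.0     # arbitrary track segment used for the comparison
print(f"alpha deposits ~{LET_ALPHA * segment_um:.0f} keV over {segment_um:g} um")
print(f"electron deposits ~{LET_ELECTRON * segment_um:.1f} keV over {segment_um:g} um")
print(f"ionization density ratio: ~{LET_ALPHA / LET_ELECTRON:.0f} times greater for the alpha particle")
```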
The International Commission on Radiation Units and Measurements (ICRU) suggests that lineal energy is a better indicator of relative biological effectiveness (RBE) than LET. Although lineal energy has the same units as LET (e.g. keV/μm), it is defined as the energy deposited in a small volume of tissue by a single energy-deposition event divided by the mean chord length of that volume.

Since the microscopic deposition of energy may be quite anisotropic, lineal energy should be a more appropriate measure of potential damage than that of LET. The ICRU and the ICRP have accordingly recommended that the radiation effectiveness of a particular radiation type should be based on lineal energy in a 1 μm diameter sphere of tissue. The lineal energy can be calculated for any given radiation type and energy and a Radiation Weighting Factor (wR) can then be determined based on the integrated values of lineal energy along the radiation track.

All living things on this planet have been exposed to ionizing radiation since the dawn of time. The current situation for humans is summarized in the following table:

(Table: sources of natural background radiation - radon and other gases, together with the other natural contributors - and their typical effective doses in mSv/year.)

The sum total of this Natural Background Radiation is about 2.5 mSv per year, with large variations depending on altitude and dietary intake as well as geological and geographical location.

It's generally considered that repair mechanisms exist in living matter and that these can be invoked following radiation damage at the biomolecular level. These mechanisms are likely to have an evolutionary basis arising as a response to radiation fluxes generated by natural background sources over the aeons.

It's also known that quite considerable damage to tissues can arise at higher radiation fluxes, even at the exposure levels used in medicine. Cell death and transformations to malignant states can result, leading to latent periods of many years before clinical signs of cancer or leukemia, for instance, become manifest. Further treatment of this vast field of radiation biology however is beyond our scope here.

Practical Radiation Safety

Since nuclear medicine involves handling substantial quantities of radioactive materials a radiation hazard arises for the user. Although this risk may be small, it remains important to keep occupational exposures as low as reasonably achievable (i.e. the ALARA principle). There are a number of practices that can be adopted to aid in achieving this aim, which include:
- Maintaining a comprehensive record of all radioactive source purchases, usage, movement and storage.
- Storage of radioactive sources in a secure shielded environment. Specially dedicated facilities are required for the storage, safe handling, manipulation and dispensing of unsealed radioactive sources. Storage areas should be designed for both bulk radioisotope and radioactive waste. Furthermore, radioactive patients should be regarded as unsealed sources.
- Adequate ventilation of any work area. This is particularly important to minimize the inhalation of Technegas and potentially volatile radioisotopes such as I-125 and I-131. It is preferable to use fume hoods when working with volatile materials.
- Ensuring that any Codes of Safe Practice are adhered to and developing sensible written protocols and working rules for handling radioisotopes.
- Benches should be manufactured with smooth, hard, impervious surfaces with appropriate splash-backs to allow ready decontamination following any spillage of radioisotopes. Laboratory work should be performed in stainless steel trays lined with absorbent paper.
- Protocols for dealing with minor contamination incidents of the environment or of staff members must be established. Remember that no matter how good work practices are, minor accidents or incidents involving spillage of radioisotopes can take place. - Excretion of radioactive materials by patients may be via faeces, urine, saliva, blood, exhaled breath or the skin. Provision to deal with any or all of these potential pathways for contamination must be made. - Provision for collection and possible storage of both liquid and solid radioactive waste may be necessary in some circumstances. Most short-lived, water soluble liquid waste can be flushed into the sewers but longer lived isotopes such as I-131 may have to be stored for decay. Such waste must be adequately contained and labelled during storage. - Ensure that appropriate survey monitors are available to determine if any contamination has occurred and to assist in decontamination procedures. Routine monitoring of potentially contaminated areas must be performed. - Ensure that all potentially exposed staff are issued with individual personnel monitors. - Protective clothing such as gowns, smocks, overboots and gloves should be provided and worn to prevent contamination of the personnel handling the radioactivity. In particular, gloves must be worn when administering radioactive materials orally or intravenously to patients. It should be noted that penetration of gloves may occur when handling some iodine compounds so that wearing a second pair of gloves is recommended. In any event, gloves should be changed frequently and discarded ones treated as radioactive waste. - Eating and drinking of food, smoking, and the application of cosmetics is prohibited in laboratories in which unsealed sources are utilized. - Mouth pipetting of any radioactive substance is totally prohibited. - Precautions should be taken to avoid punctures, cuts, abrasions and any other open skin wounds which otherwise might allow egress of radiopharmaceuticals into the blood stream. - Always ensure that there is a net benefit resulting from the patient procedure. Can the diagnosis or treatment be made by recourse to an alternative means using non ionizing radiation? - A useful administrative practice is to implement a program of routine and or random laboratory audits, to establish that safe work practices are being adhered to. - Ensure that all staff, including physicians, technologists, nurses and interns and other students, who are involved in the practice of nuclear medicine receive the relevant level of training and education appropriate to their assigned tasks. The training program could be in the form of seminars, refresher courses and informal tutorials. - A substantive Quality Assurance (QA) program should be implemented to ensure that the function of the Dose Calibrator, Gamma Camera, computer and other ancillary equipment is optimized. - Finally, it is worth reiterating, that attempts should be made to minimize the activity used in all procedures. Remember that the more activity you use the more radiation exposure you will receive. The potential hazards to staff in nuclear medicine include: - Milking the 99mTc/99Mo generator, drawing up and measuring the amount of radioisotope prior to administration. - Delivering the activity to the patient by injection or other means and positioning the now radioactive patient in the imaging device. 
- Removing the patients from the imaging device and returning them to the ward where they may continue to represent a radiation hazard for some time. For Tc-99m, a short-lived radionuclide the hazard period will be only a few hours but for therapeutic isotopes the hazardous period may be several days. - Disposal of radioactive waste including body fluids, such as blood and urine, but also swabs, syringes, needles, paper towels etc. - Cleaning up the imaging area after the procedure. The table below lists the dose rates from patients having nuclear medicine examinations. In general, the hazards from handling or dealing with radioactive patients arise in two parts: - External hazard: This will be the case when the radioisotope emits penetrating γ-rays. Usually, this hazard can be minimised by employing shielding and sensible work practices. - Radioactive contamination: This is potentially of more concern as it may lead to the inhalation or ingestion of radioactive material by staff. Possible sources of contamination are radioactive blood, urine and saliva, emanating from a patient, or airborne radioactive vapour. Again, sensible work practices, which involve high levels of personal hygiene, should ensure that contamination is not a major issue. One of the most common nuclear medicine diagnostic procedures is the bone scan using the isotope Tc-99m. The exposure rate at 1 metre from a typical patient will peak at approximately 3 μSv per hour immediately after injection dropping steadily because of radioactivity decay and through excretion so that after 2 hours it will be about 1.5 μSv per hour. Neglecting any further excretion, the total exposure received by an individual, should that person stand one meter from the patient for the whole of the first 24 hours, would be ~17 μSv. For a person at 3 meters from the patient this number would reduce to 1.7 μSv and for a distance of 5 metres it would be ~0.7 μSv. These values have been estimated on the basis of the inverse square law. One point to make is that these figures assume no reduction in exposure caused by intervening attenuation such as building materials. Also note that excretion is in fact quite important in terms of patients clearing radioactivity from their body. Patients should be encouraged to drink substantial quantities of liquid following their scan, as this will improve excretion and aid in minimizing not only their radiation dose but also that of nursing staff. In the context of the above discussion it is informative to digress and address the issue as to whether or not it is safe to radiograph a patient if the latter has just been injected for a bone scan. Whilst this practice is not recommended in view of the ALARA principle, there are situations when such an eventuality may occur through no fault of any particular person. Two quite separate concerns are frequently raised. Firstly, claims are made that the γ-rays emanating from the radioactive patient will degrade image quality and, secondly, that these same γ-rays represent a substantial risk to the radiographer. From a consideration of the numbers above, these concerns are not warranted, as in the few minutes that the patient may be close to the imaging device and/or the radiographer, the total absorbed dose received by either is going to be a fraction of a μGy. 
Even, in the potentially more serious scenario, where a sonographer may be required to perform an occasional abdominal ultrasound examination on a radioactive patient, the radiation dose received by the sonographer is still negligible in the context of the natural background dose he or she receives. Contamination is one of the more insidious sources of accidental exposure as it can arise from innocuous events such as handling a tap, the telephone or power switch with contaminated gloves. However, the major source of hazard in nuclear medicine or laboratory departments relates to the potential for the ingestion or inhalation of radionuclides such as I-131 that will target specific organs. Any ingested or inhaled iodine will readily accumulate in the thyroid leading to an unnecessary and avoidable thyroid absorbed dose. Although the immediate administration of stable iodine or equivalent (KI, KCLO4 or Lugols solution) may help minimize this uptake after a suspected uptake or misadventure, the real concern in those instances is when there is no awareness or knowledge of the uptake. Accordingly, high levels of personal hygiene must be practised in radioisotope laboratories. The latter means wearing gloves, aprons or smocks and masks when handling radionuclides. Hands should be washed thoroughly and often. Eating of food or drinking in clinical areas and laboratory areas is strictly forbidden. Decontamination facilities such as an emergency shower facility must be available and bench top and personal monitoring for contamination is also indicated. Decontamination usually involves little more than thorough washing of the skin surface with an appropriate soap or detergent. Care must be taken to avoid abrading the skin as this may allow direct entry of radioactivity into the blood stream. Invariably, the radioisotopes used are highly penetrating so that it is rarely practicable in nuclear medicine departments for workers to protect themselves by continuously wearing very thick lead aprons for hours on end. When milking the generator, handling high specific activity sources or conducting some patient examinations it may be practicable to do so, but this would not be the normal situation. Note that the HVL for the 140 keV γ-ray of Tc-99m is ~0.25 mm of lead so that the commercially available lead aprons are of limited usefulness. Nevertheless, a number of safety measures based on shielding can be implemented, such as: - Some dose reduction, most particularly to the fingers, can be achieved by using lead syringe shields during the drawing up and injection phase. - Radioisotopes should be stored in lead pots (except ß emitters). Specifically, high activity sources such as 99mTc/99Mo generators, should be located behind lead brick walls or stored in lead safes. Known beta emitters should be shielded in the first instance by low atomic number materials such as perspex to minimize the production of bremsstrahlung radiation. Bremsstrahlung radiation arises when highly energetic charged particles are stopped in high atomic number materials. They are in fact X-rays and as such are relatively penetrating but a thin outer shield of lead will protect against the minimal levels of bremsstrahlung produced in the perspex. Access to areas where such sources are stored should be restricted. Finally, the figure below encapsulates the practice of radiation safety in a nut shell. 
Attenuation of Gamma-Rays

We covered the interaction of gamma-rays with matter from a descriptive viewpoint in the previous chapter and we saw that the Compton and Photoelectric Effects were the major mechanisms. We will consider the subject again here but this time from an analytical perspective. This will allow us to develop a more general understanding of the phenomenon.

Note that the treatment here also refers to the attenuation of X-rays since, as we noted before, gamma-rays and X-rays are essentially the same physical entities.

Our treatment begins with a description of a simple radiation experiment which can be performed easily in the laboratory and which many of the early pioneers in this field did. We will then build on the information obtained from such an experiment to develop a simple equation and some simple concepts which will allow us to generalise to any attenuation situation.

Attenuation Experiment

The experiment is quite simple. It involves firing a narrow beam of gamma-rays at a material and measuring how much of the radiation gets through. We can vary the energy of the gamma-rays we use and the type of absorbing material as well as its thickness and density.

The experimental set-up is illustrated in the figure below. We refer to the intensity of the radiation which strikes the absorber as the incident intensity, I0, and the intensity of the radiation which gets through the absorber as the transmitted intensity, Ix. Notice also that the thickness of the absorber is denoted by x.

From what we covered in the previous chapter we can appreciate that some of the gamma-rays will be subjected to interactions such as the Photoelectric Effect and the Compton Effect as they pass through the absorber. The transmitted gamma-rays will in the main be those which pass through without any interactions at all.

We can therefore expect to find that the transmitted intensity will be less than the incident intensity, that is:

Ix < I0

But by how much, you might ask. Before we consider this let us denote the difference between Ix and I0 as ∆I, that is:

∆I = I0 - Ix

Effect of Atomic Number
- Let us start exploring the magnitude of ∆I by placing different absorbers in turn in the radiation beam. What we would find is that the magnitude of ∆I is highly dependent on the atomic number of the absorbing material. For example we would find that ∆I would be quite low in the case of an absorber made from carbon (Z=6) and very large in the case of lead (Z=82).
- We can gain an appreciation of why this is so from the following figure:
- The figure illustrates a high atomic number absorber by the large circles which represent individual atoms and a low atomic number material by smaller circles. The incident radiation beam is represented by the arrows entering each absorber from the left. Notice that the atoms of the high atomic number absorber present larger targets for the radiation to strike and hence the chances for interactions via the Photoelectric and Compton Effects are relatively high. The attenuation should therefore be relatively large.
- In the case of the low atomic number absorber however the individual atoms are smaller and hence the chances of interactions are reduced. In other words the radiation has a greater probability of being transmitted through the absorber and the attenuation is consequently lower than in the high atomic number case.
- With respect to our spaceship analogy used in the previous chapter the atomic number can be thought of as the size of individual meteors in the meteor cloud.
- If we were to precisely control our experimental set-up and carefully analyse our results we would find that:

∆I ∝ Z³

- Therefore if we were to double the atomic number of our absorber we would increase the attenuation by a factor of two cubed, that is 8, if we were to triple the atomic number we would increase the attenuation by a factor of 27, that is three cubed, and so on.
- It is for this reason that high atomic number materials (e.g. Pb) are used for radiation protection.

Effect of Density
- A second approach to exploring the magnitude of ∆I is to see what happens when we change the density of the absorber. We can see from the following figure that a low density absorber will give rise to less attenuation than a high density absorber since the chances of an interaction between the radiation and the atoms of the absorber are relatively lower.
- So in our analogy of the spaceship entering a meteor cloud think of meteor clouds of different density and the chances of the spaceship colliding with a meteor.

Effect of Thickness
- A third factor which we could vary is the thickness of the absorber. As you should be able to predict at this stage the thicker the absorber the greater the attenuation.

Effect of Gamma-Ray Energy
- Finally in our experiment we could vary the energy of the gamma-ray beam. We would find without going into it in any great detail that the greater the energy of the gamma-rays the less the attenuation. You might like to think of it in terms of the energy with which the spaceship approaches the meteor cloud and the likelihood of a slow spaceship getting through as opposed to a spaceship travelling with a higher energy.

Mathematical Model

We will consider a mathematical model here which will help us to express our experimental observations in more general terms. You will find that the mathematical approach adopted and the result obtained are quite similar to what we encountered earlier with Radioactive Decay. So you will not have to plod your way through any new maths below, just a different application of the same form of mathematical analysis!

Let us start quite simply and assume that we vary only the thickness of the absorber. In other words we use an absorber of the same material (i.e. same atomic number) and the same density and use gamma-rays of the same energy for the experiment. Only the thickness of the absorber is changed.

From our reasoning above it is easy to appreciate that the magnitude of ∆I should be dependent on the radiation intensity as well as the thickness of the absorber, that is for an infinitesimally small change in absorber thickness:

-dI ∝ I dx

the minus sign indicating that the intensity is reduced by the absorber. Turning the proportionality in this equation into an equality, we can write:

-dI = μ I dx

where the constant of proportionality, μ, is called the Linear Attenuation Coefficient. Dividing across by I we can rewrite this equation as:

-dI / I = μ dx

So this equation describes the situation for any tiny change in absorber thickness, dx. To find out what happens for the complete thickness of an absorber we simply add up what happens in each small thickness. In other words we integrate the above equation. Expressing this more formally we can say that for thicknesses from x = 0 to any other thickness x, the radiation intensity will decrease from I0 to Ix, so that:

ln(Ix / I0) = -μx, that is: Ix = I0 exp(-μx)

This final expression tells us that the radiation intensity will decrease in an exponential fashion with the thickness of the absorber with the rate of decrease being controlled by the Linear Attenuation Coefficient.
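A short Python sketch of the equation we have just derived; the attenuation coefficient and thicknesses used are arbitrary illustrative values, not data from the tables in this chapter.

```python
import math

def transmitted_intensity(i0, mu_per_cm, x_cm):
    """Exponential attenuation of a narrow beam: Ix = I0 exp(-mu x)."""
    return i0 * math.exp(-mu_per_cm * x_cm)

# Illustrative only: a beam of 1,000 units entering an absorber with mu = 0.5 cm^-1.
for x in (0.0, 1.0, 2.0, 5.0):
    print(f"x = {x:g} cm -> Ix = {transmitted_intensity(1000.0, 0.5, x):.0f}")
```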
The expression is shown in graphical form below. The graph plots the intensity against thickness, x. We can see that the intensity decreases from I0, that is the value at x = 0, in a rapid fashion initially and then more slowly in the classic exponential manner.

The influence of the Linear Attenuation Coefficient can be seen in the next figure. All three curves here are exponential in nature, only the Linear Attenuation Coefficient is different. Notice that when the Linear Attenuation Coefficient has a low value the curve decreases relatively slowly and when the Linear Attenuation Coefficient is large the curve decreases very quickly.

The Linear Attenuation Coefficient is characteristic of individual absorbing materials. Some like carbon have a small value and are easily penetrated by gamma-rays. Other materials such as lead have a relatively large Linear Attenuation Coefficient and are relatively good absorbers of radiation:

(Table: Linear Attenuation Coefficients, in cm⁻¹, of a range of absorbers at gamma-ray energies of 100 keV, 200 keV and 500 keV.)

The materials listed in the table above are air, water and a range of elements from carbon (Z=6) through to lead (Z=82) and their Linear Attenuation Coefficients are given for three gamma-ray energies. The first point to note is that the Linear Attenuation Coefficient increases as the atomic number of the absorber increases. For example it increases from a very small value of 0.000195 cm⁻¹ for air at 100 keV to almost 60 cm⁻¹ for lead. The second point to note is that the Linear Attenuation Coefficient for all materials decreases with the energy of the gamma-rays. For example the value for copper decreases from about 3.8 cm⁻¹ at 100 keV to 0.73 cm⁻¹ at 500 keV. The third point to note is that the trends in the table are consistent with the analysis presented earlier.

Finally it is important to appreciate that our analysis above is only strictly true when we are dealing with narrow radiation beams. Other factors need to be taken into account when broad radiation beams are involved.

Half Value Layer

As with using the Half Life to describe the Radioactive Decay Law, an indicator is usually derived from the exponential attenuation equation above which helps us think more clearly about what is going on. This indicator is called the Half Value Layer and it expresses the thickness of absorbing material which is needed to reduce the incident radiation intensity by a factor of two. From a graphical point of view we can say that when:

Ix = I0 / 2

the thickness of absorber is the Half Value Layer (HVL).

The Half Value Layer for a range of absorbers is listed in the following table for three gamma-ray energies:

(Table: Half Value Layers of a range of absorbers at gamma-ray energies of 100 keV, 200 keV and 500 keV.)

The first point to note is that the Half Value Layer decreases as the atomic number increases. For example the value for air at 100 keV is about 35 metres and it decreases to just 0.12 mm for lead at this energy. In other words 35 m of air is needed to reduce the intensity of a 100 keV gamma-ray beam by a factor of two whereas just 0.12 mm of lead can do the same thing. The second thing to note is that the Half Value Layer increases with increasing gamma-ray energy. For example from 0.18 cm for copper at 100 keV to about 1 cm at 500 keV. Thirdly note that relative to the data in the previous table there is a reciprocal relationship between the Half Value Layer and the Linear Attenuation Coefficient, which we will now investigate.
Relationship between μ and the HVL

As was the case with the Radioactive Decay Law, where we explored the relationship between the Half Life and the Decay Constant, a relationship can be derived between the Half Value Layer and the Linear Attenuation Coefficient. We can do this by using the definition of the Half Value Layer, that is Ix = I0/2 when x = HVL, and inserting it in the exponential attenuation equation, that is:

I0 / 2 = I0 exp(-μ HVL)

so that:

ln 2 = μ HVL, which gives HVL = 0.693 / μ and μ = 0.693 / HVL

These last two equations express the relationship between the Linear Attenuation Coefficient and the Half Value Layer. They are very useful as you will see when solving numerical questions relating to attenuation and frequently form the first step in solving a numerical problem.

Mass Attenuation Coefficient

We implied above that the Linear Attenuation Coefficient was useful when we were considering an absorbing material of the same density but of different thicknesses. A related coefficient can be of value when we wish to include the density, ρ, of the absorber in our analysis. This is the Mass Attenuation Coefficient which is defined as the Linear Attenuation Coefficient divided by the density of the absorber, that is μ/ρ.

The measurement unit used for the Linear Attenuation Coefficient in the table above is cm⁻¹, and a common unit of density is the g cm⁻³. You might like to derive for yourself on this basis that the cm² g⁻¹ is the equivalent unit of the Mass Attenuation Coefficient.

Two questions are given below to help you develop your understanding of the material presented in this chapter. The first one is relatively straight-forward and will exercise your application of the exponential attenuation equation. The second question is a lot more challenging and will help you relate exponential attenuation to radioactivity and radiation exposure.

How much aluminium is required to reduce the intensity of a 200 keV gamma-ray beam to 10% of its incident intensity? Assume that the Half Value Layer for 200 keV gamma-rays in Al is 2.14 cm.
- The question phrased in terms of the symbols used above is: what is x when Ix = 0.1 I0?
- We are told that the Half Value Layer is 2.14 cm. Therefore the Linear Attenuation Coefficient is μ = 0.693 / 2.14 cm = 0.324 cm⁻¹.
- Now combining all this with the exponential attenuation equation: Ix = I0 exp(-μx)
- we can write: 0.1 I0 = I0 exp(-0.324 x), so that ln(0.1) = -0.324 x, which gives x = 2.303 / 0.324 = 7.1 cm.
- So the thickness of aluminium required to reduce these gamma-rays by a factor of ten is about 7 cm. This relatively large thickness is the reason why aluminium is not generally used in radiation protection - its atomic number is not high enough for efficient and significant attenuation of gamma-rays.
- You might like to try this question for the case when Pb is the absorber - but you will need to find out the Half Value Layer for the 200 keV gamma-rays yourself!
- Here's a hint though: have a look at one of the tables above.
- And here's the answer for you to check when you've finished: 2.2 mm.
- In other words a relatively thin thickness of Pb is required to do the same job as 7 cm of aluminium.

A 10⁵ MBq source of 137Cs is to be contained in a Pb box so that the exposure rate 1 m away from the source is less than 0.5 mR/hour. If the Half Value Layer for 137Cs gamma-rays in Pb is 0.6 cm, what thickness of Pb is required? The Specific Gamma Ray Constant for 137Cs is 3.3 R hr⁻¹ mCi⁻¹ at 1 cm.
- This is a fairly typical question which arises when someone is using radioactive materials. We wish to use a certain quantity of the material and we wish to store it in a lead container so that the exposure rate when we are working a certain distance away is below some level for safety reasons. We know the radioactivity of the material we will be using. But it's quoted in SI units.
We look up a reference book to find out the exposure rate for this radioisotope and find that the Specific Gamma Ray Constant is quoted in traditional units. Just as in our question!
- So let us start by getting our units right. The Specific Gamma Ray Constant is given as: 3.3 R hr⁻¹ mCi⁻¹ at 1 cm.
- This is equal to: 3,300 mR hr⁻¹ mCi⁻¹ at 1 cm,
- which is equal to: 3,300 x (1/100)² = 0.33 mR hr⁻¹ mCi⁻¹ at 1 m,
- on the basis of the Inverse Square Law. This result expressed per becquerel is 0.33 / (3.7 x 10⁷) = 8.919 x 10⁻⁹ mR hr⁻¹ Bq⁻¹ at 1 m,
- since 1 mCi = 3.7 x 10⁷ Bq. And therefore for 10⁵ MBq, that is 10¹¹ Bq, the exposure rate is: 8.919 x 10⁻⁹ x 10¹¹ = 891.9 mR hr⁻¹.
- That is the exposure rate 1 metre from our source is 891.9 mR hr⁻¹.
- We wish to reduce this exposure rate according to the question to less than 0.5 mR hr⁻¹ using Pb.
- You should be able at this stage to use the exponential attenuation equation along with the Half Value Layer for these gamma-rays in Pb to calculate that the thickness of Pb required is about 6.5 cm.

External Links
- Mucal on the Web - an online program which calculates x-ray absorption coefficients - by Pathikrit Bandyopadhyay, The Center for Synchrotron Radiation Research and Instrumentation at the Illinois Institute of Technology.
- Tables of X-Ray Mass Attenuation Coefficients - a vast amount of data for all elements from the National Institute of Standards and Technology, USA.

Gas-Filled Radiation Detectors

We have learned in the last two chapters about how radiation interacts with matter and we are now in a position to apply our understanding to the detection of radiation. One of the major outcomes of the interaction of radiation with matter is the creation of ions as we saw in Chapter 5. This outcome is exploited in gas-filled detectors as you will see in this chapter. The detector in this case is essentially a gas, in that it is the atoms of a gas which are ionised by the radiation. We will see in the next chapter that solids can also be used as radiation detectors but for now we will deal with gases and be introduced to detectors such as the Ionization Chamber and the Geiger Counter.

Before considering these specific types of gas-filled detectors we will first of all consider the situation from a very general perspective.

Gas-Filled Detectors

As we noted above the radiation interacts with gas atoms in this form of detector and causes ions to be produced. On the basis of what we covered in Chapter 5 it is easy to appreciate that it is the Photoelectric and Compton Effects that cause the ionisations when the radiation consists of gamma-rays with energies useful for diagnostic purposes.

There are actually two particles generated when an ion is produced - the positive ion itself and an electron. These two particles are collectively called an ion pair. The detection of the production of ion pairs in the gas is the basis upon which gas detectors operate. The manner in which this is done is by using an electric field to sweep the electrons away to a positively charged electrode and the ions to a negatively charged electrode.

Let us consider a very simple arrangement as shown in the following figure. Here we have two electrodes with the gas between them - something like a capacitor with a gas dielectric. The gas which is used is typically an inert gas, for example argon or xenon. The reason for using an inert gas is so that chemical reactions will not occur within the gas following the ionisations which could change the characteristics of our detector. A dc voltage is placed between the two electrodes.
As a result when the radiation interacts with a gas atom the electron will move towards the positive electrode and the ion will move towards the negative electrode. But will these charges reach their respective electrodes? The answer is obviously dependent on the magnitude of the dc voltage. For example if at one extreme we had a dc voltage of a microvolt (that is, one millionth of a volt) the resultant electric field may be insufficient to move the ion pair very far and the two particles may recombine to reform the gas atom. At the other extreme suppose we applied a million volts between the two electrodes. In this case we are likely to get sparks flying between the two electrodes - a lightning bolt if you like - and our detector might act something like a neon sign. Somewhere in between these two extremes though we should be able to provide a sufficient attractive force for the ion and electron to move to their respective electrodes without recombination or sparking occurring. We will look at this subject in more detail below. Before we do let us see how the concept of the simple detector illustrated above is applied in practice. The gas-filled chamber is generally cylindrical in shape in real detectors. This shape has been found to be more efficient than the parallel electrode arrangement shown above. A cross-sectional view through this cylinder is shown in the following figure: The positive electrode consists of a thin wire running through the centre of the cylinder and the negative electrode consists of the wall of the cylinder. In principle we could make such a detector by getting a section of a metal pipe, mounting a wire through its centre, filling it with an inert gas and sealing the ends of the pipe. Actual detectors are a little bit more complex however but let us not get side-tracked at this stage. We apply a dc voltage via a battery or via a dc voltage supply and connect it as shown in the figure using a resistor, R. Now, assume that a gamma-ray enters the detector. Ion pairs will be produced in the gas - the ions heading towards the outer wall and the electrons heading towards the centre wire. Let us think about the electrons for a moment. When they hit the centre wire we can simply think of them as entering the wire and flowing through the resistor to get to the positive terminal of the dc voltage supply. These electrons flowing through the resistor constitute an electric current and as a result of Ohm's Law a voltage is generated across the resistor. This voltage is amplified by an amplifier and some type of device is used to register the amplified voltage. A loud-speaker is a fairly simple device to use for this purpose and the generation of a voltage pulse is manifest by a click from the loud-speaker. Other display devices include a ratemeter which displays the number of voltage pulses generated per unit time - something like a speedometer in a car - and a pulse counter (or scaler) which counts the number of voltage pulses generated in a set period of time. A voltage pulse is frequently referred to in practice as a count and the number of voltage pulses generated per unit time is frequently called the count rate. DC Voltage Dependence If we were to build a detector and electronic circuit as shown in the figure above we could conduct an experiment that would allow us to explore the effect of the dc voltage on the magnitude of the voltage pulses produced across the resistor, R. 
Note that the term pulse height is frequently used in this field to refer to the magnitude of voltage pulses. Ideally, we could generate a result similar to that illustrated in the following figure:

The graph illustrates the dependence of the pulse height on the dc voltage. Note that the vertical axis representing the pulse height is on a logarithmic scale for the sake of compressing a large linear scale onto a reasonably-sized graph.

The experimental results can be divided into five regions as shown. We will now consider each region in turn.
- Region A Here Vdc is relatively low so that recombination of positive ions and electrons occurs. As a result not all ion pairs are collected and the voltage pulse height is relatively low. It does increase as the dc voltage increases however as the amount of recombination reduces.
- Region B Vdc is sufficiently high in this region so that only a negligible amount of recombination occurs. This is the region where a type of detector called the Ionization Chamber operates.
- Region C Vdc is sufficiently high in this region so that electrons approaching the centre wire attain sufficient energy between collisions with gas atoms to produce new ion pairs. Thus the number of electrons is increased so that the electric charge passing through the resistor, R, may be up to a thousand times greater than the charge produced initially by the radiation interaction. This is the region where a type of detector called the Proportional Counter operates.
- Region D Vdc is so high that even a minimally-ionizing particle will produce a very large voltage pulse. The initial ionization produced by the radiation triggers a complete gas breakdown as an avalanche of electrons heads towards and spreads along the centre wire. This region is called the Geiger-Müller Region, and is exploited in the Geiger Counter.
- Region E Here Vdc is so high that the gas breaks down completely and the device cannot be used to detect radiation.

We will now consider features of the Ionisation Chamber and the Geiger Counter in more detail.

Ionisation Chamber

The ionisation chamber consists of a gas-filled detector energised by a relatively low dc voltage. We will first of all make an estimate of the voltage pulse height generated by this type of detector. We will then consider some applications of ionisation chambers.

When a beta-particle interacts with the gas the energy required to produce one ion pair is about 30 eV. Therefore when a beta-particle of energy 1 MeV is completely absorbed in the gas the number of ion pairs produced is:

1 MeV / 30 eV = 10⁶ eV / 30 eV ≈ 3.3 x 10⁴ ion pairs

The electric charge produced in the gas is therefore

3.3 x 10⁴ x 1.6 x 10⁻¹⁹ C ≈ 5.3 x 10⁻¹⁵ C

If the capacitance of the ionisation chamber (remember that we compared a gas-filled detector to a capacitor above) is 100 pF then the amplitude of the voltage pulse generated is:

V = Q / C = 5.3 x 10⁻¹⁵ C / 100 x 10⁻¹² F ≈ 5.3 x 10⁻⁵ V, that is about 50 μV

Because such a small voltage is generated it is necessary to use a very sensitive amplifier in the electronic circuitry connected to the chamber.

We will now learn about two applications of ionisation chambers. The first one is for the measurement of radiation exposures. You will remember from Chapter 4 that the unit of radiation exposure (be it the SI or the traditional unit) is defined in terms of the amount of electric charge produced in a unit mass of air. An ionization chamber filled with air is the natural instrument to use for such measurements. The second application is the measurement of radioactivity.
The ionisation chamber used here is configured in what is called a re-entrant arrangement (see figure below) so that the sample of radioactive material can be placed within the detector using a holder and hence most of the emitted radiation can be detected. The instrument is widely referred to as an Isotope Calibrator and the trickle of electric current generated by such a detector is calibrated so that a reading in units of radioactivity (for example MBq or mCi) can be obtained. Most well-run Nuclear Medicine Departments will have at least one of these devices so that doses of radioactivity can be checked prior to administration to patients.

Here are some photographs of ionisation chambers designed for various applications:

Geiger Counter

We saw earlier that the Geiger Counter operates at relatively high dc voltages (for example 400-900 volts) and that an avalanche of electrons is generated following the absorption of radiation in the gas. The voltage pulses produced by this detector are relatively large since the gas effectively acts as an amplifier of the electric charge produced. There are four features of this detector which we will discuss.

The first is that a sensitive amplifier (as was the case with the Ionization Chamber) is not required for this detector because of the gas amplification noted above.

The second feature results from the fact that the generation of the electron avalanche must be stopped in order to restore the detector. In other words when a radiation particle/photon is absorbed by the gas a complete gas breakdown occurs which implies that the gas is incapable of detecting the next particle/photon which enters the detector. So in the extreme case one minute we have a radiation detector and the following moment we do not. A means of stopping the electron avalanche is therefore required - a process called Quenching. One means of doing this is by electronically lowering the dc voltage following an avalanche. A more widely used method of quenching is to add a small amount of a quenching gas to the inert gas. For example the gas could be argon with ethyl alcohol added. The ethyl alcohol is in vapour form and since it consists of relatively large molecules, energy which would otherwise sustain the electron avalanche is absorbed by these molecules. The large molecules act like a brake in effect.

Irrespective of the type of quenching used the detector is insensitive for a small period of time following absorption of a radiation particle/photon. This period of time is called the Dead Time and this is the third feature of this detector which we will consider. Dead times are relatively short but nevertheless significant - being typically of the order of 200-400 µs. As a result the reading obtained with this detector is less than it should be. The true reading, without going into detail, can be obtained using the following equation:

T = A / (1 - A τ)

where T is the true reading, A is the actual reading and τ is the dead time. Some instruments perform this calculation automatically.

The fourth feature to note about this detector is the dependence of its performance on the dc voltage. The Geiger-Müller Region of our figure above is shown in more detail below:

Notice that it contains a plateau where the count rate obtained is independent of the dc voltage. The centre of this plateau is where most detectors are operated. It is clear that the count rate from the detector is not affected if the dc voltage fluctuates about the operating voltage.
This implies that a relatively straight-forward dc voltage supply can be used. This feature coupled with the fact that a sensitive amplifier is not needed translates in practice to a relatively inexpensive radiation detector. External Links - Inside a smoke detector - about the ion chamber used in smoke detectors - from the How Stuff Works website. - Ionisation Chambers - a brief description from the Triumf Safety Group. - Radiation and Radioactivity - a self-paced lesson developed by the University of Michigan's Student Chapter of the Health Physics Society with a section on gas filled detectors. - The Geiger Counter - a brief overview from the NASA Goddard Space Flight Center, USA. Scintillation Detectors The second type of radiation detector we will discuss is called the scintillation detector. Scintillations are minute flashes of light which are produced by certain materials when they absorb radiation. These materials are variously called fluorescent materials, fluors, scintillators or phosphors. If we had a radioactive source and a scintillator in the lab we could darken the room, move the scintillator close to the source and see the scintillations. These small flashes of light might be green or blue or some other colour depending on the scintillator. We could also count the number of flashes produced to gain an estimate of the radioactivity of the source, that is the more flashes of light seen the more radiation present. The scintillation detector was possibly the first radiation detector discovered. You might have heard the story of the discovery of X-rays by Wilhelm Roentgen in 1895. He was working one evening in his laboratory in Wurzburg, Germany with a device which fired a beam of electrons at a target inside an evacuated glass tube. While working with this device he noticed that some platino-barium cyanide crystals, which he just happened to have close by, began to glow - and that they stopped glowing when he switched the device off. Roentgen had accidentally discovered a new form of radiation. He had also accidentally discovered a scintillator detector. Although scintillations can be seen we have a more sophisticated way of counting and measuring them today by using some form of photodetector. We will learn about the construction and mode of operation of this type of detector in this chapter. In addition, we will see how it can be used not just for detecting the presence of ionizing radiation but also for measuring the energy of that radiation. Before we do however it is useful to note that scintillators are very widely used in the medical radiations field. For example the X-ray cassette used in radiography contains a scintillator (called an intensifying screen) in close contact with a photographic film. A second example is the X-ray Image Intensifier used in fluoroscopy which contains scintillators called phosphors. Scintillators are also used in some CT Scanners and as we will see in the next chapter, in the Gamma Camera and PET Scanner. Their application is not limited to the medical radiations field in that scintillators are also used as screens in television sets and computer monitors and for generating light in fluorescent tubes - to mention just two common applications. What other applications can you think of? So scintillators are a lot more common than you might initially think and you will therefore find the information presented here useful to you not just for your studies of nuclear medicine. 
Fluorescent Materials

Some fluorescent materials are listed in the following table. Thallium-activated sodium iodide, NaI(Tl), is a crystalline material which is widely used for the detection of gamma-rays in scintillation detectors. We will be looking at this in more detail below. Another crystalline material, sodium-activated caesium iodide, CsI(Na), is widely used for X-ray detection in devices such as the X-ray image intensifier. Another one called calcium tungstate, CaWO4, has been widely used in X-ray cassettes although this substance has been replaced by other scintillators such as lanthanum oxybromide in many modern cassettes.

|Material||Form|
|NaI(Tl)||crystal|
|CsI(Na)||crystal|
|ZnS(Ag)||powder|
|p-terphenyl in toluene||liquid|
|p-terphenyl in polystyrene||plastic|

Notice that some scintillation materials are activated with certain elements. What this means is that the base material has a small amount of the activation element present. The term doped is sometimes used instead of activated. This activating element is used to influence the wavelength (colour) of the light produced by the scintillator.

Silver-activated zinc sulphide, ZnS(Ag), is a scintillator in powder form and p-terphenyl in toluene is a liquid scintillator. The advantage of such forms of scintillators is that the radioactive material can be placed in close contact with the scintillating material. For example if a radioactive sample happened to be in liquid form we could mix it with a liquid scintillator so as to optimise the chances of detection of the emitted radiation and hence have a very sensitive detector. A final example is p-terphenyl in polystyrene which is a scintillator in the form of a plastic. This form can be easily made into different shapes like most plastics and is therefore useful when detectors of particular shapes are required.

Photomultiplier Tube

A scintillation crystal coupled to a photomultiplier tube (PMT) is illustrated in the following figure. The overall device is typically cylindrical in shape and the figure shows a cross-section through this cylinder:

The scintillation crystal, NaI(Tl), is very delicate and this is one of the reasons it is housed in an aluminium casing. The inside wall of the casing is designed so that any light which strikes it is reflected downwards towards the PMT.

The PMT itself consists of a photocathode, a focussing grid, an array of dynodes and an anode housed in an evacuated glass tube. The function of the photocathode is to convert the light flashes produced by radiation attenuation in the scintillation crystal into electrons. The grid focuses these electrons onto the first dynode and the dynode array is used for electron multiplication. We will consider this process in more detail below. Finally the anode collects the electrons produced by the array of dynodes.

The electrical circuitry which is typically attached to a PMT is shown in the next figure:

It consists of a high voltage supply, a resistor divider chain and a load resistor, RL. The high voltage supply generates a dc voltage, Vdc, which can be up to 1,000 volts. It is applied to the resistor divider chain which consists of an array of resistors, each of which has the same resistance, R. The function of this chain of resistors is to divide up Vdc into equal voltages which are supplied to the dynodes. As a result voltages which increase in equal steps are applied to the array of dynodes. The load resistor is used so that an output voltage, Vout, can be generated.
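As a small illustration of how the divider chain splits Vdc into equal steps, here is a hedged Python sketch; it assumes a chain of eleven identical resistors feeding ten dynodes, which is one plausible arrangement rather than a description of any particular tube.

```python
def dynode_voltages(v_dc=1000.0, n_dynodes=10):
    """Voltages applied to successive dynodes by a chain of identical resistors
    that divides v_dc into equal steps (n_dynodes + 1 resistors assumed)."""
    step = v_dc / (n_dynodes + 1)          # equal voltage drop across each resistor
    return [round(step * (i + 1), 1) for i in range(n_dynodes)]

print(dynode_voltages())
# e.g. [90.9, 181.8, 272.7, ..., 909.1] - each dynode sits one equal step above the last
```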
Finally, the operation of the device is illustrated in the figure below:

The ionizing radiation produces flashes of light in the scintillation crystal. This light strikes the photocathode and is converted into electrons. The electrons are directed by the grid onto the first dynode. Dynodes are made from certain alloys which emit electrons when their surface is struck by electrons, with the advantage that more electrons are emitted than are absorbed. A dynode used in a PMT typically emits between two and five electrons for each electron which strikes it.

So when an electron from the photocathode strikes the first dynode, between two and five electrons are emitted and are directed towards the second dynode in the array (three are illustrated in the figure). This electron multiplication process is repeated at the second dynode, so that we end up with nine electrons, for example, heading towards the third dynode. An electron avalanche therefore develops, so that a sizeable number of electrons eventually hits the anode at the bottom of the dynode chain. These electrons flow through the load resistor, RL, and constitute an electric current which, according to Ohm's Law, generates a voltage, Vout, which is measured by electronic circuitry (which we will describe later).

A number of photographs of devices based on scintillation detection are shown below:

The important feature of the scintillation detector is that this output voltage, Vout, is directly proportional to the energy deposited by the radiation in the crystal. We will see what a useful feature this is below. Before we do so we will briefly analyze the operation of this device.

Mathematical Model

A simple mathematical model will be presented below which will help us get a better handle on the performance of a scintillation detector. We will do this by quantifying the performance of the scintillator, the photocathode and the dynodes. Let's use the following symbols to characterize each stage of the detection process:

- m: number of light photons produced in the crystal
- k: optical efficiency of the crystal, that is the efficiency with which the crystal transmits light
- l: quantum efficiency of the photocathode, that is the efficiency with which the photocathode converts light photons to electrons
- n: number of dynodes
- R: dynode multiplication factor, that is the number of secondary electrons emitted by a dynode per primary electron absorbed.

Therefore the charge, Q, collected at the anode is given by the following equation:

Q = m × k × l × Rⁿ × e

where e is the electronic charge. For example, suppose a 100 keV gamma-ray is absorbed in the crystal. The number of light photons produced, m, might be about 1,000 for a typical scintillation crystal. A typical crystal might have an optical efficiency, k, of 0.5 - in other words 50% of the light produced reaches the photocathode - which might have a quantum efficiency of 0.15. A typical PMT has ten dynodes, and let us assume that the dynode multiplication factor is 4.5. The charge collected at the anode is then Q = 1,000 × 0.5 × 0.15 × 4.5¹⁰ × 1.6 × 10⁻¹⁹ C, which is about 4 × 10⁻¹¹ C. This amount of charge is very small. Even though we have used a sophisticated photodetector like a PMT we still end up with quite a small electrical signal. A very sensitive amplifier is therefore needed to amplify this signal. This type of amplifier is generally called a pre-amplifier and we will refer to it again later.

Output Voltage

We noted above that the voltage measured across the resistor, RL, is proportional to the energy deposited in the scintillation crystal by the radiation. Let us consider how the radiation might deposit its energy in the crystal.
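Before doing so, it is worth checking the numbers in the model above. The following sketch is illustrative only: it uses the parameter values quoted in the text and assumes, for the sake of the example, that the light yield scales linearly with the deposited energy (10 photons per keV, consistent with 1,000 photons for 100 keV).

```python
# Numerical sketch of the scintillation detector model described above.
# The parameter values are those quoted in the text for a 100 keV gamma-ray;
# the assumption that the light yield scales linearly with deposited energy
# (10 photons per keV) is illustrative, not taken from the text.

e = 1.602e-19        # electronic charge in coulombs
k, l = 0.5, 0.15     # optical efficiency, photocathode quantum efficiency
n, R = 10, 4.5       # number of dynodes, dynode multiplication factor

def anode_charge(energy_keV, photons_per_keV=10):
    """Charge collected at the anode: Q = m * k * l * R**n * e."""
    m = photons_per_keV * energy_keV
    return m * k * l * R**n * e

for energy in (100, 140, 200):
    print(f"{energy} keV -> Q = {anode_charge(energy):.1e} C")
# 100 keV gives about 4e-11 C - a very small charge, hence the need for a
# sensitive pre-amplifier - and the charge rises in proportion to the energy.
```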
Let us consider a situation where gamma-rays are detected by the crystal. We learnt in Chapter 5 that there are two interaction mechanisms involved in gamma-ray attenuation - the Photoelectric Effect and the Compton Effect. You will remember that the Photoelectric Effect involves the total absorption of the energy of a gamma-ray, while the Compton Effect involves just partial absorption of this energy. Since the output voltage of a scintillation detector is proportional to the energy deposited by the gamma-rays, it is reasonable to expect that Photoelectric Effects in the crystal will generate distinct and relatively large output voltages and that Compton Effects will result in lower output voltages.

The usual way of presenting this information is by plotting a graph of the count rate versus the output voltage pulse height, as shown in the following figure:

This plot illustrates what is obtained for a monoenergetic gamma-emitting radioisotope, for example 99mTc - which, as we have noted before, emits a single gamma-ray with an energy of 140 keV. Before we look at it in detail, remember that we noted above that the output voltage from this detector is proportional to the energy deposited by the radiation in the crystal. The horizontal axis can therefore be used to represent the output voltage or the gamma-ray energy. Both of these quantities are shown in the figure to help with this discussion. In addition, note that this plot is often called a Gamma-Ray Energy Spectrum.

The figure above contains two regions, one called the Photopeak and the other called the Compton Smear. The Photopeak results from Photoelectric absorption of the gamma-rays from the radioactive source - remember that we are dealing with a monoenergetic emitter in this example. It consists of a peak representing the gamma-ray energy (140 keV in our example). If our radioisotope emitted gamma-rays of two energies we would have two photopeaks in our spectrum, and so on. Notice that the peak has a statistical spread. This has to do with how good our detector is, and we will not get into any detail about it here other than to note that the extent of this spread is a measure of the quality of our detector. A high quality (and more expensive!) detector will have a narrower statistical spread in the photopeaks which it measures.

The other component of our spectrum is the Compton Smear. It represents a range of output voltages which are lower than that for the Photopeak. It is therefore indicative of the partial absorption of the energy of gamma-rays in the crystal. In some Compton Effects a substantial scattering with a valence electron can occur, which gives rise to relatively large voltage pulses. In other Compton Effects the gamma-ray just grazes off a valence electron with minimal energy transfer, and hence a relatively small voltage pulse is generated. In between these two extremes is a range of scattering events involving a range of energy transfers and hence a range of voltage pulse heights. A 'smear' therefore manifests itself on the gamma-ray energy spectrum.

It is important to note that the spectrum illustrated in the figure is simplified for the sake of this introductory discussion and that actual spectra are a little more complex - see the figure below for an example:

You will find though that your understanding of actual spectra can easily develop on the basis of the simple picture we have painted here.
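The general shape of this simplified spectrum can be mimicked with a toy simulation. The sketch below is not taken from the wikibook: the Gaussian spread of the photopeak and the flat Compton continuum are crude assumptions, chosen only to reproduce the 'photopeak plus smear' picture described above.

```python
# Toy gamma-ray energy spectrum for a 140 keV emitter (e.g. 99mTc): a Gaussian
# photopeak plus a crude, flat Compton continuum up to the Compton edge.
# The peak width and the relative numbers of events are arbitrary choices.

import numpy as np

rng = np.random.default_rng(0)

E_gamma = 140.0                                                # keV
compton_edge = E_gamma * (1 - 1 / (1 + 2 * E_gamma / 511.0))   # about 50 keV

photopeak = rng.normal(loc=E_gamma, scale=7.0, size=20000)     # statistical spread
smear = rng.uniform(low=0.0, high=compton_edge, size=30000)    # partial absorption

counts, bin_edges = np.histogram(np.concatenate([photopeak, smear]),
                                 bins=100, range=(0.0, 200.0))
# Plotting 'counts' against pulse height (or energy) gives the simplified
# spectrum: a smear at low energies and a photopeak centred on 140 keV.
```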
It is also important to appreciate the additional information which this type of radiation detector provides relative to a gas-filled detector. In essence, gas-filled detectors can be used to tell us if any radiation is present, as well as the amount of that radiation. Scintillation detectors also give us this information, but they tell us about the energy of this radiation as well. This additional information can be used for many diverse applications, such as the identification of unknown radioisotopes and the production of nuclear medicine images. Let us stay a little bit longer though with the fundamental features of how scintillation detectors work.

The photopeak of the Gamma-Ray Energy Spectrum is generally of interest in nuclear medicine. This peak is the main signature of the radioisotope being used and its isolation from the Compton Smear is normally achieved using a technique called Pulse Height Analysis.

Pulse Height Analysis

This is an electronic technique which allows a spectrum to be acquired using two types of circuitry. One circuit is called a Lower Level Discriminator, which only allows voltage pulses through it which are higher than its setting. The other is called an Upper Level Discriminator, which only allows voltage pulses through which are (you guessed it!) lower than its setting. The result of using both these circuits in combination is a variable-width window which can be placed anywhere along a spectrum. For example, if we wished to obtain information from the photopeak only of our simplified spectrum, we would place the discrimination controls as shown in the following figure:

A final point to note here is that since the scintillation detector is widely used to obtain information about the energies of the radiation emitted from a radioactive source, it is frequently referred to as a Scintillation Spectrometer.

Scintillation Spectrometer

Types of scintillation spectrometer fall into two basic categories - the relatively straightforward Single Channel Analyser and the more sophisticated Multi-Channel Analyser.

The Single Channel Analyser is the type of instrument we have been describing so far in this discussion. A block diagram of the instrument is shown below:

It consists of a scintillation crystal coupled to a photomultiplier tube which is powered by a high voltage circuit (H.V.). The output voltages are initially amplified by a sensitive pre-amplifier (Pre-Amp), as we noted above, before being amplified further and conditioned by the amplifier (Amp). The voltage pulses are then in a suitable form for the pulse height analyser (P.H.A.) - the output pulses from which can be fed to a Scaler and a Ratemeter for display of the information about the portion of the spectrum we have allowed to pass through the PHA. The Ratemeter is a display device, just like the speedometer in a car, and indicates the number of pulses generated per unit time. The Scaler on the other hand usually consists of a digital display which shows the number of voltage pulses produced in a specified period of time.

We can illustrate the operation of this circuitry by considering how it might be used to generate a Gamma-Ray Energy Spectrum. What we would do is set up the LLD and ULD so as to define a narrow window and place this to pass the lowest voltage pulses produced by the detector through to the Scaler and Ratemeter. In other words, we would place a narrow window at the extreme left of the spectrum and acquire information about the lowest energy gamma-ray interactions in the crystal.
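A minimal sketch of this discriminator logic is given below; it also anticipates the window-scanning procedure that the next paragraph describes. The pulse data, window width and energy range are invented purely for illustration.

```python
# Illustrative single-channel pulse height analysis: a pulse is counted only if
# its height lies above the LLD setting and below the ULD setting. Stepping a
# narrow window across the range of pulse heights builds up the spectrum.

import numpy as np

rng = np.random.default_rng(1)
pulse_heights = rng.normal(140.0, 7.0, 5000)   # toy photopeak pulses only

def counts_in_window(pulses, lld, uld):
    """Number of pulses passed by both discriminators."""
    return int(np.sum((pulses > lld) & (pulses < uld)))

window = 5.0
spectrum = [counts_in_window(pulse_heights, lld, lld + window)
            for lld in np.arange(0.0, 200.0, window)]
# 'spectrum' holds one count per window position - the scanned energy spectrum.
```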
We would then adjust the LLD and ULD settings to acquire information about the interactions of the next highest energy. We would proceed in this fashion to scan the whole spectrum. A more sophisticated detector circuit is illustrated in the following figure: It is quite similar to that in the previous figure with the exception that the PHA, Scaler and Ratemeter are replaced by a Multi-Channel Analyser and a computer. The Multi-Channel Analyser (MCA) is a circuit which is capable of setting up a large number of individual windows to look at a complete spectrum in one go. The MCA might consist of 1024 individual windows for example and the computer might consist of a personal computer which can acquire information simultaneously from each window and display it as an energy spectrum. The computer generally contains software which allows us to manipulate the resultant information in a variety of ways. Indeed the 137Cs spectrum shown above was generated using this approach. External Links - Radiation and Radioactivity - a self-paced lesson developed by the University of Michigan's Student Chapter of the Health Physics Society, with a section on sodium iodide detectors. Nuclear Medicine Imaging Systems Topics we have covered in this wikibook have included radioactivity, the interaction of gamma-rays with matter and radiation detection. The main reason for following this pathway was to bring us to the subject of this chapter: nuclear medicine imaging systems. These are devices which produce pictures of the distribution of radioactive material following administration to a patient. The radioactivity is generally administered to the patient in the form of a radiopharmaceutical - the term radiotracer is also used. This follows some physiological pathway to accumulate for a short period of time in some part of the body. A good example is 99mTc-tin colloid which following intravenous injection accumulates mainly in the patient's liver. The substance emits gamma-rays while it is in the patient's liver and we can produce an image of its distribution using a nuclear medicine imaging system. This image can tell us whether the function of the liver is normal or abnormal or if sections of it are damaged from some form of disease. Different radiopharmaceuticals are used to produce images from almost every region of the body: |Part of the Body||Example Radiotracer| Note that the form of information obtained using this imaging method is mainly related to the physiological functioning of an organ as opposed to the mainly anatomical information which is obtained using X-ray imaging systems. Nuclear medicine therefore provides a different perspective on a disease condition and generates additional information to that obtained from X-ray images. Our purpose here is to concentrate on the imaging systems used to produce the images. Early forms of imaging system used in this field consisted of a radiation detector (a scintillation detector for example) which was scanned slowly over a region of the patient in order to measure the radiation intensity emitted from individual points within the region. One such device was called the Rectilinear Scanner. Such imaging systems have been replaced since the 1970s by more sophisticated devices which produce images much more rapidly. The most common of these modern devices is called the Gamma Camera and we will consider its construction and mode of operation below. A review of recent developments in this technology for cardiac applications can be found in Slomka et al (2009). 
Gamma Camera The basic design of the most common type of gamma camera used today was developed by an American physicist, Hal Anger and is therefore sometimes called the Anger Camera. It consists of a large diameter NaI(Tl) scintillation crystal which is viewed by a large number of photomultiplier tubes. A block diagram of the basic components of a gamma camera is shown below: The crystal and PM Tubes are housed in a cylindrical shaped housing commonly called the camera head and a cross-sectional view of this is shown in the figure. The crystal can be between about 25 cm and 40 cm in diameter and about 1 cm thick. The diameter is dependent on the application of the device. For example a 25 cm diameter crystal might be used for a camera designed for cardiac applications while a larger 40 cm crystal would be used for producing images of the lungs. The thickness of the crystal is chosen so that it provides good detection for the 140 keV gamma-rays emitted from 99mTc - which is the most common radioisotope used today. Scintillations produced in the crystal are detected by a large number of PM tubes which are arranged in a two-dimensional array. There is typically between 37 and 91 PM tubes in modern gamma cameras. The output voltages generated by these PM tubes are fed to a position circuit which produces four output signals called ±X and ±Y. These position signals contain information about where the scintillations were produced within the crystal. In the most basic gamma camera design they are fed to a cathode ray oscilloscope (CRO). We will describe the operation of the CRO in more detail below. Before we do so we should note that the position signals also contain information about the intensity of each scintillation. This intensity information can be derived from the position signals by feeding them to a summation circuit (marked ∑ in the figure) which adds up the four position signals to generate a voltage pulse which represents the intensity of a scintillation. This voltage pulse is commonly called the Z-pulse (or zee-pulse in American English!) which following pulse height analysis (PHA) is fed as the unblank pulse to the CRO. So we end up with four position signals and an unblank pulse sent to the CRO. Let us briefly review the operation of a CRO before we continue. The core of a CRO consists of an evacuated tube with an electron gun at one end and a phosphor-coated screen at the other end. The electron gun generates an electron beam which is directed at the screen and the screen emits light at those points struck by the electron beam. The position of the electron beam can be controlled by vertical and horizontal deflection plates and with the appropriate voltages fed to these plates the electron beam can be positioned at any point on the screen. The normal mode of operation of an oscilloscope is for the electron beam to remain switched on. In the case of the gamma camera the electron beam of the CRO is normally switched off - it is said to be blanked. When an unblank pulse is generated by the PHA circuit the electron beam of the CRO is switched on for a brief period of time so as to display a flash of light on the screen. In other words the voltage pulse from the PHA circuit is used to unblank the electron beam of the CRO. So where does this flash of light occur on the screen of the CRO? The position of the flash of light is dictated by the ±X and ±Y signals generated by the position circuit. 
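Before following these signals to the CRO, the summation step can be illustrated with a small sketch. The formation of the Z-pulse follows the description above; the normalisation of the position signals by the Z-pulse is a common arrangement assumed here for illustration and is not spelled out in the text.

```python
# Illustrative sketch: the summation circuit adds the four position signals to
# form the Z-pulse, and (in one common scheme, assumed here) the display
# coordinates are obtained by normalising the position signals with the Z-pulse.

def z_pulse(x_plus, x_minus, y_plus, y_minus):
    """Summation circuit: the Z-pulse represents the scintillation intensity."""
    return x_plus + x_minus + y_plus + y_minus

def display_coordinates(x_plus, x_minus, y_plus, y_minus):
    """Normalised coordinates fed to the CRO deflection plates (assumed scheme)."""
    z = z_pulse(x_plus, x_minus, y_plus, y_minus)
    return (x_plus - x_minus) / z, (y_plus - y_minus) / z

# A scintillation slightly towards +X and +Y of the crystal centre:
print(display_coordinates(0.6, 0.4, 0.55, 0.45))   # approximately (0.1, 0.05)
```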
These signals as you might have guessed are fed to the deflection plates of the CRO so as to cause the unblanked electron beam to strike the screen at a point related to where the scintillation was originally produced in the NaI(Tl) crystal. Simple! The gamma camera can therefore be considered to be a sophisticated arrangement of electronic circuits used to translate the position of a flash of light in a scintillation crystal to a flash of light at a related point on the screen of an oscilloscope. In addition the use of a pulse height analyser in the circuitry allows us to translate the scintillations related only to photoelectric events in the crystal by rejecting all voltage pulses except those occurring within the photopeak of the gamma-ray energy spectrum. Let us summarise where we have got to before we proceed. A radiopharmaceutical is administered to the patient and it accumulates in the organ of interest. Gamma-rays are emitted in all directions from the organ and those heading in the direction of the gamma camera enter the crystal and produce scintillations (note that there is a device in front of the crystal called a collimator which we will discuss later). The scintillations are detected by an array of PM tubes whose outputs are fed to a position circuit which generates four voltage pulses related to the position of a scintillation within the crystal. These voltage pulses are fed to the deflection circuitry of the CRO. They are also fed to a summation circuit whose output (the Z-pulse) is fed to the PHA and the output of the PHA is used to switch on (that is, unblank) the electron beam of the CRO. A flash of light appears on the screen of the CRO at a point related to where the scintillation occurred within the NaI(Tl) crystal. An image of the distribution of the radiopharmaceutical within the organ is therefore formed on the screen of the CRO when the gamma-rays emitted from the organ are detected by the crystal. What we have described above is the operation of a fairly traditional gamma camera. Modern designs are a good deal more complex but the basic design has remained much the same as has been described. One area where major design improvements have occurred is the area of image formation and display. The most basic approach to image formation is to photograph the screen of the CRO over a period of time to allow integration of the light flashes to form an image on photographic film. A stage up from this is to use a storage oscilloscope which allows each flash of light to remain on the screen for a reasonable period of time. The most modern approach is to feed the position and energy signals into the memory circuitry of a computer for storage. The memory contents can therefore be displayed on a computer monitor and can also be manipulated (that is processed) in many ways. For example various colours can be used to represent different concentrations of a radiopharmaceutical within an organ. The use of digital image processing is now widespread in nuclear medicine in that it can be used to rapidly and conveniently control image acquisition and display as well as to analyse an image or sequences of images, to annotate images with the patient's name and examination details, to store the images for subsequent retrieval and to communicate the image data to other computers over a network. The essential elements of a modern gamma camera are shown in the next figure. 
Gamma rays emitted by the patient pass through the collimator and are detected within the camera head, which generates data related to the location of scintillations in the crystal as well as to the energy of the gamma rays. This data is then processed on-the-fly by electronic hardware which corrects for technical factors such as spatial linearity, PM tube drift and energy response, so as to produce an imaging system with a spatially-uniform sensitivity and distortion-free performance. A multichannel analyzer (MCA) is used to display the energy spectrum of gamma rays which interact inside the crystal.

Since these gamma rays originate from within the patient, some of them will have an energy lower than the photopeak as a result of being scattered as they travel through the patient's tissues - and by other components such as the patient table and structures of the imaging system. Some of these scattering events may involve just glancing interactions with free electrons, so that the gamma rays lose only a small amount of energy. These gamma rays may have an energy just below that of the photopeak, so that their spectrum merges with the photopeak. The photopeak for a gamma camera imaging a patient therefore contains information from spatially-correlated, unattenuated gamma rays (which is the information we want) and from spatially-uncorrelated, scattered gamma rays. The scattered gamma rays act like a variable background within the true photopeak data and the effect is that of a background haze in gamma camera images. While scatter may not be a significant problem in planar scintigraphy, it has a strong bearing on the fidelity of quantitative information derived from gamma camera images and is a vital consideration for accurate image reconstruction in emission tomography. It is the unattenuated gamma rays (also called the primary radiation) that contain the desired information, because of their direct dependence on radioactivity.

The scatter situation is illustrated in more detail in the figure below, which shows estimates of the primary and scatter spectra for 99mTc in patient imaging conditions. Such spectral estimates can be generated using Monte Carlo methods. It is seen in the figure that the energy of the scattered radiation forms a broad band, similar to the Compton Smear described previously, which merges into and contributes substantially to the detected photopeak. The detected photopeak is therefore an overestimate of the primary radiation. The extent of this overestimate is likely to be dependent on the specific imaging situation because of the different thicknesses of tissues involved. It is clear however that the scatter contribution within the detected photopeak needs to be accounted for if an accurate measure of radioactivity is required.

One method of compensating for the scatter contribution is illustrated in the figure below and involves using data from a lower energy window as an estimate for subtraction from the photopeak, i.e.

Primary ≈ Photopeak − k × Scatter

where k is a scaling factor to account for the extent of the scatter contribution. This approach to scatter compensation is referred to as the Dual-Energy Window (DEW) method. It can be implemented in practice by acquiring two images, one for each energy window, and subtracting a fraction (k) of the scatter image from the photopeak image. For the spectrum shown above, it can be seen that the scaling factor, k, is about 0.5, but it should be appreciated that its exact value is dependent on the scattering conditions.
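A minimal sketch of the DEW subtraction is given below, under assumed array shapes and with invented numbers; the clipping of negative values is an added practical detail rather than something specified in the text.

```python
# Illustrative Dual-Energy Window (DEW) scatter compensation: subtract a
# fraction k of the scatter-window image from the photopeak-window image.

import numpy as np

def dew_correction(photopeak_image, scatter_image, k=0.5):
    """Estimate the primary (scatter-corrected) image: photopeak - k * scatter."""
    corrected = photopeak_image - k * scatter_image
    return np.clip(corrected, 0.0, None)      # avoid negative counts

# Toy 2x2 'images' purely for illustration:
photopeak = np.array([[100.0, 80.0], [60.0, 40.0]])
scatter = np.array([[30.0, 20.0], [10.0, 5.0]])
print(dew_correction(photopeak, scatter, k=0.5))
```

In practice k needs to be adjustable, which is the point taken up next.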
Gamma cameras which use the DEW method therefore generally provide the capability of adjusting k for different imaging situations. Some systems use a narrower scatter window than that illustrated, e.g. 114-126 keV, with a consequent increase in k to about 1.0, for instance. A host of other methods of scatter compensation have also been developed. These include more complex forms of energy analysis such as the Dual-Photopeak and the Triple-Energy Window techniques, as well as approaches based on deconvolution and models of photon attenuation. An excellent review of these developments is provided in Zaidi & Koral (2004). Some photographs of gamma cameras and related devices are shown below: We will continue with our description of the gamma camera by considering the construction and purpose of the collimator. The collimator is a device which is attached to the front of the gamma camera head. It functions something like a lens used in a photographic camera but this analogy is not quite correct because it is rather difficult to focus gamma-rays. Nevertheless in its simplest form it is used to block out all gamma rays which are heading towards the crystal except those which are travelling at right angles to the plane of the crystal: The figure illustrates a magnified view of a parallel-hole collimator attached to a crystal. The collimator simply consists of a large number of small holes drilled in a lead plate. Notice that gamma-rays entering at an angle to the crystal get absorbed by the lead and that only those entering along the direction of the holes get through to cause scintillations in the crystal. If the collimator was not in place these obliquely incident gamma-rays would blur the images produced by the gamma camera. In other words the images would not be very clear. Most gamma cameras have a number of collimators which can be fitted depending on the examination. The basic design of these collimators is the same except that they vary in terms of the diameter of each hole, the depth of each hole and the thickness of lead between each hole (commonly called the septum thickness). The choice of a specific collimator is dependent on the amount of radiation absorption that occurs (which influences the sensitivity of the gamma camera), and the clarity of images (that is the spatial resolution) it produces. Unfortunately these two factors are inversely related in that the use of a collimator which produces images of good spatial resolution generally implies that the instrument is not very sensitive to radiation. Other collimator designs beside the parallel hole type are also in use. For example a diverging hole collimator produces a minified image and converging hole and pin-hole collimators produce a magnified image. The pin-hole collimator is illustrated in the following figure: It is typically a cone-shaped device with its walls made from lead. A cross-section through this cone is shown in the figure. It operates in a similar fashion to a pin-hole photographic camera and produces an inverted image of an object - an arrow is used in the figure to illustrate this inversion. This type of collimator has been found useful for imaging small objects such as the thyroid gland. Example Images A representative selection of nuclear medicine images is shown below: Emission Tomography The form of imaging which we have been describing is called Planar Imaging. It produces a two-dimensional image of a three-dimensional object. 
As a result images contain no depth information and some details can be superimposed on top of each other and obscured or partially obscured as a result. Note that this is also a feature of conventional X-ray imaging. The usual way of trying to overcome this limitation is to take at least two views of the patient, one from the front and one from the side for example. So in chest radiography a posterio-anterior (PA) and a lateral view can be taken. And in a nuclear medicine liver scan an antero-posterior (AP) and lateral scan are acquired. This limitation of planar X-ray imaging was overcome by the development of the CAT Scanner about 1970 or thereabouts. CAT stands for Computerized Axial Tomography or Computer Assisted Tomography and today the term is often shortened to Computed Tomography or CT scanning (the term tomography comes from the Greek word tomos meaning slice). Irrespective of its exact name the technique allows images of slices through the body to be produced using a computer. It does this in essence by taking X-ray images at a number of angles around the patient. These slice images show the third dimension which is missing from planar images and thus eliminate the problem of superimposed details. Furthermore images of a number of successive slices through a region of the patient can be stacked on top of each other using the computer to produce a three-dimensional image. Clearly CT scanning is a very powerful imaging technique relative to planar imaging. The equivalent nuclear medicine imaging technique is called Emission Computed Tomography. We will consider two implementations of this technique below. Single Photon Emission Computed Tomography (SPECT) - This SPECT technique uses a gamma camera to record images at a series of angles around the patient. These images are then subjected to a form of digital image processing called Image Reconstruction in order to compute images of slices through the patient. - The Back Projection reconstruction process is illustrated below. Let us assume for simplicity that the slice through the patient actually consists of a 2x2 voxel array with the radioactivity in each voxel given by A1...A4: - The first projection, P1, is imaged from the right and the second projection, P2, from the right oblique and so on. The back projection process involves firstly adding the projections to each other as shown below: - and then normalising the summed (or superimposed) projections to generate an estimate of the radioactivity in each voxel. Since this process can generate streaking artefacts in reconstructed images, the projections are generally filtered prior to back projection, as described in a later chapter, with the overall process referred to as Filtered Back Projection (FBP): - An alternative image reconstruction technique is called Iterative Reconstruction. This is a successive approximation technique as illustrated below: |Projection||Patient||Additive Iterative Reconstruction| - The first estimate of the image matrix is made by distributing the first projection, P1, evenly through an empty pixel matrix. The second projection, P2, is compared to the same projection from the estimated matrix and the difference between actual and estimated projections is added to the estimated matrix. The process is repeated for all other projections. 
- The Maximum-Likelihood Expectation-Maximisation (ML-EM) algorithm is a refinement to this iterative approach where a division process is used to compare the actual and estimated projections, as shown below:
- One cycle of data through this processing chain is referred to as one iteration. Sixteen or more iterations can be required in order to generate an adequate reconstruction and, as a result, computation times can be rather long. The Ordered-Subsets Expectation-Maximisation (OS-EM) algorithm can be used to substantially reduce the computation time by utilising a limited number of projections (called subsets) in a sequential fashion within the iterative process. Noise generated during the reconstruction process can be reduced, for example, using a Gaussian filter built into the reconstruction calculations or applied as a post-filter:
- A comparison of these image reconstruction techniques is shown below for a slice through a ventilation scan of a patient's lungs:
- The gamma camera is typically rotated around the patient in order to acquire the images. Modern gamma cameras which are designed specifically for SPECT scanning can consist of two camera heads mounted parallel to each other with the patient in between. The time required to produce images is therefore reduced by a factor of about two. In addition, some SPECT gamma cameras designed for brain scanning have three camera heads mounted in a triangular arrangement.
- A wide variety of strategies can be used for the acquisition and processing of SPECT images.

Positron Emission Tomography (PET)

- You will remember from chapter 2 that positrons can be emitted from radioactive nuclei which have too many protons for stability. You will also remember that positrons do not last for very long in matter, since they will quickly encounter an electron and a process called annihilation results. In the process the positron and electron vanish and their energy is converted into two gamma-rays which are emitted at roughly 180° to each other. The emission is often referred to as two back-to-back gamma-rays and they each have a discrete energy of 0.51 MeV.
- So if we administer a positron-emitting radiopharmaceutical to a patient, any emitted positrons can annihilate with a nearby electron and two gamma-rays will be emitted in opposite directions. These gamma-rays can be detected using a ring of radiation detectors encircling the patient and tomographic images can be generated using a computer system. The detectors are typically specialised scintillation devices which are optimised for detection of the 0.51 MeV gamma-rays. This ring of detectors, associated apparatus and computer system are called a PET Scanner:
- The locations of positron decays within the patient are highlighted by the solid circles in the above diagram. In addition, only a few detectors are shown in the diagram for reasons of clarity. Each detector around the ring is operated in coincidence with a bank of opposing detectors and the annihilation gamma-rays thus detected are used to build up a single profile.
- It has also been found that gamma cameras fitted with thick crystals and special collimators can be used for PET scanning.
- The radioisotopes used for PET scanning include 11C, 13N, 15O and 18F. These isotopes are usually produced using an instrument called a cyclotron. In addition, these isotopes have relatively short half lives. PET scanning therefore needs a cyclotron and associated radiopharmaceutical production facilities located close by.
We will consider cyclotrons in the next chapter of this wikibook. - Standardized Uptake Value (SUV) is a semi-quantitative index used in PET to express the uptake of a radiopharmaceutical in a region of interest of a patient's scan. Its typically calculated as the ratio of the radioactivity in the region to the injected dose, corrected for body weight. It should be noted that the SUV is influenced by several major sources of variability and it therefore should not be used as a quantitative measure. - A number of photographs of a PET scanner are shown below: - Slomka PJ, Patton JA, Berman DS & Germano G, 2009. Advances in technical aspects of myocardial perfusion SPECT imaging. Journal of Nuclear Cardiology, 16(2), 255–76. External Links - Centre for Positron Emission Tomography at the Austin & Repatriation Medical Centre, Melbourne with sections on what PET is, current facilities, projects & research and a PET image library. - Online Learning Tools - an advanced treatment from the Department of Radiology, Brigham and Women's Hospital, USA containing nuclear medicine teaching files, an atlas of myocardial perfusion SPECT, an atlas of brain perfusion SPECT and the physical characteristics of nuclear medicine images. Production of Radioisotopes Most of the radioisotopes found in nature have relatively long half lives. They also belong to elements which are not handled well by the human body. As a result medical applications generally require the use of radioisotopes which are produced artificially. We have looked at the subject of radioactivity in earlier chapters of this wikibook and have then progressed to cover the interaction of radiation with matter, radiation detectors and imaging systems. We return to sources of radioactivity in this chapter in order to learn about methods which are used to make radioisotopes. The type of radioisotope of value to nuclear medicine imaging should have characteristics which keep the radiation dose to the patient as low as possible. For this reason they generally have a short half life and emit only gamma-rays - that is no alpha-particle or beta-particle emissions. From an energy point of view the gamma-ray energy should not be so low that the radiation gets completely absorbed before emerging from the patient's body and not too high that it is difficult to detect. For this reason most of the radioisotopes used emit gamma-rays of medium energy, that is between about 100 and 200 keV. Finally since the radioisotope needs to be incorporated into some form of radiopharmaceutical it should also be capable of being produced in a form which is amenable to chemical, pharmaceutical and sterile processing. The production methods we will consider are nuclear fission, nuclear bombardment and the radioisotope generator. Nuclear Fission We were introduced to spontaneous fission in chapter 2 where we saw that a heavy nucleus can break into a number of fragments. This disintegration process can be induced to occur when certain heavy nuclei absorb neutrons. Following absorption of a neutron such nuclei break into smaller fragments with atomic numbers between about 30 and 65. Some of these new nuclei are of value to nuclear medicine and can be separated from other fission fragments using chemical processes. Nuclear Bombardment In this method of radioisotope production charged particles are accelerated up to very high energies and caused to collide into a target material. Examples of such charged particles are protons, alpha particles and deuterons. 
New nuclei can be formed when these particles collide with nuclei in the target material. Some of these nuclei are of value to nuclear medicine. An example of this method is the production of 22Na, where a target of 24Mg is bombarded with deuterons, that is: 24Mg + 2H → 22Na + 4He. A deuteron, you will remember from chapter 1, is the nucleus of the second most common isotope of hydrogen, that is 2H. When it collides with a 24Mg nucleus a 22Na nucleus plus an alpha particle is produced. The target is exposed to the deuterons for a period of time and is subsequently processed chemically in order to separate out the 22Na nuclei.

The type of device commonly used for this method of radioisotope production is called a cyclotron. It consists of an ion gun for producing the charged particles, electrodes for accelerating them to high energies and a magnet for steering them towards the target material, all arranged in a circular structure.

Radioisotope Generator

This method is widely used to produce certain short-lived radioisotopes in a hospital or clinic. It involves obtaining a relatively long-lived radioisotope which decays into the short-lived isotope of interest. A good example is 99mTc which, as we have noted before, is the most widely used radioisotope in nuclear medicine today. This isotope has a half-life of six hours, which is rather short if we wish to have it delivered directly from a nuclear facility. Instead the nuclear facility supplies the isotope 99Mo, which decays into 99mTc with a half life of about 2.75 days. The 99Mo is called the parent isotope and 99mTc is called the daughter isotope.

So the nuclear facility produces the parent isotope, which decays relatively slowly into the daughter isotope, and the daughter is separated chemically from the parent at the hospital/clinic. The chemical separation device is called, in this example, a 99mTc Generator:

It consists of a ceramic column with 99Mo adsorbed onto its top surface. A solution called an eluent is passed through the column, reacts chemically with any 99mTc and emerges in a chemical form which is suitable for combining with a pharmaceutical to produce a radiopharmaceutical. The arrangement shown in the figure on the right is called a Positive Pressure system, where the eluent is forced through the ceramic column by a pressure, slightly above atmospheric pressure, in the eluent vial.

The ceramic column and collection vials need to be surrounded by lead shielding for radiation protection purposes. In addition, all components are produced and need to be maintained in a sterile condition since the collected solution will be administered to patients.

Finally, an Isotope Calibrator is needed when a 99mTc Generator is used to determine the radioactivity for preparation of patient doses and to check whether any 99Mo is present in the collected solution.

Operation of a 99m-Tc Generator

Suppose we have a sample of 99Mo and suppose that at time t = 0 there are N_0 nuclei in our sample, and nothing else. The number of 99Mo nuclei decreases with time according to the radioactive decay law, as discussed in Chapter 3:

N_Mo(t') = N_0 exp(-λ_Mo t')

where λ_Mo is the decay constant for 99Mo. Thus the number of 99Mo nuclei that decay during a small time interval dt' is given by

dN = λ_Mo N_0 exp(-λ_Mo t') dt'

Since 99Mo decays into 99mTc, the same number of 99mTc nuclei are formed during the time period dt'. At a later time t, only a fraction of these nuclei will still be around, since the 99mTc is also decaying. The time available for the 99mTc to decay is t - t'. Plugging this into the radioactive decay law we arrive at:

dN_Tc(t) = λ_Mo N_0 exp(-λ_Mo t') exp(-λ_Tc (t - t')) dt'

Now we sum up the little contributions dN_Tc(t).
In other words, we integrate over t' from 0 to t in order to find the number N_Tc(t), that is the number of all 99mTc nuclei present at the time t:

N_Tc(t) = ∫₀ᵗ λ_Mo N_0 exp(-λ_Mo t') exp(-λ_Tc (t - t')) dt'

Finally, solving this integral we find:

N_Tc(t) = N_0 [λ_Mo / (λ_Tc - λ_Mo)] [exp(-λ_Mo t) - exp(-λ_Tc t)]

The figure below illustrates the outcome of this calculation. The horizontal axis represents time (in days), while the vertical one represents the number of nuclei present (in arbitrary units). The green curve illustrates the exponential decay of a sample of pure 99mTc. The red curve shows the number of 99mTc nuclei present in a 99mTc generator that is never eluted. Finally, the blue curve shows the situation for a 99mTc generator that is eluted every 12 hours.

Photographs taken in a nuclear medicine hot lab are shown below:

External Links

- Concerns over Molybdenum Supplies - news from 2008 compiled by the British Nuclear Medicine Society.
- Cyclotron Java Applet - a Java-based interactive demonstration of the operation of a cyclotron from Fu-Kwun Hwang, Dept. of Physics, National Taiwan Normal University, Virtual Physics Laboratory.
- Nuclear Power Plant Demonstration - a Java-based interactive demonstration of controlling a nuclear reactor. Also contains nuclear power information links.
- ANSTO - information about Australia's nuclear organization.
- Medical Valley - contains information on what nuclear medicine is, production of nuclear pharmaceuticals, molybdenum and technetium - from The Netherlands Energy Research Foundation Petten.

Chapter Review

Chapter Review: Atomic & Nuclear Structure

- The atom consists of two components - a nucleus (positively charged) and an electron cloud (negatively charged);
- The radius of the nucleus is about 10,000 times smaller than that of the atom;
- The nucleus can have two component particles - neutrons (no charge) and protons (positively charged) - collectively called nucleons;
- The mass of a proton is about equal to that of a neutron - and is about 1,840 times that of an electron;
- The number of protons equals the number of electrons in an isolated atom;
- The Atomic Number specifies the number of protons in a nucleus;
- The Mass Number specifies the number of nucleons in a nucleus;
- Isotopes of elements have the same atomic number but different mass numbers;
- Isotopes are classified by specifying the element's chemical symbol preceded by a superscript giving the mass number and a subscript giving the atomic number;
- The atomic mass unit is defined as 1/12th the mass of the stable, most commonly occurring isotope of carbon (i.e. C-12);
- Binding energy is the energy which holds the nucleons together in a nucleus and is measured in electron volts (eV);
- To combat the effect of the increase in electrostatic repulsion as the number of protons increases, the number of neutrons increases more rapidly - giving rise to the Nuclear Stability Curve;
- There are ~2450 isotopes of ~100 elements and the unstable isotopes lie above or below the Nuclear Stability Curve;
- Unstable isotopes attempt to reach the stability curve by splitting into fragments (fission) or by emitting particles/energy (radioactivity);
- Unstable isotopes <=> radioactive isotopes <=> radioisotopes <=> radionuclides;
- ~300 of the ~2450 isotopes are found in nature - the rest are produced artificially.

Chapter Review: Radioactive Decay

- Fission: Some heavy nuclei decay by splitting into 2 or 3 fragments plus some neutrons.
These fragments form new nuclei which are usually radioactive;
- Alpha Decay: Two protons and two neutrons leave the nucleus together in an assembly known as an alpha-particle;
- An alpha-particle is a He-4 nucleus;
- Beta Decay - Electron Emission: Certain nuclei with an excess of neutrons may reach stability by converting a neutron into a proton with the emission of a beta-minus particle;
- A beta-minus particle is an electron;
- Beta Decay - Positron Emission: When the number of protons in a nucleus is in excess, the nucleus may reach stability by converting a proton into a neutron with the emission of a beta-plus particle;
- A beta-plus particle is a positron;
- Positrons annihilate with electrons to produce two back-to-back gamma-rays;
- Beta Decay - Electron Capture: An inner orbital electron is attracted into the nucleus where it combines with a proton to form a neutron;
- Electron capture is also known as K-capture;
- Following electron capture, the excited nucleus may give off some gamma-rays. In addition, as the vacant electron site is filled, an X-ray is emitted;
- Gamma Decay - Isomeric Transition: A nucleus in an excited state may reach its ground state by the emission of a gamma-ray;
- A gamma-ray is an electromagnetic photon of high energy;
- Gamma Decay - Internal Conversion: the excitation energy of an excited nucleus is given to an atomic electron.

Chapter Review: The Radioactive Decay Law

- The radioactive decay law in equation form: N(t) = N_0 exp(-λt);
- Radioactivity is the number of radioactive decays per unit time;
- The decay constant is defined as the fraction of the initial number of radioactive nuclei which decay in unit time;
- Half Life: The time taken for the number of radioactive nuclei in the sample to reduce by a factor of two;
- Half Life = (0.693)/(Decay Constant);
- The SI Unit of radioactivity is the becquerel (Bq) - 1 Bq = one radioactive decay per second;
- The traditional unit of radioactivity is the curie (Ci);
- 1 Ci = 3.7 x 10¹⁰ radioactive decays per second.

Chapter Review: Units of Radiation Measurement

- Exposure expresses the intensity of an X- or gamma-ray beam;
- The SI unit of exposure is the coulomb per kilogram (C/kg);
- 1 C/kg = The quantity of X- or gamma-rays such that the associated electrons emitted per kg of air at STP produce in air ions carrying 1 coulomb of electric charge;
- The traditional unit of exposure is the roentgen (R);
- 1 R = The quantity of X- or gamma-rays such that the associated electrons emitted per kg of air at STP produce in air ions carrying 2.58 x 10⁻⁴ coulombs of electric charge;
- The exposure rate is the exposure per unit time, e.g. C/kg/s;
- Absorbed dose is the radiation energy absorbed per unit mass of absorbing material;
- The SI unit of absorbed dose is the gray (Gy);
- 1 Gy = The absorption of 1 joule of radiation energy per kilogram of material;
- The traditional unit of absorbed dose is the rad;
- 1 rad = The absorption of 10⁻² joules of radiation energy per kilogram of material;
- The Specific Gamma-Ray Constant expresses the exposure rate produced by the gamma-rays from a radioisotope;
- The Specific Gamma-Ray Constant is expressed in SI units in C/kg/s/Bq at 1 m;
- Exposure from an X- or gamma-ray source follows the Inverse Square Law and decreases with the square of the distance from the source.
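As a quick check on these relationships, the short sketch below (not part of the original review) applies the decay law, the half-life relation and the inverse square law to some made-up numbers.

```python
# Illustrative use of the relationships summarised above. The 370 MBq activity,
# 6 hour half-life and the distances are made-up example values.

import math

def activity(a0_bq, half_life_h, elapsed_h):
    """Remaining radioactivity using A = A0 exp(-lambda t), lambda = 0.693 / half-life."""
    decay_constant = 0.693 / half_life_h
    return a0_bq * math.exp(-decay_constant * elapsed_h)

def scale_exposure_rate(rate, d_old_m, d_new_m):
    """Inverse Square Law: exposure rate falls with the square of the distance."""
    return rate * (d_old_m / d_new_m) ** 2

print(activity(370e6, 6.0, 12.0))          # ~92.5 MBq left after two half-lives
print(scale_exposure_rate(1.0, 1.0, 2.0))  # doubling the distance quarters the rate
```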
Chapter Review: Interaction of Radiation with Matter

- Alpha Particles:
- exert considerable electrostatic attraction on the outer orbital electrons of atoms near which they pass and cause ionisations;
- travel in straight lines - except for rare direct collisions with nuclei of atoms in their path;
- energy is always discrete.
- Beta-Minus Particles:
- attracted by nuclei and repelled by electron clouds as they pass through matter and cause ionisations;
- have a tortuous path;
- have a range of energies;
- range of energies results because two particles are emitted - a beta-particle and a neutrino.
- Gamma-Rays:
- energy is always discrete;
- have many modes of interaction with matter;
- important interactions for nuclear medicine imaging (and radiography) are the Photoelectric Effect and the Compton Effect.
- Photoelectric Effect:
- when a gamma-ray collides with an orbital electron, it may transfer all its energy to the electron and cease to exist;
- the electron can leave the atom with a kinetic energy equal to the energy of the gamma-ray less the orbital binding energy;
- a positive ion is formed when the electron leaves the atom;
- the electron is called a photoelectron;
- the photoelectron can cause further ionisations;
- subsequent X-ray emission as the orbital vacancy is filled.
- Compton Effect:
- A gamma-ray may transfer only part of its energy to a valence electron which is essentially free;
- gives rise to a scattered gamma-ray;
- is sometimes called Compton Scatter;
- a positive ion results;
- Attenuation is the term used to describe both absorption and scattering of radiation.

Chapter Review: Attenuation of Gamma-Rays

- Attenuation of a narrow-beam of gamma-rays increases as the thickness, the density and the atomic number of the absorber increases;
- Attenuation of a narrow-beam of gamma-rays decreases as the energy of the gamma-rays increases;
- Attenuation of a narrow beam is described by the equation I = I₀ exp(-μx);
- the Linear Attenuation Coefficient, μ, is defined as the fraction of the incident intensity absorbed in a unit distance of the absorber;
- Linear attenuation coefficients are usually expressed in units of cm⁻¹;
- the Half Value Layer is the thickness of absorber required to reduce the intensity of a radiation beam by a factor of 2;
- Half Value Layer = (0.693)/(Linear Attenuation Coefficient);
- the Mass Attenuation Coefficient is given by the linear attenuation coefficient divided by the density of the absorber;
- Mass attenuation coefficients are usually expressed in units of cm² g⁻¹.
Chapter Review: Gas-Filled Detectors

- Gas-filled detectors include the ionisation chamber, the proportional counter and the Geiger counter;
- They operate on the basis of ionisation of gas atoms by the incident radiation, where the positive ions and electrons produced are collected by electrodes;
- An ion pair is the term used to describe a positive ion and an electron;
- The operation of gas-filled detectors is critically dependent on the magnitude of the applied dc voltage;
- The output voltage of an ionisation chamber can be calculated on the basis of the capacitance of the chamber;
- A very sensitive amplifier is required to measure voltage pulses produced by an ionisation chamber;
- The gas in ionisation chambers is usually air;
- Ionisation chambers are typically used to measure radiation exposure (in a device called an Exposure Meter) and radioactivity (in a device called an Isotope Calibrator);
- The total charge collected in a proportional counter may be up to 1000 times the charge produced initially by the radiation;
- The initial ionisation triggers a complete gas breakdown in a Geiger counter;
- The gas in a Geiger counter is usually an inert gas;
- The gas breakdown must be stopped in order to prepare the Geiger counter for a new event by a process called quenching;
- Two types of quenching are possible: electronic quenching and the use of a quenching gas;
- Geiger counters suffer from dead time, a small period of time following the gas breakdown when the counter is inoperative;
- The true count rate can be determined from the actual count rate and the dead time using an equation;
- The value of the applied dc voltage in a Geiger counter is critical, but high stability is not required.

Chapter Review: Scintillation Detectors

- NaI(Tl) is a scintillation crystal widely used in nuclear medicine;
- The crystal is coupled to a photomultiplier tube to generate a voltage pulse representing the energy deposited in the crystal by the radiation;
- A very sensitive amplifier is needed to measure such voltage pulses;
- The voltage pulses range in amplitude depending on how the radiation interacts with the crystal, i.e. the pulses form a spectrum whose shape depends on the interaction mechanisms involved, e.g. for medium-energy gamma-rays used in in-vivo nuclear medicine: the Compton effect and the Photoelectric effect;
- A Gamma-Ray Energy Spectrum for a medium-energy, monoenergetic gamma-ray emitter consists (simply) of a Compton Smear and a Photopeak;
- Pulse Height Analysis is used to discriminate the amplitude of voltage pulses;
- A pulse height analyser (PHA) consists of a lower level discriminator (which passes voltage pulses which are higher than its setting) and an upper level discriminator (which passes voltage pulses lower than its setting);
- The result is a variable width window which can be placed anywhere along a spectrum, or used to scan a spectrum;
- A single channel analyser (SCA) consists of a single PHA with a scaler and a ratemeter;
- A multi-channel analyser (MCA) is a computer-controlled device which can acquire data from many windows simultaneously.
Chapter Review: Nuclear Medicine Imaging Systems

- A gamma camera consists of a large diameter (25-40 cm) NaI(Tl) crystal, ~1 cm thick;
- The crystal is viewed by an array of 37-91 PM tubes;
- PM tube signals are processed by a position circuit which generates +/- X and +/- Y signals;
- These position signals are summed to form a Z signal which is fed to a pulse height analyser;
- The +/- X, +/- Y and discriminated Z signals are sent to a computer for digital image processing;
- A collimator is used to improve the spatial resolution of a gamma-camera;
- Collimators typically consist of a Pb plate containing a large number of small holes;
- The most common type is a parallel multi-hole collimator;
- Spatial resolution is best directly in front of a collimator;
- Parallel-hole collimators vary in terms of the number of holes, the hole diameter, the length of each hole and the septum thickness - the combination of which affect the sensitivity and spatial resolution of the imaging system;
- Other types include the diverging-hole collimator (which generates minified images), the converging-hole collimator (which generates magnified images) and the pin-hole collimator (which generates magnified inverted images);
- Conventional imaging with a gamma camera is referred to as Planar Imaging, i.e. a 2D image portraying a 3D object giving superimposed details and no depth information;
- Single Photon Emission Computed Tomography (SPECT) produces images of slices through the body;
- SPECT uses a gamma camera to record images at a series of angles around the patient;
- The resultant data can be processed using Filtered Back Projection and Iterative Reconstruction;
- SPECT gamma-cameras can have one, two or three camera heads;
- Positron Emission Tomography (PET) also produces images of slices through the body;
- PET exploits the positron annihilation process where two 0.51 MeV back-to-back gamma-rays are produced;
- If these gamma-rays are detected, their origin will lie on a line joining two of the detectors of the ring of detectors which encircles the patient;
- A Time-of-Flight method can be used to localise their origin;
- PET systems require an on-site or nearby cyclotron to produce short-lived radioisotopes, such as C-11, N-13, O-15 and F-18.

Chapter Review: Production of Radioisotopes

- Naturally-occurring radioisotopes generally have long half lives and belong to relatively heavy elements - and are therefore unsuitable for medical diagnostic applications;
- Medical diagnostic radioisotopes are generally produced artificially;
- The fission process can be exploited so that radioisotopes of interest can be separated chemically from fission products;
- A cyclotron can be used to accelerate charged particles up to high energies so that they collide with a target of the material to be activated;
- A radioisotope generator is generally used in hospitals to produce short-lived radioisotopes;
- A technetium-99m generator consists of an alumina column containing Mo-99, which decays into Tc-99m;
- Saline is passed through the generator to elute the Tc-99m - the resulting solution is called sodium pertechnetate;
- Both positive pressure and negative pressure generators are in use;
- An isotope calibrator is needed when a Tc-99m generator is used in order to determine the activity for preparation of patient doses and to test whether any Mo-99 is present in the collected solution.

Exercise Questions

1. Discuss the process of radioactive decay from the perspective of the nuclear stability curve.

2.
2. Describe in detail FOUR common forms of radioactive decay.
3. Give the equation which expresses the Radioactive Decay Law, and explain the meaning of each of its terms.
4. Define each of the following:
- (a) Half life;
- (b) Decay Constant;
- (c) Becquerel.
5. A sample of radioactive substance is found to have an activity of 100 kBq. Its radioactivity is measured again 82 days later and is found to be 15 kBq. Calculate:
- (a) the half-life;
- (b) the decay constant.
(A worked sketch of this calculation appears below, after the Further Information section.)
6. Define each of the following radiation units:
- (a) Roentgen;
- (b) Becquerel;
- (c) Gray.
7. Estimate the exposure rate at 1 metre from a 100 MBq source of radioactivity which has a Specific Gamma Ray Constant of 50 mR per hour per MBq at 1 cm.
8. Briefly describe the basic principle of operation of gas-filled radiation detectors.
9. Illustrate using a graph how the magnitude of the voltage pulses from a gas-filled radiation detector varies with applied voltage, and identify on the graph the regions associated with the operation of Ionisation Chambers and Geiger Counters.
10. Describe the construction and principles of operation of a scintillation spectrometer.
11. Discuss the components of the energy spectrum from a monoenergetic, medium-energy gamma-emitting radioisotope obtained using a scintillation spectrometer on the basis of how the gamma-rays interact with the scintillation crystal.
12. Describe the construction and principles of operation of a Gamma Camera.
13. Compare features of three types of collimator which can be used with a Gamma Camera.

Further Information
Nuclear Medicine is a fascinating application of nuclear physics. The first ten chapters of this wikibook are intended to support a basic introductory course in an early semester of an undergraduate program. They assume that students have completed decent high school programs in maths and physics and are concurrently taking subjects in the medical sciences. Additional chapters cover more advanced topics in this field. Our focus in this wikibook is the diagnostic application of Nuclear Medicine. Therapeutic applications are considered in a separate wikibook, "Radiation Oncology".
- Atomic & Nuclear Structure
- Radioactive Decay
- The Radioactive Decay Law
- Units of Radiation Measurement
- Interaction of Radiation with Matter
- Attenuation of Gamma-Rays
- Gas-Filled Radiation Detectors
- Scintillation Detectors
- Nuclear Medicine Imaging Systems
- Computers in Nuclear Medicine
- Fourier Methods
- X-Ray CT in Nuclear Medicine
- PACS and Advanced Image Processing
- Three-Dimensional Visualization Techniques
- Patient Dosimetry
- Production of Radioisotopes
- Chapter Review
- Dynamic Studies in Nuclear Medicine
- Deconvolution Analysis
- Sonography & Nuclear Medicine
- MRI & Nuclear Medicine
- Dual-Energy Absorptiometry
The principal author of this text is KieranMaher, who is very grateful for the expert editorial assistance of Dirk Hünniger during his German translation of the text and his contribution to the section on the Operation of a 99m-Tc Generator. You can send an e-mail message if you'd like to provide any feedback, criticism, correction, additions/improvement suggestions etc. regarding this wikibook.
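By way of illustration, here is a minimal worked sketch of the decay-law arithmetic that Exercise 5 asks for. The code is an addition of mine and is not part of the wikibook; the variable names are arbitrary, and the sketch simply applies A = A0·exp(-λt) to the numbers given in the question.

```python
# Worked sketch for Exercise 5 (not part of the original wikibook):
# activity falls from 100 kBq to 15 kBq over 82 days.
import math

A0, A, t = 100.0, 15.0, 82.0            # initial activity (kBq), later activity (kBq), elapsed time (days)

# Radioactive Decay Law: A = A0 * exp(-lambda * t)
decay_constant = math.log(A0 / A) / t   # per day
half_life = math.log(2) / decay_constant

print(f"decay constant ~ {decay_constant:.4f} per day")   # ~0.0231 per day
print(f"half-life ~ {half_life:.0f} days")                # ~30 days
```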
http://en.wikibooks.org/wiki/Basic_Physics_of_Nuclear_Medicine/Print_version
This article was researched and written by a student at the University of Massachusetts, Amherst participating in the Encyclopedia of Earth's (EoE) Student Science Communication Project. The project encourages students in undergraduate and graduate programs to write about timely scientific issues under close faculty guidance. All articles have been reviewed by internal EoE editors, and by independent experts on each topic.
Freshwater mussel (family Unionidae) populations are declining globally due to alteration in habitat, contamination, climate change and the introduction of exotic species. They are the most imperiled group of species in North America. From a group of almost three hundred species, 7% have become extinct in the last fifty years, and 65% are threatened or endangered. This significant loss of benthic biomass may result in large-scale alterations of freshwater ecosystem processes and functions.
Importance of freshwater mussels to stream ecosystems
Freshwater mussels are important to food webs, water quality, nutrient cycling, and habitat quality of freshwater ecosystems. They spend their lives either fully or partially buried in sediment, usually only moving to seek conditions more favorable to survival. Species distributions depend on their biology and habitat preference, distribution of fish hosts, and environmental constraints. Freshwater mussels are found in permanent aquatic habitats such as streams, rivers, and lakes. Relatively stationary, these filter-feeding mussels remove suspended algae, bacteria, zooplankton, and phytoplankton from the water column, redistributing it in the form of feces and pseudofeces biodeposits. Freshwater mussels filter small particles that are largely unavailable to other organisms and convert them to larger particles that can be consumed by a greater diversity of animals. The cycling of particulate matter stimulates benthic productivity by providing a significant food source rich in dissolved nutrients that increases the abundance of benthic invertebrates and provides nutrients for primary producers. Filter-feeders can also have a positive effect on water quality. Mussels are an important source of food for aquatic predators and land-based scavengers such as river otters (Lutra canadensis), muskrats (Ondatra zibethicus), raccoons (Procyon lotor), and skunks (Mephitidae). Juveniles are eaten by flatworms (Platyhelminthes), leeches (Hirudidae), and crayfish as well as an array of freshwater fish including carp (Cyprinidae), sturgeon (Acipenseridae), catfish (Ictaluridae), and sunfish (Centrarchidae). Gulls and shorebirds feed on mussels when water levels are low. Unionid shells provide a suitable substrate for epiphytic and epizoic colonization, and help stabilize fine-grained sediments that other organisms use for habitat.
Threats to Freshwater Mussels
Freshwater mussels are threatened by a number of factors including the loss of fish hosts, increased parasite loads, habitat loss and fragmentation, climate change, and the effects of introduced species.
Loss of fish hosts
The life history of freshwater mussels is intimately linked with the life history of some stream fishes, so factors that decrease fish populations can have an adverse effect on freshwater mussels.
Reproduction and life history of freshwater mussels
Unionidae are dioecious, meaning they have separate sexes. Breeding success is higher in populations with high densities of males and females and when environmental conditions (e.g.
flow and temperature) are optimal for the survival of sperm. Low-density populations often encounter chronic failure to breed due to low contact between sexes as well as vulnerability to stochastic events such as floods. Hermaphrodites are rare but present in low-density populations where males and females are less likely to encounter each other. Males release sperm into the water and females filter it from the water to fertilize their eggs internally. Larvae, called glochidia, are released from the female mussel and complete their larval development by attaching to the gills or fins of aquatic vertebrates, typically fish, as obligate parasites. The high fecundity of mussels increases the likelihood that at least some glochidia will find a host. Glochidia become juvenile mussels over the course of a few days to a few months, then release from the fish, burrow into the sediment, and spend the rest of their lives as free-living animals. Viability of freshwater mussel populations is intimately linked to the availability and health of fish hosts as well as variations in mussel density and flow conditions. Mussels can be categorized as either "generalists" with the ability to use a wide range of fish species as larval hosts, or "specialists" which use only a few closely related fish species as hosts.
Factors affecting fish populations
Fish populations may be affected by a variety of causes including physical impediments such as dams and roads, climate change, overfishing, increased predation, sea lice, pollution, and habitat degradation. Reproduction of specialist mussels relies upon host fish dynamics, adding more uncertainty to the reproductive process.
Effects of dams and roads
In the Connecticut River watershed, the alewife floater (Anodonta implicata) relies on anadromous fishes, American shad (Alosa sapidissima), alewife (Alosa pseudoharengus), and blueback herring (Alosa aestivalis), as hosts. The loss of these species upstream due to impassable dams will also eliminate the alewife floater in these areas. Stocking of fish in river systems may help sustain populations of mussels where streams have become too warm and degraded to support self-sustaining fish populations. The host fish populations of the dwarf wedgemussel (Alasmidonta heterodon), a federally endangered species, rely on seasonal movement into small streams that are often impeded by road-stream crossings.
Effects of climate change
Host fish availability might decline as a result of climate change. Host fish include salmonids, which are sensitive to temperature rise as well as to the decrease in dissolved oxygen that comes with it. Increased precipitation may have a negative impact on gravel spawning grounds, which can be completely destroyed by large floods, reducing egg survival. With the decline in host salmon and trout stocks, low host densities may be limiting recruitment of some mussel species.
Increased parasite load
Increased parasite densities are associated with reduced mussel reproductive output and physiological condition, especially in impounded and nutrient-rich streams. Parasites of unionoids feed on the host's gill tissue, which is used not only for respiratory function but also for feeding and reproduction. Digenetic (host-castrating) trematodes target gonadal tissue and also affect growth rate and larval production in freshwater mussels. It has been observed that physiologically compromised mussels are more likely to be susceptible to parasitism.
Habitat loss and fragmentation
Habitat loss and fragmentation have greatly contributed to the loss of freshwater mussels. Many mussel populations are composed exclusively of old individuals because human-induced changes to habitat have made it unfavorable for juvenile survival. Important causes of habitat loss are the building of dams and surrounding land use. Habitat disturbance by river engineering is often seen as the biggest culprit. The Nature Conservancy estimates that there are 2,622 dams in the Connecticut River watershed alone. All dams, regardless of size, have an effect on freshwater mussel survival because dams may affect hydrology, water temperature, water quality, and sediment transport. Fragmentation of river systems by multiple dams leads to isolated mussel populations and decreased reproductive success, which will eventually lead to a higher risk of extirpation. Dams impede or block the movement of native and anadromous fish, which also lowers reproductive rates and survival. Habitat loss and ultimately species loss will occur downstream due to unnaturally high flow variations on short time scales, which cause the loss of fine sediments. Drawdowns of impoundments, if unmanaged, will also cause high mortality of mussels inhabiting the impoundment due to the drying of their habitat. The increased production of electricity by hydropower dams during periods of high demand causes rapid changes between low and high flow, producing near flood or drought conditions for mussels. Other contributors to habitat loss and fragmentation include road-stream crossings, poorly planned land use and development, and industrialization. Flood prevention and post-flood infrastructure have also caused a considerable amount of habitat degradation and high mussel mortality.
Surrounding land use
The quality of both water and sediment in river systems is affected by land conversion, agriculture, industries, urbanization, and industrialization. Nonpoint-source pollution stemming from a variety of land-based sources reaches waterways by surface runoff, groundwater, or atmospheric deposition. Primary pollutants include bacteria, sediment, road salt, pesticides, herbicides, hydrocarbons, nutrients including nitrogen and phosphorus, as well as a number of other chemicals. Some species of freshwater mussels are affected by eutrophication more than others. Nitrogen in the form of ammonia and nitrates can be toxic to freshwater mussels and other aquatic organisms. Sediment pollution contributes to the loss of mussel habitat, decreased channel stability, and loss of fish species, and causes a disruption of mussel feeding and respiration.
Changes in temperature
Changes in temperature have the potential to affect individual growth, longevity, and reproductive success. It is predicted that by around 2050 there will be a 1-2 degree centigrade increase in mean air surface temperature, which will affect surface water temperature. Although elevated surface water temperature has been shown to enhance recruitment (post-settlement survival) and increase the growth rates of glochidia, this is primarily observed where mussels have had a chance to acclimate to the temperature; extreme thermal events, however, may be detrimental to their survival. Temperature rise will also play a role in the timing of spawning, causing females to release glochidia into the water column earlier, thus uncoupling the timing of mussel and fish reproduction cycles, especially in anadromous fish.
Changes in precipitation
There have been recent increases in cloud cover as well as precipitation, evidenced by greater storm events, altering the habitat structure of mussel beds. Mussels appear to recruit well during wet years, and recruitment may even increase as a result of increased precipitation. Since mussels require clean, well-aerated sand, higher river flows associated with wet years may be able to increase habitat. However, rainfall may also negatively influence mussel habitat availability by increasing high flow and runoff, thereby changing patterns of erosion and deposition that degrade the river bed. Effects from increased precipitation on recruitment success will therefore vary with the size and hydraulic characteristics of each river. In contrast, changes in seasonal patterns may be detrimental to mussel populations if summers continue to become drier. Mussel beds would be at risk of drying out, and silt deposits, algal growth, and organic debris would increase. Periodic floods could serve to clean sediments and reduce debris, but mortality would occur when mussels are removed from the sediment and washed downstream.
Changes in sea level
Melting of ice caps may lead to a rise in sea level. Mussel populations in the lower reaches of rivers are at greater risk of immersion in sea water. Most freshwater mussels cannot tolerate saline conditions and a few species would be killed by permanent immersion in salt water. Also, many populations would be affected by sporadic intrusion by saltwater or brackish water caused by storm events, spring tides, storm surges, and onshore winds.
Introduction of exotic species
The zebra mussel (Dreissena polymorpha) was introduced to North America in the late 1980s and has been spreading throughout the Mississippi River basin, which contains the largest number of endemic mussels in the world. Following the introduction of zebra mussels, native mussel populations have been extirpated within 4-8 years. The invasion of zebra mussels has increased the extinction rate of native species from 4% to 12% per decade. D. polymorpha is a biofouling organism that smothers other mollusks and competes with other suspension feeders for food. They attach to solid surfaces using adhesive byssal fibers and possess a planktonic larval stage that stays in the water column before settlement. No native freshwater mussel has these characteristics. Unionid mussels have a complex life cycle and spend most of their lives buried halfway into the sediment, providing a suitable surface for zebra mussels to colonize. D. polymorpha impairs unionid metabolic activity and locomotion, depleting their energy reserves and effectively starving them to death. Zebra mussels are also known to harm other suspension feeders through massive filtration, depleting all food sources. Most river systems in North America will be colonized with zebra mussels in the near future, substantially reducing the species richness and abundance of native mussels. Populations that had survived several decades of environmental degradation were wiped out within a few years of the D. polymorpha invasion in the Mississippi River basin. The zebra mussel invasion reduces populations to small, fragmented assemblages, which become prone to extinction from other anthropogenic threats. In North America, freshwater mussels (Unionidae) are declining at a catastrophic rate. Many factors including habitat degradation, pollution, climate change, and the introduction of exotic species point toward impending mass extinction.
The significant loss of biodiversity may permanently alter ecosystem functioning in rivers and lakes as well as alter the rate of ecological processes. Recent developments in management such as statutory protection, habitat restoration, and captive breeding, provide some optimism, but it is clear more research is needed in order to assess the extent to which populations are affected and what measures should be taken in the future.
http://www.eoearth.org/article/Freshwater_mussels_in_North_America_-_factors_affecting_their_endangerment_and_extinction
The discipline and science of accounting is essential for the world economy to function well. Without an accurate way to keep track of investments, expenditures, depreciation, and more, there would be no way to understand the true financial picture of a company and no way to be confident in its prospects. This would kill commerce, capital investment, and other transactions that keep the economy running. The history of accounting is more fascinating than many people probably imagine, and several figures have made key contributions to the science. One of the most important people in the history of accounting is Luca Pacioli, a Franciscan friar who lived during the fifteenth and sixteenth centuries, and who is today known as the "Father of Accounting." This resource will provide a history of accounting and an overview of Pacioli's contributions to the discipline.
History of Accounting
Most people are not likely to think of accounting when the topic of the "world's oldest profession" is raised, but many experts believe that accounting fits that description to a tee. From the start, it was necessary for individuals to have a way to keep track of their business dealings even if they were largely self-sufficient, merely growing their own food and taking care of their other needs. As civilization progressed, ancient bookkeeping methods were developed. In the so-called "Fertile Crescent," ancient bookkeepers would use clay tokens of different shapes and sizes to keep track of wealth. Each token could represent a different commodity — sheep, cattle, grain, and so forth. New technologies and recording methods developed over time, and as money was introduced to facilitate economic exchange, the token system was abandoned in favor of written accounts. During the medieval period, Italian merchants began to involve themselves in trade with other cities, first across the Mediterranean Sea and then in other parts of the world. The increasing complexity of these trade relationships required better record keeping, and the system of double-entry bookkeeping was invented. Luca Pacioli, an Italian Franciscan monk, wrote Summa de Arithmetica, Geometria, Proportioni et Proportionalita in 1494, and it was the first full description of this method of accounting. During the Enlightenment and Industrial Revolution, Britain's rise as the world's chief economic power meant that accounting methods would have to advance as well. Men such as Josiah Wedgwood began implementing systems of cost accounting in their companies, and professional accountants began offering their services in London. Such methods were carried over to the United States, and large firms such as General Motors adopted these accounting methods as well. Today, standardized accounting practices are in use across the globe, helping companies around the world to stay afloat, attract investment, and keep the engine of the world economy running.
The Life of Luca Pacioli
Luca Pacioli was born in 1445 in Tuscany, Italy, where he received an education in the ways of medieval merchants and commerce. Over time, his interest in mathematics led him to become an expert tutor in the subject, and he wrote a textbook on mathematics to help instruct his students. During the years 1472–1475, Pacioli became a Franciscan friar, but he did not end his tutoring career. In 1494, Pacioli published his most famous work, Summa de Arithmetica, Geometria, Proportioni et Proportionalita.
In addition to providing instruction in standard mathematics, this work would also describe double-entry bookkeeping completely for the very first time, which has earned him the title "father of accounting." It was also the first textbook on algebra that was written in the vernacular language of northern Italy. Eventually, Pacioli would travel to Milan, where he became an associate of Leonardo da Vinci. Da Vinci actually learned a lot about mathematics from Pacioli, and the knowledge he gained would help Da Vinci create some of the excellent anatomical drawings for which he is known today. Much of Pacioli's work in mathematics was not original or unique, but his writings had a large influence in Italy, allowing information that was formerly the possession of the elite alone to be disseminated among the general populace. Pacioli died in 1517, the same year that Martin Luther's 95 Theses in Germany would help spark the Protestant Reformation.
Friar Luca's Contributions to Accounting
Pacioli did not actually invent double-entry bookkeeping, nor did he ever claim to have done so. He gave credit to one Benedetto Cotrugli for coming up with the system, as he relied on an unpublished manuscript by Cotrugli for the portion of his own Summa that deals with accounting. Nevertheless, Pacioli's summation of the method was incredibly important for the history of accounting, as it was one of the first descriptions of double-entry bookkeeping to be distributed on a large scale. Double-entry bookkeeping allows a company or individual to keep track of credits and debits and thereby keep accounts in balance. Every financial transaction is recorded in two columns, debits on the left and credits on the right, ensuring that the way each transaction affects every aspect of the company's finances is properly recorded. For example, a company that takes payment for a specific service will record a debit in the cash account and a credit in the revenue account, allowing it to keep track of the real impact of the payment on the company's bottom line. Double-entry bookkeeping itself may not sound all that exciting, but without it, most experts would confess that the industrial revolution and growth of free-market capitalism could never have happened. Luca's description of double-entry bookkeeping ensured that the process would become widely adopted across the Western world and would encourage the rise of Europe and the United States as global powers. Without the work of an otherwise obscure Franciscan friar in the fifteenth and sixteenth centuries, the economy as we know it today could not exist. Pacioli's description of double-entry bookkeeping led to the rise of modern accounting, accurate record keeping, and the overall growth of industry and trade. Understanding his role in accounting history is important for understanding Western history and the way in which the economy functions today.
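The debit-and-credit mechanics described above can be illustrated in a few lines of code. The sketch below is a modern illustration of the principle only; the account names, the amount and the helper function are invented for the example and are not drawn from Pacioli's Summa.

```python
# A minimal sketch of the double-entry principle described above (an
# illustration of mine, not Pacioli's notation): every transaction is posted
# as a debit to one account and an equal credit to another, so total debits
# always equal total credits and the books stay in balance.
from collections import defaultdict

ledger = defaultdict(lambda: {"debit": 0.0, "credit": 0.0})

def record(debit_account, credit_account, amount):
    """Post one transaction: a debit to one account and an equal credit to another."""
    ledger[debit_account]["debit"] += amount
    ledger[credit_account]["credit"] += amount

# The example from the text: a company is paid for a service, so the cash
# account is debited and the revenue account is credited by the same amount.
record("Cash", "Revenue", 500.00)

total_debits = sum(entry["debit"] for entry in ledger.values())
total_credits = sum(entry["credit"] for entry in ledger.values())
assert total_debits == total_credits      # the defining check of double entry
print(total_debits, total_credits)        # 500.0 500.0
```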
http://www.onlineaccountingdegree.net/resources/the-father-of-accounting-luca-pacioli/
The Babylonian and Assyrian Legal Systems of Ancient Mesopotamia Legal Systems Very Different from Ours Professor David Friedman Babylonia and Assyria were two empires of ancient Mesopotamia, a word that literally translates to “the land between the rivers” in Greek[i], as it was located between the Tigris and Euphrates Rivers,[ii] and covered modern-day Iraq, as well as parts of Turkey, Syria, and other surrounding countries.[iii] Mesopotamia is often referred to as the “cradle of civilization,”[iv] since “Assyria and Babylonia led the world in science, law, political development, mathematics, technology, and literature.”[v] Before the Babylonian and Assyrian empires, the Sumerians (whose exact origin is still uncertain but began ruling around 3500 BCE[vi]) followed by the Akkadians (beginning in 2350 BCE and forming the first regional state empire in Mesopotamia)[vii] lived upon and ruled over Mesopotamia.[viii] It is important to distinguish between the Babylonians and Assyrians, as these groups, although similar in that they were both descendants of the ancient Akkadian peoples[ix] and spoke almost identical Semitic languages,[x] formed two distinct empires.[xi] The southern part of Mesopotamia was ancient Babylonia, while Assyria took over the northern part of the region.[xii] Nonetheless, their legal systems were very similar; thus, law that developed in Old Babylonia and evidently had influence over Assyria will be the primary focus of this discussion. The Code of Hammurabi and Clay Tablets: The Old Babylonian period took place from 2000-1600 BCE.[xiii] During the later part of this period, estimated around 1792-1750 BCE[xiv], King Hammurabi reigned over Babylonia, making it a “cultural supremacy” by not only forming an extremely strong military, but also by seeking to secure social justice throughout the land.[xv] With that being said, Hammurabi created the Code of Hammurabi (“the Code”), one of the most famous features of Mesopotamia and “the earliest-known example of a ruler proclaiming publicly to his people an entire body of laws, arranged in orderly groups, so that all men might read and know what was required of them.”[xvi] The Code, which was found in 1901, is carved on an eight-foot tall black stone monument[xvii] and contains 282 sections, some of which are missing, as the actual monument is only nearly complete. It is important to note that Hammurabi did not come up with these laws himself, as earlier contracts with very similar phrases have been located and it is evident that Hammurabi often added onto other already existing law.[xviii] Specifically, Hammurabi’s Code likely adopted and added onto common law of the Sumerians.[xix] Nonetheless, the Code remains as one of our guides to the Babylonian and Assyrian legal systems. 
Most notable in the Code is the theme of “eye for eye,” but this rule seems to apply only in instances where both parties are of equal classes.[xx] Still, it is expected and it is true that because the Code was carved very long ago in such an ancient language, there are problems with the modern translations of the Code.[xxi] The Code has been translated many times, making each translation somewhat different and definitely not in the original wording of the Code.[xxii] Even though several interpretations of the Code exist, this flaw has not stopped historians from, nonetheless, applying the Code in their descriptions of the legal system of Babylonia.[xxiii] Besides the Code, the authors have also relied on other sources of law from before, during, and after Hammurabi’s reign to get a better sense of the Old Babylonian legal system as a whole. As will soon be discovered, these authors often depend on thousands of deeds, judicial decisions, letters, contracts, and other sources of law, and have coupled these sources with the Code to try and piece together the legal system of Babylonia.[xxiv] As stated above, in studying Old Babylonian law, historians are dealing with an extinct language inscribed on certain material (clay, stone, etc.), not to mention such texts are likely missing pieces as they are thousands of years old; thus, each source presented must be questioned. With that being said, the main source for this paper is Babylonian and Assyrian Laws, Contracts and Letters by C.H.W. Johns. As the title of his book suggests, Johns references thousands of contracts and letters from Mesopotamia, yet makes it clear that problems do occur in copying ancient inscriptions due to grammar and vocabulary issues, as well as the nature of the contracts and letters since we do not know exactly under what circumstances and with what intent they were written.[xxv] Thus, although the sources are all there, they are not completely helpful in accurately describing Mesopotamian legal systems. Similarly, Johns states, “[n]umerous as our documents are, they do not form a continuous series,” meaning that the contracts and letters may be from any day in a period spanning hundreds of years, in hundreds of different situations, and among hundreds of different members of society. For instance, one letter might be from an ordinary Assyrian man, while another might be from an Old Babylonian king. Throughout his book, Johns constantly references the Code (under his translation) and, when he mentions aspects of the Old Babylonian or Assyrian legal systems without referring to the Code, he interprets such information from the previously mentioned contracts and letters, some of which are translated at the end of his book. Regarding letters, Johns mentions that such communication was often not dated and was carved into a piece of clay shaped like a miniature pillow and placed into an “envelope,” a thin sheet of clay.[xxvi] Johns states that the most important letters are those of Hammurabi, as they obviously give a much fuller insight into the times of Hammurabi.[xxvii] I will often compare and contrast Johns’ book with two other important sources to get a better feel of what the law might have really been like. In The Babylonian Laws, a two-volume set, G.R. Driver and John C. 
Miles translate the Code of Hammurabi, as well as other Babylonian and Assyrian laws derived from clay tablets, often adding commentary of their own.[xxviii] They also come to the conclusion that society during Hammurabi's time followed other established ordinances, besides the Code, as several sections of the Code refer to outside rules by stating "as in the King's ordinances." However, such ordinances are not available to us. Driver and Miles maintain the idea that Mesopotamia, from the Sumerian times to the Assyrian times, followed generally the same common law.[xxix] I will also compare comments and translations by Georges Contenau from his book, Everyday Life in Babylon and Assyria.[xxx] Contenau was a French archeologist and translated texts from Babylonian and Assyrian clay tablets. This book is a direct translation of Contenau's original French version, but the translators, K.R. & A.R. Maxwell-Hyslop, have included notes where they find them necessary to better explain the text to English readers. I have also utilized various secondary sources, including Johns' commentary in the 11th edition of The Encyclopaedia Britannica (1910-1911), and other sources that are referenced and available in the endnotes. Similarities among sources will indicate possible accuracy in translating the Code, contracts, and letters, and the differences will display the ongoing uncertainty of specific areas of Babylonian law.
Old Babylonian Society
The General Class System
Old Babylonia had "three great classes, the amêlu, the muskênu, and the ardu,"[xxxi] as stated by Johns. The amêlu included the king (who, according to Contenau[xxxii], publicized laws, fixed the calendar and taxation, and decided when to begin and end a war), chief officers of state, and landed proprietors.[xxxiii] With such strong titles came increased fines and punishments for liabilities.[xxxiv] Even so, we still continue to see the "eye for eye" theme.[xxxv] Amêlu class members were viewed as "gentlemen" or "noblemen"; however, over time and under the Code, amêlu often meant no more than a man who was a craftsman.[xxxvi] The muskênu, on the other hand, included most of the subject-population—the common, free man who owned slaves, paid less than an amêlu for doctor fees, and paid less to a wife for a divorce.[xxxvii] A muskênu was "a person of less consideration," but he had no legal disability as far as the Code tells us under Johns'[xxxviii] and Contenau's understanding.[xxxix] However, Contenau believes that the muskênu was extremely poor and only a small step up from a slave, and actually considers the "free man" to be at the very top of the social ladder[xl] (I would have to disagree, as the free man could still be very poor and lower on the social ladder).
Johns believes that the muskênu class probably included a slave's child adopted by free parents, and he later mentions, as will soon be seen, that adoption of a slave's child results in the child being free.[xli] Finally, all authors agree that the ardu was the slave.[xlii] According to Contenau, ardus were marked in some way, yet we do not know exactly how.[xliii] Slaves were controlled by their masters in a parent-child type relationship and were considered chattel, especially since they could be "pledged for debt."[xliv] A slave was, thus, property of the owner.[xlv] Runaway slaves are often mentioned in the Code, which Johns believes is indicative of a slave's miserable living conditions.[xlvi] Under the Code, it was a crime to harbor a slave or to keep a runaway slave that one found, and damages would be owed to a master for injuries to his slave.[xlvii] A slave was subject to mutilation if he attacked a free man or repudiated his master, yet a slave was oddly permitted to marry a free woman (their children would be free—both Johns and Contenau agree on this) and could own property with her.[xlviii] However, unlike in the case of a free man, the slave's master acquired the slave's property at the slave's death, rather than the slave's children acquiring the property.[xlix] Additionally, a master was required to pay for medications and care if his slave was ill.[l] So, then, a master did not truly have full power to treat his slave as he wished. As declared by Johns and Contenau, most of our understanding surrounding Babylonian slavery comes from the documents (on tablets) concerning the sale of slaves.[li] Such documents have suggested that slaves could be returned if the buyer found a "defect" in the slave, whether it was physical (such as a serious disease—a very horrible disease, the bennu, is mentioned in several Babylonian contracts, while the sibtu is mentioned in Assyrian deeds and, Contenau believes, was some sort of paralysis), or the fact that the slave had a tendency to run away.[lii] These defects clearly lowered the value of the slave, and understandably so, because what good is a slave to his master if the slave is about to die or is so ill that he cannot work and fulfill his purpose? Johns also mentions the abundance of information on Assyrian slaves, particularly concerning the differences between male and female slaves. A male slave lived in his master's house until the master found him a wife to marry, and the wife was often a female slave.[liii] Female slaves lived in their master's house until they were old unless they married a slave, since married slaves lived in their own homes.[liv] An important thing to note is that slaves, according to Johns, were sold with their families and never alone (unless the slave was a young man who would naturally have left his family home at that age, had he been free).[lv] However, Contenau disputes this point and claims that a master could sell the children from slave marriages separately, but this was not a common practice.[lvi] Therefore, it is likely that slaves were generally sold with their families.
Next comes a class—the rîd sâbî and the bâ'iru—whose roles are uncertain, according to Johns.[lvii] Under Johns' interpretation of the Code, these two titles were closely connected officials who "had charge of the levy, the local quota for the army, or for public works."[lviii] They each had a benefice, containing land and other property, with the king being the benefactor to whom such things were credited.[lix] They were basically the king's servants and had to follow his orders at all times, including being away from home and family for a long time, or else they would have committed a penal offense.[lx] These officials would often be engaged in military tasks and, if caught by an enemy, would have to pay for their own ransom (if they could not pay, their towns might help them, and if their towns could not pay, the state might).[lxi] Johns and Driver & Miles speak of another interesting class, the votaries, consisting of females who were vowed to a god[lxii] (either Shamash, the sun god and the god of justice, or Marduk, the chief god).[lxiii] Driver & Miles believe that fathers were the ones to offer their daughters as votaries to a god.[lxiv] The votary was to be respected by the people, as indicated in Johns' translation of the Code: "§ 127. If a man has caused the finger to be pointed at a votary, or a man's wife, and has not justified himself, that man shall be brought before the judges, and have his forehead branded."[lxv] Further, the votary lived in a "bride house," or convent.[lxvi] This did not mean she was constrained to stay in the convent forever, as she was even permitted to leave and marry; yet, she was still required to maintain her respect.[lxvii] For example, under the Code, if a votary opened a "beer-shop" or even drank in one, she would be burned as a punishment.[lxviii] In addition, a votary was required to remain a virgin even if married, forcing her to give her husband a maid to have children with if he wished to do so.[lxix] However, Driver & Miles make note that votaries often had children, but they were not legitimate, according to tablets.[lxx] Even if a votary was not married to a man, her vow to the god allowed her to rank as a married woman and be protected from public slander.[lxxi] Contenau declares that the available contracts make it clear that a seller is he "who gives, who delivers" and a buyer is he "who fixes the price."[lxxii] There was great responsibility on the seller regarding the object he sold. For example, Contenau brings up a section in the Code: an architect who built a faulty house in which the owner died due to the house's defect would be killed; if the owner's child was killed, the architect's child would also be killed[lxxiii] (eye for eye). Similarly, the builder of a leaky boat had to fix the boat and pay for it himself.[lxxiv] Both Johns and Driver & Miles speak of partnerships; however, their thoughts tend to differ.
Johns states that partnership can be as simple as two men buying a piece of land together.[lxxv] Partnerships also included two people doing business together in that each partner contributed a certain amount of money and shared profits—such a partnership was over when each partner took his share, but a partnership renewal was still possible.[lxxvi] Johns makes sure to note that further research about partnerships is needed, but he continues to rely on clay tablets, since the Code, in his translation, at least, makes no mention of partnerships.[lxxvii] On the other hand, Driver & Miles translates the Code as including partnerships in § U[lxxviii] (Note: Both authors are missing certain sections of the code, specifically after § 65 and before § 100, and include letters instead. Johns has three sections in this gap [X, Y, Z], while Driver & Miles have A through V. The Code of Hammurabi was missing some sections, as it was almost whole when found—the extra sections inserted by various authors might be what they believe to have been in the law according to copies of the Code or other texts). § U states, “If a man has given silver to a man for a partnership, they shall divide the profit or the loss which there may be in proportion before a god,” indicating an equal partnership that carries out a certain aspect of business but ends when that business is taken care of,[lxxix] just as Johns argued. However, as mentioned in the note, not all translators of the Code agree that the partnership section was included—still, it is evident by at least some clay tablets. The Legal Process and Penalties Judges are mentioned in the Code, although not to a great extent; even so, Johns believes that judges decided the penalties of those who violated the Code.[lxxx] I agree, as Judges were often punished for decision-related functions under the Code. Judges were truly bound to their duties in court, as they would be dismissed from their positions just for revoking previous decisions, since judgments were irrevocable under the Code, Johns states.[lxxxi] Judges had to examine the words of witnesses and, in capital punishment cases, a judge would likely give six months for the accused to find and get help from witnesses,[lxxxii] which I view as displaying the importance of witnesses as well as the value of evidence and fairness in the justice system. 
Unlike in several civilizations, families could not cut off or disinherit their sons, children could not turn their widowed mothers out of their homes, and a widow with children could not remarry unless first asking to do so before a judge.[lxxxiii] There is no evidence of whether a judge was paid for carrying out his duties, but he was highly respected and honored by the people; in fact, it is very likely that the king acted as judge from time to time in certain cases.[lxxxiv] Although we do not know how judges were ranked, Johns argues that some judges were regarded as more "royal" than others, as we see evidence, from clay tablets, of there being "king's judges."[lxxxv] Additionally, evidence points to the role of judge as being hereditary, passing on from father to son(s).[lxxxvi] In fact, there is evidence of one woman, though also a scribe, among the judges, but it is still not clear whether she was a judge's daughter and, if so, whether daughters had an absolute right to be a judge if there was no son.[lxxxvii] Driver & Miles also mention that not much is known about judges and they actually rely on the research of others in determining the roles of judges[lxxxviii] (likely from clay tablets). Yet, similar to Johns' theory, the letters of Hammurabi actually make it quite clear that Hammurabi often acted as judge, dealing with offenders and punishing them, but that he preferred to refer cases to the courts.[lxxxix] One interesting point Driver & Miles make is that judges are mostly referred to in the plural, indicating that they might have sat together and agreed on one decision; with that being said, the authors go on to suggest that when the singular term "judge" is used, the judge might have only acted as the leader of a group of judges (a Neo-Babylonian tablet mentions a "president of the court").[xc] This seems plausible; although, does this mean that the entire bench of judges would be punished for violating the Code, or would only the "president of the court" be punished? As argued by Johns, in-court cases had a complex procedure. First, the plaintiff would make a statement and then the defendant would make a counter-statement in front of the judge.[xci] The object or deed at issue was brought into court and placed in the hands of the god, so that "the decision was 'the judgment of Shamash.'"[xcii] Witnesses would take an oath in front of the king and god, and a jury was present in court; interfering with the duties of witnesses and jurors was punished under the Code.[xciii] As mentioned above, evidence was important and a judge would often go to the estate at issue to see it for himself in estate disputes.[xciv] Judges then made a final decision, and damages were even awarded when debt or compensation was owed.[xcv] It is important to note the significance of an "oath of god" in Babylonia (and religion's role, as a whole, which will be mentioned in another section), as the oath is mentioned quite often in the Code since it was likely administered by judges to parties and witnesses and even occurred in contracts.[xcvi] As reported by Johns, out-of-court settlements were common between parties, with a scribe drawing up the contract that was binding after both parties "swore by the gods and the king."[xcvii] He likely came to this conclusion based on many contracts from such settlements carved onto tablets.
Witnesses would often be involved and have their names written on the contract, and both parties and the scribe kept a copy of the contract[xcviii] (perhaps identical tablets were found?). When something went wrong, these cases were apparently brought to a judge.[xcix] For some reason, Johns is the only author, of the three I am covering, who goes into detail about witnesses in cases, and his proof comes from both the Code and the tablets. There were three types of witnesses: those who were the elders of the city and were likely nominated or approved by the king, sometimes even including other judges; those whom the parties chose to testify in their favor or those whom the judge chose because he believed they would know needed facts; and those such as parties' family members, neighbors of the property at issue, or officials who were somehow involved, who witnessed documents.[c] Johns asserts that witnesses had to take an oath during Old Babylonian times, but in Assyrian times, witnesses did not have to take an oath.[ci] Most authors generally agree upon the translation of penalties under the Code, indicating that such penalties were likely. Johns claims that, under the Code, plaintiffs who lost their cases did not go unpunished, especially if they purposely brought false claims. In serious cases, such a mistake would be punished "eye for eye" (example—in § 3 of the Code, if a false claim was used in a capital punishment case, the accuser would be put to death).[cii] Most authors agree, including Driver & Miles, as their translations of §§ 1-4 of the Code prove various punishments existed for false accusers. This is interesting because it shows respect for the court's time and the money spent to try cases, much as modern judges do not like to see court time and resources wasted, as well as respect for a falsely accused defendant. Furthermore, after reviewing tablets, Johns asserts that if litigants chose to reopen cases, they could possibly be required to "dedicate" their eldest child to a god or goddess as a "burnt offering"—this was a huge deterrent to stop others from wasting time reopening an already closed decision, but there is no proof that it was ever used, as, most of the time, some form of payment was required instead.[ciii] I would agree that such a penalty was probably only used as a deterrent to maintain court efficiency, since other penalties had the same goal of keeping order. Johns moves on to criminal penalties, which are far harsher than those we are used to. Capital punishment was inflicted on violators of the Code who were accused of theft, rape, death by assault, death caused by a bad building,[civ] and other crimes[cv] like incest, selling beer too cheaply (this one is odd . . . causing people to become drunk more easily and getting into serious trouble was the reason, perhaps), adultery, and more. Note that actual "murder pure and simple" is never mentioned in the Code; however, we see several cases of murder, even those caused by fatal assault, being punished with the death penalty under the Code, so the punishment for such a crime was likely assumed to be an equal death (eye for eye).[cvi] There were different ways to carry out the death penalty, including death by drowning, death by fire, impalement on a stake, and possibly other methods that are not specified in the Code because some sections only mention "he shall be killed"[cvii] (Driver & Miles agree with this translation). The next worst penalty was mutilation.
According to Johns, there were two types of mutilation under the Code: “retaliation for bodily disfigurement,” such as the “eye for eye, tooth for tooth, limb for limb” theme, and mutilation “symbolic of the offence itself.”[cviii] The latter type of mutilation involved penalties like “the hands cut off mark the sin of the hands in striking a father, in unlawful surgery, or in branding,” as well as “[t]he eye torn out [for] . . . punishing of unlawful curiosity”; “[t]he ear cut off marked the sin of the organ of hearing and obedience”; and “[t]he tongue . . . cut out for the ingratitude evidenced in speech.”[cix] For “a gross assault on a superior,” scourging was done, which was carried out “with an ox-hide scourge, or thong, and sixty strokes were ordered” in public against a violator.[cx] Incest between a father and daughter resulted in banishment from the city.[cxi] Also, once again, the “eye for eye” theme was used, but affecting personal property (children and slaves were like chattel), such as “son for son, daughter for daughter, [and] slave for slave.”[cxii] Some cases were required to be left unpunished with no remedies, including “[c]ontributory negligence, the natural death of hostage for debt, [and] the accidental goring of a man by a wild bull.”[cxiii] Penalties under the Code, like the ones for surgeons (for malpractice on a free man resulting in the free man’s death, the surgeon would get his hand cut off, and for doing the same to a slave, he would owe the master another slave), display the theme of strict liability for all professionals, as stated by another secondary source (refer to endnotes).[cxiv] § 235 of the Code also enforced strict liability against shipmen: “If a shipman has caulked a ship for a man and has not made his work secure and so that ship springs a leak in that very year or reveals a defect, the shipman shall break up that ship and shall make the ship sound out of his own property and give back a sound ship to the owner of the ship.”[cxv] Thus, privity was required, but not negligence—only a mere defect was required to be held liable, just as in the modern idea of strict liability.[cxvi] The idea of strict liability in Babylonian society was likely true, as most translations are similar regarding such sections of the Code. Lastly, if you refer back to the surgeon malpractice example, note that penalties resulting from wronging a slave and penalties resulting from wronging a free man greatly differed in that they were expectedly harsher for wronging a free man. Johns mentions that there is no evidence of a police administration; Driver & Miles flat out say that there was no police force.[cxvii] Consequently, one would think that there was likely no police force and, even if there was one, it perhaps consisted of soldiers who regulated prisoners of war (which will later be illustrated) rather than society as a whole. Even so, a less reliable source[cxviii] claims that there was a police force that consisted of soldiers who were sent to shoot or arrest brigands (thieves, gangs, etc.) who wandered in the countryside; still, there is no real proof for this assertion. 
Johns asserts that there is not much information on marriage in Assyrian times and in the late Babylonian times, but clay tablets make it highly likely that the laws of Old Babylonia regarding marriage remained unchanged for the entire Mesopotamian period after Hammurabi’s time,[cxix] while Contenau brings a different perspective, claiming that texts have shown that wives were not bought and sold in late Assyrian and Neo-Babylonian times.[cxx] As asserted by both Johns and Driver & Miles, prior to an engagement and marriage, the man was the “suitor” and once he hoped to marry a girl, he brought gifts (more gifts for poorer families and less gifts for richer families of brides), such as money and/or slaves, to her parents, and the father (or, in some cases where there was no father, the mother) had to accept or decline the engagement.[cxxi] Parents eventually gave this money to their daughter bride, but it is not stated when or under what conditions.[cxxii] Contenau adds that, once betrothed, a girl was basically considered family even so much that if the suitor died, she would be married off to his brother or one of his relatives.[cxxiii] Perhaps he saw evidence of this practice in several tablets. In any case, after comparing research and finding great similarities, it can be assumed that the “betrothal gift” idea is accurate, especially since it is mentioned several times in the Code. In fact, if after giving a bride’s parents presents, the suitor had taken interest in another girl, he lost the presents and they remained with the parents of the first bride, pursuant to the Code (Johns and Driver & Miles translate this section similarly).[cxxiv] Also, if a man tried to seduce a betrothed woman, he received a penalty, as required under the Code.[cxxv] Although the suitor had a choice as to which girl he wished to marry (even though in later Babylonian times, he had to gain his father’s approval), there is no indication that the bride had any say.[cxxvi] However, women who used to be married were permitted to choose their own husbands after divorces, separations, or the deaths of their spouses.[cxxvii] Johns states that to be a wife, a woman had to have “bonds,” or a marriage contract that listed the husband and wife’s names and their lineages, as well as the husband’s declaration that he takes the woman to be his wife.[cxxviii] Driver & Miles agree with this interpretation of the Code in § 128 (she is “not a wife”).[cxxix] According to Johns, a “man might be required to insert the clause that his wife was not to be held responsible for any debts he might have incurred before marriage”; if there was no such bond, “the wedded pair were one body as far as liability for debt was concerned.”[cxxx] But, both the husband and the wife were responsible for debts accrued after marriage.[cxxxi] All three sources agree that monogamy was prevalent, and when a man did have two wives, it was for various reasons and viewed as the exception to the general rule.[cxxxii] This seems to be the correct interpretation after comparing research. 
A husband might have a concubine; concubines were free women, not wives, but they might have had some legal rights similar to those held by wives.[cxxxiii] A man was only allowed to have a concubine if his wife was childless, but not if the childless wife gave him a maid for children.[cxxxiv] The maid was a slave and, if she had children, she was not sold and was free after her master's death.[cxxxv] The father was in charge and had total power over his entire family; still, the mother, although inferior, was the husband's helper and his honorable wife, as argued by Johns.[cxxxvi] A family was a unit and a family's ancestors were very important and defined which "clan" it belonged to and its prestige, as well-known and distinguished families had certain privileges.[cxxxvii] However, Johns declares that research does not tell us the extent of these privileges and that we can only guess that there were famous families due to certain family names being constantly mentioned on clay tablets. Families of not-so-famous lineage often referred to their origin as being from a certain tradesman, like sons of "the baker[.]"[cxxxviii] Often, certain cities had their own trades, such as the trade of goldsmith in Nineveh.[cxxxix] A man could not simply claim to be of distinguished ancestry without any proof, since there is proof of an actual court of ancestry (bît mâr bânûti) that looked into such assertions, as provided on clay tablets.[cxl] Since a father could treat his children as slaves, "as a chattel to be pledged for his debts," Johns believes this to mean that the father could sell his child.[cxli] A father could also take his son's wages, as he had rights over his son's finances, which were, in effect, his own.[cxlii] A father could favor a certain son and deed him all or most of his property at his death.[cxliii] Johns suspects a father only owed his daughter a promise to dower her (marry her off), but he could also give her away as a concubine.[cxliv] Driver & Miles somewhat agree with fathers not owing much more to their daughters than to find them a husband, in that daughters were not guaranteed inheritance of their father's property;[cxlv] therefore, it is likely accurate that the only responsibility fathers had to daughters was to marry them off. Even so, under the Code, children owed great respect to their fathers. "If a son struck his father, his hands were cut off," and if a child repudiated his father, he was disinherited, now had the status of a slave, and was possibly branded.[cxlvi] Likewise, if a son repudiated his mother, he "was branded and expelled from house and city," but was not sold as a slave.[cxlvii] The worst repudiation of them all was that of an adopted child against his adoptive parents, often resulting in "having an eye torn out, or the tongue cut out,"[cxlviii] punishing his lack of gratitude.
Johns declares that although there were various reasons adoption occurred, the most common one was “that the adopting parents had lost by marriage all their own children and were left with no children to look after them,” and the birth parents of such children would be grateful that their children would be taken care of and would acquire their adoptive father’s property at his death.[cxlix] Other reasons for adoption were that a father might want to adopt his illegitimate son so that he could become legitimate and have property rights, a father might have no legal heir, or a father might simply need a way to be provided for in old age.[cl] Driver & Miles agree with the above reasons for adoption, but add that, often, the reason for adoption was that the adopter was owed religious rites after his death and needed somebody to carry out those traditions.[cli] Driver & Miles also mention that even though the norm was for adopters to adopt a son only if they did not have one, several contracts show adopters adopting sons when they already had sons; but, as the authors suggest, this could just mean that the adopter adopted his illegitimate son (from a concubine, etc.) in order to make him legitimate.[clii] This idea seems more plausible, because if a father already had sons to take care of him and acquire his land at his death, there was really no other reason to adopt and provide for additional children. In any case, “[a]doption was effected by a deed, drawn up and sealed by the adoptive parents, duly sworn to and witnessed,”[cliii] and the proof probably lies in deed tablets. A well-off lady could also adopt a daughter to give her food and drink and help take care of her.[cliv] When an adopted child failed to fulfill his or her responsibilities, he or she was disinherited if a judge so ordered.[clv] According to Contenau, this disgraced child was often sold into slavery.[clvi] Regarding the ending of a marriage or legal relationship, all translations of the Code show that both men and women were permitted to divorce under certain circumstances.[clvii] When a man divorced his wife or concubine, she was given maintenance and custody of the children, and both the children and the wife divided the husband’s property at his death.[clviii] A divorced wife was permitted to remarry, but not during the husband’s lifetime, and if the husband remarried and had children, then all of his children combined would divide his property upon his death.[clix] A man could divorce his wife for being childless or for being a bad wife (for example, an adulteress, which could result in her death by drowning), but he could not divorce her for being chronically ill (though he could then marry a second wife).[clx] A wife could divorce her husband if she could prove that he treated her poorly and that she was without blame, but she received no alimony because she was the one who wanted the divorce in that case.[clxi] If a man deserted his wife without telling her anything, she could be with another man, and the husband could not claim her upon his return.[clxii] Therefore, though the husband/father was the head of the family unit, the wife/mother was owed some respect as well. The temple not only held great financial status, described as “that of the chief,” but also had huge political impact,[clxiii] according to most authors who have reviewed the tablets.
Finances and Gifts
Johns asserts that the temple received payments, “primarily of a ginû, or fixed customary daily payment, and a sattukku, or fixed monthly payment,” from families that held temple lands in perpetuity.[clxiv] Payments often included vegetables, such as corn, and animal meat, such as that of birds.[clxv] Temple lands could not be rented out or sold.[clxvi] In effect the state subsidized these lands, as they were free from all other dues to the state. Temples had other land, as well as houses, that they could let.[clxvii] Moreover, many people voluntarily provided the temple with gifts as “thank-offerings,” and such presents included food and money “often accompanied by some permanent record, a tablet, vase, stone or metal vessel, inscribed with a votive inscription.”[clxviii] Due to its large stock of raw products, the temple gave raw material to the poor in rough times or simply as charity, to tenants, to slaves, “and to contractors who took the material on purely commercial terms,” who, in turn, paid for the material upfront or with stipulated interest.[clxix] Contractors also purchased wool and other material and, in exchange, were expected to give back rugs or hangings.[clxx] Perhaps tablets recorded such transactions. The temple also did banking business by holding money on deposit “against the call of the depositor.”[clxxi] It gave out loans with or without interest, and the god was considered the owner of the money.[clxxii]
Priests and Temple Staff
Driver & Miles state that there were no laws that priests had to follow under the Code.[clxxiii] As specified by Johns, priests’ support of the king was crucial, as the king “could do nothing without religious sanction”; the priests were very wealthy and had the sanctions of religion at their disposal to grant or withhold.[clxxiv] Basically, priests were on the side of “right” in regard to what is right and what is wrong, and if someone did something wrong, he had offended the priests, an act that was publicly frowned upon.[clxxv] Moreover, Johns asserts that the highest-ranked priest was the priest proper, sangû, who acted as a “mediator between god and man,” and sometimes as a judge.[clxxvi] It is likely that priests were very respected and powerful, as is true in other ancient societies. Besides priests, temples had various classes of people and officials. Temple slaves were given clothing and food but were not paid for routine work.[clxxvii] Furthermore, a temple had “slaughterers, water-carriers, doorkeepers, bakers, weavers, . . .
shepherds, cultivators, irrigators, gardeners,” and even a personal doctor, but the exact duties and independence, or lack thereof, of such temple classes remain unknown.[clxxviii] In fact, Johns claims that research does not even tell us exactly when “a temple official acts in his own private capacity and when on behalf of the temple.”[clxxix] According to both Johns and Driver & Miles, the Code and tablets mention that women had roles in temples, since there were also female clerks, nuns, and priestesses, as well as women who held some of the previously mentioned offices.[clxxx] The temple’s staff was provided with shelter, food, and clothing, and the right to hold these positions was often hereditary.[clxxxi] Additionally, celibacy seems to have been nonexistent among the various temple classes, including priests, who were often married.[clxxxii] Contenau believes that the gods had limitless power over all of mankind, including the poor and even the king.[clxxxiii] He might base this on the prologue and epilogue of the Code (see the next paragraph), as well as on clay tablets; therefore, it can be assumed that the temple wielded great political influence. Before Hammurabi, several kings trusted magicians, but, according to Johns, Hammurabi sought to end the magic.[clxxxiv] This could possibly mean that Hammurabi relied on priests for political advice. Johns states that it is not clear what responsibilities temples had to the state, but sometimes the king and his officials had to borrow from the temples for military reasons.[clxxxv] Also, some kings took the temple’s resources, but it is believed that they later repaid the temples; even so, this was viewed as wrong in the people’s eyes.[clxxxvi] Thus, it seems evident that if the state used a temple for its own gain, it would lose political trust among the civilians, as this civilization was dedicated to and even truly ruled by its gods. Furthermore, Contenau asserts that succession to the throne was not only through royal lineage, but also by nomination by the gods, a claim inferred from poems carved on tablets.[clxxxvii] This idea is also somewhat mirrored in the prologue and epilogue of the Code, which, as stated by Driver & Miles, mentions that Marduk, the patron deity of Hammurabi’s dynasty, “called Hammurabi ‘to make justice to appear in the land, to destroy the evil and the wicked . . . ’” and also indicates that an actual copy of the Code was placed in Marduk’s temple in Babylon.[clxxxviii] So, it seems that if Hammurabi was made king under the gods’ wishes, he was then forced to rule according to the gods’ teachings. Well, what are such teachings? I am assuming they might be what the priests say they are, and the priests were likely trained to teach certain material.
The Army and Prisoner Slaves
In Old Babylonia, there was always a territorial levy of troops (each district was required to supply its quota), called out by the king (“the king’s men”), as specified by Johns.[clxxxix] The men obligated to serve were mainly slaves and poor men, and, under the Code, those who harbored men who ran away from their obligations were punished with death[cxc] (the Babylonian version of a draft, only forcing the less fortunate into the army).
Men only had to serve for a certain number of years, thought to be six.[cxci] However, “religious officials and shepherds in charge of flocks were exempt” from service in the army.[cxcii] Estates also owed a number of duties to the state, such as ferry dues, customs, and the upkeep of highways and waterways.[cxciii] All estates of a specified size were required to provide a soldier to the state.[cxciv] Some cities claimed for their citizens a right of exemption from “the levy.”[cxcv]
Enslavement of Prisoners of War
An interesting idea concerning Assyrian war is that of slavery. Contenau brings up the claim that war was another excuse to enslave men, as Assyrian kings could then form a large labor force to execute their demands.[cxcvi] These men are believed to have been prisoners of war, who, upon return to Assyria, would be forced to serve the temple, maintain the canals, or be sold as slaves to Assyrians.[cxcvii] With that being said, it can be inferred that POWs never saw their homelands again and continued to work in forced labor. I could not find any research regarding whether slaves who were once POWs were permitted to marry Assyrian slaves, as regular slaves were allowed to do; however, it would be interesting, especially because the masters would then be able to acquire more slaves (including wives and potential children), since slaves normally remained with their families. Furthermore, I wonder if such slaves would be freed after the deaths of their masters, just as other slaves were freed under similar circumstances. I have also failed to find information concerning any legal rights of prisoners of war, but I imagine that they were provided with survival necessities, including food and drink, and perhaps not much else. However, it seems that Contenau formed his ideas about prisoners of war mostly from Assyrian bas-reliefs, which does not make his beliefs true (artwork is often inaccurate), but only possible. Still, one might agree that armies would bring back prisoners for the benefit of Babylonia or Assyria. The Code of Hammurabi and the recovered Babylonian and Assyrian tablets, when coupled together, give us insight into the legal systems of Babylonia and Assyria. Scholars have clearly not agreed upon all interpretations of the law, but they have not disagreed on all areas, either. Nonetheless, scholars have continued to attempt to decode the ancient Mesopotamian legal systems. Although we will never be fully aware of the Babylonian and Assyrian legal systems, at least we have thousands of clay tablets and an almost complete stele of the Code to give us an idea of what laws the ancient Mesopotamians had to follow.
[i] H.W.F. Saggs, Everyday Life in Babylonia and Assyria (B.T. Batsford/Putnam 1965) available at http://www.aina.org/books/eliba/eliba.htm#c4. [ii] See Ann Marie Dlott, Ancient Mesopotamia, Able Media: Classics Technology Center, http://ablemedia.com/ctcweb/showcase/dlottmesopotamia1.html (last visited Mar. 5, 2012) (displaying geography of Mesopotamia). [iii] Saggs, supra note 1. [iv] Life in Mesopotamia, Univ. of Chicago Library, http://mesopotamia.lib.uchicago.edu/mesopotamialife/index.php (last visited Mar. 28, 2012). [v] Hannibal Travis, The Cultural and Intellectual Property Interests of the Indigenous Peoples of Turkey and Iraq, 15 Tex. Wesleyan L. Rev. 415, 432 (2009). [vi] C.H.W. Johns, Babylonian and Assyrian Laws, Contracts and Letters 23 (Kessinger Publishing 2004).
[vii] History of the Bronze Age in Mesopotamia, SRON Netherlands Institute for Space Research, http://www.sron.nl/~jheise/akkadian/bronze_age.html#oldbabylonian (last visited Mar. 5, 2012). [viii] Saggs, supra note 1. [ix] History of the Bronze Age in Mesopotamia, SRON Netherlands Institute for Space Research, http://www.sron.nl/~jheise/akkadian/bronze_age.html#oldbabylonian (last visited Mar. 5, 2012). [x] Andrew George, Babylonian and Assyrian: A history of Akkadian, Postgate, J. N. (2007) available at http://eprints.soas.ac.uk/3139/1/PAGE_31-71.pdf. [xi] Travis, supra note 5, at 432. [xii] Saggs, supra note 1. [xiii] History of the Bronze Age in Mesopotamia, SRON Netherlands Institute for Space Research, http://www.sron.nl/~jheise/akkadian/bronze_age.html#oldbabylonian (last visited Mar. 5, 2012). [xv] Saggs, supra note 1. [xvi] Charles F. Horne, The Code of Hammurabi: Introduction (1915), available at http://www.fordham.edu/halsall/ancient/hamcode.asp. [xviii] Johns, supra note 6. [xx] Hammurabi Code of Law, All About Archaeology, available at http://www.allaboutarchaeology.org/hammurabi-code-of-law-faq.htm (last visited Mar. 20, 2012). [xxi] Hammurabi’s Code in History, Yahoo!, available at http://voices.yahoo.com/hammurabis-code-history-2510464.html. [xxiv] Claude Hermann Walter Johns, Babylonian Law—The Code of Hammurabi, Encyclopedia Britannica (11th ed. 1910-1911), available at http://www.fordham.edu/halsall/ancient/hamcode.asp. [xxv] Johns, supra note 6, at 4-5. [xxvi] Id. at 139. [xxvii] Id. at 142. [xxviii] G.R. Driver & John C. Miles, The Babylonian Laws (1952). [xxix] Id. at 17. [xxx] Georges Contenau, Everyday Life in Babylon and Assyria (1954). [xxxi] Johns, supra note 6, at 42. [xxxii] Contenau, supra note 32, at 137. [xxxiii] Johns, supra note 6, at 42. [xxxviii] Johns, supra note 6, at 42. [xxxix] Contenau, supra note 32, at 15. [xli] Johns, supra note 6, at 42. [xlii] Rev. Claude Hermann Walter Johns, Babylonian Law—The Code of Hammurabi, Encyclopedia Britannica (1910-1911), available at http://webcache.googleusercontent.com/search?q=cache:K9P_IcvlF_kJ:avalon.law.yale.edu/ancient/hammpre.asp+babylonian+law+ardu&cd=1&hl=en&ct=clnk&gl=us. [xliii] Contenau, supra note 32, at 20. [xliv] Johns, supra note 6, at 42. [xlv] Id. at 82. [xlvi] Id. at 42. [xlix] Id. at 82. [l] Id. at 42. [li] Id. at 82. [lv] Id. at 83. [lvi] Contenau, supra note 32, at 22. [lvii] Johns, supra note 6, at 42. [lx] Id. at 43. [lxiii] Sumerian Gods and Goddesses, Crystalinks, available at http://www.crystalinks.com/sumergods1a.html (last visited March 20, 2012). [lxiv] Driver & Miles, supra note 29, at 371. [lxv] Johns, supra note 6, at 31. [lxvi] Id. at 43. [lxx] Driver & Miles, supra note 29, at 371. [lxxi] Johns, supra note 6, at 43. [lxxii] Contenau, supra note 32, at 80. [lxxv] Johns, supra note 6, at 131. [lxxviii] Driver & Miles, supra note 29, at 186-87. [lxxix] Id. at 43. [lxxx] Johns, supra note 6, at 45. [lxxxiv] Id. at 46. [lxxxvii] Johns, supra note 6, at 46. [lxxxviii] Driver & Miles, supra note 29, at 490. [xc] Id. at 491. [xci] Id. at 48. [xcvi] Id. at 49. [xcvii] Id. at 47. [c] Johns, supra note 6, at 47. [ci] Id. at 49. [cii] Id. at 50. [cv] Id. at 51. [cvi] Id. at 50. [cvii] Id. at 51. [cviii] Johns, supra note 6, at 51. [cxiv] James Acret, Liability Based on Warranty Law, Architects and Engineers § 6:6 (4th ed.). [cxvii] Driver & Miles, supra note 29, at 493-94. [cxviii] Jeffrey Hays, Mesopotamian Government and Justice System (Mar. 
2011), available at http://webcache.googleusercontent.com/search?q=cache:I9XG27C4DAUJ:factsanddetails.com/world.php%3Fitemid%3D1517%26catid%3D56%26subcatid%3D363+police+in+mesopotamia&cd=9&hl=en&ct=clnk&gl=us. [cxix] Johns, supra note 6, at 61. [cxx] Contenau, supra note 32, at 16. [cxxi] Johns, supra note 6, at 63. [cxxii] Id. at 63. [cxxiii] Contenau, supra note 32, at 15. [cxxiv] Johns, supra note 6, at 63. [cxxv] Id. at 66. [cxxvi] Id. at 64. [cxxvii] Johns, supra note 6, at 64. [cxxviii] Id. at 61. [cxxix] Driver & Miles, supra note 29, at 245. [cxxx] Johns, supra note 6, at 61. [cxxxii] Id. at 67. [cxxxiii] Johns, supra note 6, at 67. [cxxxvi] Id. at 61. [cxxxix] Johns, supra note 6, at 61. [cxli] Id. at 74. [cxlv] Driver & Miles, supra note 29, at 335. [cxlvi] Johns, supra note 6, at 74. [cxlix] Id. at 76. [cli] Driver & Miles, supra note 29, at 335. [clii] Id. at 384. [cliii] Johns, supra note 6, at 76. [cliv] Id. at 78. [clvi] Contenau, supra note 32, at 19. [clvii] Johns, supra note 6, at 71. [clxi] Johns, supra note 6, at 71. [clxii] Id. at 72. [clxiii] Id. at 97. [clxvii] Johns, supra note 6, at 97. [clxxii] Johns, supra note 6, at 98. [clxxiii] Driver & Miles, supra note 30, at 359. [clxxiv] Johns, supra note 6, at 98. [clxxvii] Johns, supra note 6, at 99. [clxxix] Id. at 100. [clxxx] Id. at 99. [clxxxiii] Contenau, supra note 32, at 266. [clxxxiv] Johns, supra note 6, at 99. [clxxxv] Id. at 100. [clxxxvii] Contenau, supra note 32, at 114. [clxxxviii] Driver & Miles, supra note 30, at 38. [clxxxix] Johns, supra note 6, at 94. [cxci] Rev. Claude Hermann Walter Johns, Babylonian Law—The Code of Hammurabi, Encyclopedia Britannica (1910-1911), available at http://webcache.googleusercontent.com/search?q=cache:K9P_IcvlF_kJ:avalon.law.yale.edu/ancient/hammpre.asp+babylonian+law+ardu&cd=1&hl=en&ct=clnk&gl=us. [cxciv] Johns, supra note 6, at 94. [cxcvi] Contenau, supra note 32, at 19.
http://www.daviddfriedman.com/Academic/Course_Pages/legal_systems_very_different_12/Papers_12/%20Babylon_Kilaita_12.htm
Asthma is a chronic disease that involves inflammation of the lungs. Airways swell and restrict airflow in and out of the lungs, making it hard to breathe. The word asthma comes from the Greek word for "panting." People with asthma pant and wheeze because they can’t get enough air into their lungs. Normally, when you breathe in something irritating or you do something that causes you to need more air, like exercise, your airways relax and open. But with asthma, muscles in the airways tighten, and the lining of the air passages swells. About 20 million Americans have asthma, including 9 million children. In fact, asthma is the most common chronic childhood illness. About half of all cases develop before the age of 10, and many children with asthma also have allergies. Asthma can either be allergic or non-allergic. In allergic asthma, an allergic reaction to an inhaled irritant -- pet dander, pollen, dust mites -- triggers an attack. The immune system springs into action, but instead of helping, it causes inflammation. This is the most common form of asthma. Non-allergic asthma does not involve the immune system. Attacks can be triggered by stress, anxiety, cold air, smoke, or a virus. Some people have symptoms only when they exercise, a condition known as exercise-induced asthma. While there is no cure for asthma, it can be controlled. People with moderate to severe asthma should use conventional medications to help control symptoms. Complementary and alternative therapies, used under your doctor’s supervision, may help, but shouldn’t replace conventional treatment. Signs and Symptoms Most people with asthma may go for periods of time without any symptoms, then have an asthma attack. Some people have chronic shortness of breath that gets worse during an attack. Asthma attacks can last minutes to days, and can become dangerous if airflow to the lungs becomes severely restricted. Primary symptoms include: - Shortness of breath - Wheezing -- usually begins suddenly; may be worse at night or early in the morning; can be made worse by cold air, exercise, and heartburn; is relieved by using bronchodilators (drugs that open the airways; see Medications) - Chest tightness - Cough (dry or with sputum) -- in cough-variant asthma, this may be the only symptom If you have any of these symptoms, seek emergency treatment: - Extreme difficulty breathing or stopping breathing - Bluish color to the lips and face, called cyanosis - Severe anxiety - Rapid pulse - Excessive sweating - Decreased level of consciousness, such as drowsiness or confusion Asthma is most likely caused by several factors. Genes play a part; you’re more likely to develop asthma if others in your family have it. Among those who are susceptible, being exposed to environmental factors such as allergens, substances that cause an allergic reaction, or infections may increase the chance of developing asthma. 
The following factors may increase the risk of developing asthma: - Having allergies - Family history of asthma or allergies - Being exposed to secondhand smoke - Having upper respiratory infections as an infant - Living in a large city - Gender -- among younger children, asthma develops twice as often in boys as in girls, but after puberty it may be more common in girls - Gastroesophageal reflux (heartburn) Childhood asthma, in particular, can be triggered by almost all of the same things that trigger allergies, such as: - Dust, cockroach waste, pet dander, indoor and outdoor mold, pollen - Air pollutants, such as smoke, perfumes, diesel particles, sulfur dioxide, high ozone levels, and fumes from paint, cleaning products, and gas stoves - Changes in the weather, especially in temperature (particularly cold) and humidity Other triggers include: - Activities that affect breathing, such as exercising, laughing, crying, yelling - Stress and anxiety Asthma symptoms can mimic several other conditions, and your doctor will take a thorough history to rule out other diseases. You may also have lung function tests to measure how much air your lungs can hold and how much air you breathe out. Your doctor may use a spirometer to measure how much air you exhale and how quickly you get air out of your lungs. Other tests may include chest and sinus x-rays, blood tests, or allergy tests. Asthma is classified as: - Mild intermittent: Having mild symptoms up to 2 days a week and 2 nights a month - Mild persistent: Having symptoms more than 2 times a week but not more than once in a single day - Moderate persistent: Having symptoms once a day and more than one night per week - Severe persistent: Having symptoms throughout the day on most days and often at night. Although you can’t prevent asthma, you can take steps to reduce the number and frequency of attacks: - Avoid allergens and irritants as much as possible. For example, reduce your exposure to dust mites by using special mattress and pillow covers that keep allergens out and removing carpets from bedrooms. Clean your house frequently. Wearing a mask while cleaning and choosing cleaners without harsh chemicals may help. - Exercise. Even people with exercise-induced asthma can stay active, and exercise will help you by strengthening your lungs and helping you maintain a proper weight. Taking precautions when it’s cold outside -- such as wearing a face mask to warm the air that you’re breathing -- can help you avoid asthma symptoms. Talk to your doctor before starting an exercise regimen. - Pay attention to your breathing. Watch for signs of an oncoming attack, such as wheezing. Your doctor may give you a machine called a peak flow meter that can detect slight differences in your breathing before you even notice them, so that you can take medication to ward off an attack. Your doctor will help you know which changes would mean you need medical attention right away. - Treat attacks quickly. The sooner you treat an attack, the less severe it will be, and the less medication you’ll need. - If you have allergies, immunotherapy ("allergy shots") may lower the number of asthma attacks and their intensity, and reduce the amount of medication you need. Immunotherapy includes regular injections of the substance you're allergic to, with each shot containing a slightly higher amount. Sublingual immunotherapy delivers the allergen in drops under the tongue. Over time your immune system becomes used to the allergen and no longer reacts to it.
Talk to your doctor about whether immunotherapy is right for you. Avoiding asthma attacks, reducing inflammation, and preventing lung damage are the primary goals of treatment. The more you know about your condition, the more closely you can work with your doctor to develop an asthma action plan. To control asthma, you need to prevent exposure to allergic triggers and take medication as prescribed. You may need emergency medications during an asthma attack, but monitoring your breathing and taking your medications every day will help you control asthma over the long term. In a severe attack, you may need to be hospitalized for oxygen and medications that are given intravenously (IV). - If you smoke, quit. - Lose weight if you are overweight. Being overweight may put pressure on the lungs and trigger an inflammatory response. - Monitor your condition every day using a peak flow meter, a portable device that helps measure how your lungs are working. Keep a diary of readings to show your doctor. Together, you will establish your "personal best" reading. You should call your doctor if your peak flow reading falls below 80% of your personal best and go to the hospital if it falls below 50% (a short illustrative sketch of these thresholds appears below). - Keep a journal that logs changes or attacks -- it may help determine triggers. Medications for asthma are prescribed for two different purposes: to stop an immediate attack, and to control inflammation and reduce lung damage over the long term. Quick relief medications -- These drugs are called bronchodilators and help open the airways when you have an attack. Short-acting beta-adrenergic agonists start working immediately. These drugs include: - Albuterol (Proventil) - Metaproterenol (Alupent) - Pirbuterol (Maxair) - Terbutaline (Brethaire) - Levalbuterol (Xopenex) Sometimes, steroids are needed for an acute asthma attack. They can take longer to work (from a couple of hours to a few days) and include oral corticosteroids. Long-term control -- These drugs are usually taken every day. Inhaled corticosteroids reduce inflammation and have fewer side effects than oral corticosteroids. They include: - Beclomethasone (Qvar) - Budesonide (Pulmicort) - Flunisolide (Aerobid) - Fluticasone (Flovent) - Triamcinolone (Azmacort) A class of drugs called leukotriene modifiers helps reduce the production of inflammatory chemicals called leukotrienes that cause your airways to swell. They include: - Montelukast (Singulair) - Zafirlukast (Accolate) Cromolyn (Intal), Nedocromil (Tilade) -- These medications, which are inhaled, can help prevent mild to moderate attacks and are used to treat exercise-induced asthma. Theophylline (TheoDur) -- This medication helps open airways and prevent asthma symptoms, especially at night. Too much can cause serious side effects, so your doctor will monitor levels in your blood. Omalizumab (Xolair) -- Used to treat allergic asthma when other medications haven't worked. Nutrition and Dietary Supplements Although there is no diet for asthma, people who have allergic asthma may also have food allergies that can make their asthma worse. If you think you may have food allergies, talk to your doctor about trying an elimination diet. Eating plenty of fruits and vegetables that are high in antioxidants may also help you keep your asthma under better control. One study found that people with asthma who followed the Mediterranean diet had better control of asthma symptoms.
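The peak-flow guidance above comes down to simple arithmetic against your "personal best" reading: below 80% of that value, call your doctor; below 50%, go to the hospital. As a rough illustration only, here is a minimal Python sketch of those two thresholds; the function name and messages are invented for this example, and it is not a clinical tool or a substitute for the action plan your doctor gives you.

def peak_flow_zone(reading: float, personal_best: float) -> str:
    """Classify a peak-flow reading against a personal-best value.

    Thresholds follow the guidance in the text above: below 80% of
    personal best, call your doctor; below 50%, go to the hospital.
    Illustrative only -- not medical advice.
    """
    if personal_best <= 0:
        raise ValueError("personal_best must be a positive number")
    percent = 100.0 * reading / personal_best
    if percent < 50:
        return "emergency: go to the hospital"
    if percent < 80:
        return "caution: call your doctor"
    return "within this person's normal range"

# Example: personal best of 500 L/min, today's reading of 370 L/min (74%)
print(peak_flow_zone(370, 500))  # prints "caution: call your doctor"

In practice, the personal-best value would come from the diary of daily readings the text recommends keeping, and the exact action levels should be the ones set in your own asthma action plan.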
Some studies have shown that people with asthma tend to have low levels of certain nutrients, but there is no evidence that taking supplements helps reduce asthma attacks. However, an overall healthy diet will help you get the nutrients you need and help your body deal with a long-term condition such as asthma. - Choline -- This B vitamin may help reduce the severity and frequency of asthma attacks. Some evidence indicates that higher doses (3 g per day for adults) may work better, but you should not take high doses without your doctor's supervision. More research is needed to say for sure whether choline helps. - Magnesium -- The idea of using magnesium to treat asthma comes from the fact that people who have asthma often have low levels of magnesium, and from some (but not all) studies showing that intravenous (IV) magnesium can work as an emergency treatment for an asthma attack. However, studies that have looked at whether taking oral magnesium helped have shown mixed results. More research is needed. - Fish oil -- The evidence for using omega-3 fatty acids (found in fish oil) to treat asthma is mixed. At least a few studies have found that fish oil supplements may reduce inflammation and symptoms in children and adults with asthma. But the studies have only included a small number of people, and one study found that fish oil might make aspirin-induced asthma worse. Ask your doctor whether a high quality fish oil supplement makes sense for you. In high doses, fish oil may increase the risk of bleeding, especially if you take a blood thinner such as warfarin (Coumadin). - Quercetin -- Quercetin, a kind of antioxidant called a flavonoid, helps to reduce the release of histamine and other allergic or inflammatory chemicals in the body. Histamine contributes to allergy symptoms such as a runny nose, watery eyes, and hives. Because of that, quercetin has been proposed as a treatment for asthma. But no human studies have examined whether it works or not. Quercetin can interact with certain medications, so ask your doctor before taking it. - Vitamin C (1 g per day) -- One preliminary study suggested that children with asthma had significantly less wheezing when they ate a diet rich in fruits with vitamin C. Vitamin C does have anti-inflammatory and antioxidant properties, which may help you maintain good health overall. Some studies have indicated that taking a vitamin C supplement (1 g per day) may help keep airways open, but other studies have found no benefit. - Other -- Other supplements that may help treat asthma include: - Coenzyme Q 10 (CoQ10) -- if you have asthma, you may have low levels of this antioxidant in your blood. Researchers don't know, however, whether taking CoQ10 supplements will make any difference in your symptoms. - Lycopene and beta-carotene -- preliminary data suggests that these two antioxidants, found in many fruits and vegetables, may help prevent exercise-induced asthma. People who smoke or take simvastatin (Zocor) should not take beta-carotene without talking to their doctor. - Vitamin B6 -- may be needed if you are taking theophylline because this medication can lower blood levels of B6. - Potassium -- levels in the body also may be lowered if you take theophylline. The use of herbs is a time-honored approach to strengthening the body and treating disease. Herbs, however, can trigger side effects and interact with other herbs, supplements, or medications. For these reasons, herbs should be taken with care, under the supervision of a health care practitioner. 
- Boswellia (Boswellia serrata, 3 mg three times per day) -- Boswellia (also known as Salai guggal) is an herb commonly used in Ayurvedic medicine, a traditional Indian system of health care. In one double-blind, placebo-controlled study, people who took boswellia had fewer attacks and improved lung function. Boswellia may help leukotriene modifiers work better. However, more research is needed. People who take medication to lower their cholesterol, or people who take nonsteroidal anti-inflammatory drugs (NSAIDs) such as ibuprofen (Advil, Motrin), should talk to their doctor before taking boswellia. - Coleus forskohlii -- Coleus forskohlii, or forskolin, is another herb used in Ayurvedic medicine to treat asthma. A few preliminary studies suggest that inhaled forskolin powder may relieve symptoms, but more research is needed to know for sure. People who have diabetes or thyroid conditions should not take forskolin. If you take blood thinners such as warfarin (Coumadin), taking forskolin may increase your risk of bleeding. Pregnant women should not take forskolin. Forskolin interacts with calcium channel blockers such as verapamil (Calan, Verelan), nifedipine (Procardia), and diltiazem (Cardizem, Dilacor) and with nitroglycerin (Nitro-Bid, Nitro-Dur, and Nitrostat) and isosorbide (Imdur, Isordil, and Sorbitrate). - Tylophora (Tylophora indica, 250 mg one to three times per day) -- Tylophora has also been used historically to treat asthma. Some modern scientific studies show that it can help reduce symptoms, but the studies were of poor quality. More research is needed. Tylophora may cause serious side effects at high doses, so talk to your doctor before taking it. Do not take tylophora if you are pregnant, have diabetes, high blood pressure, or congestive heart failure. - Pycnogenol (Pinus pinaster, 1 mg per pound of body weight, up to 200 mg) -- A 2002 review of studies on a standardized extract from French maritime pine bark, called pycnogenol, suggests that it may reduce symptoms and improve lung function in people with asthma. Another study found that children with asthma who took pycnogenol along with prescription asthma medications had fewer symptoms and needed fewer rescue medications. Do not use pycnogenol if you have diabetes or take medication for high blood pressure. If you take blood thinners such as warfarin (Coumadin) or aspirin, taking pycnogenol may increase your risk of bleeding. - Saiboku-to -- In three preliminary studies, a traditional Japanese herbal mixture called Saiboku-to has helped reduce symptoms and allowed study participants to reduce doses of corticosteroids. In test tubes, Saiboku-to has shown anti-inflammatory effects. Saiboku-to contains several herbs, including Asian ginseng (Panax ginseng), Chinese skullcap (Baikal scutellaria), licorice (Glycyrrhiza glabra), and ginger (Zingiber officinale). These herbs can interact with other medications, so talk to your healthcare provider before taking Saiboku-to. Some preliminary studies indicate that acupuncture may help reduce symptoms for some people with asthma, but not all studies agree. Acupuncture should be used in addition to, not as a replacement for, conventional medicine when treating asthma. Although very few studies have examined the effectiveness of specific homeopathic therapies, professional homeopaths may consider the following remedies for the treatment of asthma based on their knowledge and experience.
Before prescribing a remedy, homeopaths take into account a person's constitutional type-- your physical, emotional, and psychological makeup. An experienced homeopath assesses all of these factors when determining the most appropriate treatment for each individual. Some people may find their symptoms get worse for a short time when starting on a homeopathic remedy. Because this may be dangerous for some people, be sure to work with a knowledgeable homeopath. - Arsenicum album -- for asthma that generally worsens between midnight and 2 am and is accompanied by restlessness, anxiety, chills, and thirst. - Ipecacuanha -- for those with asthma, particularly children, who have significant tightness in the chest, a chronic cough with lots of phlegm that may lead to vomiting, and worsening of symptoms in hot, humid weather. - Pulsatilla -- for asthma with yellow or greenish phlegm that gets worse in the evening, in warm, stuffy rooms, or after consuming rich, fatty foods; this remedy is most appropriate for adults or children who are tearful and clingy or sweet and affectionate. - Sambucus -- for asthma that awakens a person at night with a sensation of suffocation; symptoms worsen when the person is lying down . Because stress and anxiety can make asthma worse, including stress management techniques in your daily life may help reduce symptoms. These techniques do not directly treat asthma, however. - Hypnosis -- may be especially useful for children, who can easily learn the technique. - Yoga -- in addition to general relaxation and stress reduction, several studies of people with asthma have suggested that lung function improves with the regular practice of yoga. Any benefits in breathing appear to be slight, however. - Journaling -- A study published in the New England Journal of Medicine documented the positive effect of daily journaling on people with asthma. Some people think that journaling allows for the release of pent-up emotions and helps reduce stress overall. Warnings and Precautions Long-term treatment with theophylline for asthma may reduce blood levels of vitamin B6. Prognosis and Complications People with asthma can live normal, active lives. Because asthma is a chronic illness, it requires self-care and monitoring over the long term, as well as close contact with your doctor. Most people with asthma have occasional attacks separated by symptom-free periods. Paying attention to your mood, lowering the stress in your life, and having a good emotional support system will help you take good care of yourself. Aligne CA, Auinger P, Byrd RS, Weitzman M. Risk factors for pediatric asthma. Contributions of poverty, race, and urban residence. Am J Respir Crit Care Med. 2000;162(3 Pt 1):873-877. Anandan C, Nurmatov U, Sheikh A. Omega 3 and 6 oils for primary prevention of allergic disease: systematic review and meta-analysis. Allergy. 2009 Jun;64(6):840-8. Epub 2009 Apr 7. Review. Barros R, Moreira A, Fonseca J, et al. Adherence to the Mediterranean diet and fresh fruit intake are associated with improved asthma control. Allergy. 2008 Jul;63(7):917-23. Biltagi MA, Baset AA, Bassiouny M, Kasrawi MA, Attia M. Omega-3 fatty acids, vitamin C and Zn supplementation in asthmatic children: a randomized self-controlled study. Acta Paediatr. 2009 Apr;98(4):737-42. Birkel DA, Edgren L. Hatha yoga: improved vital capacity of college students. Altern Ther Health Med. 2000;6(6):55-63. Burns JS, Dockery DW, Neas LM, Schwartz J, Coull BA, Raizenne M, Speizer FE. 
Low dietary nutrient intakes and respiratory health in adolescents. Chest. 2007 Jul;132(1):238-45. Epub 2007 May 2. Chatzi L, Kogevinas M. Prenatal and childhood Mediterranean diet and the development of asthma and allergies in children. Public Health Nutr. 2009 Sep;12(9A):1629-34. Chiang LC, Ma WF, Huang JL, Tseng LF, Hsueh KC. Effect of relaxation-breathing training on anxiety and asthma signs/symptoms of children with moderate-to-severe asthma: a randomized controlled trial. Int J Nurs Stud. 2009 Aug;46(8):1061-70. Chu KA, Wu YC, Ting YM, Wang HC, Lu JY. Acupuncture therapy results in immediate bronchodilating effect in asthma patients. J Chin Med Assoc. 2007 Jul;70(7):265-8. Ciarallo L, Brousseau D, Reinert S. Higher-dose intravenous magnesium therapy for children with moderate to severe acute asthma. Arch Ped Adol Med. 2000;154(10):979-983. Ernst E. Breathing techniques -- adjunctive treatment modalities for asthma? A systematic review. Eur Respir J. 2000;15(5):969-972. Fetterman JW Jr, Zdanowicz MM. Therapeutic potential of n-3 polyunsaturated fatty acids in disease. Am J Health Syst Pharm. 2009 Jul 1;66(13):1169-79. Gazdol F, Gvozdjakova A, Nadvornikova R, et al. Decreased levels of coenzyme Q(10) in patients with bronchial asthma. Allergy. 2002;57(9):811-814. Gdalevich M, Mimouni D, Mimouni M. Breast-feeding and the risk of bronchial asthma in childhood: a systematic review with meta-analysis of prospective studies. J Pediatr. 2001;139(2):261-266. Gilliland FD, Berhane KT, Li YF, Kim DH, Margolis HG. Dietary magnesium, potassium, sodium, and children's lung function. Am J Epidemiol. 2002. 15;155(2):125-131. Haby MM, Peat JK, Marks GB, Woolcock AJ, Leeder SR. Asthma in preschool children: prevalence and risk factors. Thorax. 2001;56(8):589-595. Hackman RM, Stern JS, Gershwin ME. Hypnosis and asthma: a critical review. J Asthma. 2000;37(1):1-15. Hijazi N, Abalkhail B, Seaton A. Diet and childhood asthma in a society in transition: a study in urban and rural Saudi Arabia. Thorax. 2000;55:775-779. Huntley A, Ernst E. Herbal medicines for asthma: a systematic review. Thorax. 2000:Nov;55(11):925-9. Review. Huntley A, White AR, Ernst E. Relaxation therapies for asthma: a systematic review. Thorax. 2002;57(20:127-131. Joos S, Schott C, Zou H, Daniel V, Martin E. Immunomodulatory effects of acupuncture in the treatment of allergic asthma: a randomized controlled study. J Alt Comp Med. 2000;6(6), 519-525. Kalliomaki M, Salminen S, Arvilommi H, Kero P, Koskinen P, Isolauri E. Probiotics in primary prevention of atopic disease: a randomized placebo controlled trial. Lancet. 2001;357(9262):1076-1079. Kaur B, Rowe BH, Ram FS. Vitamin C supplementation for asthma (Cochrane Review). Cochrane Database Syst Rev. 2001;4:CD000993. Lehrer P, Feldman J, Giardino N, Song HS, Schmaling K. Psychological aspects of asthma. J Consult Clin Psychol. 2002;70(3):691-711. Levine M, Rumsey SC, Daruwala R, Park JB, Wang Y. Criteria and recommendations for vitamin C intake. JAMA. 1999;281(15):1415-1453. Li XM. Traditional Chinese herbal remedies for asthma and food allergy. J Allergy Clin Immunol. 2007 Jul;120(1):25-31. Review. Linde, K, Jobst K, Panton J. Acupuncture for chronic asthma (Cochrane Review). In: The Cochrane Library, Issue 3, 2001. Oxford: Update Software. Mazur LJ, De Ybarrondo L, Miller J, Colasurdo G. Use of alternative and complementary therapies for pediatric asthma. Tex Med. 2001;97(6):64-68. Mehta AK, Arora N, Gaur SN, Singh BP. 
Choline supplementation reduces oxidative stress in mouse model of allergic airway disease. Eur J Clin Invest. 2009 Jun 26. [Epub ahead of print] Miller AL. The etiologies, pathophysiology, and alternative/complementary treatment of asthma. Altern Med Rev. 2001;6(1):20-47. Nagakura T, Matsuda S, Shichijyo K, Sugimoto H, Hata K. Dietary supplementation with fish oil rich in omega-3 polyunsaturated fatty acids in children with bronchial asthma. Eur Resp J. 2000;16(5):861-865. Nakao M, Muramoto Y, Hisadome M, Yamano N, Shoji M, Fukushima Y, et al. The effect of Shoseiryuto, a traditional Japanese medicine, on cytochrome P450s, N-acetyltransferase 2 and xanthine oxidase, in extensive or intermediate metabolizers of CYP2D6. Eur J Clin Pharmacol. 2007 Apr;63(4):345-53. Neuman I, Nahum H, Ben-Amotz A. Reduction of exercise-induced asthma oxidative stress by lycopene, a natural antioxidant. Allergy. 2000;55(12):1184-1189. Newnham DM. Asthma medications and their potential adverse effects in the elderly: recommendations for prescribing. Drug Saf. 2001;24(14):1065-1080. Okamoto M, Misunobu F, Ashida K, Mifune T, Hosaki Y, Tsugeno H et al. Effects of dietary supplementation with n-3 fatty acids compared with n-6 fatty acids on bronchial asthma. Int Med. 2000;39(2):107-111. Okamoto M, Misunobu F, Ashida K, et al. Effects of perilla seed oil supplementation on leukotriene generation by leucocytes in patients with asthma associated with lipometabolism. Int Arch Allergy Immunol. 2000;122(2):137-142. Raviv S, Smith LJ. Diet and asthma. Curr Opin Pulm Med. 2009 Sep 4. [Epub ahead of print] Rohdewald P. A review of the French maritime pine bark extract (Pycnogenol), a herbal medication with a diverse clinical pharmacology. Int J Clin Pharmacol Ther. 2002;40(4):158-168. Romieu I, Trenga C. Diet and obstructive lung diseases. Epidemiol Rev. 2001;23(2):268-287. Rowe BH, Edmonds ML, Spooner CH, Camargo CA. Evidence-based treatments for acute asthma. [Review]. Respir Care. 2001;46(12):1380-1390. Sathyaprabha TN, Murthy H, Murthy BT. Efficacy of naturopathy and yoga in bronchial asthma -- a self controlled matched scientific study. Ind J Physiol Pharmacol. 2001;45(10:80-86. Shaheen SO, Newson RB, Rayman MP, Wong AP, Tumilty MK, Phillips JM, et al. Randomised, double blind, placebo-controlled trial of selenium supplementation in adult asthma. Thorax. 2007 Jun;62(6):483-90. Shaheen SO, Sterne JA, Thompson RL, Songhurst CE, Margetts BM, Burney PG. Dietary antioxidants and asthma in adults: population-based case-control study. Am J Respir Crit Care Med. 2001;164(10 Pt 1):1823-1828. Tamaoki J, Nakata J, Kawatani K, Tagaya E, Nagai A. Ginsenoside-induced relaxation of human bronchial smooth muscle via release of nitric oxide. Br J Pharmacol. 2000;130(8):1859-1864 Urata Y, Yoshida S, Irie Y, et al. Treatment of asthma patients with herbal medicine TJ-96: a randomized controlled trial. Respir Med. 2002 Jun;96(6):469-474. Ziment I, Tashkin DP. Alternative medicine for allergy and asthma. J Allergy Clin Immunol. 2000;106(4):603-614.
http://www.mch.com/page/EN/5042/Alternative-Medicine/Asthma.aspx
Theatre and School Success
Students involved in theatre. . . 1. Receive more As & Bs in English; 2. Score better on standardized tests; 3. Are almost 4 times less likely to drop out of school; 4. Are more likely to participate in community service. 5. 2002 SAT scores show that students with coursework in drama scored higher than students who had no drama coursework.
Theatre and Employability Traits
Ever wonder what employers are really looking for when hiring? Theatre teaches analytical reasoning, creative thinking, decision making, and problem solving. Theatre education offers precisely the skills employers are looking for. 1. The ability to articulate a vision. This is what any artist does every time he or she works. 2. Orientation towards results. Theatre is wrapped up in doing, creating a piece of art, or finishing a performance. 3. Spirit of collaboration or empathy. Theatre fosters a keen sensitivity to the artist’s effect on those around him or her.
Lessons Theatre Teaches
What impact does theatre education have on students? - Neither words nor numbers define what we know. - Problems can have more than one solution. - Celebrate multiple perspectives. - Small differences can have large effects. - Actions convey what cannot be said.
THE BENEFITS OF THEATRE ARTS by Jonas Nasom
. Self-Confidence: Taking risks in class and performing for an audience teach students to trust their ideas and abilities. The confidence gained in drama applies to school, career, and life. . Imagination: Making creative choices, thinking of new ideas, and interpreting familiar material in new ways are essential to drama. Einstein said, “Imagination is more important than knowledge.” . Empathy: Acting roles from different situations, time periods, and cultures promotes compassion and tolerance for others’ feelings and viewpoints. . Cooperation/Collaboration: Theater combines the creative ideas and abilities of its participants. This cooperative process includes discussing, negotiating, rehearsing, and performing. . Concentration: Playing, practicing, and performing develop a sustained focus of mind, body, and voice, which also helps in other school subjects and life. . Communication Skills: Drama enhances verbal and nonverbal expression of ideas. It improves voice projection, articulation of words, fluency with language, and persuasive speech. Listening and observation skills develop by playing drama games, being an audience, rehearsing, and performing. . Problem Solving: Students learn how to communicate the who, what, where, and why to the audience. Improvisation fosters quick-thinking solutions, which leads to greater adaptability in life. . Fun: Drama brings play, humor, and laughter to learning; this improves motivation and reduces stress. . Emotional Outlet: Pretend play and drama games allow students to express a range of emotions. Aggression and tension are released in a safe, controlled environment, reducing antisocial behaviors. . Relaxation: Many drama activities reduce stress by releasing mental, physical, and emotional tension. . Self-Discipline: The process of moving from ideas to actions to performances teaches the value of practice and perseverance. Drama games and creative movement improve self-control. . Trust: The social interaction and risk taking in drama develop trust in self, others, and the process. . Physical Fitness: Movement in drama improves flexibility, coordination, balance, and control. . Memory: Rehearsing and performing words, movements, and cues strengthen this skill like a muscle.
. Social Awareness: Legends, myths, poems, stories, and plays used in drama teach students about social issues and conflicts from cultures, past and present, all over the world. . Aesthetic Appreciation: Participating in and viewing theater raise appreciation for the art form. It is important to raise a generation that understands, values, and supports theater’s place in society. tells the riveting story of the profound changes in the lives of kids, teachers, and parents in ten economically disadvantaged communities across the country that place their bets on the arts as a way to create great schools. The schools become caring communities where kids - many of whom face challenges of poverty, the need to learn English, and to surmount learning difficulties - thrive and succeed and where teachers find new joy and satisfaction in teaching. “Finally, it seems important to note one more feature of the arts that may explain their special role in the transformations described in this book. Among other qualities, the arts are attempts to understand both the common (experienced by most or all) and profound (of great seriousness and significance) aspects of what it means to be human. They explore experiences all of us are likely to have in our lifetimes – loss, love, fear, and moral confusions, for example. The arts strive to make visible and communicable that which eludes our general capacities to express, thus creating the possibility of forging connections between people on the ground of basic human experience. I do not believe there is any other setting in schools that provides such an opportunity so well.” This book suggests an alternative vision of both the process and result of school reform. It points to reform that occurs not as a result of accountability measures, but as a natural transformation through the building of a new kind of community of learners, a community of creators. This book describes a “kinder, gentler” (to borrow from George Bush, Sr.) approach to school change, not based so much on punitive accountability, but rather on an invitation to create an exciting, meaningful, and more beautiful school. It is always good to have some alternatives in mind when trying to tackle as large a problem as the improvement of our public schools. This book provides such an alternative. I hope we can learn the lessons it offers.” Director, Project Zero Director, Arts in Education Program, Harvard Graduate School of Education Theatre can empower individuals and communities. Theatre is a force that can unite, uplift, teach, build communities, inspire, and heal. Fine arts are no frill and deserve funding Published Friday, March 18, 2011 CORPUS CHRISTI — Renée Zellweger might have won an Academy Award without the theater courses she took at Katy High School. And it's possible that Norah Jones may have won multiple Grammy Awards even if she hadn't attended choir classes at Grapevine Junior High School. But in each of these cases, and in countless others, a quality fine arts education in Texas public schools is at the foundation of their success. Fine arts courses in our schools enable students to develop their interest and talent in the arts at an early age, and every student benefits from fine arts courses, even when their future career successes are outside of music, acting, dance, or art. In a state where high-stakes testing drives decisions on funding, staffing, and instructional minutes, fine arts programs are frequently a target when school budget cuts must be made. 
With the Legislature and school boards dealing with budget shortfalls of historic proportions, there is already evidence from districts across the state that fine arts programs are on the chopping block. These programs often suffer because of a misguided perception that the arts are an extracurricular, non-essential part of education. Yet, nothing could be further from the truth. Fine arts is part of the state-required curriculum that all school districts must offer from elementary through high school. Fine arts classes that meet during the school day are inarguably curricular by nature and by law. As State Sen. Florence Shapiro, Chair of the Senate Education Committee, said in a press conference last week: "Fine arts courses are just as essential as every other part of the required curriculum. In fact, fine arts courses are becoming increasingly critical in preparing students for the 21st-century workforce." During the last legislative session in a joint briefing to the House and Senate, best-selling business author Dan Pink advised legislators that the 21st-century workforce belongs to creative right-brain thinkers for whom the arts are a cornerstone of their development. Within that briefing, a NASA ISS systems engineer, an IBM master inventor, and an AT&T executive echoed Pink's convictions. While it's clear that business leaders value arts education, the more than 1.4 million students enrolled in middle and high school fine arts courses today speaks to the fact that these programs are also valued across the state by students and parents. Elementary music, art, and theater teachers serve tens of thousands of students daily and are among the most dedicated and passionate teachers in our Texas classrooms. Research studies also continue to offer resounding conclusions about the importance of arts education. In 2008, the Dana Foundation released a comprehensive study, "Learning, Arts, and the Brain," that for the first time reported a causal relationship between rigorous study in the arts and improved cognition. And a November 2010 Scientific American editorial that was headlined "Hearing the Music, Honing the Mind" stated, "Music produces profound and lasting changes in the brain. Schools should add classes, not cut them." Finally, the Texas Cultural Arts economic study released in 2009 entitled "20 Reasons the Texas Economy Depends on the Arts and the Creative Sector" found an undeniable connection between support for the arts, a vibrant creative sector, and a strong economy. To quote that study, "During tough economic times it may seem intuitive to cut arts and culture initiatives, but these are the very projects that can help the economy recover." Before school districts or the legislature propose wholesale cutting of fine arts programs to solve what is admittedly a critical public education funding crisis, they should remember their responsibility to educate the whole child. Because fine arts courses are academic and a vital component in delivering the well-rounded education required by law, they should not take a disproportionate share of staffing and budget cuts. As former Texas congresswoman Barbara Jordan so eloquently stated in 1993, "The arts, instead of quaking along the periphery of our policy concerns, must push boldly into the core of policy. The arts are not a frill." Robert Floyd is Executive Director of the Texas Music Educators Association and chairs the Texas Coalition for Quality Arts Education. 
Research on the benefits of Arts Education Fact Sheet About the Benefits of Arts Education for Children Benefits of Arts Education Source: Americans for the Arts, 2002 § Stimulates and develops the imagination and critical thinking, and refines cognitive and creative skills. § Has a tremendous impact on the developmental growth of every child and has proven to help level the “learning field” across socio-economic boundaries. § Strengthens problem-solving and critical-thinking skills, adding to overall academic achievement and school success. § Develops a sense of craftsmanship, quality task performance, and goal-setting—skills needed to succeed in the classroom and beyond. § Teaches children life skills such as developing an informed perception; articulating a vision; learning to solve problems and make decisions; building self-confidence and self-discipline; developing the ability to imagine what might be; and accepting responsibility to complete tasks from start to finish. § Nurtures important values, including team-building skills; respecting alternative viewpoints; and appreciating and being aware of different cultures and traditions. Source: Young Children and the Arts: Making Creative Connections, 1998, Introduction § Plays a central role in cognitive, motor, language, and social-emotional development. § Motivates and engages children in learning, stimulates memory, facilitates understanding, enhances symbolic communication, promotes relationships, and provides an avenue for building competence. § Provides a natural source of learning. Child development specialists note that play is the business of young children; play is the way children promote and enhance their development. The arts are a most natural vehicle for play. § We know that “art,” understood as spontaneous creative play, is what young children naturally do—singing, dancing, drawing, and role-playing. We also know that the arts engage all the senses and involve a variety of modalities including the kinesthetic, auditory, and visual. When caregivers engage and encourage children in arts activities on a regular basis from early in life, they are laying the foundation for—and even helping wire children’s brains for—successful learning. Adults Agree on Importance of Arts Education Source: Americans for the Arts national public opinion survey, January 2001 § Ninety-one percent of respondents believe the arts are vital to a well-rounded education. § Ninety-five percent of respondents believe the arts teach intangibles such as creativity, self-expression, and individualism. § Seventy-six percent of respondents somewhat or strongly agree that arts education is important enough to get personally involved. However, just thirty-five percent of those who are closely involved in the life of a child have done so. § Sixty-seven percent say they do not know how to get involved. § Eighty-nine percent of respondents believe that arts education is important enough that schools should find the money to ensure inclusion in the curriculum. § Ninety-six percent agree the arts belong to everyone, not just the fortunate or privileged. The Social and Academic Impact of Arts Education Source: Eisner, E. W., Ten Lessons the Arts Teach, (January 1998) § Art is defined as something aesthetic to the senses. A “work of art” is both an activity and a result; it is a noun and a verb. “One of the great aims of education is to make it possible for people to be engaged in the process of creating themselves. 
Artists and scientists are alike in this respect.” § Arts curricula are typically process-driven and relationship-based, so their impact on academic performance is often underestimated and undervalued. The arts provide a logical counterbalance to the trend of standardized testing and should not be marginalized just because the curriculum is more difficult to measure. § The emphasis and time given to a particular school subject sends a message to students about how important that subject is in life. § Arts programs, especially those including trained professionals, can help draw students out of “formal” ways of approaching relationships, outcomes, and perceptions. § The arts can play a crucial role in improving students’ abilities to learn, because they draw on a range of intelligences and learning styles, not just the linguistic and logical-mathematical intelligences upon which most schools are based. (Eloquent Evidence: Arts at the Core of Learning, President’s Committee on the Arts and Humanities, discussing Howard Gardner’s theory of multiple intelligences, 1995)
The Physical and Sensory Impact of Arts Education
A student making music experiences the “simultaneous engagement of senses, muscles, and intellect. Brain scans taken during musical performances show that virtually the entire cerebral cortex is active while musicians are playing.” (Learning and the Arts: Crossing Boundaries, 2000, p. 14) “Dramatic play, rhyming games, and songs are some of the language-rich activities that build pre-reading skills.” (Young Children and the Arts: Making Creative Connections, 1998, p. 1) “Preschoolers who were given music keyboard lessons improved their spatial-temporal reasoning…used for understanding relationships between objects such as calculating a proportion or playing chess.” (Educational Leadership, November 1998, p. 38) “Creative activity is also a source of joy and wonder, while it bids its students to touch, taste, hear, and see the world. Children are powerfully affected by storytelling, music, dance, and the visual arts. They often construct their understanding of the world around musical games, imaginative dramas and drawing.” (Hamblen, Karen A., Theories and Research That Support Art Instruction for Instrumental Outcomes, 1993) “Regular, frequent instruction in drama and sign language created higher scores in language development for Head Start students than for a control group.” (Young Children and the Arts: Making Creative Connections, 1998, p. 1) “Listening to music for just an hour a day changes brain organization…EEG results showed greater brain coherence and more time spent in the alpha state.” (Malyarenko et al., 1996)
Research Supports Arts in Education
Performing Arts & Entertainment in Canada, Autumn 2000, by Jenifer Milner
§ The arts-in-education movement is not new. In the United States, “a study of more than 30 years of American arts in education history reveals cycles of boom and bust, periods full of rhetorical promise and bursts of activity followed by long stretches of dashed expectations, withdrawn support and, in some instances, abrupt abandonment of promising projects, programs, and initiatives.” § This waxing and waning of arts in education reflects to some extent quality-of-life versus man-as-commodity thinking. A society encouraging personal freedom, self-esteem, and contemplative living embraces the arts for their intrinsic worth, whereas a society driven by competition seeks knowledge and skills to increase one’s advantage and employability.
§ The drive towards traditional schools and a Three Rs education is more understandable in light of our floundering economy. (Traditional schools exist in Surrey, Langley, and Abbotsford; Richmond’s school board just approved the formation of a traditional school; and Vancouver’s school board will face the issue in the fall.) But who benefits the most from teacher-led traditional schools — students or parents? § Many of the problems associated with our school system today stem from a lack of student satisfaction: high dropout rates and truancy, poor grades, vandalism, and violence. Stressed educational budgets mean stressed teachers, less resources, more constraints, and inappropriate facilities. The list goes on. § It is ironic that arts in education appears to be something of a political bandwagon. The case for the arts in schools, once marginalized or slashed entirely from classrooms to save money or de-valued as an inconsequential frill, is gaining support. A substantial body of research now proves student satisfaction and engagement in learning increase with participation in the arts. § James Catterall, a UCLA researcher, studied 25,000 students in grades 8 to 10. He discovered that students “highly involved in arts programs” fare better in other subjects too and are “much less likely to drop out” of school or become uninterested in school life. Catterall’s study also shows that students from low-income families who participate in arts experiences are more likely to do better academically than those who do not. § Not only do students’ attitudes, attendance, abilities, and grades dramatically improve when the arts become part of their school life, but “research shows that arts education programs result in measurable gains in student motivation and achievement in reading, writing, and mathematics” — exactly what traditional school proponents want to accomplish. § With exceptions like Art Starts in Schools, a local nonprofit organization dedicated to bringing the arts to B.C.’s school kids, arts programming in Greater Vancouver Regional District schools has flourished independently and, for the most part, outside the core curricula. ArtStarts, a funding program administered by ArtStarts on behalf of the J.W. McConnell Family Foundation, is the first local foray into integrating the arts across the curriculum. § Thousands of dedicated parents, teachers, artists, arts organizations, and community groups are dedicated to producing arts programming or establishing arts-integrated curricula; they are to be commended. Research supports the benefits of arts in education to society, business, government, and schools. But let’s not lose sight of what arts experiences mean to children. § “Arts teachers daily ask their students to engage in learning activities which require use of higher-order thinking skills like analysis, synthesis, and evaluation. Arts education, then, is first of all an activity of the mind. § “Creative activity is also a source of joy and wonder, while it bids its students to touch and taste and hear and see the world. Children are powerfully affected by storytelling, music, dance, and the visual arts. They often construct their understanding of the world around musical games, imaginative dramas, and drawing.” § Sounds like the arts provide just about everything to nourish our children. § Jenifer Milner is the communications manager of the Vancouver Alliance for Arts and Culture. § Jenifer Milner “Research Supports Arts in Education“. Performing Arts & Entertainment in Canada. 
FindArticles.com. 21 Feb, 2010. http://findarticles.com/p/articles/mi_m1319/is_2_33/ai_71634790/
http://teacherweb.com/TX/CedarValleyMiddleSchool/kbaker/apt17.aspx
In this lesson, students learn personal financial management strategies based on budgeting and the concepts that underpin it, including income, expenses, savings, and debt. Students watch a video segment from the PBS series What's Up in Finance? to see how a college student learns to manage his budget. They then complete hands-on activities to create three different saving and spending scenarios based on their own lives and expenses. After completing these activities, students use an online interactive game to apply the financial management concepts and strategies they have learned. As a final activity, students brainstorm ways to manage their own budgets while making room for important investments, like classes, that will help their personal development in the long run. In this way, students examine ways to save money to "invest" in themselves for a return in their lives over the long term.
- Understand the components of a budget
- Compute savings
- Compute debt
- Learn financial management
- Learn the nature of opportunity costs
- Understand the importance of self-regulation
Three 50-minute class periods
Moving Out (QuickTime video): In this video segment from What’s Up in Finance?, a college student consults a financial planner for strategies to manage his finances.
Bank It or Bust (Flash interactive): In this interactive game from What's Up in Finance?, players make financial decisions to help them meet their goal of buying a car.
Before the Lesson
- Bookmark the web site used in the lesson on each computer in your classroom. Using a social bookmarking tool such as del.icio.us or diigo (or an online bookmarking utility such as portaportal) will allow you to organize all the links in a central location.
- Preview all of the video segments and web sites used in the lesson to make certain that they are appropriate for your students, currently available, and accessible from your classroom.
- Download the video clips used in this lesson onto your hard drive or a portable storage device, or prepare to stream the clips from your classroom.
Introductory Activity: Setting the Stage
- Open the discussion by asking if any of the students have saved money or maintain a budget of any kind.
- Write the following terms and their definitions on the board: budget, income, expense, savings, and debt. (See the Financial Management Terms Teacher Organizer.)
- Discuss with the students what each term means, and how savings and debt result from different combinations of income and expenses.
- Discuss with the students any dreams they have for the future, and what investments they could make now to achieve those dreams. For instance, a student who would like to play in a band may need guitar lessons now to achieve that dream in the future. Ask the students to think about how they could save enough money to pay for a current expense that is necessary to achieve that dream.
- Pass out the Dreams for the Future Student Organizer to each student. Ask students to brainstorm four different dreams for the future and the steps they need to take now to achieve each one. Ask them also to think about the potential cost of those current steps.
- Now ask if any of the students have thought about living on their own. Make a list of some of the costs that students might have to consider if they were living on their own (for example: rent, food, transportation, laundry, phone and internet, entertainment).
- Explain to the class that they will be watching a video segment from What’s Up in Finance? where they will meet Eddie, a young person who is living on his own for the first time. Eddie is facing some challenges maintaining a budget and saving money. - Ask students to think about the areas where Eddie could spend less money as they watch the segment. Play the Moving Out segment. - Discuss the segment with students, pointing out that Eddie had to re-work his budget in order to afford his dream: attending a four-year college. Review some of the ways Eddie adjusted his budget to spend less money. - Ask students to discuss why attending a four-year college might help Eddie achieve his future goals. - Hand out the Debt Savings Student Organizer. Explain that this is a template to use for hypothetical budgets. The students fill them out based on their own expenses or projected expenses, but all the scenarios assume that the students are making $500 a month in income. - Ask students to complete the column titled "Scenario 1: Debt" with as much expense as they would like in each category. They need to spend more than $500. If the students think of additional expenses beyond those listed on the organizer, they can add them below "Books/Magazines" on the chart. - Students will then add up their expenses, and subtract them from the income figure of $500. The expenses should be higher than $500, so the total will be negative. This means the students have gone into debt and the debt figure should go in the "Debt" row. - Next, ask students to fill out the column titled "Scenario 2: Break-Even." The goal here is to spend exactly $500. This means that students will have to lower their expenses in certain areas from the "Debt" column. - Students will add up their expenses to confirm that their expense and income totals are equal. They will then have no savings or debt, so a "0" should go in the "Debt" and "Savings" rows. - Then, ask students to complete the column titled "Scenario 3: Savings." The goal with this column is to lower their costs below those of the "Break-even Scenario" in order to have savings. - Students will add up their expenses and subtract the total from the income figure of $500. This figure should go in the "Savings" row. - Discuss with students how they were able to achieve savings. Ask students how they set their spending priorities in order to lower some expenses, and ultimately save money. - Explain to students they will play the online game "Bank It or Bust" to utilize the financial management concepts and strategies they have learned. In "Bank it or Bust," students create and modify a budget, and then stick to it for ten weeks with the goal of saving up to buy a car. While playing, ask students to focus on some strategies for saving money. - After the students have finished playing "Bank it or Bust," review some of the savings strategies they used in the game. Ask students if they can name things that happened to them during the game that provided them with extra cash (answers will vary). Ask them to name some "emergencies" that occurred that required to students to use some of their savings (answers will vary). - Ask students to take out their Dreams for the Future Student Organizer and to think back to the initial discussion regarding their dreams. Have the students review the monthly cost of the current activities they brainstormed – the “personal investments” that would help them in the future. 
- Next, ask students to take out their Debt Savings Student Organizer and look at their figures for "Scenario 3: Savings."
- Students should look at the monthly savings figure and determine whether they have saved enough for their "personal investment."
- If students have not saved enough for their personal investment, they should rework their Savings Scenario on the Student Organizer one more time to come up with the monthly savings needed to cover these investments in their personal development. (A short sketch of the scenario arithmetic appears after this list.)
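All three budget scenarios reduce to the same arithmetic: subtract total monthly expenses from the fixed $500 income, and the sign of the result determines whether the column ends in debt, break-even, or savings. The sketch below is a minimal, hypothetical illustration of that calculation; the expense categories and dollar amounts are invented for the example and are not taken from the student organizer.

```python
# Minimal sketch of the Debt / Break-Even / Savings arithmetic from the lesson.
# Income is fixed at $500 per month; the expense figures below are hypothetical.

MONTHLY_INCOME = 500

scenarios = {
    "Scenario 1: Debt":       {"rent": 300, "food": 150, "transportation": 60, "entertainment": 40},
    "Scenario 2: Break-Even": {"rent": 300, "food": 120, "transportation": 50, "entertainment": 30},
    "Scenario 3: Savings":    {"rent": 280, "food": 100, "transportation": 40, "entertainment": 20},
}

for name, expenses in scenarios.items():
    total_expenses = sum(expenses.values())
    balance = MONTHLY_INCOME - total_expenses  # positive = savings, negative = debt
    if balance > 0:
        outcome = f"savings of ${balance}"
    elif balance < 0:
        outcome = f"debt of ${-balance}"
    else:
        outcome = "break-even"
    print(f"{name}: expenses ${total_expenses} -> {outcome}")
```

With these made-up numbers the script reports a debt of $50 for the first column, break-even for the second, and $60 of monthly savings for the third, mirroring how students fill in the "Debt" and "Savings" rows on the organizer.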
http://www.teachersdomain.org/resource/fin10.socst.econfin.lives.lpinvest/
Endowed with gold and oil palms and situated between the trans- Saharan trade routes and the African coastline visited by successive European traders, the area known today as Ghana has been involved in all phases of Africa’s economic development during the last thousand years. As the economic fortunes of African societies have waxed and waned, so, too, have Ghana’s, leaving that country in the early 1990s in a state of arrested development, unable to make the “leap” to Africa’s next, as yet uncertain, phase of economic evolution. As early as the thirteenth century, present-day Ghana was drawn into long-distance trade, in large part because of its gold reserves. The trans-Saharan trade, one of the most wide-ranging trading networks of pre-modern times, involved an exchange of European, North African, and Saharan commodities southward in exchange for the products of the African savannas and forests, including gold, kola nuts, and slaves. Present-day Ghana, named the Gold Coast by European traders, was an important source of the gold traded across the Sahara. Centralized states such as Asante controlled prices by regulating production and marketing of this precious commodity. As European navigational techniques improved in the fifteenth century, Portuguese and later Dutch and English traders tried to circumvent the Saharan trade by sailing directly to its southernmost source on the West African coast. In 1482 the Portuguese built a fortified trading post at Elmina and began purchasing gold, ivory, and pepper from African coastal merchants. Although Africans for centuries had exported their raw materials—ivory, gold, kola nuts—in exchange for imports ranging from salt to foreign metals, the introduction of the Atlantic slave trade in the early sixteenth century changed the nature of African export production in fundamental ways. An increasing number of Ghanaians sought to enrich themselves by capturing fellow Africans in warfare and selling them to slave dealers from North America and South America. The slaves were transported to the coast and sold through African merchants using the same routes and connections through which gold and ivory had formerly flowed. In return, Africans often received guns as payment, which could be used to capture more slaves and, more importantly, to gain and preserve political power. An estimated ten million Africans, at least half a million from the Gold Coast, left the continent in this manner. Some economists have argued that the slave trade increased African economic resources and therefore did not necessarily impede development, but others, notably historian Walter Rodney, have argued that by removing the continent’s most valuable resource—humans—the slave trade robbed Africa of unknown invention, innovation, and production. Rodney further argues that the slave trade fueled a process of underdevelopment, whereby African societies came to rely on the export of resources crucial to their own economic growth, thereby precluding local development of those resources. Although some scholars maintain that the subsequent economic history of this region supports Rodney’s interpretation, no consensus exists on this point. Indeed, in recent years, some historians not only have rejected Rodney’s interpretation but also have advanced the notion that it is the Africans themselves rather than an array of external forces that are to blame for the continent’s economic plight. 
When the slave trade ended in the early years of the nineteenth century, the local economy became the focus of the so-called legitimate trade, which the emerging industrial powers of Europe encouraged as a source of materials and markets to aid their own production and sales. The British, in particular, gained increasing control over the region throughout the nineteenth century and promoted the production of palm oil and timber as well as the continuation of gold production. In return, Africans were inundated with imports of consumer goods that, unlike the luxuries or locally unavailable imports of the trans-Saharan trade, quickly displaced African products, especially textiles. In 1878 cacao trees were introduced from the Americas. Cocoa quickly became the colony’s major export; Ghana produced more than half the global yield by the 1920s. African farmers used kinship networks like business corporations to spread cocoa cultivation throughout large areas of southern Ghana. Legitimate trade restored the overall productivity of Ghana’s economy; however, the influx of European goods began to displace indigenous industries, and farmers focused more on cash crops than on essential food crops for local consumption. When Ghana gained its independence from Britain in 1957, the economy appeared stable and prosperous. Ghana was the world’s leading producer of cocoa, boasted a well-developed infrastructure to service trade, and enjoyed a relatively advanced education system. At independence, President Kwame Nkrumah sought to use the apparent stability of the Ghanaian economy as a springboard for economic diversification and expansion. He began process of moving Ghana from a primarily agricultural economy to a mixed agricultural-industrial one. Using cocoa revenues as security, Nkrumah took out loans to establish industries that would produce import substitutes as well as process many of Ghana’s exports. Nkrumah’s plans were ambitious and grounded in the desire to reduce Ghana’s vulnerability to world trade. Unfortunately, the price of cocoa collapsed in the mid-1960s, destroying the fundamental stability of the economy and making it nearly impossible for Nkrumah to continue his plans. Pervasive corruption exacerbated these problems. In 1966 a group of military officers overthrew Nkrumah and inherited a nearly bankrupt country. Since then, Ghana has been caught in a cycle of debt, weak commodity demand, and currency overvaluation, which has resulted in the decay of productive capacities and a crippling foreign debt. Once the price of cocoa fell in the mid-1960s, Ghana obtained less of the foreign currency necessary to repay loans, the value of which jumped almost ten times between 1960 and 1966. Some economists recommended that Ghana devalue its currency—the cedi—to make its cocoa price more attractive on the world market, but devaluation of the cedi would also have rendered loan repayment in United States dollars much more difficult. Moreover, such a devaluation would have increased the costs of imports, both for consumers and nascent industries. Until the early 1980s, successive governments refused to devalue the currency (with the exception of the government of Kofi A. Busia, which devalued the cedi in 1971 and was promptly overthrown). Cocoa prices languished, discouraging cocoa production altogether and leading to smuggling of existing cocoa crops to neighboring countries, where francs rather than cedis could be obtained in payment. 
As production and official exports collapsed, revenue necessary for the survival of the economy was obtained through the procurement of further loans, thereby intensifying a self-destructive cycle driven by debt and reliance on vulnerable world commodity markets. By the early 1980s, Ghana’s economy was in an advanced state of collapse. Per capita gross domestic product ( GDP) showed negative growth throughout the 1960s and fell by 3.2 percent per year from 1970 to 1981. Most important was the decline in cocoa production, which fell by half between the mid-1960s and the late 1970s, drastically reducing Ghana’s share of the world market from about one-third in the early 1970s to only one-eighth in 1982-83. At the same time, mineral production fell by 32 percent; gold production declined by 47 percent, diamonds by 67 percent, manganese by 43 percent, and bauxite by 46 percent. Inflation averaged more than 50 percent a year between 1976 and 1981, hitting 116.5 percent in 1981. Real minimum wages dropped from an index of 75 in 1975 to one of 15.4 in 1981. Tax revenue fell from 17 percent of GDP in 1973 to only 5 percent in 1983, and actual imports by volume in 1982 were only 43 percent of average 1975-76 levels. Productivity, the standard of living, and the government’s resources had plummeted dramatically. In 1981 a military government under the leadership of Flight Lieutenant Jerry John Rawlings came to power. Calling itself the Provisional National Defence Council (PNDC), the Rawlings regime initially blamed the nation’s economic problems on the corruption of previous governments. Rawlings soon discovered, however, that Ghana’s problems were the result of forces more complicated than economic abuse. Following a severe drought in 1983, the government accepted stringent International Monetary Fund ( IMF) and World Bank loan conditions and instituted the Economic Recovery Program (ERP). Signaling a dramatic shift in policies, the ERP fundamentally changed the government’s social, political, and economic orientation. Aimed primarily at enabling Ghana to repay its foreign debts, the ERP exemplified the structural adjustment policies formulated by international banking and donor institutions in the 1980s. The program emphasized the promotion of the export sector and an enforced fiscal stringency, which together aimed to eradicate budget deficits. The PNDC followed the ERP faithfully and gained the support of the international financial community. The effects of the ERP on the domestic economy, however, led to a lowered standard of living for most Ghanaians.
http://www.modernghana.com/GhanaHome/ghana/economy.asp?menu_id=6&sub_menu_id=13&gender=&s=a
The Tea Act of 1773 was one of several measures imposed on the American colonists by the heavily indebted British government in the decade leading up to the American Revolutionary War (1775-83). The act's main purpose was not to raise revenue from the colonies but to bail out the floundering East India Company, a key actor in the British economy. The British government granted the company a monopoly on the importation and sale of tea in the colonies. The colonists had never accepted the constitutionality of the duty on tea, and the Tea Act rekindled their opposition to it. Their resistance culminated in the Boston Tea Party on December 16, 1773, in which colonists boarded East India Company ships and dumped their loads of tea overboard. Parliament responded with a series of harsh measures intended to stifle colonial resistance to British rule; two years later the war began. - Crisis in Britain - Saving the East India Company - The Destruction of the Tea - The Coercive Acts and American Independence Crisis in Britain In 1763, the British Empire emerged as the victor of the Seven Years' War (1756-63). Although the victory greatly expanded the empire’s imperial holdings, it also left it with a massive national debt, and the British government looked to its North American colonies as an untapped source of revenue. In 1765, the British Parliament passed the Stamp Act, the first direct, internal tax that it had ever levied on the colonists. The colonists resisted the new tax, arguing that only their own elective colonial assemblies could tax them, and that "taxation without representation" was unjust and unconstitutional. After the British government rejected their arguments, the colonists resorted to physical intimidation and mob violence to prevent the collection of the stamp tax. Recognizing that the Stamp Act was a lost cause, Parliament repealed it in 1766. Parliament did not, however, renounce its right to tax the colonies or otherwise enact legislation over them. In 1767, Charles Townshend (1725-67), Britain's new chancellor of the Exchequer (an office that placed him in charge of collecting the government's revenue), proposed a law known as the Townshend Revenue Act. This act placed duties on a number of goods imported into the colonies, including tea, glass, paper and paint. The revenue raised by these duties would be used to pay the salaries of royal colonial governors. Since Parliament had a long history of using duties to regulate imperial trade, Townshend expected that the colonists would acquiesce to the imposition of the new taxes. Unfortunately for Townshend, the Stamp Act had aroused colonial resentment to all new taxes, whether levied on imports or on the colonists directly. Moreover, Townshend's proposal to use the revenue to pay the salaries of colonial governors aroused great suspicion among the colonists. In most colonies, the elective assemblies paid the governors' salaries, and losing that power of the purse would greatly enhance the power of the royally appointed governors at the expense of representative government. To express their displeasure, the colonists organized popular and effective boycotts of the taxed goods. Once again, colonial resistance had undermined the new system of taxation, and once again, the British government bowed to reality without abandoning the principle that it had rightful authority to tax the colonies. 
In 1770, Parliament repealed all of the Townshend Act duties except for the one on tea, which was retained as a symbol of Parliament's power over the colonies. Saving the East India Company The repeal of the majority of the Townshend Act took the wind out of the sails of the colonial boycott. Although many colonists continued to refuse to drink tea out of principle, many others resumed partaking of the beverage, though some of them salved their conscience by drinking smuggled Dutch tea, which was generally cheaper than legally imported tea. The American consumption of smuggled tea hurt the finances of the East India Company, which was already struggling through economic hardship. Although it was a private concern, the company played an integral role in Britain's imperial economy and served as its conduit to the riches of the East Indies. A glut of tea and a diminished American market had left the company with tons of tea leaves rotting in its warehouses. In an effort to save the troubled enterprise, the British Parliament passed the Tea Act in 1773. The act granted the company the right to ship its tea directly to the colonies without first landing it in England, and to commission agents who would have the sole right to sell tea in the colonies. The act retained the duty on imported tea at its existing rate, but, since the company was no longer required to pay an additional tax in England, the Tea Act effectively lowered the price of the East India Company’s tea in the colonies. The Destruction of the Tea If Parliament expected that the lowered cost of tea would mollify the colonists into acquiescing to the Tea Act, it was gravely mistaken. By allowing the East India Company to sell tea directly in the American colonies, the Tea Act cut out colonial merchants, and the prominent and influential colonial merchants reacted with anger. Other colonists viewed the act as a Trojan horse designed to seduce them into accepting Parliament's right to impose taxes on them. The fact that the agents commissioned by the company to sell its tea included a number of pro-Parliament men only added fuel to the fire. The Tea Act revived the boycott on tea and inspired direct resistance not seen since the Stamp Act crisis. The act also made allies of merchants and patriot groups like the Sons of Liberty. Patriot mobs intimidated the company's agents into resigning their commissions. In several towns, crowds of colonists gathered along the ports and forced company ships to turn away without unloading their cargo. The most spectacular action occurred in Boston, Massachusetts, where on December 16, 1773, a well-organized group of men dressed up as Native Americans and boarded the company ships. The men smashed open the chests of tea and dumped their contents into Boston Harbor in what later came to be known as the Boston Tea Party. The Coercive Acts and American Independence The Boston Tea Party caused considerable property damage and infuriated the British government. Parliament responded with the Coercive Acts of 1774, which colonists came to call the Intolerable Acts. The series of measures, among other things, repealed the colonial charter of Massachusetts and closed the port of Boston until the colonists reimbursed the cost of the destroyed tea. Parliament also appointed General Thomas Gage (1719-87), the commander in chief of British forces in North America, as the governor of Massachusetts. 
Since the Stamp Act crisis of 1765, radical colonists had warned that new British taxes heralded an attempt to overthrow representative government in the colonies and to subjugate the colonists to British tyranny. The Coercive Acts convinced more moderate Americans that the radicals' claims had merit. Colonial resistance intensified until, three years after Parliament passed the Tea Act, the colonies declared their independence as the United States of America.
http://www.history.com/topics/print/tea-act
Freshwater systems cover less than 1% of the Earth's surface yet they are essential to support life. Water quality supports the health of people and ecosystems. Rivers and groundwater need a holistic landscape-scale approach to address pressures on upstream and downstream resources, giving recognition to the importance of the aesthetic, religious, historical, and archaeological values water contributes to a nation's heritage. Freshwater habitats provide a home for 126,000 species, or 7%, of the estimated 1.8 million described species including a quarter of the estimated 60,000 vertebrates (Balian et al., 2008). They also have economic value. According to one estimate, the value of the goods and services provided by the world's wetlands is US$ 70 billion per year (Schuyt and Brander, 2004). Both biodiversity and human well-being are affected by changes to freshwater. On average freshwater species populations were reduced by half between 1970 and 2005, a sharper decline than for other biomes (World Water Assessment Programme, 2009). The Red List Index for birds living in freshwater habitats shows one of the most serious declines for all habitats, second only to marine habitats (Butchart et al., 2004). A global Red List assessment for freshwater crabs reported that, of species for which enough data were available to carry out an assessment, 32% were threatened (Cumberlidge et al., 2009). Reviews of the status of freshwater fishes across particular regions report figures ranging from 11% threatened in southern Africa (Darwall et al., 2008) to 56% of endemic Mediterranean freshwater fishes being threatened (Smith and Darwall, 2006). More than 60% of the largest 227 rivers are fragmented by dams, diversions or canals (Revenga et al., 2000) leading to widespread degradation of freshwater ecosystems. Overfishing and destructive fishing practices, pollution, invasive species and climate change are additional major concerns for most freshwater systems. Darwall et al., (2008) report that 85% of threatened fish in southern Africa, 55% of threatened freshwater fish in Europe, and just under 45% of threatened freshwater fish in Madagascar are affected by invasive species. In the latter case, this is largely the result of implementation of a plan to re-establish local fisheries through the introduction of 24 nonnative fish species (Benstead et al., 2003). Climate change will cause further vulnerability and result in further impacts on freshwater systems. Finally, in many countries water policies and laws are undergoing reform and need to be implemented effectively to conserve water resources. In a world with diminishing access to water, solving conservation challenges requires solutions that combine the needs of both people and nature. The Vision for Water and Nature (2000) promotes an ecosystem approach to applying integrated water resources management (IWRM), including through improving water governance, empowering stakeholders, building knowledge and valuing water resources. IUCN has prepared a series of toolkits to support the implementation of sound water resource management to strengthen water security, including Change, Flow, Value, Pay, Share and Rule. They are all accessible online, and are available in several languages, at: http://www.iucn.org/about/work/programmes/water/wp_resources/wp_resources_toolkits/. People need a minimum of 20 litres of water a day to drink, bathe, and maintain basic hygiene (UN Water, 2007). 
Imagine what it is like to survive on one-quarter of that amount, 5 litres a day – the amount people were living on during the East African drought (2005–2006). The UN states that by 2025 two-thirds of us will experience water shortages, with a severe lack of water afflicting the lives and livelihoods of 1.8 billion people (UN Water, 2007). The challenges we face relate to both the quantity and the quality of water. The 2006 Global International Waters Assessment confirmed that shortages of freshwater were a problem in most parts of the world, but especially in sub-Saharan Africa, where freshwater shortages affect nine of the 19 freshwater systems assessed and pollution (including transboundary pollution) affects five. By 2025, many southern regions of the world are projected to face water scarcity (see Figure 19.1). However, water scarcity is not consistent across time and space. Physical water scarcity occurs when physical access is limited and water resources development is approaching or has exceeded sustainable limits. Economic water scarcity exists when the population does not have the human, institutional and economic capital to access water even though water in nature is available locally to meet human demands. Economic water scarcity resulting from unequal distribution of resources has many causes, including political and ethnic conflict. Much of sub-Saharan Africa suffers from the effects of this type of water scarcity (Comprehensive Assessment of Water in Agriculture, 2007). The water crisis stems from rising demand, falling quality and therefore dwindling per capita availability. Distribution and management are also issues. The difference in water reliability between Japan and Cambodia – which receive about the same average rainfall of 160 cm a year – is that Japan has been able to create infrastructure to harness and store water. In countries with heavy rainfall, such as Bangladesh and Myanmar, much of the monsoon precipitation is not captured for productive use and runs off into the ocean. While the minimum water needed may be 20 litres per day, the average daily use in the USA and European countries is 200–600 litres per day (UN Water, 2007). Managing your own water consumption might be as easy as turning off the tap while brushing your teeth. One tool that can be used to determine water consumption is the water footprint tool (Box 19.1). The water footprint of an individual, community or business is defined as the total volume of freshwater that is used to produce the goods and services consumed by the individual or community or produced by the business. The water footprint tool and other approaches can be used as tools to implement IWRM (a minimal sketch of the footprint calculation follows this passage).
Figure 19.1 Projected water scarcity in 2025 (IWMI, 2009)
IWRM is “a process which promotes the coordinated development and management of water, land and related resources in order to maximize the resultant economic and social welfare in an equitable manner without compromising the sustainability of vital ecosystems” (GWP, 2009). It integrates landscape-scale management that acts on a scale broad enough to recognize the role of all critical influencing factors and stakeholders that shape land-use decisions.
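As flagged above, here is a minimal, hypothetical sketch of the accounting behind a water footprint: summing the freshwater embedded in each good or service a person consumes. The per-unit litre figures and the weekly consumption amounts are invented for illustration and are not taken from the sources cited in this chapter.

```python
# Hypothetical sketch of a personal water footprint: total freshwater embedded
# in the goods and services consumed over a week. All figures are assumed.

water_per_unit_litres = {   # litres of freshwater per unit consumed (assumed values)
    "cup of coffee": 140,
    "kg of beef": 15000,
    "cotton t-shirt": 2500,
}

weekly_consumption = {      # units consumed per week (assumed values)
    "cup of coffee": 7,
    "kg of beef": 0.5,
    "cotton t-shirt": 0.1,  # roughly one new shirt every ten weeks
}

footprint = sum(water_per_unit_litres[item] * qty
                for item, qty in weekly_consumption.items())
print(f"Approximate weekly water footprint: {footprint:.0f} litres")
```

The same tallying idea scales from an individual to a community or a business, which is why the water footprint is presented here as one practical entry point for implementing IWRM.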
IWRM is based on the Dublin Principles (GWP, 2000), namely: Principle I: Water as a finite and vulnerable resource Principle II: Participatory approach Principle III: The important role of women Principle IV: Water as an economic good IUCN's Members, in Resolution 4.063 (The new Water Culture – integrated water resources management) have urged governments to adopt IWRM and support frameworks for its implementation. The key question when managing water allocations is “How can we ensure there is enough water for nature?” This can be answered by applying environmental flows. Environmental flows describe the quantity, timing, and quality of water flows required to sustain freshwater and estuarine ecosystems and the human livelihoods and well-being that depend on these ecosystems (Brisbane Declaration, 2007). Assessments are undertaken to determine the amount of flow needed to maintain a healthy river and support vital ecosystem services. This information is used to make informed decisions about allocation of water to all sectors including the environment. To increase integration of environmental flows into policy and practice for water management, communication, learning and demonstration of the benefits of flows for people and nature are needed. The Environmental Flows Network (www.eflownet.org) is a central reference point for information on flows and also is a tool to share experiences, develop the concept and link to a broad, cross-sectoral audience. IUCN supports application of environmental flows to mitigate the effects of infrastructure development on rivers, including dams and large-scale irrigation. Environmental flows are implemented by changing the operation of infrastructure in ways that restore the quantity, quality and seasonal rhythm of river flows in order to sustain downstream ecosystems and the services they provide to people. Application of environmental flows is through negotiation of water allocations by stakeholders, which encourages the integration of the needs of both people and nature in decisions about water resources management. Strengthening support for application of environmental flows in policy and law drives development of the knowledge, capacities and institutions needed to implement IWRM. Effective water management must be supported by policies and laws that enable transparent definition of rights, roles and responsibilities, including sufficient allocation of water to sustain healthy ecosystems. Successful implementation of well-structured water policies and laws also requires the necessary institutions for that implementation as well as an enabling environment that is characterized by transparency, certainty, accountability and lack of corruption. At an international level this was recognized at the UN 2000 Millennium Assembly, which agreed “to stop the unsustainable exploitation of water resources, by developing water management strategies at the regional, national and local levels, which promote both equitable access and adequate supplies”. At the World Summit on Sustainable Development (WSSD) in 2002 Heads of State agreed a specific target to prepare Integrated Water Resource Management (IWRM) and water efficiency plans by 2005 – a target that was not met. Water governance continues to be a major challenge in many countries, for example because of lack of coherence among sectors and conflicting policies and laws made at different times by different administrations and interest groups. 
Reforming national policies and laws into a cohesive package is a difficult and resource-consuming task, but countries that have tackled it have found that their downstream implementation plans go more smoothly. For example Brazil has undertaken a lengthy reform of its water governance structure which, as a result of the systematic reorganization of policy, law and institutions, led to a substantial improvement of its water management scheme. In addition, Iza and Stein (2009) suggest that water governance reforms that reduce poverty and make economies more resilient should be based on principles of equity and sustainability. For example, South Africa has implemented ambitious water reforms over the last decade. The National Water Act guarantees a “water reserve” to secure a basic water supply and the health of aquatic ecosystems. IUCN Members, in WCC Resolution 3.006 (Protecting the Earth's waters for public and ecological benefit) urged support for achieving the WSSD target as well as full participation in decision-making about conservation, protection, distribution and use of water. The international community is also promoting rights-based approaches to water management based on the fundamental need for clean and drinkable water. At the national level, the State has to translate these obligations and commitments acquired in the international context into actual practice. Transformation of water policy and management comes from consensus building in multi-stakeholder platforms. These platforms empower stakeholders at local, basin or transboundary levels to agree on rights, roles and responsibilities and to negotiate on water law reforms. Furthermore, a good governance system should “think basin-wide, but act local”. When grassroots water user associations are involved in the process of planning, execution and maintenance of traditional water harvesting systems, they are more resilient and enable communities to adapt to climate change. Involving civil society at all levels encourages awareness and responsibility towards water and facilitates the acceptance of the legal system. This in turn presents a useful platform for solving possible conflicts between traditional and customary rights, by facilitating the implementation of water law through an active participation of the users at the final stage of water distribution. Finally, they can play a very important role in monitoring their share of the water system. Successful water governance and management depends on including women. A 1988 study by the International Water and Sanitation Centre of community water supply and sanitation projects in 88 communities in 15 countries found that projects designed and run with the full participation of women are more sustainable and effective than those that do not involve women as full partners (IWSC, 1988). Governance of transboundary waters is a complex issue with several challenges to delivering its environmental objectives. There are more than 260 international rivers in the world, covering 45% of the land surface of the Earth, and accounting for about 80% of global river flows. About 90% of the world's population currently lives in the countries sharing these rivers (World Bank, 2009). These essential resources are coming under increasing pressure as populations grow and economies develop. It is important to identify mechanisms and instruments to support the use of water as a catalyst for regional cooperation rather than a source of potential conflict. 
Cooperatively managing and developing these rivers requires great skill, robust institutions, significant investment, and strong cross-border cooperation. Examples of initiatives to do just that include the Nile Basin Dialogue, the Mekong River Commission, and the newly formed Volta Basin Authority. Finding a common approach to the governance of transboundary waters is further complicated by the differing legislation, water management practices, institutional structures, languages and cultures of the bordering countries. Nevertheless, cooperation in managing the quality and quantity of transboundary water bodies also presents an opportunity from which all of the parties involved can benefit (Aguilar and Iza, 2006). Negotiations, consensus and agreements reached between two or more parts of a shared river basin become part of the system of water governance, but it is the political will of sovereign States that determines whether those will successfully support sustainable water management. Water resources underpin the economy and dividends from investing in watershed services must account for the benefits and water security for livelihoods, business and economic development. Within the business sector there are diverse water interests; water services interest (people making money out of water); companies which sell products that need water; hydropower companies; companies that make biofuels; energy companies that use water for cooling; industries that require water for processing, etc. Before engaging businesses, however, it is important that users have a full understanding of all potential losses of ecosystem services that may be caused by development. Market-based incentives, including payments for ecosystem services (PES), are part of sustainable financing for IWRM. In Ecuador, the Quito Water Fund (FONAG) has built an investment prospectus to attract contributions from the public and private sectors to a long-term trust fund that aims to secure quantity and quality of water supplied to Quito from the Guayllabamna River Basin. Water is a vital resource for the global agriculture and energy sectors. Agriculture is by far the main user of water. Irrigation and livestock account for 70% of water withdrawals, which can rise to more than 80% in some regions, so conservationists need to connect more with the agricultural sector to strengthen knowledge on water issues (MA, 2005c; World Water Assessment Programme, 2009). Without reliable access to water of the right quantity and quality, hydropower generation fails, especially where flows or cooling of power stations is reduced. These sectors, including the expanding numbers of biofuel producers, need to make sustainable water futures a priority, including investment in sustainable watershed management. Water and energy policy needs to be coordinated in both strategy and operation. Returns on investment in water management and in ecosystems services are too often unaccounted for or underestimated. Ecosystem services-based management can provide a framework within which to support decision-making for services provided by natural systems and identify the trade-offs that may be needed in decisions (Farber et al., 2006). Investments in river basin sustainability stimulate “green growth” and economic resilience. Water and the services provided by watersheds, including water storage, purification, flood regulation and food security, have benefits across the economy, from local to national levels. 
Investments which ensure continuing or renewed water security and watershed services sustain local livelihoods, create opportunities for enterprise development and underpin national economic growth. Investments in river basin sustainability can thus stimulate growth that is pro-poor and environmentally robust while strengthening the resilience of communities and national economies. Climate change is projected to cause significant impacts on water resources and widespread vulnerabilities. These impacts will be felt first and foremost through water – through drought, floods, storms, ice melting and sea-level rise. The rapid shrinking of the Himalayan glaciers, which may lose four-fifths of their area by 2030, means a huge natural reservoir storing water for more than a billion people may be lost. Coping with such impacts requires climate change adaptation strategies. While water is at the centre of climate change impacts, it is also at the centre of adaptation policies, planning and action. River basins and coasts, and their ecosystems, are natural infrastructure for coping with these impacts. They provide water storage, flood control and coastal defence, all vital for reducing the vulnerabilities of communities and economies to climate change. Investment in IWRM, as “critical national natural infrastructure”, should be integral to climate change adaptation portfolios (Smith and Barchiesi, 2008).
http://data.iucn.org/dbtw-wpd/html/2009-026/section20.html
United States - History The first Americans—distant ancestors of the Native Americans—probably crossed the Bering Strait from Asia at least 12,000 years ago. By the time Christopher Columbus came to the New World in 1492 there were probably no more than 2 million Native Americans living in the land that was to become the United States. Following exploration of the American coasts by English, Portuguese, Spanish, Dutch, and French sea captains from the late 15th century onward, European settlements sprang up in the latter part of the 16th century. The Spanish established the first permanent settlement at St. Augustine in the future state of Florida in 1565, and another in New Mexico in 1599. During the early 17th century, the English founded Jamestown in Virginia Colony (1607) and Plymouth Colony in present-day Massachusetts (1620). The Dutch established settlements at Ft. Orange (now Albany, N.Y.) in 1624, New Amsterdam (now New York City) in 1626, and at Bergen (now part of Jersey City, N.J.) in 1660; they conquered New Sweden—the Swedish colony in Delaware and New Jersey—in 1655. Nine years later, however, the English seized this New Netherland Colony and subsequently monopolized settlement of the East Coast except for Florida, where Spanish rule prevailed until 1821. In the Southwest, California, Arizona, New Mexico, and Texas also were part of the Spanish empire until the 19th century. Meanwhile, in the Great Lakes area south of present-day Canada, France set up a few trading posts and settlements but never established effective control; New Orleans was one of the few areas of the United States where France pursued an active colonial policy. From the founding of Jamestown to the outbreak of the American Revolution more than 150 years later, the British government administered its American colonies within the context of mercantilism: the colonies existed primarily for the economic benefit of the empire. Great Britain valued its American colonies especially for their tobacco, lumber, indigo, rice, furs, fish, grain, and naval stores, relying particularly in the southern colonies on black slave labor. The colonies enjoyed a large measure of internal self-government until the end of the French and Indian War (1745–63), which resulted in the loss of French Canada to the British. To prevent further troubles with the Indians, the British government in 1763 prohibited the American colonists from settling beyond the Appalachian Mountains. Heavy debts forced London to decree that the colonists should assume the costs of their own defense, and the British government enacted a series of revenue measures to provide funds for that purpose. But soon, the colonists began to insist that they could be taxed only with their consent and the struggle grew to become one of local versus imperial authority. Widening cultural and intellectual differences also served to divide the colonies and the mother country. Life on the edge of the civilized world had brought about changes in the colonists' attitudes and outlook, emphasizing their remoteness from English life. In view of the long tradition of virtual self-government in the colonies, strict enforcement of imperial regulations and British efforts to curtail the power of colonial legislatures presaged inevitable conflict between the colonies and the mother country. 
When citizens of Massachusetts, protesting the tax on tea, dumped a shipload of tea belonging to the East India Company into Boston harbor in 1773, the British felt compelled to act in defense of their authority as well as in defense of private property. Punitive measures—referred to as the Intolerable Acts by the colonists—struck at the foundations of self-government. In response, the First Continental Congress, composed of delegates from 12 of the 13 colonies—Georgia was not represented—met in Philadelphia in September 1774, and proposed a general boycott of English goods, together with the organizing of a militia. British troops marched to Concord, Mass., on 19 April 1775 and destroyed the supplies that the colonists had assembled there. American "minutemen" assembled on the nearby Lexington green and fired "the shot heard round the world," although no one knows who actually fired the first shot that morning. The British soldiers withdrew and fought their way back to Boston. Voices in favor of conciliation were raised in the Second Continental Congress that assembled in Philadelphia on 10 May 1775, this time including Georgia; but with news of the Restraining Act (30 March 1775), which denied the colonies the right to trade with countries outside the British Empire, all hopes for peace vanished. George Washington was appointed commander in chief of the new American army, and on 4 July 1776, the 13 American colonies adopted the Declaration of Independence, justifying the right of revolution by the theory of natural rights. British and American forces met in their first organized encounter near Boston on 17 June 1775. Numerous battles up and down the coast followed. The British seized and held the principal cities but were unable to inflict a decisive defeat on Washington's troops. The entry of France into the war on the American side eventually tipped the balance. On 19 October 1781, the British commander, Cornwallis, cut off from reinforcements by the French fleet on one side and besieged by French and American forces on the other, surrendered his army at Yorktown, Va. American independence was acknowledged by the British in a treaty of peace signed in Paris on 3 September 1783. The first constitution uniting the 13 original states—the Articles of Confederation—reflected all the suspicions that Americans entertained about a strong central government. Congress was denied power to raise taxes or regulate commerce, and many of the powers it was authorized to exercise required the approval of a minimum of nine states. Dissatisfaction with the Articles of Confederation was aggravated by the hardships of a postwar depression, and in 1787—the same year that Congress passed the Northwest Ordinance, providing for the organization of new territories and states on the frontier—a convention assembled in Philadelphia to revise the articles. The convention adopted an altogether new constitution, the present Constitution of the United States, which greatly increased the powers of the central government at the expense of the states. This document was ratified by the states with the understanding that it would be amended to include a bill of rights guaranteeing certain fundamental freedoms. 
These freedoms—including the rights of free speech, press, and assembly, freedom from unreasonable search and seizure, and the right to a speedy and public trial by an impartial jury—are assured by the first 10 amendments to the constitution, adopted on 15 December 1791; the constitution did, however, recognize slavery, and did not provide for universal suffrage. On 30 April 1789 George Washington was inaugurated as the first president of the United States. During Washington's administration, the credit of the new nation was bolstered by acts providing for a revenue tariff and an excise tax; opposition to the excise on whiskey sparked the Whiskey Rebellion, suppressed on Washington's orders in 1794. Alexander Hamilton's proposals for funding the domestic and foreign debt and permitting the national government to assume the debts of the states were also implemented. Hamilton, the secretary of the treasury, also created the first national bank, and was the founder of the Federalist Party. Opposition to the bank as well as to the rest of the Hamiltonian program, which tended to favor northeastern commercial and business interests, led to the formation of an anti-Federalist party, the Democratic-Republicans, led by Thomas Jefferson. The Federalist Party, to which Washington belonged, regarded the French Revolution as a threat to security and property; the Democratic-Republicans, while condemning the violence of the revolutionists, hailed the overthrow of the French monarchy as a blow to tyranny. The split of the nation's leadership into rival camps was the first manifestation of the two-party system, which has since been the dominant characteristic of the US political scene. (Jefferson's party should not be confused with the modern Republican Party, formed in 1854.) The 1800 election brought the defeat of Federalist President John Adams, Washington's successor, by Jefferson; a key factor in Adams's loss was the unpopularity of the Alien and Sedition Acts (1798), Federalist-sponsored measures that had abridged certain freedoms guaranteed in the Bill of Rights. In 1803, Jefferson achieved the purchase from France of the Louisiana Territory, including all the present territory of the United States west of the Mississippi drained by that river and its tributaries; exploration and mapping of the new territory, notably through the expeditions of Meriwether Lewis and William Clark, began almost immediately. Under Chief Justice John Marshall, the US Supreme Court established the principle of federal supremacy in conflicts with the states and, in the landmark case of Marbury v. Madison, enunciated the doctrine of judicial review. During Jefferson's second term in office, the United States became involved in a protracted struggle between Britain and Napoleonic France. Seizures of US ships and the impressment of US seamen by the British navy led the administration to pass the Embargo Act of 1807, under which no US ships were to put out to sea. After the act was repealed in 1809, ship seizures and impressment of seamen by the British continued, and were the ostensible reasons for the declaration of war on Britain in 1812 during the administration of James Madison. An underlying cause of the War of 1812, however, was land-hungry Westerners' coveting of southern Canada as potential US territory. The war was largely a standoff. A few surprising US naval victories countered British successes on land.
The Treaty of Ghent (24 December 1814), which ended the war, made no mention of impressment and provided for no territorial changes. The occasion for further maritime conflict with Britain, however, disappeared with the defeat of Napoleon in 1815. Now the nation became occupied primarily with domestic problems and westward expansion. Because the United States had been cut off from its normal sources of manufactured goods in Great Britain during the war, textiles and other industries developed and prospered in New England. To protect these infant industries, Congress adopted a high-tariff policy in 1816. Three events of the late 1810s and the 1820s were of considerable importance for the future of the country. The federal government in 1817 began a policy of forcibly resettling the Indians, already decimated by war and disease, in what later became known as Indian Territory (now Oklahoma); those Indians not forced to move were restricted to reservations. The Missouri Compromise (1820) was an attempt to find a nationally acceptable solution to the volatile dispute over the extension of black slavery to new territories. It provided for admission of Missouri into the Union as a slave state but banned slavery in territories to the west that lay north of 36º30´. As a result of the establishment of independent Latin American republics and threats by France and Spain to reestablish colonial rule, President James Monroe in 1823 asserted that the Western Hemisphere was closed to further colonization by European powers. The Monroe Doctrine declared that any effort by such powers to recover territories whose independence the United States had recognized would be regarded as an unfriendly act. From the 1820s to the outbreak of the Civil War, the growth of manufacturing continued, mainly in the North, and was accelerated by inventions and technological advances. Farming expanded with westward migration. The South discovered that its future lay in the cultivation of cotton. The cotton gin, invented by Eli Whitney in 1793, greatly simplified the problems of production; the growth of the textile industry in New England and Great Britain assured a firm market for cotton. Hence, during the first half of the 19th century, the South remained a fundamentally agrarian society based increasingly on a one-crop economy. Large numbers of field hands were required for cotton cultivation, and black slavery became solidly entrenched in the southern economy. The construction of roads and canals paralleled the country's growth and economic expansion. The successful completion of the Erie Canal (1825), linking the Great Lakes with the Atlantic, ushered in a canal-building boom. Railroad building began in earnest in the 1830s, and by 1840, about 3,300 mi (5,300 km) of track had been laid. The development of the telegraph a few years later gave the nation the beginnings of a modern telecommunications network. As a result of the establishment of the factory system, a laboring class appeared in the North by the 1830s, bringing with it the earliest unionization efforts. Western states admitted into the Union following the War of 1812 provided for free white male suffrage without property qualifications and helped spark a democratic revolution. As eastern states began to broaden the franchise, mass appeal became an important requisite for political candidates. 
The election to the presidency in 1828 of Andrew Jackson, a military hero and Indian fighter from Tennessee, was no doubt a result of this widening of the democratic process. By this time, the United States consisted of 24 states and had a population of nearly 13 million. The relentless westward thrust of the United States population ultimately involved the United States in foreign conflict. In 1836, US settlers in Texas revolted against Mexican rule and established an independent republic. Texas was admitted to the Union as a state in 1845, and relations between Mexico and the United States steadily worsened. A dispute arose over the southern boundary of Texas, and a Mexican attack on a US patrol in April 1846 gave President James K. Polk a pretext to declare war. After a rapid advance, US forces captured Mexico City, and on 2 February 1848, Mexico formally gave up the unequal fight by signing the Treaty of Guadalupe Hidalgo, providing for the cession of California and the territory of New Mexico to the United States. With the Gadsden Purchase of 1853, the United States acquired from Mexico for $10 million large strips of land forming the balance of southern Arizona and New Mexico. A dispute with Britain over the Oregon Territory was settled in 1846 by a treaty that established the 49th parallel as the boundary with Canada. Thenceforth the United States was to be a Pacific as well as an Atlantic power. Westward expansion exacerbated the issue of slavery in the territories. By 1840, abolition of slavery constituted a fundamental aspect of a movement for moral reform, which also encompassed women's rights, universal education, alleviation of working class hardships, and temperance. In 1849, a year after the discovery of gold had precipitated a rush of new settlers to California, that territory (whose constitution prohibited slavery) demanded admission to the Union. A compromise engineered in Congress by Senator Henry Clay in 1850 provided for California's admission as a free state in return for various concessions to the South. But enmities dividing North and South could not be silenced. The issue of slavery in the territories came to a head with the Kansas-Nebraska Act of 1854, which repealed the Missouri Compromise and left the question of slavery in those territories to be decided by the settlers themselves. The ensuing conflicts in Kansas between northern and southern settlers earned the territory the name "bleeding Kansas." In 1860, the Democratic Party, split along northern and southern lines, offered two presidential candidates. The new Republican Party, organized in 1854 and opposed to the expansion of slavery, nominated Abraham Lincoln. Owing to the defection in Democratic ranks, Lincoln was able to carry the election in the electoral college, although he did not obtain a majority of the popular vote. To ardent supporters of slavery, Lincoln's election provided a reason for immediate secession. Between December 1860 and February 1861, the seven states of the Deep South—South Carolina, Mississippi, Florida, Alabama, Georgia, Louisiana, and Texas—withdrew from the Union and formed a separate government, known as the Confederate States of America, under the presidency of Jefferson Davis. The secessionists soon began to confiscate federal property in the South. On 12 April 1861, the Confederates opened fire on Ft. Sumter in the harbor of Charleston, S.C., and thus precipitated the US Civil War.
Following the outbreak of hostilities, Arkansas, North Carolina, Virginia, and Tennessee joined the Confederacy. For the next four years, war raged between the Confederate and Union forces, largely in southern territories. An estimated 360,000 men in the Union forces died of various causes, including 110,000 killed in battle. Confederate dead were estimated at 250,000, including 94,000 killed in battle. The North, with great superiority in manpower and resources, finally prevailed. A Confederate invasion of the North was repulsed at the battle of Gettysburg, Pennsylvania, in July 1863; a Union army took Atlanta in September 1864; and Confederate forces evacuated Richmond, the Confederate capital, in early April 1865. With much of the South in Union hands, Confederate Gen. Robert E. Lee surrendered to Gen. Ulysses S. Grant at Appomattox Courthouse in Virginia on 9 April. The outcome of the war brought great changes in US life. Lincoln's Emancipation Proclamation of 1863 was the initial step in freeing some 4 million black slaves; their liberation was completed soon after the war's end by amendments to the Constitution. Lincoln's plan for the reconstruction of the rebellious states was compassionate, but only five days after Lee's surrender, Lincoln was assassinated by John Wilkes Booth as part of a conspiracy in which US Secretary of State William H. Seward was seriously wounded. During the Reconstruction era (1865–77), the defeated South was governed by Union Army commanders, and the resultant bitterness of southerners toward northern Republican rule, which enfranchised blacks, persisted for years afterward. Vice President Andrew Johnson, who succeeded Lincoln as president, tried to carry out Lincoln's conciliatory policies but was opposed by radical Republican leaders in Congress, who demanded harsher treatment of the South. On the pretext that he had failed to carry out an act of Congress, the House of Representatives voted to impeach Johnson in 1868, but the Senate failed by one vote to convict him and remove him from office. It was during Johnson's presidency that Secretary of State Seward negotiated the purchase of Alaska (which attained statehood in 1959) from Russia for $7.2 million. The efforts of southern whites to regain political control of their states led to the formation of terrorist organizations like the Ku Klux Klan, which employed violence to prevent blacks from voting. By the end of the Reconstruction era, whites had reestablished their political domination over blacks in the southern states and had begun to enforce patterns of segregation in education and social organization that were to last for nearly a century. In many southern states, the decades following the Civil War were ones of economic devastation, in which rural whites as well as blacks were reduced to sharecropper status. Outside the South, however, a great period of economic expansion began. Transcontinental railroads were constructed, corporate enterprise spurted ahead, and the remaining western frontier lands were rapidly occupied and settled. The age of big business tycoons dawned. As heavy manufacturing developed, Pittsburgh, Chicago, and New York emerged as the nation's great industrial centers. The Knights of Labor, founded in 1869, engaged in numerous strikes, and violent conflicts between strikers and strikebreakers were common. The American Federation of Labor, founded in 1886, established a nationwide system of craft unionism that remained dominant for many decades. 
During this period, too, the woman's rights movement organized actively to secure the vote (although woman's suffrage was not enacted nationally until 1920), and groups outraged by the depletion of forests and wildlife in the West pressed for the conservation of natural resources. During the latter half of the 19th century, the acceleration of westward expansion made room for millions of immigrants from Europe. The country's population grew to more than 76 million by 1900. As homesteaders, prospectors, and other settlers tamed the frontier, the federal government forced Indians west of the Mississippi to cede vast tracts of land to the whites, precipitating a series of wars with various tribes. By 1890, only 250,000 Indians remained in the United States, virtually all of them residing on reservations. The 1890s marked the closing of the United States frontier for settlement and the beginning of US overseas expansion. By 1893, Hawaiian sugar planters of US origin had become strong enough to bring about the downfall of the native queen and to establish a republic, which in 1898, at its own request, was annexed as a territory by the United States. The sympathies of the United States with the Cuban nationalists who were battling for independence from Spain were aroused by a lurid press and by expansionist elements. A series of events climaxed by the sinking of the USS Maine in Havana harbor finally forced a reluctant President William McKinley to declare war on Spain on 25 April 1898. US forces overwhelmed those of Spain in Cuba, and as a result of the Spanish-American War, the United States added to its territories the Philippines, Guam, and Puerto Rico. A newly independent Cuba was drawn into the United States orbit as a virtual protectorate through the 1950s. Many eminent citizens saw these new departures into imperialism as a betrayal of the time-honored US doctrine of government by the consent of the governed. With the marked expansion of big business came increasing protests against the oppressive policies of large corporations and their dominant role in the public life of the nation. A demand emerged for strict control of monopolistic business practice through the enforcement of antitrust laws. Two US presidents, Theodore Roosevelt (1901–09), a Republican, and Woodrow Wilson (1913–21), a Democrat, approved of the general movement for reform, which came to be called progressivism. Roosevelt developed a considerable reputation as a trustbuster, while Wilson's program, known as the New Freedom, called for reform of tariffs, business procedures, and banking. During Roosevelt's first term, the United States leased the Panama Canal Zone and started construction of a 42-mi (68-km) canal, completed in 1914. US involvement in World War I marked the country's active emergence as one of the great powers of the world. When war broke out in 1914 between Germany, Austria-Hungary, and Turkey on one side and Britain, France, and Russia on the other, sentiment in the United States was strongly opposed to participation in the conflict, although a large segment of the American people sympathized with the British and the French. While both sides violated US maritime rights on the high seas, the Germans, enmeshed in a British blockade, resorted to unrestricted submarine warfare. On 6 April 1917, Congress declared war on Germany. Through a national draft of all able-bodied men between the ages of 18 and 45, some 4 million US soldiers were trained, of whom more than 2 million were sent overseas to France.
By late 1917, when US troops began to take part in the fighting on the western front, the European armies were approaching exhaustion, and US intervention may well have been decisive in ensuring the eventual victory of the Allies. In a series of great battles in which US soldiers took an increasingly major part, the German forces were rolled back in the west, and in the autumn of 1918 were compelled to sue for peace. Fighting ended with the armistice of 11 November 1918. President Wilson played an active role in drawing up the 1919 Versailles peace treaty, which embodied his dream of establishing a League of Nations to preserve the peace, but the isolationist bloc in the Senate was able to prevent US ratification of the treaty. In the 1920s, the United States had little enthusiasm left for crusades, either for democracy abroad or for reform at home; a rare instance of idealism in action was the Kellogg-Briand Pact (1928), an antiwar accord negotiated on behalf of the United States by Secretary of State Frank B. Kellogg. In general, however, the philosophy of the Republican administrations from 1921 to 1933 was expressed in the aphorism "The business of America is business," and the 1920s saw a great business boom. The years 1923–24 also witnessed the unraveling of the Teapot Dome scandal: the revelation that President Warren G. Harding's secretary of the interior, Albert B. Fall, had secretly leased federal oil reserves in California and Wyoming to private oil companies in return for gifts and loans. The great stock market crash of October 1929 ushered in the most serious and most prolonged economic depression the country had ever known. By 1933, an estimated 12 million men and women were out of work; personal savings were wiped out on a vast scale through a disastrous series of corporate bankruptcies and bank failures. Relief for the unemployed was left to private charities and local governments, which were incapable of handling the enormous task. The inauguration of the successful Democratic presidential candidate, Franklin D. Roosevelt, in March 1933 ushered in a new era of US history, in which the federal government was to assume a much more prominent role in the nation's economic affairs. Proposing to give the country a "New Deal," Roosevelt accepted national responsibility for alleviating the hardships of unemployment; relief measures were instituted, work projects were established, and deficit spending was accepted in preference to ignoring public distress. The federal Social Security program was inaugurated, as were various measures designed to stimulate and develop the economy through federal intervention. Unions were strengthened through the National Labor Relations Act, which established the right of employees' organizations to bargain collectively with employers. Union membership increased rapidly, and the dominance of the American Federation of Labor was challenged by the newly formed Congress of Industrial Organizations, which organized workers along industrial lines. The depression of the 1930s was worldwide, and certain nations attempted to counter economic stagnation by building large military establishments and embarking on foreign adventures. Following German, Italian, and Japanese aggression, World War II broke out in Europe during September 1939. In 1940, Roosevelt, disregarding a tradition dating back to Washington that no president should serve more than two terms, ran again for reelection.
He easily defeated his Republican opponent, Wendell Willkie, who, along with Roosevelt, advocated increased rearmament and all possible aid to victims of aggression. The United States was brought actively into the war by the Japanese attack on the Pearl Harbor naval base in Hawaii on 7 December 1941. The forces of Germany, Italy, and Japan were now arrayed over a vast theater of war against those of the United States and the British Commonwealth; in Europe, Germany was locked in a bloody struggle with the Soviet Union. US forces waged war across the vast expanses of the Pacific, in Africa, in Asia, and in Europe. Italy surrendered in 1943; Germany was successfully invaded in 1944 and conquered in May 1945; and after the United States dropped the world's first atomic bombs on Hiroshima and Nagasaki, the Japanese capitulated in August. The Philippines became an independent republic soon after the war, but the United States retained most of its other Pacific possessions, with Hawaii becoming the 50th state in 1959. Roosevelt, who had been elected to a fourth term in 1944, died in April 1945 and was succeeded by Harry S Truman, his vice president. Under the Truman administration, the United States became an active member of the new world organization, the United Nations. The Truman administration embarked on large-scale programs of military aid and economic support to check the expansion of communism. Aid to Greece and Turkey in 1947 and the Marshall Plan, a program designed to accelerate the economic recovery of Western Europe, were outstanding features of US postwar foreign policy. The North Atlantic Treaty (1949) established a defensive alliance among a number of West European nations and the United States. Truman's Point Four program gave technical and scientific aid to developing nations. Following the North Korean attack on South Korea on 25 June 1950, the UN Security Council resolved that members of the UN should proceed to the aid of South Korea, and US naval, air, and ground forces were immediately dispatched by President Truman. An undeclared war ensued, which eventually was brought to a halt by an armistice signed on 27 July 1953. In 1952, Dwight D. Eisenhower, supreme commander of Allied forces in Europe during World War II, was elected president on the Republican ticket, thereby bringing to an end 20 years of Democratic presidential leadership. In foreign affairs, the Eisenhower administration continued the Truman policy of containing the USSR and threatened "massive retaliation" in the event of Soviet aggression, thus heightening the Cold War between the world's two great nuclear powers. Although Republican domestic policies were more conservative than those of the Democrats, the Eisenhower administration extended certain major social and economic programs of the Roosevelt and Truman administrations, notably Social Security and public housing. The early years of the Eisenhower administration were marked by agitation (arising in 1950) over charges of Communist and other allegedly subversive activities in the United States—a phenomenon known as McCarthyism, after Republican Senator Joseph R. McCarthy of Wisconsin, who aroused much controversy with unsubstantiated allegations that Communists had penetrated the US government, especially the Army and the Department of State. Even those who personally opposed McCarthy lent their support to the imposition of loyalty oaths and the blacklisting of persons with left-wing backgrounds.
A major event of the Eisenhower years was the US Supreme Court's decision in Brown v. Board of Education of Topeka (1954) outlawing segregation of whites and blacks in public schools. In the aftermath of this ruling, desegregation proceeded slowly and painfully. In the early 1960s, sit-ins, "freedom rides," and similar expressions of nonviolent resistance by blacks and their sympathizers led to a lessening of segregation practices in public facilities. Under Chief Justice Earl Warren, the high court in 1962 mandated the reapportionment of state and federal legislative districts according to a "one person, one vote" formula. It also broadly extended the rights of defendants in criminal trials to include the provision of a defense lawyer at public expense for an accused person unable to afford one, and established the duty of police to advise an accused person of his or her legal rights immediately upon arrest. In the early 1960s, during the administration of Eisenhower's Democratic successor, John F. Kennedy, the Cold War heated up as Cuba, under the regime of Fidel Castro, aligned itself with the Soviet Union. Attempts by anti-Communist Cuban exiles to invade their homeland in the spring of 1961 failed despite US aid. In October 1962, President Kennedy successfully forced a showdown with the Soviet Union over Cuba in demanding the withdrawal of Soviet-supplied "offensive weapons"—missiles—from the nearby island. On 22 November 1963, President Kennedy was assassinated while riding in a motorcade through Dallas, Texas; hours later, Vice President Lyndon B. Johnson was inaugurated president. In the November 1964 elections, Johnson overwhelmingly defeated his Republican opponent, Barry M. Goldwater, and embarked on a vigorous program of social legislation unprecedented since Roosevelt's New Deal. His "Great Society" program sought to ensure black Americans' rights in voting and public housing, to give the underprivileged job training, and to provide persons 65 and over with hospitalization and other medical benefits (Medicare). Measures ensuring equal opportunity for minority groups may have contributed to the growth of the woman's rights movement in the late 1960s. This same period also saw the growth of a powerful environmental protection movement. US military and economic aid to anti-Communist forces in Vietnam, which had its beginnings during the Truman administration (while Vietnam was still part of French Indochina) and was increased gradually by presidents Eisenhower and Kennedy, escalated in 1965. In that year, President Johnson sent US combat troops to South Vietnam and ordered US bombing raids on North Vietnam, after Congress (in the Gulf of Tonkin Resolution of 1964) had given him practically carte blanche authority to wage war in that region. By the end of 1968, American forces in Vietnam numbered 536,100 men, but US military might was unable to defeat the Vietnamese guerrillas, and the American people were badly split over continuing the undeclared (and, some thought, ill-advised or even immoral) war, with its high price in casualties and materiel. Reacting to widespread dissatisfaction with his Vietnam policies, Johnson withdrew in March 1968 from the upcoming presidential race, and in November, Republican Richard M. Nixon, who had been the vice president under Eisenhower, was elected president. 
Thus, the Johnson years—which had begun with the new hopes of a Great Society but had soured with a rising tide of racial violence in US cities and the assassinations of civil rights leader Martin Luther King, Jr., and US senator Robert F. Kennedy, among others—drew to a close. President Nixon gradually withdrew US ground troops from Vietnam but expanded aerial bombardment throughout Indochina, and the increasingly unpopular and costly war continued for four more years before a cease-fire—negotiated by Nixon's national security adviser, Henry Kissinger—was finally signed on 27 January 1973 and the last US soldiers were withdrawn. The most protracted conflict in American history had resulted in 46,163 US combat deaths and 303,654 wounded soldiers, and had cost the US government $112 billion in military allocations. Two years later, the South Vietnamese army collapsed, and the North Vietnamese Communist regime united the country. In 1972, during the last year of his first administration, Nixon initiated the normalization of relations—ruptured in 1949—with the People's Republic of China and signed a strategic arms limitation agreement with the Soviet Union as part of a Nixon-Kissinger policy of pursuing détente with both major Communist powers. (Earlier, in July 1969, American technology had achieved a national triumph by landing the first astronaut on the moon.) The Nixon administration sought to muster a "silent majority" in support of its Indochina policies and its conservative social outlook in domestic affairs. The most momentous domestic development, however, was the Watergate scandal, which began on 17 June 1972 with the arrest of five men associated with Nixon's reelection campaign, during a break-in at Democratic Party headquarters in the Watergate office building in Washington, D.C. Although Nixon was reelected in 1972, subsequent disclosures by the press and by a Senate investigating committee revealed a complex pattern of political "dirty tricks" and illegal domestic surveillance throughout his first term. The president's apparent attempts to obstruct justice by helping his aides cover up the scandal were confirmed by tape recordings (made by Nixon himself) of his private conversations, which the Supreme Court ordered him to release for use as evidence in criminal proceedings. The House voted to begin impeachment proceedings, and in late July 1974, its Judiciary Committee approved three articles of impeachment. On 9 August, Nixon became the first president to resign the office. The following year, Nixon's top aides and former attorney general, John N. Mitchell, were convicted of obstruction and were subsequently sentenced to prison. Nixon's successor was Gerald R. Ford, who in October 1973 had been appointed to succeed Vice President Spiro T. Agnew when Agnew resigned following his plea of nolo contendere to charges that he had evaded paying income tax on moneys he had received from contractors while governor of Maryland. Less than a month after taking office, President Ford granted a full pardon to Nixon for any crimes he may have committed as president. In August 1974, Ford nominated Nelson A. Rockefeller as vice president (he was not confirmed until December), thus giving the country the first instance of a nonelected president and an appointed vice president serving simultaneously. Ford's pardon of Nixon, as well as continued inflation and unemployment, probably contributed to his narrow defeat by a Georgia Democrat, Jimmy Carter, in 1976. 
President Carter's forthright championing of human rights—though consistent with the Helsinki accords, the "final act" of the Conference on Security and Cooperation in Europe, signed by the United States and 34 other nations in August 1975—contributed to strained relations with the USSR and with some US allies. During 1977–78, the president concluded and secured Senate passage of treaties ending US sovereignty over the Panama Canal Zone. His major accomplishment in foreign affairs, however, was his role in mediating a peace agreement between Israel and Egypt, signed at the Camp David, Md., retreat in September 1978. Domestically, the Carter administration initiated a national energy program to reduce US dependence on foreign oil by cutting gasoline and oil consumption and by encouraging the development of alternative energy resources. But the continuing decline of the economy because of double-digit inflation and high unemployment caused his popularity to wane, and confusing shifts in economic policy (coupled with a lack of clear goals in foreign affairs) characterized his administration during 1979 and 1980; a prolonged quarrel with Iran over more than 50 US hostages seized in Tehran on 4 November 1979 contributed to public doubts about his presidency. Exactly a year after the hostages were taken, former California Governor Ronald Reagan defeated Carter in an election that saw the Republican Party score major gains throughout the United States. The hostages were released on 20 January 1981, the day of Reagan's inauguration. Reagan, who survived a chest wound from an assassination attempt in Washington, D.C., in 1981, used his popularity to push through significant policy changes. He succeeded in enacting income tax cuts of 25%, reducing the maximum tax rate on unearned income from 70% to 50%, and accelerating depreciation allowances for businesses. At the same time, he more than doubled the military budget, in constant 1985 dollars, between 1980 and 1989. Vowing to reduce domestic spending, Reagan cut benefits for the working poor, reduced allocations for food stamps and Aid to Families With Dependent Children by 13%, and decreased grants for the education of disadvantaged children. He slashed the budget of the Environmental Protection Agency and instituted a flat-rate reimbursement system for the treatment of Medicare patients with particular illnesses, replacing a more flexible arrangement in which hospitals had been reimbursed for "reasonable charges." Reagan's appointment of Sandra Day O'Connor as the first woman justice of the Supreme Court was widely praised and won unanimous confirmation from the Senate. However, some of his other high-level choices were extremely controversial—none more so than that of his secretary of the interior, James G. Watt, who finally resigned in October 1983. To direct foreign affairs, Reagan named Alexander M. Haig, Jr., former NATO supreme commander for Europe, to the post of secretary of state; Haig, who clashed frequently with other administration officials, resigned in June 1982 and was replaced by George P. Shultz. In framing his foreign and defense policy, Reagan insisted on a military buildup as a precondition for arms-control talks with the USSR. His administration sent money and advisers to help the government of El Salvador in its war against leftist rebels, and US advisers were also sent to Honduras, reportedly to aid groups of Nicaraguans trying to overthrow the Sandinista government in their country.
Troops were also dispatched to Lebanon in September 1982, as part of a multinational peacekeeping force in Beirut, and to Grenada in October 1983 to oust a leftist government there. Reelected in 1984, President Reagan embarked on his second term with a legislative agenda that included reduction of federal budget deficits (which had mounted rapidly during his first term in office), further cuts in domestic spending, and reform of the federal tax code. In military affairs, Reagan persuaded Congress to fund on a modest scale his Strategic Defense Initiative, commonly known as Star Wars, a highly complex and extremely costly space-based antimissile system. In late 1986, the downing of an aircraft carrying arms to Nicaragua led to the disclosure that a group of National Security Council members had secretly diverted $48 million that the federal government had received in payment from Iran for American arms to rebel forces in Nicaragua. The disclosure prompted the resignation of Vice Admiral John Poindexter and the dismissal of Lieutenant Colonel Oliver North, two of the group's leaders, as well as investigations by House and Senate committees and a special prosecutor, Lawrence Walsh. The congressional investigations found no conclusive evidence that Reagan had authorized or known of the diversion. Yet they noted that because Reagan had approved of the sale of arms to Iran and had encouraged his staff to assist Nicaraguan rebels despite the prohibition of such assistance by Congress, "the President created or at least tolerated an environment where those who did know of the diversion believed with certainty that they were carrying out the President's policies." Reagan was succeeded in 1989 by his vice president, George Bush, who, benefiting from a prolonged economic expansion, had handily defeated Michael Dukakis, governor of Massachusetts and a liberal Democrat, in the 1988 election. On domestic issues, Bush sought to maintain policies introduced by the Reagan administration. His few legislative initiatives included the passage of legislation establishing strict regulation of air pollution, providing subsidies for child care, and protecting the rights of the disabled. Abroad, Bush showed more confidence and energy. While he responded cautiously to revolutions in Eastern Europe and the Soviet Union, he used his personal relationships with foreign leaders to bring about comprehensive peace talks between Israel and its Arab neighbors, to encourage a peaceful unification of Germany, and to negotiate broad and substantial arms cuts with the Russians. Bush reacted to Iraq's invasion of Kuwait in 1990 by sending 400,000 soldiers to form the basis of a multinational coalition, which he assembled and which destroyed Iraq's main force within seven months. This conflict became known as the Gulf War. One of the biggest crises that the Bush administration encountered was the collapse of the savings and loan industry in the late eighties. Thrift institutions were required by law to pay low interest rates for deposits and long-term loans. The creation of money market funds for the small investor in the eighties, which paid higher rates of return than savings accounts, prompted depositors to withdraw their money from banks and invest it in the higher-yielding mutual funds. To finance the withdrawals, banks began selling assets at a loss.
The deregulation of the savings and loan industry, combined with the increase in federal deposit insurance from $40,000 to $100,000 per account, encouraged many desperate savings institutions to invest in high-risk real-estate ventures, for which no state supervision or regulation existed. When the majority of such ventures predictably failed, the federal government found itself compelled by law to rescue the thrifts. It is estimated that this will cost taxpayers $345 billion, in settlements that will continue through 2029. In his bid for reelection in 1992, Bush faced not only Democratic nominee Bill Clinton, Governor of Arkansas, but also third-party candidate Ross Perot, a Dallas billionaire who had made his fortune in the computer industry. In contrast to Bush's first run for the presidency, when the nation had enjoyed an unusually long period of economic expansion, the economy in 1992 was just beginning to recover from a recession. Although data released the following year indicated that a healthy rebound had already begun in 1992, the public perceived the economy during the election year as weak. Clinton took advantage of this perception in his campaign, focusing on the financial concerns of what he called "the forgotten middle class." He also took a more centrist position on many issues than more traditional Democrats, promising fiscal responsibility and economic growth. Clinton defeated Bush, winning 43% of the vote to Bush's 38%. Perot garnered 18% of the vote. At its outset, Clinton's presidency was plagued by numerous setbacks, most notably the failure of his controversial health care reform plan, drawn up under the leadership of first lady Hillary Rodham Clinton. Major accomplishments included the passage, by a narrow margin, of a deficit-reduction bill calling for tax increases and spending cuts, and congressional approval of the North American Free Trade Agreement, which removed or reduced tariffs on most goods moving across the borders of the United States, Canada, and Mexico. Although supporters and critics agreed that the treaty would create or eliminate relatively few jobs—two hundred thousand—the accord prompted heated debate. Labor strenuously opposed the agreement, seeing it as accelerating the flight of factory jobs to countries with low labor costs such as Mexico, the third largest trading partner of the United States. Business, on the other hand, lobbied heavily for the treaty, arguing that it would create new markets for American goods and insisting that competition from Mexico would benefit the American economy. By the fall of 1994, many American workers, still confronting stagnating wages, benefits, and living standards, had yet to feel the effects of the nation's recovery from the recession of 1990–91. The resulting disillusionment with the actions of the Clinton administration and the Democrat-controlled Congress, combined with the widespread climate of social conservatism resulting from a perceived erosion of traditional moral values, led to an overwhelming upset by the Republican Party in the 1994 midterm elections. The GOP gained control of both houses of Congress for the first time in over 40 years, also winning 11 gubernatorial races, for control of a total of 30 governorships nationwide. The Republican agenda—increased defense spending and cuts in taxes, social programs, and farm subsidies—had been popularized under the label "Contract with America," the title of a manifesto circulated during the campaign.
The ensuing confrontation between the nation's Democratic president and Republican-controlled Congress came to a head at the end of 1995, when Congress responded to presidential vetoes of appropriations and budget bills by refusing to pass stopgap spending measures, resulting in major shutdowns of the federal government in November and December. The following summer, however, the president and Congress joined forces to reform the welfare system through a bill replacing Aid to Families with Dependent Children with block grants through which welfare funding would largely become the province of the states. The nation's economic recovery gained strength as the decade advanced, with healthy growth, falling unemployment, and moderate interest and inflation levels. Public confidence in the economy was reflected in a bull market on the stock exchange, which gained 60% between 1995 and 1997. Bolstered by a favorable economy at home and peace abroad, Clinton's faltering popularity rebounded, and in 1996 he became the first Democratic president elected to a second term since Franklin D. Roosevelt in 1936, defeating the Republican candidate, former Senate majority leader Robert Dole, and Independent Ross Perot, whose electoral support was greatly reduced from its 1992 level. The Republicans retained control of both houses of Congress. In 1997, President Clinton signed into law a bipartisan budget plan designed to balance the federal budget by 2002 for the first time since 1969, through a combination of tax and spending cuts. In 1998–99, the federal government experienced two straight years of budget surpluses. In 1998, special prosecutor Kenneth Starr submitted a report to Congress that resulted in the House of Representatives approving two articles of impeachment against President Clinton. In the subsequent trial in the Senate, the articles were defeated. Regulation of the three large financial industries underwent significant change in late 1999. The Gramm-Leach-Bliley Act (also known as the Financial Modernization Act), passed by Congress in November 1999, cleared the way for banks, insurance companies, and securities companies to sell each other's services and to engage in merger and acquisition activity. Prior to the act's passage, activities of the banking, insurance, and securities industries were strictly limited by the Glass-Steagall Act of 1933, key provisions of which Gramm-Leach-Bliley repealed. Health care issues received significant attention in 2000. On 23 November 1998, 46 states and the District of Columbia together reached a settlement with the large US tobacco companies over compensation for smoking-related health-care costs incurred by the states. Payments to the states, totaling $206 billion, were scheduled to be made over 25 years beginning in 1999. As of 2000, 44 states and the District of Columbia had passed Patients' Rights legislation; 39 passed legislation allowing Medicaid to pay for assisted-living care in qualifying cases; and all 50 states and the District of Columbia passed Children's Health Insurance Programs (CHIP) legislation to provide health care to children in low-income families. The strong economy continued through the late 1990s and into 2000. Economic expansion set a record for longevity, and—except for higher gasoline prices during summer 2000, stemming from higher crude oil prices—inflation continued to be relatively low.
By 2000, there was additional evidence that productivity growth had improved substantially since the mid-1990s, boosting living standards while helping to hold down increases in costs and prices despite very tight labor markets. In 2000, Hispanics replaced African Americans as the largest minority group in the United States. (Hispanics numbered 35.3 million in 2000, or 12.5% of the population, compared with 34.7 million blacks, or 12.3% of the population.) The 2000 presidential election was one of the closest in US history, pitting Democratic Vice President Al Gore against Republican Party candidate George W. Bush, son of former President George H. W. Bush. The vote count in Florida became the determining factor in the 7 November election, as each candidate needed to obtain the state's 25 electoral college votes in order to capture the 270 needed to win the presidency. When in the early hours of 8 November Bush appeared to have won the state's 25 votes, Gore called Bush to concede the election. He soon retracted the concession, however, after the extremely thin margin of victory triggered an automatic recount of the vote in Florida. The Democrats subsequently mounted a series of legal challenges to the vote count in Florida, which favored Bush. Eventually, the US Supreme Court, in Bush v. Gore, was called upon to rule on the election. On 12 December 2000, the US Supreme Court, divided 5–4, reversed the Florida state supreme court decision that had ordered new recounts called for by Al Gore. George W. Bush was declared president. Gore had won the popular vote, however, capturing 48.4% of votes cast to Bush's 47.9%. Once inaugurated, Bush called education his top priority, stating that "no child should be left behind" in America. He affirmed support for Medicare and Social Security, and called for pay and benefit increases for the military. He called upon charities and faith-based and community groups to aid the disadvantaged. Bush announced a $1.6 trillion tax cut plan (subsequently reduced to $1.35 trillion) in his first State of the Union Address as an economic stimulus package designed to respond to an economy that had begun to falter. He called for research and development of a missile-defense program, and warned of the threat of international terrorism. The threat of international terrorism was made all too real on 11 September 2001, when 19 hijackers crashed 4 passenger aircraft into the North and South towers of the World Trade Center, the Pentagon, and a field in Stony Creek Township in Pennsylvania. The World Trade Center towers were destroyed. As of 7 September 2002, 3,044 people were presumed dead as a result of all four 11 September 2001 attacks. The terrorist organization al-Qaeda, led by Saudi citizen Osama bin Laden, was believed to be responsible for the attacks, and a manhunt for bin Laden began. On 7 October 2001, the United States and Britain launched air strikes against known terrorist training camps and military installations within Afghanistan, ruled by the Taliban regime that supported the al-Qaeda organization. The air strikes were supported by leaders of the European Union and Russia, as well as other nations. By December 2001, the Taliban were defeated, and Afghan leader Hamid Karzai was chosen to lead an interim administration for the country.
Remnants of al-Qaeda remained in Afghanistan and the surrounding region, and a year after the 2001 offensive more than 10,000 US soldiers remained in Afghanistan to suppress efforts by either the Taliban or al-Qaeda to regroup. As of mid-2003, Allied soldiers continued to come under periodic attack in Afghanistan. In response to the 11 September 2001 terrorist attacks, the US Congress that October approved the USA Patriot Act, proposed by the Bush administration. The act gave the government greater powers to detain suspected terrorists and immigrants, to counter money-laundering, and to increase surveillance by domestic law enforcement and international intelligence agencies. Critics claimed the law did not provide for the system of checks and balances that safeguards civil liberties in the United States. Beginning in late 2001, corporate America suffered a crisis of confidence. In December 2001, the energy giant Enron Corporation declared bankruptcy after massive false accounting practices came to light. Eclipsing the Enron scandal, telecommunications giant WorldCom in June 2002 disclosed that it had hidden $3.8 billion in expenses over 15 months. The fraud led to WorldCom's bankruptcy, the largest in US history (the company had $107 billion in assets). In his January 2002 State of the Union Address, President Bush announced that Iran, Iraq, and North Korea constituted an "axis of evil," sponsoring terrorism and threatening the United States and its allies with weapons of mass destruction. Throughout 2002, the United States pressed its case against Iraq, stating that the Iraqi regime had to disarm itself of weapons of mass destruction. In November 2002, the UN Security Council passed Resolution 1441, calling upon Iraq to disarm itself of any chemical, biological, or nuclear weapons or weapons capabilities it might possess, to comply with all previous UN Security Council resolutions regarding the country since the end of the Gulf War in 1991, and to allow for the immediate return of UN and International Atomic Energy Agency (IAEA) weapons inspectors (they had been expelled in 1998). UN and IAEA weapons inspectors returned to the country, but the United States and the United Kingdom expressed dissatisfaction with their progress and indicated military force might be necessary to remove the Iraqi regime, led by Saddam Hussein. France and Russia, permanent members of the UN Security Council, and Germany, a nonpermanent member, in particular opposed the use of military force. The disagreement caused a diplomatic rift in the West that was slow to repair. After diplomatic efforts at conflict resolution failed by March 2003, the United States, on 20 March, launched air strikes against targets in Baghdad, and war began. British forces moved into southern Iraq, around the city of Basra, and US ground forces began a march to Baghdad. On 9 April, Baghdad fell to US forces, and work began on restoring basic services to the Iraqi population, including providing safe drinking water, electricity, and sanitation. On 1 May, President Bush declared major combat operations had been completed. On 13 July 2003, a 25-member Iraqi interim Governing Council was formed. On 22 July, Saddam Hussein's two sons, Uday and Qusay, were killed by US forces in Mosul. US forces increasingly became the targets of attacks in Iraq, and by 1 August 2003, 52 US soldiers had been killed since combat was declared over on 1 May.
By mid-August 2003, neither Saddam Hussein nor any weapons of mass destruction had been found in Iraq.
http://www.nationsencyclopedia.com/Americas/United-States-HISTORY.html
Carbon neutral, or having a net zero carbon footprint, refers to achieving net zero carbon emissions by balancing a measured amount of carbon released with an equivalent amount sequestered or offset, or by buying enough carbon credits to make up the difference. It is used in the context of carbon dioxide-releasing processes associated with transportation, energy production, and industrial processes such as the production of carbon neutral fuel. The carbon neutrality concept may be extended to include other greenhouse gases (GHG) measured in terms of their carbon dioxide equivalence—the impact a GHG has on the atmosphere expressed in the equivalent amount of CO2. The term climate neutral reflects the broader inclusion of other greenhouse gases in climate change, even if CO2 is the most abundant; it encompasses the other greenhouse gases regulated by the Kyoto Protocol, namely methane (CH4), nitrous oxide (N2O), hydrofluorocarbons (HFC), perfluorocarbons (PFC), and sulphur hexafluoride (SF6). Both terms are used interchangeably throughout this article. The best practice for organizations and individuals seeking carbon neutral status entails reducing and/or avoiding carbon emissions first, so that only unavoidable emissions are offset. Carbon neutral status is commonly achieved in two ways:
- Balancing carbon dioxide released into the atmosphere from burning fossil fuels with renewable energy that creates a similar amount of useful energy, so that the carbon emissions are compensated, or alternatively using only renewable energies that don't produce any carbon dioxide (also called a post-carbon economy).
- Carbon offsetting by paying others to remove or sequester 100% of the carbon dioxide emitted from the atmosphere – for example by planting trees – or by funding 'carbon projects' that should lead to the prevention of future greenhouse gas emissions, or by buying carbon credits to remove (or 'retire') them through carbon trading. While carbon offsetting is often used alongside energy conservation measures to minimize energy use, the practice is criticized by some.
Carbon, or climate, neutrality is usually achieved by combining several steps, typically counting and analyzing emissions, reducing them, offsetting the remainder, and evaluating the results, although these may vary depending on whether the strategy is implemented by individuals, companies, organizations, cities, regions, or countries. In the case of individuals, decision-making is likely to be straightforward, but for more complex set-ups, it usually requires political leadership at the highest level and wide popular agreement that the effort is worth making.
Counting and analyzing
Counting and analyzing the emissions that need to be eliminated, and the options for doing so, is the most crucial step in the cycle, as it enables setting the priorities for action – from the products purchased to energy use and transport – and starting to monitor progress. This can be achieved through a GHG inventory that aims at answering questions such as:
- Which operations, activities, and units should be included?
- Which sources should be included (see the discussion of direct and indirect emissions below)?
- Who is responsible for which emissions?
- Which gases should be included?
For individuals, carbon calculators simplify compiling an inventory. Typically they measure electricity consumption in kWh, the amount and type of fuel used to heat water and warm the house, and how many kilometres an individual drives, flies, and rides in different vehicles. Individuals may also set various limits on the system they are concerned with, e.g.
personal GHG emissions, household emissions, or the company they work for. There are plenty of carbon calculators available online, which vary significantly in their usefulness and the parameters they measure. Some, for example, factor in only cars, aircraft, and household energy use. Others cover household waste or leisure interests as well. In some circumstances, actually going beyond carbon neutral (usually after a certain length of time taken to reach carbon breakeven) is an objective. In starting to work towards climate neutrality, businesses and local administrations can make use of an environmental (or sustainability) management system, or EMS, established by the international standard ISO 14001 (developed by the International Organization for Standardization). Another EMS framework is EMAS, the European Eco-Management and Audit Scheme, used by numerous companies throughout the EU. Many local authorities apply the management system to certain sectors of their administration or certify their whole operations. One of the strongest arguments for reducing GHG emissions is that it will often save money. Energy prices across the world are rising, making it harder to afford to travel, heat and light homes and factories, and keep a modern economy ticking over. So using energy as sparingly as possible is both common sense and good for the climate. Examples of possible actions to reduce GHG emissions are:
- Limiting energy usage and emissions from transportation (walking, using bicycles or public transport, avoiding flying, using low-energy vehicles), as well as from buildings, equipment, animals, and processes.
- Obtaining electricity and other energy from a renewable energy source, either directly by generating it (installing solar panels on the roof, for example) or by selecting an approved green energy provider, and by using low-carbon alternative fuels such as sustainable biofuels.
The use of carbon offsets aims to neutralize a certain volume of GHG emissions by funding projects which should cause an equivalent reduction of GHG emissions somewhere else, such as tree planting. Under the premise “First reduce what you can, then offset the remainder”, offsetting can be done by supporting a responsible carbon project, or by buying carbon offsets or carbon credits. Offsetting is sometimes seen as a charged and contentious issue. For example, James Hansen describes offsets as "modern day indulgences, sold to an increasingly carbon-conscious public to absolve their climate sins."
Evaluation and repeating
This phase includes evaluation of the results and compilation of a list of suggested improvements, with results documented and reported, so that the experience gained of what does (and does not) work is shared with those who can put it to good use. Finally, with all that completed, the cycle starts all over again, only this time incorporating the lessons learnt. Science and technology move on, regulations become tighter, and the standards people demand go up. So the second cycle will go further than the first, and the process will continue, each successive phase building on and improving on what went before. Being carbon neutral is increasingly seen as good corporate or state social responsibility, and a growing list of corporations and states are announcing dates for when they intend to become fully neutral. Events such as the G8 Summit and organizations like the World Bank are also using offset schemes to become carbon neutral.
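The arithmetic behind such calculators and inventories is simple: each activity amount is multiplied by an emission factor and the results are summed as CO2 equivalents. The short Python sketch below illustrates the idea; the emission factors, category names, and household figures are placeholder values invented for illustration and are not taken from any particular calculator or national inventory.

# Rough sketch of the kind of calculation an online carbon calculator performs.
# All emission factors and household figures are illustrative placeholders.
EMISSION_FACTORS_KG_CO2E = {
    "electricity_kwh": 0.40,   # per kWh of grid electricity (assumed average grid mix)
    "heating_gas_kwh": 0.20,   # per kWh of gas burned for heating and hot water (assumed)
    "car_km":          0.17,   # per km driven in an average petrol car (assumed)
    "flight_km":       0.15,   # per passenger-km flown (assumed)
}

def annual_footprint_tonnes(activity_data):
    """Multiply each activity amount by its emission factor; return tonnes CO2e per year."""
    total_kg = sum(EMISSION_FACTORS_KG_CO2E[name] * amount
                   for name, amount in activity_data.items())
    return total_kg / 1000.0  # convert kg to tonnes

# Hypothetical household, used purely for illustration.
household = {
    "electricity_kwh": 3500,
    "heating_gas_kwh": 12000,
    "car_km": 10000,
    "flight_km": 4000,
}
print(f"Estimated footprint: {annual_footprint_tonnes(household):.1f} t CO2e per year")

The resulting total, here a few tonnes of CO2e per year, is the quantity that would then be reduced where possible and offset where not.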
Direct and indirect emissions
To be considered carbon neutral, an organization must reduce its carbon footprint to zero. Determining what to include in the carbon footprint depends upon the organization and the standards they are following. Generally, direct emissions sources must be reduced and offset completely, while indirect emissions from purchased electricity can be reduced with renewable energy purchases. Direct emissions include all pollution from manufacturing, company-owned vehicles and reimbursed travel, livestock and any other source that is directly controlled by the owner. Indirect emissions include all emissions that result from the use or purchase of a product. For instance, the direct emissions of an airline are all the jet fuel that is burned, while the indirect emissions include manufacture and disposal of airplanes, all the electricity used to operate the airline's office, and the daily emissions from employee travel to and from work. In another example, the power company has a direct emission of greenhouse gas, while the office that purchases it considers it an indirect emission.

Simplification of standards and definitions
Carbon neutral fuels are those that neither contribute to nor reduce the amount of carbon in the atmosphere. Before an agency can certify an organization or individual as carbon neutral, it is important to specify whether indirect emissions are included in the carbon footprint calculation. Most voluntary carbon neutral certifiers, such as Standard Carbon in the US, require both direct and indirect sources to be reduced and offset. As an example, for an organization to be certified carbon neutral by Standard Carbon, it must offset all direct and indirect emissions from travel at 1 lb CO2e per passenger mile, and all non-electricity direct emissions 100%. Indirect electrical purchases must be equalized either with offsets or with renewable energy purchases. This standard differs slightly from the widely used World Resources Institute standard and may be easier to calculate and apply. The World Resources Institute, in addition to publishing many tables and aids for calculating carbon footprints, only requires direct emissions to be reduced and balanced for carbon neutral status, although it encourages the inclusion of all emissions sources. With this accounting, there are essentially two levels of carbon neutrality: either all direct and indirect emissions, or direct emissions only.

Much of the confusion in carbon neutral standards can be attributed to the number of voluntary carbon standards which are available. For organizations looking at which carbon offsets to purchase, knowing which standards are robust, credible and permanent is vital in choosing the right carbon offsets and projects to get involved in. Some of the main standards in the voluntary market include the CEB VER Standard, the Voluntary Carbon Standard, the Gold Standard and the California Climate Action Registry. In addition, companies can purchase Certified Emission Reductions (CERs), which result from mitigated carbon emissions from UNFCCC-approved projects, for voluntary purposes. There are, however, various resources available to help companies navigate the often complex carbon offsetting standards maze.
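As a rough illustration of the two accounting levels just described, the sketch below compares the offsets an organization would need under a "direct only" boundary with those needed under a "direct plus indirect" boundary. It is not any certifier's official method: the flat 1 lb CO2e per passenger mile for travel is simply the Standard Carbon example quoted above, and the inventory figures, class and function names are hypothetical.

# Sketch of the two carbon accounting boundaries discussed above.
# All inventory numbers are hypothetical; the travel factor follows the
# "1 lb CO2e per passenger mile" example quoted in the text.
from dataclasses import dataclass

LB_TO_KG = 0.45359237  # pounds to kilograms

@dataclass
class EmissionsInventory:
    direct_kg: float                # non-travel direct emissions: manufacturing, livestock, on-site fuel
    indirect_electricity_kg: float  # emissions embodied in purchased electricity
    travel_passenger_miles: float   # reimbursed and employee travel

def offsets_required_kg(inv, include_indirect):
    """Offsets needed to claim neutrality under the chosen boundary."""
    total = inv.direct_kg
    # Travel is offset at a flat 1 lb CO2e per passenger mile in this example.
    total += inv.travel_passenger_miles * 1.0 * LB_TO_KG
    if include_indirect:
        # Indirect electricity could instead be covered by renewable energy purchases.
        total += inv.indirect_electricity_kg
    return total

if __name__ == "__main__":
    inv = EmissionsInventory(direct_kg=120000,
                             indirect_electricity_kg=80000,
                             travel_passenger_miles=250000)
    print("Direct only:         %d kg CO2e" % offsets_required_kg(inv, False))
    print("Direct and indirect: %d kg CO2e" % offsets_required_kg(inv, True))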
The concept of shared resources also reduces the volume of carbon a particular organization has to offset, with all upstream and downstream emissions the responsibility of other organizations or individuals. If all organizations and individuals were involved, then this would not result in any double accounting.

Regarding terminology in the UK and Ireland, in December 2011 the Advertising Standards Authority (in an ASA decision which was upheld by its Independent Reviewer, Sir Hayden Phillips) controversially ruled that no manufactured product can be marketed as "zero carbon", because carbon was inevitably emitted during its manufacture. This decision was made in relation to a solar panel system whose embodied carbon was repaid during 1.2 years of use, and it appears to mean that no buildings or manufactured products can legitimately be described as zero carbon in its jurisdiction.

Being carbon neutral is increasingly seen as good corporate or state social responsibility, and a growing list of corporations, cities and states are announcing dates for when they intend to become fully neutral.

Companies and organizations
The original Climate Neutral Network was an Oregon-based non-profit organization founded by Sue Hall and incorporated in 1999 to persuade companies that being climate neutral was potentially cost-saving as well as environmentally sustainable. It developed both the Climate Neutral Certification and the Climate Cool brand name with key stakeholders such as the US EPA, The Nature Conservancy, Rocky Mountain Institute, Conservation International, and the World Resources Institute, and succeeded in enrolling the 2002 Winter Olympics to compensate for its associated greenhouse gas emissions. As of March 2011, the non-profit's web site lists the organization as closing its doors, with plans to continue the Climate Cool brand upon its transfer to a new non-profit organization, unknown at this time. Interestingly, the for-profit consulting firm Climate Neutral Business Network lists the same Sue Hall as CEO and lists many of the same companies that participated in the original Climate Neutral Network as consulting clients.

Few companies have actually attained Climate Neutral Certification, which requires submitting to a rigorous review process and establishing that they have achieved an absolute net zero or better impact on the world's climate. Shaklee Corporation announced it became the first Climate Neutral certified company in April 2000. Climate Neutral Business Network states that it certified Dave Matthews Band's concert tour as Climate Neutral. The Christian Science Monitor criticized the use of NativeEnergy, a for-profit company that sells offset credits to businesses and celebrities like Dave Matthews. Salt Spring Coffee has become carbon neutral by lowering emissions through reducing long-range trucking and using biodiesel fuel in delivery trucks, upgrading to energy-efficient equipment and purchasing carbon offsets. The company claims to sell the first carbon neutral coffee in Canada. Salt Spring Coffee was recognized by the David Suzuki Foundation in its 2010 report Doing Business in a New Climate. Some corporate examples of self-proclaimed carbon neutral and climate neutral initiatives include Dell, Google, HSBC, ING Group, PepsiCo, Sky, Tesco, Toronto-Dominion Bank, and Bank of Montreal.

Under the leadership of Secretary-General Ban Ki-moon, the United Nations pledged to work towards climate neutrality in December 2007.
The United Nations Environment Programme (UNEP) announced it was becoming climate neutral in 2008 and established a Climate Neutral Network to promote the idea in February 2008. Events such as the G8 Summit and organizations like the World Bank are also using offset schemes to become carbon neutral. Artists like The Rolling Stones and Pink Floyd have made albums or tours carbon neutral, while Live Earth says that its seven concerts held on 7 July 2007 were the largest carbon neutral public event in history. Buildings are the largest single contributor to the production of greenhouse gases. The American Institute of Architects 2030 Commitment is a voluntary program for AIA member firms and other entities in the built environment that asks these organizations to pledge to design all their buildings to be carbon neutral by 2030. In 2010, architectural firm HOK worked with energy and daylighting consultant The Weidt Group to design a 170,735-square-foot (15,861.8 m2) net zero carbon emissions Class A office building prototype in St. Louis, Missouri, U.S. Countries and communities Several countries and communities have pledged carbon neutrality, including: Costa Rica The Central American nation of Costa Rica aims to be fully carbon neutral by 2021. In 2004, 46.7% of Costa Rica's primary energy came from renewable sources, while 94% of its electricity was generated from hydroelectric power, wind farms and geothermal energy in 2006. A 3.5% tax on gasoline in the country is used for payments to compensate landowners for growing trees and protecting forests and its government is making further plans for reducing emissions from transport, farming and industry. Samsø island in Denmark is the largest carbon-neutral settlement on the planet, with a population of 4200, based on wind-generated electricity and biomass-based district heating. They currently generate extra wind power and export the electricity to compensate for petro-fueled vehicles. There are future hopes of using electric or biofuel vehicles. The ex-president of the Maldives has pledged to make his country carbon-neutral within a decade by moving to wind and solar energy. The Maldives, a country consisting of very low-lying islands, would be one of the first countries to be submerged due to sea level rise. The Maldives presided over the foundation of the Climate Vulnerable Forum. New Zealand Another nation to pledge carbon neutrality is New Zealand. Its Carbon Neutral Public Sector Initiative aimed to offset the greenhouse gas emissions of an initial group of six governmental agencies by 2012. Unavoidable emissions would be offset, primarily through indigenous forest regeneration projects on conservation land. All 34 public service agencies also needed to have emission reduction plans in place. The Carbon Neutral Public Service programme was discontinued in March 2009. On April 19, 2007, Prime Minister Jens Stoltenberg announced to the Labour Party annual congress that Norway's greenhouse gas emissions would be cut by 10 percent more than its Kyoto commitment by 2012, and that the government had agreed to achieve emission cuts of 30% by 2020. He also proposed that Norway should become carbon neutral by 2050, and called upon other rich countries to do likewise. This carbon neutrality would be achieved partly by carbon offsetting, a proposal criticised by Greenpeace, who also called on Norway to take responsibility for the 500m tonnes of emissions caused by its exports of oil and gas. 
World Wildlife Fund Norway also believes that the purchase of carbon offsets is unacceptable, saying 'it is a political stillbirth to believe that China will quietly accept that Norway will buy climate quotas abroad'. The Norwegian environmental organization Bellona Foundation believes that the prime minister was forced to act due to pressure from anti-European Union members of the coalition government, and called the announcement 'visions without content'. In January 2008 the Norwegian government went a step further and declared a goal of being carbon neutral by 2030. But the government has not been specific about any plans to reduce emissions at home; the plan is based on buying carbon offsets from other countries.

Iceland is also moving towards climate neutrality. Over 99% of electricity production and almost 80% of total energy production come from hydropower and geothermal energy. No other nation uses such a high proportion of renewable energy resources. In February 2008, Costa Rica, Iceland, New Zealand and Norway were the first four countries to join the Climate Neutral Network, an initiative led by the United Nations Environment Programme (UNEP) to catalyze global action towards low-carbon economies and societies.

Vatican City
In July 2007, Vatican City announced a plan to become the first carbon-neutral state in the world, following the Pope's commitment to combating global warming. The goal would be reached through the donation of the Vatican Climate Forest in Hungary. The forest is to be sized to offset the year's carbon dioxide emissions. However, no trees have actually been planted as of 2008. The company KlimaFa is no longer in existence and hasn't fulfilled its promises. In November 2008, the city state also installed and put into operation 2,400 solar panels on the roof of the Paul VI Centre audience hall.

British Columbia
In June 2011, the Canadian province of British Columbia announced it had officially become the first provincial/state jurisdiction in North America to achieve carbon neutrality in public sector operations: every school, hospital, university, Crown corporation, and government office measured, reported and purchased carbon offsets on all its 2010 greenhouse gas emissions as required under legislation. Local governments across B.C. are also beginning to declare carbon neutrality.

Carbon neutral initiatives
Many initiatives seek to assist individuals, businesses and states in reducing their carbon footprint or achieving climate neutrality. These include website neutralization projects like CO2Stats and the similar European initiative CO2 neutral website, as well as the Climate Neutral Network, Caring for Climate, and the Together campaign. Although there is currently no international certification scheme for carbon or climate neutrality, some countries have established national certification schemes. Examples include the Norwegian Eco-Lighthouse Program and the Australian government's National Carbon Offset Standard (NCOS).

Climate Neutral Certification
Climate Neutral Certification was established and trademarked originally through the Climate Neutral Network, an Oregon-based non-profit organization, not to be confused with the UNEP's Climate Neutral Network. Applications for certification are no longer being accepted according to the non-profit organization's web site, where the organization also states it is closing its doors.
The first three companies certified as Climate Neutral were Shaklee Corporation, Interface, and Saunders Hotels. Stakeholders in developing and supporting the Climate Neutral Certification are listed as The Nature Conservancy, Conservation International, Rocky Mountain Institute, and the U.S. EPA. What is unclear is whether the Climate Neutral certification will be continued by the for-profit consulting firm, Climate Neutral Business Network, or another non-profit organization in the future. Climate Neutral Network also promoted, trademarked, and licensed the Climate Cool brand for products certified by the organization's Environmental Review Panel and determined to achieve net zero climate impact by reducing and offsetting associated emissions. The organization's web site promises to transfer the Climate Cool branding to another non-profit organization upon the current organization's closing.

In Australia, the only government-endorsed carbon neutral certification is the government's National Carbon Offset Standard (NCOS). More information can be found at http://www.climatechange.gov.au/ncos

See also
- Carbon diet
- Carbon Neutral Protocol
- Carbon cycle
- Carbon offset
- Carbon negative
- Carbon footprint
- Cellulosic ethanol
- Climate change mitigation
- Live Earth
- Low carbon diet
- Low-carbon economy
- Zero-carbon building (carbon neutral buildings)
- 2000-watt society
http://en.wikipedia.org/wiki/Carbon_neutral
Prehistory | Ancient Bronze Age | The Shang Dynasty (ca. 1570 BCE- 1045 BCE) | The Zhou Dynasty (1045- 256 BCE) | Western Zhou (1045 BCE- 771 BCE) | Eastern Zhou (770 BCE- 256 BCE) | Spring and Autumn Period (770 BCE- 403 BCE) | The Hundred Schools of Thought | Warring States Period (403 BCE- 221 BCE) | The First Imperial Period (221 BCE- 220 CE) | Era of Disunity (220 CE- 589 CE) | Restoration of Empire (589 CE- 1279 CE) During the long Paleolithic period, bands of predatory hunter-gatherers lived in what is now China. Homo erectus, an extinct species closely related to modern humans, or Homo sapiens, appeared in China more than one million years ago. Anthropologists disagree about whether Homo erectus is the direct ancestor of Homo sapiens or merely related through a mutual ancestor. In either case, modern humans may have first appeared in China as far back as 200,000 years ago. Beginning in about 10,000 BCE, humans in China began developing agriculture, possibly influenced by developments in Southeast Asia. By 5000 BCE there were Neolithic village settlements in several regions of China. On the fine, wind-blown loess soils of the north and northwest, the primary crop was millet, while villages along the lower Yangtze River in Central China were centered on rice production in paddy fields, supplemented by fish and aquatic plants. Humans in both regions had domesticated pigs, dogs, and cattle, and by 3000 BCE sheep had become important in the north and water buffalo in the south. Over the course of the 5th to 3rd millennia BCE, many distinct, regional Neolithic cultures emerged. In the northwest, for instance, people made red pottery vessels decorated in black pigment with designs such as spirals, sawtooth lines, and zoomorphic (animal-like) stick figures. During the same period, Neolithic cultures in the east produced pottery that was rarely painted but had distinctive shapes, such as three-legged, deep-bodied tripods. Archaeologists have uncovered numerous jade ornaments, blades, and ritual objects in several eastern sites, but jade is rare in western ones. In many areas, stamped-earth fortified walls came to be built around settlements, suggesting not only increased contact between settlements but also increased conflict. Later Chinese civilization probably evolved from the interaction of many distinct Neolithic cultures, which over time came to share more in the way of material culture and social and cultural practices. For example, many burial practices, including the use of coffins and ramped chambers, spread way beyond their place of origin. Ancient Chinese historians knew nothing of their Neolithic forebears, whose existence was discovered by 20th-century archaeologists. Traditionally, the Chinese traced their history through many dynasties to a series of legendary rulers, like the Yellow Lord (Huang Di), who invented the key features of civilization- agriculture, the family, silk, boats, carts, bows and arrows, and the calendar. The last of these kings was Yu, and when he died the people chose his son to lead them, thus establishing the principle of hereditary, dynastic rule. Yu’s descendants created the Xia dynasty (ca. 2205 BCE- 1570 BCE), which was said to have lasted for 14 generations before declining and being superseded by the Shang dynasty. The Xia dynasty may correspond to the first phases of the transition to the Bronze Age. Between 2000 BCE and 1600 BCE a more complex Bronze Age civilization emerged out of the diverse Neolithic cultures in northern China. 
This civilization was marked by writing, metalwork, domestication of horses, a class system, and a stable political and religious hierarchy. Although Bronze Age civilizations developed earlier in Southwest Asia, China seems to have developed both its writing system and its bronze technology with relatively little stimulus from outside. However, other elements of early Chinese civilization, such as the spoke-wheeled horse chariot, apparently reached China indirectly from places to the west. No written documents survive to link the earliest Bronze Age sites unambiguously to Xia. With the Shang dynasty, however, the historical and archaeological records begin to coincide. Chinese accounts of the Shang rulers match inscriptions on animal bones and tortoise shells found in the 20th century at the city of Anyang in the valley of the Huang He (Yellow River). Archaeological remains provide many details about Shang civilization. A king was the religious and political head of the society. He ruled through dynastic alliances; divination (his subjects believed that he alone could predict the future by interpreting cracks in animal bones); and royal journeys, hunts, and military campaigns that took him to outlying areas. The Shang were often at war with neighboring peoples and moved their capital several times. Shang kings could mobilize large armies for warfare and huge numbers of workers to construct defensive walls and elaborate tombs. The Shang directly controlled only the central part of China proper, extending over much of modern Henan, Hubei, Shandong, Anhui, Shanxi, and Hebei provinces. However, Shang influence extended beyond the state’s borders, and Shang art motifs are often found in artifacts from more-distant regions. The Shang king’s rule was based equally on religious and military power. He played a priestly role in the worship of his ancestors and the high god Di. The king made animal sacrifices and communicated with his ancestors by interpreting the cracks on heated cattle bones or tortoise shells that had been prepared by professional diviners. Royal ancestors were viewed as able to intervene with Di, send curses, produce dreams, and assist the king in battle. Kings were buried with ritual vessels, weapons, jades, and numerous servants and sacrificial victims, suggesting that the Shang believed in some form of afterlife. The Shang used bronze more for purposes of ritual than war. Although some weapons were made of bronze, the great bulk of the surviving Shang bronze objects are cups, goblets, steamers, and cauldrons, presumably made for use in sacrificial rituals. They were beautifully formed in a great variety of shapes and sizes and decorated with images of wild animals. As many as 200 of these bronze vessels might be buried in a single royal grave. The bronze industry required centralized coordination of a large labor force to mine, refine, and transport copper, tin, and lead ores, as well as to produce and transport charcoal. It also required technically skilled artisans to make clay models, construct ceramic molds, and assemble and finish vessels, the largest which weighed as much as 800 kg (1,800 lb). The writing system used by the Shang is the direct ancestor of the modern Chinese writing system, with symbols or characters for each word. This writing system would evolve over time, but it never became a purely phonetic system like the Roman alphabet, which uses symbols (letters) to represent specific sounds. 
Thus mastering the written language required learning to recognize and write several thousand characters, making literacy a highly specialized skill requiring many years to master fully. In the 11th century BCE a frontier state called Zhou rose against and defeated the Shang dynasty. The Zhou dynasty is traditionally divided into two periods: the Western Zhou (ca. 1045 BCE- 771 BCE), when the capital was near modern Xi’an in the west, and the Eastern Zhou (770 BCE- 256 BCE), when the capital was moved further east to modern Luoyang. The Eastern Zhou is divided into two sub-periods: the Spring and Autumn Period (770 BCE- 403 BCE) and the Warring States Period (403 BCE- 221 BCE), which are collectively referred to as 'China's Golden Age'. Like the Shang kings, the Zhou kings sacrificed to their ancestors, but they also sacrificed to Heaven (Tian). The Shu jing (Book of History), one of the earliest transmitted texts, describes the Zhou’s version of their history. It assumes a close relationship between Heaven and the king, called the Son of Heaven, explaining that Heaven gives the king a mandate to rule only as long as he does so in the interest of the people. Because the last Shang king had been decadent and cruel, Heaven withdrew the Mandate of Heaven (Tian Ming) from him and entrusted it to the virtuous Zhou kings. The Shu jing praises the first three Zhou rulers: King Wen (the Cultured King) expanded the Zhou domain; his son, King Wu (the Martial King), conquered the Shang; and King Wu's brother, Zhou Gong (often referred to as Duke of Zhou), consolidated the conquest and served as loyal regent for Wu’s heir. The Shi jing (Book of Poetry) offers another glimpse of life in early Zhou China. Its 305 poems include odes celebrating the exploits of the early Zhou rulers, hymns for sacrificial ceremonies, and folk songs. The folk songs are about ordinary people in everyday situations, such as working in fields, spinning and weaving, marching on campaigns, and longing for lovers. In these books, which became classics of the Confucian tradition, the Western Zhou dynasty is described as an age when people honored family relationships and stressed social status distinctions. The early Zhou rulers did not attempt to exercise direct control over the entire region they conquered. Instead, they secured their position by selecting loyal supporters and relatives to rule walled towns and the surrounding territories. Each of these local rulers, or vassals, was generally able to pass his position on to a son, so that in time the domain became a hereditary vassal state. Within each state, there were noble houses holding hereditary titles. The rulers of the states and the members of the nobility were linked both to one another and to their ancestors by bonds of obligation based on kinship. Below the nobility were the officers (shi) and the peasants, both of which were also hereditary statuses. The relationship between each level and its superiors was conceived as a moral one. Peasants served their superiors, and their superiors looked after the peasants’ welfare. Social interaction at the upper levels was governed by li, a set of complex rules of social etiquette and personal conduct. Those who practiced li were considered civilized; those who did not, such as those outside the Zhou realm, were considered barbarians. The Zhou kings maintained control over their vassals for more than two centuries, but as the generations passed, the ties of kinship and vassalage weakened.
In 770 BCE several of the states rebelled and joined with non-Chinese forces to drive the Zhou from their capital. The Zhou established a new capital to the east at Chengzhou (near present-day Luoyang), where they were safer from barbarian attack, but the Eastern Zhou kings no longer exercised much political or military authority over the vassal states. In the Eastern Zhou period, real power lay with the larger states, although the Zhou kings continued as nominal overlords, partly because they were recognized as custodians of the Mandate of Heaven, but also because no single feudal state was strong enough to dominate the others. The Eastern Zhou period witnessed various social and economic advances. The use of iron-tipped, ox-drawn plows and improved irrigation techniques produced higher agricultural yields. This in turn supported a steady population increase. Other economic advances included the circulation of coins for money, the beginning of private ownership of land, and the growth of cities. Military technology also advanced. The Zhou developed the crossbow and methods of siege warfare, and adopted cavalry warfare from nomads to the north. Social changes were just as important, particularly the breakdown of old class barriers and the development of conscripted infantry armies. To maintain and increase power, state rulers sought the advice of teachers and strategists. This fueled intellectual activity and debate, and intense reappraisal of traditions. Though this time in Chinese history was marked by disunity and civil strife, an unprecedented era of cultural prosperity- the "golden age" of China flourished. The atmosphere of reform and new ideas was attributed to the struggle for survival among warring regional lords who competed in building strong and loyal armies and in increasing economic production to ensure a broader base for tax collection. To effect these economic, military, and cultural developments, the regional lords needed ever-increasing numbers of skilled, literate officials and teachers, the recruitment of whom was based on merit. Also during this time, commerce was stimulated through the introduction of coinage and technological improvements. Iron came into general use, making possible not only the forging of weapons of war but also the manufacture of farm implements. Public works on a grand scale, such as flood control, irrigation projects, and canal digging, were executed. Enormous walls were built around cities and along the broad stretches of the northern frontier. So many different philosophies developed during the late Spring and Autumn and early Warring States periods that the era is often known as the time when the “hundred schools of thought contended.” From the Hundred Schools of Thought came many of the great classical writings on which Chinese practices were to be based for the next two and one half millennia. Many of the thinkers were itinerant intellectuals who, besides teaching their disciples, were employed as advisers to one or another of the various state rulers on the methods of government, war, and diplomacy. There were thinkers fascinated by logical puzzles; utopians and hermits who argued for withdrawal from public life; agriculturists who argued that no one should eat who does not plough; military theorists who analyzed ways to deceive the enemy; and cosmologists who developed theories of the forces of nature, including the opposite and complementary forces of yin and yang. 
The three most influential schools of thought that evolved during this period were Confucianism, Daoism, and Legalism. The body of thought that had the most enduring effect on subsequent Chinese life was that of the School of Literati (ru), often called the Confucian school in the West. The written legacy of the School of Literati is embodied in the Confucian Classics, which were to become the basis for the order of traditional society. Kongfuzi, or Confucius as he is known in the West, lived from 551 BCE to 479 BCE. Also called Kong Zi, or Master Kong, Confucius was a teacher from the state of Lu (in present-day Shandong Province) who revered tradition and looked to the early days of Zhou rule for an ideal social and political order. He believed that the only way such a system could be made to work properly was for each person to act according to prescribed relationships. "Let the ruler be a ruler and the subject a subject," he said, but he added that to rule properly a king must be virtuous. To Confucius, the functions of government and social stratification were facts of life to be sustained by ethical values. Confucius exalted virtues such as filial piety (reverent respect and obedience toward parents and grandparents), humanity (an unselfish concern for the welfare of others), integrity, and a sense of duty. His ideal was the junzi (ruler's son), which he redefined to mean gentleman: a superior man was a man of moral cultivation rather than a man of noble birth. He repeatedly urged his students to aspire to be gentlemen who pursue integrity and duty, rather than petty men who pursue personal gain. Confucius’s teachings are known through the Lunyu (Analects), a collection of his conversations compiled by his followers after his death. He encouraged his disciples to master historical records, music, poetry, and ritual. He tried in vain to gain high office, traveling from state to state with his disciples in search of a ruler who would employ him. Confucius talked repeatedly of his vision of a more perfect society in which rulers and subjects, nobles and commoners, parents and children, and men and women would wholeheartedly accept the parts assigned to them, devoting themselves to their responsibilities to others. There were to be accretions to the corpus of Confucian thought, both immediately and over the millennia, and from within and outside the Confucian school. Interpretations made to suit or influence contemporary society made Confucianism dynamic while preserving a fundamental system of model behavior based on ancient texts. The eventual success of Confucian ideas owes much to Confucius's followers in the two centuries after his death, particularly to Mencius and Xun Zi. Mencius (372 BCE- 289 BCE), or Meng Zi, was a Confucian disciple who made major contributions to the humanism of Confucian thought. Mencius, like Confucius, traveled to various states, offering advice to their rulers. He expounded the idea that a ruler who governed benevolently would earn the respect of the people and would unify the realm, that a ruler could not govern without the people's tacit consent, and that the penalty for unpopular, despotic rule was the loss of the "mandate of heaven." Mencius proposed concrete political and financial measures for easing tax burdens and otherwise improving the people's lot. With his disciples and fellow philosophers, he discussed other issues in moral philosophy.
Mencius declared that man was by nature good, arguing strongly that everyone is born with the capacity to recognize what is right and act upon it. The effect of the combined work of Confucius, the codifier and interpreter of a system of relationships based on ethical behavior, and Mencius, the synthesizer and developer of applied Confucian thought, was to provide traditional Chinese society with a comprehensive framework on which to order virtually every aspect of life. Diametrically opposed to Mencius, for example, was the interpretation of Xun Zi (ca. 300 BCE-237 BCE), another Confucian follower. Xun Zi preached that man is innately selfish and evil and that goodness is attainable only through education and conduct befitting one's status, so that people learn to put moral principle above their own interests. He also argued that the best government is one based on authoritarian control, not ethical or moral persuasion. Xun Zi stressed the importance of ritual to social and political life, but took a secular view of it. For instance, Xun Zi argued that the ruler should pray for rain during a drought because to do so is the traditional ritual, not because it moves Heaven to send rain. Xun Zi's unsentimental and authoritarian inclinations were developed into the doctrine embodied in the School of Law (fa), or Legalism. Legalism differed from both Confucianism and Daoism in its narrow focus on statecraft. The doctrine was formulated by Han Fei Zi (ca. 280 BCE- 233 BCE) and Li Si (d. 208 BCE), who reasoned that the extreme disorders of their day called for new and drastic measures. They argued that social order depended on effective systems of rewards and punishments, rejecting the Confucian theory that strong government depended on the moral quality of the ruler and his officials and their success in winning over the people. To ensure his power, the ruler had to keep his officials in line with strict rules and regulations and his people obedient with predictably enforced laws. The Legalists exalted the state and sought its prosperity and martial prowess above the welfare of the common people. Legalism became the philosophic basis for the imperial form of government. When the most practical and useful aspects of Confucianism and Legalism were synthesized in the Han period (206 BCE- CE 220), a system of governance came into existence that was to survive largely intact until the late 19th century. The doctrines of Taoism (Daoism), the second great school of philosophy that emerged during the Warring States Period, also developed during the Zhou period and are set forth in the Daodejing (Classic of the Way and Its Power), which is attributed traditionally to the legendary sage Lao Zi (ca. 579 BCE- 490 BCE), or Old Master, and in the compiled writings of Zhuangzi (369 BCE- 286 BCE). Both works share a disapproval of the unnatural and artificial. Whereas plants and animals act spontaneously in the ways appropriate to them, humans have separated themselves from the Way (Dao) by plotting and planning, analyzing and organizing. Both texts reject social conventions and call for an ecstatic surrender to the spontaneity of cosmic processes. At the political level, Daoism advocated a return to primitive agricultural communities, in which life could follow the most natural course. Government policy should be one of extreme noninterference, permitting the people to respond to nature spontaneously. The Zhuangzi is much longer than the Daodejing.
A literary masterpiece, it is full of tall tales, parables, and fictional encounters between historical figures. Zhuangzi poked fun at people mired in everyday affairs and urged people to see death as part of the natural cosmic processes. The focus of Taoism is the individual in nature rather than the individual in society. It holds that the goal of life for each individual is to find one's own personal adjustment to the rhythm of the natural (and supernatural) world, to follow the Dao of the universe. In many ways the opposite of rigid Confucian moralism, Taoism served many of its adherents as a complement to their ordered daily lives. A scholar on duty as an official would usually follow Confucian teachings but at leisure or in retirement might seek harmony with nature as a Taoist recluse. Another strain of thought dating to the Warring States Period is the school of yin-yang and the five elements. The theories of this school attempted to explain the universe in terms of basic forces in nature, the complementary agents of yin (dark, cold, female, negative) and yang (light, hot, male, positive) and the five elements (water, fire, wood, metal, and earth). In later periods these theories came to have importance both in philosophy and in popular belief. Still another school of thought was based on the doctrine of Mo Zi (ca. 470 BCE- 391 BCE), or Mo Di. Mo Zi believed that "all men are equal before God" and that mankind should follow heaven by practicing universal love. Advocating that all action must be utilitarian, Mo Zi condemned the Confucian emphasis on ritual and music. He regarded warfare as wasteful and advocated pacifism. Mo Zi also believed that unity of thought and action was necessary to achieve social goals. He maintained that the people should obey their leaders and that the leaders should follow the will of heaven. Although Moism failed to establish itself as a major school of thought, its views are said to be "strongly echoed" in Legalist thought. In general, the teachings of Mo Zi left an indelible impression on the Chinese mind.
THE IMPERIAL ERA
Much of what came to constitute China Proper was unified for the first time in 221 BCE. In that year the western frontier state of Qin, the most aggressive of the Warring States, subjugated the last of its rival states. (Qin in Wade-Giles Romanization is Ch'in, from which the English China probably derived.) Once the king of Qin consolidated his power, he took the title Shi Huangdi (First Emperor), a formulation previously reserved for deities and the mythological sage-emperors, and imposed Qin's centralized, nonhereditary bureaucratic system on his new empire. In subjugating the six other major states of Eastern Zhou, the Qin kings had relied heavily on Legalist scholar-advisers. Centralization, achieved by ruthless methods, was focused on standardizing legal codes and bureaucratic procedures, the forms of writing and coinage, and the pattern of thought and scholarship. To silence criticism of imperial rule, the kings banished or put to death many dissenting Confucian scholars and confiscated and burned their books. Qin aggrandizement was aided by frequent military expeditions pushing forward the frontiers in the north and south. To fend off barbarian intrusion, the fortification walls built by the various warring states were connected to make a 5,000-kilometer-long great wall.
(What is commonly referred to as the Great Wall is actually four great walls rebuilt or extended during the Western Han, Sui, Jin, and Ming periods, rather than a single, continuous wall. At its extremities, the Great Wall reaches from northeastern Heilongjiang Province to northwestern Gansu.) A number of public works projects were also undertaken to consolidate and strengthen imperial rule. These activities required enormous levies of manpower and resources, not to mention repressive measures. Revolts broke out as soon as the first Qin emperor died in 210 BCE. His dynasty was extinguished less than twenty years after its triumph. The imperial system initiated during the Qin dynasty, however, set a pattern that was developed over the next two millennia. After a short civil war, a new dynasty, called Han (206 BCE- CE 220), emerged with its capital at Chang'an. The new empire retained much of the Qin administrative structure but retreated a bit from centralized rule by establishing vassal principalities in some areas for the sake of political convenience. The Han rulers modified some of the harsher aspects of the previous dynasty; Confucian ideals of government, out of favor during the Qin period, were adopted as the creed of the Han empire, and Confucian scholars gained prominent status as the core of the civil service. A civil service examination system also was initiated. Intellectual, literary, and artistic endeavors revived and flourished. The Han period produced China's most famous historian, Sima Qian (ca. 145 BCE- 87 BCE), whose Shiji (Historical Records) provides a detailed chronicle from the time of a legendary Xia emperor to that of the Han emperor Wu Di (141 BCE- 87 BCE). Technological advances also marked this period. Two of the great Chinese inventions, paper and porcelain, date from Han times. The Han dynasty, after which the members of the ethnic majority in China, the "people of Han," are named, was notable also for its military prowess. The empire expanded westward as far as the rim of the Tarim Basin (in modern Xinjiang-Uyghur Autonomous Region), making possible relatively secure caravan traffic across Central Asia to Antioch, Baghdad, and Alexandria. The paths of caravan traffic are often called the "silk route" because the route was used to export Chinese silk to the Roman Empire. Chinese armies also invaded and annexed parts of northern Vietnam and northern Korea toward the end of the 2nd century BCE. Han control of peripheral regions was generally insecure, however. To ensure peace with non-Chinese local powers, the Han court developed a mutually beneficial "tributary system." Non-Chinese states were allowed to remain autonomous in exchange for symbolic acceptance of Han overlordship. Tributary ties were confirmed and strengthened through intermarriages at the ruling level and periodic exchanges of gifts and goods. After 200 years, Han rule was interrupted briefly (in 9 CE- 24 CE, by Wang Mang, a reformer), and then restored for another 200 years. The Han rulers, however, were unable to adjust to what centralization had wrought: a growing population, increasing wealth and resultant financial difficulties and rivalries, and ever-more complex political institutions. Riddled with the corruption characteristic of the dynastic cycle, by 220 CE the Han empire collapsed. The collapse of the Han dynasty was followed by nearly four centuries of rule by warlords.
The age of civil wars and disunity began with the era of the Three Kingdoms (Wei, Shu, and Wu, which had overlapping reigns during the period 220 CE- 280 CE). In later times, fiction and drama greatly romanticized the reputed chivalry of this period. Unity was restored briefly in the early years of the Jin dynasty (265 CE- 420 CE), but the Jin could not long contain the invasions of the nomadic peoples. In 317 CE, the Jin court was forced to flee from Luoyang and reestablished itself at Nanjing to the south. The transfer of the capital coincided with China's political fragmentation into a succession of dynasties that was to last from 304 CE to 589 CE. During this period the process of sinicization accelerated among the non-Chinese arrivals in the north and among the aboriginal tribesmen in the south. This process was also accompanied by the increasing popularity of Buddhism (introduced into China in the 1st century CE) in both north and south China. Despite the political disunity of the times, there were notable technological advances. The invention of gunpowder (at that time for use only in fireworks) and the wheelbarrow is believed to date from the 6th or 7th century. Advances in medicine, astronomy, and cartography are also noted by historians. China was reunified in 589 CE by the short-lived Sui dynasty (581 CE- 617 CE), which has often been compared to the earlier Qin dynasty in tenure and the ruthlessness of its accomplishments. The Sui dynasty's early demise was attributed to the government's tyrannical demands on the people, who bore the crushing burden of taxes and compulsory labor. These resources were overstrained in the completion of the Grand Canal--a monumental engineering feat--and in the undertaking of other construction projects, including the reconstruction of the Great Wall. Weakened by costly and disastrous military campaigns against Korea in the early seventh century, the dynasty disintegrated through a combination of popular revolts, disloyalty, and assassination. The Tang dynasty (618 CE- 907 CE), with its capital at Chang'an, is regarded by historians as a high point in Chinese civilization--equal, or even superior, to the Han period. Its territory, acquired through the military exploits of its early rulers, was greater than that of the Han. Stimulated by contact with India and the Middle East, the empire saw a flowering of creativity in many fields. Buddhism, originating in India around the time of Confucius, flourished during the Tang period, becoming thoroughly sinicized and a permanent part of Chinese traditional culture. Block printing was invented, making the written word available to vastly greater audiences. The Tang period was the golden age of literature and art. A government system supported by a large class of Confucian literati selected through civil service examinations was perfected under Tang rule. This competitive procedure was designed to draw the best talents into government. But perhaps an even greater consideration for the Tang rulers, aware that imperial dependence on powerful aristocratic families and warlords would have destabilizing consequences, was to create a body of career officials having no autonomous territorial or functional power base. As it turned out, these scholar-officials acquired status in their local communities, family ties, and shared values that connected them to the imperial court.
From Tang times until the closing days of the Qing empire in 1911, scholar-officials functioned often as intermediaries between the grass-roots level and the government. By the middle of the 8th century CE, Tang power had ebbed. Domestic economic instability and military defeat in 751 by Arabs at Talas, in Central Asia, marked the beginning of five centuries of steady military decline for the Chinese empire. Misrule, court intrigues, economic exploitation, and popular rebellions weakened the empire, making it possible for northern invaders to terminate the dynasty in 907. The next half-century saw the fragmentation of China into five northern dynasties and ten southern kingdoms. But in 960 a new power, Song (960- 1279), reunified most of China Proper. The Song period divides into two phases: Northern Song (960- 1127) and Southern Song (1127- 1279). The division was caused by the forced abandonment of north China in 1127 by the Song court, which could not push back the nomadic invaders. The founders of the Song dynasty built an effective centralized bureaucracy staffed with civilian scholar-officials. Regional military governors and their supporters were replaced by centrally appointed officials. This system of civilian rule led to a greater concentration of power in the emperor and his palace bureaucracy than had been achieved in the previous dynasties. The Song dynasty is notable for the development of cities not only for administrative purposes but also as centers of trade, industry, and maritime commerce. The landed scholar-officials, sometimes collectively referred to as the gentry, lived in the provincial centers alongside the shopkeepers, artisans, and merchants. A new group of wealthy commoners--the mercantile class--arose as printing and education spread, private trade grew, and a market economy began to link the coastal provinces and the interior. Landholding and government employment were no longer the only means of gaining wealth and prestige. Culturally, the Song refined many of the developments of the previous centuries. Included in these refinements were not only the Tang ideal of the universal man, who combined the qualities of scholar, poet, painter, and statesman, but also historical writings, painting, calligraphy, and hard-glazed porcelain. Song intellectuals sought answers to all philosophical and political questions in the Confucian Classics. This renewed interest in the Confucian ideals and society of ancient times coincided with the decline of Buddhism, which the Chinese regarded as foreign and offering few practical guidelines for the solution of political and other mundane problems. The Song Neo-Confucian philosophers, finding a certain purity in the originality of the ancient classical texts, wrote commentaries on them. The most influential of these philosophers was Zhu Xi (1130- 1200), whose synthesis of Confucian thought and Buddhist, Taoist, and other ideas became the official imperial ideology from late Song times to the late 19th century. As incorporated into the examination system, Zhu Xi's philosophy evolved into a rigid official creed, which stressed the one-sided obligations of obedience and compliance of subject to ruler, child to father, wife to husband, and younger brother to elder brother. The effect was to inhibit the societal development of pre-modern China, resulting both in many generations of political, social, and spiritual stability and in a slowness of cultural and institutional change up to the nineteenth century. 
Neo-Confucian doctrines also came to play the dominant role in the intellectual life of Korea, Vietnam, and Japan. By the mid-thirteenth century, the Mongols had subjugated north China, Korea, and the Muslim kingdoms of Central Asia and had twice penetrated Europe. With the resources of his vast empire, Kublai Khan (1215- 1294), a grandson of Genghis Khan (ca. 1167- 1227) and the supreme leader of all Mongol tribes, began his drive against the Southern Song. Even before the extinction of the Song dynasty, Kublai Khan had established the first alien dynasty to rule all China- the Yüan (1279-1368).
http://www.ibiblio.org/chinesehistory/contents/01his/c01s03.html
An earth-orbiting station, equipped to study the sun, the stars, and the earth, is a concept found in the earliest speculation about space travel. During the formative years of the United States space program, space stations were among many projects considered. But after the national decision in 1961 to send men to the moon, space stations were relegated to the background. Project Apollo was a firm commitment for the 1960s, but beyond that the prospects for space exploration were not clear. As the first half of the decade ended, new social and political forces raised serious questions about the nation's priorities and brought the space program under pressure. At the same time, those responsible for America's space capability saw the need to look beyond Apollo for projects that would preserve the country's leadership in space. The time was not propitious for such a search, for the national mood that had sustained the space program was changing. In the summer of 1965, the office that became the Skylab program office was established in NASA Headquarters, and the project that evolved into Skylab was formally chartered as a conceptual design study. During the years 1965-1969 the form of the spacecraft and the content of the program were worked out. As long as the Apollo goal remained to be achieved, Skylab was a stepchild of manned spaceflight, achieving status only with the first lunar landing. When it became clear that America's space program could not continue at the level of urgency and priority that Apollo had enjoyed, Skylab became the means of sustaining manned spaceflight while the next generation of hardware and missions developed. The first five chapters of this book trace the origins of the Skylab concept from its emergence in the period 1962-1965 through its evolution into final form in 1969. Directions for Manned Spaceflight. Space Stations after 1962. Sizing Up a Space Station. Air Force Seeks Role in Space. President Calls for NASA's Plans. Mueller Opens Apollo Applications Program Office. The summer of 1965 was an eventful one for the thousands of people involved in the American space program. In its seventh year, the National Aeronautics and Space Administration (NASA) was hard at work on the Gemini program, its second series of earth-orbiting manned missions. Mercury had concluded on 16 May 1963. For 22 months after that, while the two-man Gemini spacecraft was brought to flight readiness, no American went into space. Two unmanned test flights preceded the first manned Gemini mission, launched on 23 March 1965.1 Mercury had been used to learn the fundamentals of manned spaceflight. Even before the first Mercury astronaut orbited the earth, President John F. Kennedy had set NASA its major task: to send a man to the moon and bring him back safely by 1970. Much had to be learned before that could be done-not to mention the rockets, ground support facilities, and launch complexes that had to be built and tested-and Gemini was part of the training program. Rendezvous-bringing two spacecraft together in orbit-was a part of that program; another was a determination of man's ability to survive and function in the weightlessness of spaceflight. That summer the American public was getting acquainted, by way of network television, with the site where most of the Gemini action was taking place-the Manned Spacecraft Center (MSC). 
Located on the flat Texas coastal plain 30 kilometers southeast of downtown Houston- close enough to be claimed by that city and given to it by the media-MSC was NASA's newest field center, and Gemini was the first program managed there. Mercury had been planned and conducted by the Space Task Group, located at Langley Research Center, Hampton, Virginia. Creation of the new Manned Spacecraft Center, to be staffed initially by members of the Space Task Group, was announced in 1961; by the middle of 1962 its personnel had been moved to temporary quarters in Houston; and in 1964 it occupied its new home. The 4.1-square-kilometer center provided facilities for spacecraft design and testing, crew training, and flight operations or mission control. By 1965 nearly 5000 civil servants and about twice that many aerospace-contractor employees were working at the Texas site.2 Heading this second largest of NASA's manned spaceflight centers was the man who had formed its predecessor group in 1958, Robert R. Gilruth. Gilruth had joined the staff at Langley in 1937 when it was a center for aeronautics research of NASA's precursor, the National Advisory Committee for Aeronautics (NACA). He soon demonstrated his ability in Langley's Flight Research Division, working with test pilots in quantifying the characteristics that make a satisfactory airplane. Progressing to transonic and supersonic flight research, Gilruth came naturally to the problems of guided missiles. In 1945 he was put in charge of the Pilotless Aircraft Research Division at Wallops Island, Virginia, where one problem to be solved was that of bringing a missile back through the atmosphere intact. When the decision was made in 1958 to give the new national space agency the job of putting a man into earth orbit, Gilruth and several of his Wallops Island colleagues moved to the Space Task Group, a new organization charged with designing the spacecraft to do that job.3 The Space Task Group had, in fact, already claimed that task for itself, and it went at the problem in typical NACA fashion. NACA had been a design, research, and testing organization, accustomed to working with aircraft builders but doing no fabrication work itself. The same mode characterized MSC. The Mercury and Gemini spacecraft owed their basic design to Gilruth's engineers, who supervised construction by the McDonnell Aircraft Company of St. Louis and helped test the finished hardware.4 In the summer of 1965 the Manned Spacecraft Center was up to its ears in work. By the middle of June two manned Gemini missions had been flown and a third was in preparation. Thirty-three astronauts, including the first six selected as scientist-astronauts,i were in various stages of training and preparation for flight. Reflecting the general bullishness of the manned space program, NASA announced plans in September to recruit still more flight crews.5 Houston's design engineers, meanwhile, were hard at work on the spacecraft for the Apollo program. The important choice of mission mode-rendezvous in lunar orbit-had been made in 1962; it dictated two vehicles, whose construction MSC was supervising. North American Aviation, Inc., of Downey, California, was building the command ship consisting of a command module and a supporting service module- collectively called the command and service module-which carried the crew to lunar orbit and back to earth. 
A continent away in Bethpage Long Island, Grumman Aircraft Engineering Corporation was working on the lunar module, a spidery-looking spacecraft that would set two men down on the moon's surface and return them to the command module, waiting in lunar orbit, for the trip home to earth. Houston engineers had established the basic design of both spacecraft and were working closely with the contractors in building and testing them. All of the important subsystems-guidance and navigation, propulsion and attitude control, life-support and environmental control-were MSC responsibilities; and beginning with Gemini 4, control of all missions passed to Houston once the booster had cleared the launch pad.6 Since the drama of spaceflight was inherent in the risks taken by the men in the spacecraft, public attention was most often directed at the Houston operation. This superficial and news-conscious view, though true enough during flight and recovery, paid scant attention to the launch vehicles and to the complex operations at the launch site, without which the comparatively small spacecraft could never have gone anywhere, let alone to the moon. The Saturn launch vehicles were the responsibility of NASA's largest field center, the George C. Marshall Space Flight Center, 10 kilometers southwest of Huntsville in northern Alabama. Marshall had been built around the most famous cadre in rocketry-Wernher von Braun and his associates from Peenemunde, Germany's center for rocket research during World War II. Driven since his schoolboy days by the dream of spaceflight, von Braun in 1965 was well on the way to seeing that dream realized, for the NASA center of which he was director was supervising the development of the Saturn V, the monster three-stage rocket that would power the moon mission.7 Marshall Space Flight Center was shaped by experiences quite unlike those that molded the Manned Spacecraft Center. The rocket research and development that von Braun and his colleagues began in Germany in the 1930s had been supported by the German army, and their postwar work continued under the supervision of the U.S. army. In 1950 the group moved to Redstone Arsenal outside Huntsville, where it functioned much as an army arsenal does, not only designing launch vehicles but building them as well. From von Braun all the way down, Huntsville's rocket builders were dirty-hands engineers, and they had produced many Redstone and Jupiter missiles. In 1962 von Braun remarked in an article written for a management magazine, "we can still carry an idea for a space vehicle . . . from the concept through the entire development cycle of design, development, fabrication, and testing." That was the way he felt his organization should operate, and so it did; of 10 first stages built for the Saturn I, 8 were turned out at Marshall.8 The sheer size of the Apollo task required a division of responsibility, and the MSC and Marshall shares were sometimes characterized as "above and below the instrument unit." ii To be sure, the booster and its payload were not completely independent, and the two centers cooperated whenever necessary. But on the whole, as Robert Gilruth said of their roles, "They built a damned good rocket and we built a damned good spacecraft." 
Von Braun, however, whose thinking had never been restricted to launch vehicles alone, aspired to a larger role for Marshall: manned operations, construction of stations in earth orbit, and all phases of a complete space program-which would eventually encroach on Houston's responsibilities.9 But as long as Marshall was occupied with Saturn, that aspiration was far from realization. Saturn development was proceeding well in 1965. The last test flights of the Saturn I were run off that year and preparations were under way for a series of Saturn IB shots. iii In August each of the three stages of the Saturn V was successfully static-fired at full thrust and duration. Not only that, but the third stage was fired, shut down, and restarted, successfully simulating its role of injecting the Apollo spacecraft into its lunar trajectory. Flight testing remained to be done, but Saturn V had taken a long stride.10 Confident though they were of ultimate success, Marshall's 7300 employees could have felt apprehensive about their future that summer. After Saturn V there was nothing on the drawing boards. Apollo still had a long way to go, but most of the remaining work would take place in Houston. Von Braun could hardly be optimistic when he summarized Marshall's prospects in a mid-August memo. Noting the trend of spaceflight programs, especially booster development, and reminding his coworkers that 200 positions were to be transferred from Huntsville to Houston, von Braun remarked that it was time "to turn our attention to the future role of Marshall in the nation's space program." As a headquarters official would later characterize it, Marshall in 1965 was "a tremendous solution looking for a problem." Sooner than the other centers, Marshall was seriously wondering, "What do we do after Apollo ?" 11 Some 960 kilometers southeast of Huntsville, halfway down the Atlantic coast of Florida, the third of the manned spaceflight centers had no time for worry about the future. The John F. Kennedy Space Center, usually referred to as "the Cape" from its location adjacent to Cape Canaveral' was in rapid expansion. What had started as the Launch Operations Directorate of Marshall Space Flight Center was, by 1965, a busy center with a total work force (including contractor employees) of 20 000 people. In April construction teams topped off the huge Vehicle Assembly Building, where the 110-meter Saturn V could be assembled indoors. Two months later road tests began for the mammoth crawler-transporter that would move the rocket, complete and upright, to one of two launch pads. Twelve kilometers eastward on the Cape, NASA launch teams were winding up Saturn I flights and working Gemini missions with the Air Force.12 Under the directorship of Kurt Debus, who had come from Germany with von Braun in 1945, KSC's responsibilities included much more than launching rockets. At KSC all of the booster stages and spacecraft first came together, and though they were thoroughly checked and tested by their manufacturers, engineers at the Cape had to make sure they worked when put together. One of KSC's largest tasks was the complete checkout of every system in the completed vehicle, verifying that NASA's elaborate system of "interface control" actually worked. If two vehicle components, manufactured by different contractors in different states, did not function together as intended, it was KSC's job to find out why and see that they were fixed. 
Checkout responsibility brought KSC into close contact not only with the two other NASA centers but with all of the major contractors.13 Responsibility for orchestrating the operations of the field centers and their contractors lay with the Office of Manned Space Flight (OMSF) at NASA Headquarters in Washington. One of three program offices, OMSF reported to NASA's third-ranking official, Associate Administrator Robert C. Seamans, Jr. Ever since the Apollo commitment in 1961, OMSF had overshadowed the other program offices (the Office of Space Science and Applications and the Office of Advanced Research and Technology) not only in its share of public attention but in its share of the agency's budget. Directing OMSF in 1965 was George E. Mueller (pronounced "Miller"), an electrical engineer with a doctorate in physics and 23 years' experience in academic and industrial research. Before taking the reins as associate administrator for manned spaceflight in 1963, Mueller had been vice president of Space Technology Laboratories, Inc., in Los Angeles, where he was deeply involved in the Air Force's Minuteman missile program. He had spent his first year in Washington reorganizing OMSF and gradually acclimatizing the field centers to his way of doing business. Considering centralized control to be the prime requisite for achieving the Apollo goal, Mueller established an administrative organization that gave Headquarters the principal responsibility for policy-making while delegating as much authority as possible to the centers.14 Mueller had to pick his path carefully, for the centers had what might be called a "States'-rights attitude" toward direction from Headquarters and had enjoyed considerable autonomy. Early in his tenure, convinced that Apollo was not going to make it by the end of the decade, Mueller went against center judgment to institute "all-up" testing for the Saturn V. This called for complete vehicles to be test-flown with all stages functioning the first time-a radical departure from the stage-by-stage testing NASA and NACA had previously done, but a procedure that had worked for Minuteman. It would save time and money-if it worked- but would put a substantial burden on reliability and quality control. Getting the centers to accept all-up testing was no small feat; when it succeeded, Mueller's stock went up. Besides putting Apollo back on schedule, this practice increased the possibility that some of the vehicles ordered for Apollo might become surplus and thus available for other uses.15 In an important sense the decision to shoot for the moon shortcircuited conventional schemes of space exploration. From the earliest days of serious speculation on exploration of the universe, the Europeans who had done most of it assumed that the first step would be a permanent station orbiting the earth. Pioneers such as Konstantin Eduardovich Tsiolkowskiy and Hermann Oberth conceived such a station to be useful, not only for its vantage point over the earth below, but as a staging area for expeditions outward. Wernher von Braun, raised in the European school, championed the earth-orbiting space station in the early 1950s in a widely circulated national magazine article.16 There were sound technical reasons for setting up an orbiting waystation en route to distant space destinations. Rocket technology was a limiting factor; building a station in orbit by launching its components on many small rockets seemed easier than developing the huge ones required to leave the earth in one jump. 
Too, a permanent station would provide a place to study many of the unknowns in manned flight, man's adaptability to weightlessness being an important one. There was, as well, a wealth of scientific investigation that could be done in orbit. The space station was, to many, the best way to get into space exploration; all else followed from that.17 The sense of urgency pervading the United States in the year following Sputnik was reflected in the common metaphor, "the space race." It was a race Congress wanted very much to win, even if the location of the finish line was uncertain. In late 1958 the House Select Committee on Space began interviewing leading scientists, engineers, corporate executives, and government officials, seeking to establish goals beyond Mercury. The committee's report, The Next Ten Years in Space, concluded that a space station was the next logical step. Wernher von Braun and his staff at the Army Ballistic Missile Agency presented a similar view in briefings for NASA. Both a space station and a manned lunar landing were included in a list of goals given to Congress by NASA Deputy Administrator Hugh Dryden in February 1959.18 Later that year NASA created a Research Steering Committee on Manned Space Flight to study possibilities for post-Mercury programs. That committee is usually identified as the progenitor of Apollo; but at its first meeting members placed a space station ahead of the lunar landing in a list of logical steps for a long-term space program. Subsequent meetings debated the research value of a station versus a moon landing, advocated as a true "end objective" requiring no justification in terms of some larger goal to which it contributed. Both the space station and the lunar mission had strong advocates, and Administrator T. Keith Glennan declined to commit NASA either way. Early in 1960, however, he did agree that after Mercury the moon should be the end objective of manned spaceflight.19 Still, there remained strong justification for the manned orbital station and plenty of doubt that rocket development could make the lunar voyage possible at any early date. Robert Gilruth told a symposium on manned space stations in the spring of 1960 that NASA's flight missions were a compromise between what space officials would like to do and what they could do. Looking at all the factors involved, Gilruth said, "It appears that the multi-man earth satellites are achievable . . ., while such programs as manned lunar landing and return should not be directly pursued at this time. " Heinz H. Koelle, chief of the Future Projects Office at Marshall Space Flight Center, offered the opinion that a small laboratory was the next logical step in earth-orbital operations, with a larger (up to 18 metric tons) and more complex one coming along when rocket payloads could be increased.20 This was the Marshall viewpoint, frequently expressed up until 1962. During 1960, however, manned flight to the moon gained ascendancy. In the fiscal 1961 budget hearings, very little was said about space stations; the budget proposal, unlike the previous year's, sought no funds for preliminary studies. The agency's long-range plan of January 1961 dropped the goal of a permanent station by 1969; rather, the Space Task Group was considering a much smaller laboratory-one that could fit into the adapter section that supported the proposed Apollo spacecraft on its launch vehicle.21 Then, in May 1961, President John F. 
Kennedy all but sealed the space station's fate with his proclamation of the moon landing as America's goal in space. It was the kind of challenge American technology could most readily accept: concise, definite, and measurable. Success or failure would be self-evident. It meant, however, that all of the efforts of NASA and much of aerospace industry would have to be narrowly focused. Given a commitment for a 20-year program of methodical space development, von Braun's 1952 concept might have been accepted as the best way to go. With only 8 1/2 years it was out of the question. The United States was going to pull off its biggest act first, and there would be little time to think about what might follow. The decision to go for the moon did not in itself rule out a space station; it made a large or complex one improbable, simply because there would be neither time nor money for it. At Marshall, von Braun's group argued during the next year for reaching the moon by earth-orbit rendezvous-the mission mode whereby a moon-bound vehicle would be fueled from "tankers" put into orbit near the earth. Compared to the other two modes being considered-direct flight and lunar-orbit rendezvousiv-this seemed both safer and more practical, and Marshall was solidly committed to it. In studies done in 1962 and 1963, Marshall proposed a permanent station capable of checking out and launching lunar vehicles. In June 1962, however, NASA chose lunar-orbit rendezvous for Apollo, closing off prospects for extensive earth-orbital operations as a prerequisite for the lunar landing.22 From mid-1962, therefore, space stations were proper subjects for advanced studies-exercises to identify the needs of the space program and pinpoint areas where research and development were required. Much of this future-studies work went to aerospace contractors, since NASA was heavily engaged with Apollo. The door of the space age had just opened, and it was an era when, as one future projects official put it, "the sky was not the limit" to imaginative thinking. Congress was generous, too; between 1962 and 1965 it appropriated $70 million for future studies. A dozen firms received over 140 contracts to study earth-orbital, lunar, and planetary missions and the spacecraft to carry them out. There were good reasons for this intensive planning. As a NASA official told a congressional committee, millions of dollars in development costs could be saved by determining what not to try.23 Langley Research Center took the lead in space-station studies in the early 1960s. After developing a concept for a modest station in the summer of 1959-one that foreshadowed most of Skylab's purposes and even considered the use of a spent rocket stage-Langley's planners went on to consider much bigger stations. Artificial gravity, to be produced by rotating the station, was one of their principal interests from the start. Having established an optimum rate and radius of rotation (4 revolutions per minute and 25 meters), they studied a number of configurations, settling finally on a hexagonal wheel with spokes radiating from a central control module. Enclosing nearly 1400 cubic meters of work space and accommodating 24 to 36 crewmen, the station would weigh 77 metric tons at launch.24 Getting something of this size into orbit was another problem. Designers anticipated severe problems if the station were launched piecemeal and assembled in orbit-a scheme von Braun had advocated 10 years earlier-and began to consider inflatable structures. 
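As a rough aside (the following back-of-the-envelope calculation is ours, not the source's), the rotation parameters Langley settled on imply a rim gravity of a little under half the earth's; 4 revolutions per minute at a 25-meter radius gives
% illustrative check only: centripetal acceleration for the Langley baseline
\[
\omega = 4\ \text{rpm} = \frac{4 \times 2\pi}{60}\ \text{rad/s} \approx 0.42\ \text{rad/s},
\qquad
a = \omega^{2} r \approx (0.42)^{2} \times 25\ \text{m} \approx 4.4\ \text{m/s}^2 \approx 0.45\,g.
\]
This suggests why useful artificial gravity at a tolerable spin rate pushed designers toward structures tens of meters across.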
Although tests were run on an 8-meter prototype, the concept was finally rejected, partly on the grounds that such a structure would be too vulnerable to meteoroids. As an alternative Langley suggested a collapsible structure that could be erected, more or less umbrella-fashion, in orbit and awarded North American Aviation a contract to study it.25 Langley's first efforts were summarized in a symposium in July 1962. Papers dealt with virtually all of the problems of a large rotating station, including life support, environmental control, and waste management. Langley engineers felt they had made considerable progress toward defining these problems; they were somewhat concerned, however, that their proposals might be too large for NASA's immediate needs.26 Similar studies were under way in Houston, where early in 1962 MSC began planning a large rotating station to be launched on the Saturn V. As with Langley's proposed stations, Houston's objectives were to assess the problems of living in space and to conduct scientific and technological research. Resupply modules and relief crews would be sent to the station with the smaller Saturn IB and an Apollo spacecraft modified to carry six men, twice its normal complement. MSC's study proposed to put the station in orbit within four years.27 By the fall of 1962 the immediate demands of Apollo had eased somewhat, allowing Headquarters to give more attention to future programs. In late September Headquarters officials urged the centers to go ahead with their technical studies even though no one could foresee when a station might fly. Furthermore, it had begun to look as though rising costs in Apollo would reduce the money available for future programs. Responses from both MSC and Langley recognized the need for simplicity and fiscal restraint; but the centers differed as to the station's mission. Langley emphasized a laboratory for advanced technology. Accordingly, NASA's offices of space science and advanced technology should play important roles in planning. MSC considered the station's major purpose to be a base for manned flights to Mars.28 The following month Joseph Shea, deputy director for systems in the Office of Manned Space Flight, sought help in formulating future objectives for manned spaceflight. In a letter to the field centers and Headquarters program offices, Shea listed several options being considered by OMSF, including an orbiting laboratory. Such a station was thought to be feasible, he said, but it required adequate justification to gain approval. He asked for recommendations concerning purposes, configurations' and specific scientific and engineering requirements for the space station, with two points defining the context: the importance of a space station program to science, technology, or national goals; and the unique characteristics of such a station and why such a program could not be accomplished by using Mercury, Gemini, Apollo, or unmanned spacecraft.29 Public statements and internal correspondence during the next six months stressed the agency's intention to design a space station that would serve national needs.30 By mid-1963, NASA had a definite rationale for an earth-orbiting laboratory. The primary mission on early flights would be to determine whether man could live and work effectively in space for long periods. The weightlessness of space was a peculiar condition that could not be simulated on earth-at least not for more than 30 seconds in an airplane. 
No one could predict either the long-term effects of weightlessness or the results of a sudden return to normal gravity. These biomedical concerns, though interesting in themselves, were part of a larger goal: to use space stations as bases for interplanetary flight. A first-generation laboratory would provide facilities to develop and qualify the various systems, structures, and operational techniques needed for an orbital launch facility or a larger space station. Finally, a manned laboratory had obvious uses in the conduct of scientific research in astronomy, physics, and biology. Although mission objectives and space-station configuration were related, the experiments did not necessarily dictate a specific design. NASA could test man's reaction to weightlessness in a series of gradually extended flights beginning with Gemini hardware, a low-cost approach particularly attractive to Washington. An alternate plan would measure astronauts' reaction to varying levels of artificial gravity within a large rotating station. Joseph Shea pondered the choices at a conference in August 1963: Is a minimal Apollo-type MOL [Manned Orbiting Laboratory] sufficient for the performance of a significant biomedical experiment? Or perhaps the benefits of a truly multi-purpose MOL are so overwhelming . . that one should not spend unnecessary time and effort . . . building small stations, but, rather, proceed immediately with the development of a large laboratory in space.31 Whatever choice NASA made, it could select from a wide range of spacestation concepts generated since 1958 by the research centers and aerospace contractors. The possibilities fit into three categories: small, medium, and large. The minimum vehicle, emphasizing the use of developed hardware, offered the shortest development time and lowest cost. Most often mentioned in this category was Apollo, the spacecraft NASA was developing for the lunar landings. There were three basic parts to Apollo: command, service, and lunar modules. The conical command module carried the crew from launch to lunar orbit and back to reentry and recovery, supported by systems and supplies in the cylindrical service module to which it was attached until just before reentry. Designed to support three men, the CM was roomy by Gemini standards, even though its interior was no larger than a small elevator. Stowage space was at a premium, and not much of its instrumentation could be removed for operations in earth orbit. One part of the service module was left empty to accommodate experiments, but it was unpressurized and could only be reached by extravehicular activity. The lunar module was an even more specialized and less spacious craft. It was in two parts: a pressurized ascent stage containing the life-support and control systems, and a descent stage, considerably larger but unpressurized. The descent stage could be fitted with a fair amount of experiments; but like the service module, it was accessible only by extravehicular activity.32 The shortage of accessible space was an obvious difficulty in using Apollo hardware for a space station. Proposals had been made to add a pressurized module that would fit into the adapter area, between the launch vehicle and the spacecraft, but this tended to offset the advantages of using existing hardware. 
Still, in July 1963, with the idea of an Apollo laboratory gaining favor, Headquarters asked Houston to supervise a North American Aviation study of an Extended Apollo mission.33 North American, MSC's prime Apollo contractor, had briefly considered the Space Task Group's proposal for an Apollo laboratory two years earlier. Now company officials revived the idea of the module in the adapter area, which had grown considerably during the evolution of the Saturn design. Though the study's primary objective was to identify the modifications required to support a 120-day flight, North American also examined the possibility of a one-year mission sustained by periodic resupply of expendables. Three possible configurations were studied: an Apollo command module with enlarged subsystems; Apollo with an attached module supported by the command module; and Apollo plus a new, selfsupporting laboratory module. A crew of two was postulated for the first concept; the others allowed a third astronaut.34 Changing the spacecraft's mission would entail extensive modifications but no basic structural changes. Solar cells would replace the standard hydrogen-oxygen fuel cells, which imposed too great a weight penalty. In view of the adverse effects of breathing pure oxygen for extended periods, North American recommended a nitrogen-oxygen atmosphere, and instead of the bulky lithium hydroxide canister to absorb carbon dioxide, the study proposed to use more compact and regenerable molecular sieves.v Drawing from earlier studies, the study group prepared a list of essential medical experiments and established their approximate weights and volumes, as well as the power, time, and workspace required to conduct them. It turned out that the command module was too small to support more than a bare minimum of these experiments, and even with the additional module and a third crewman there would not be enough time to perform all of the desired tests.35 North American's study concluded that all three concepts were technically sound and could perform the required mission. The command module alone was the least costly, but reliance on a two-man crew created operational liabilities. Adding a laboratory module, though obviously advantageous, increased costs by 15-30% and posed a weight problem. Adding the dependent module brought the payload very near the Saturn IB's weight-lifting limit, while the independent module exceeded it. Since NASA expected to increase the Saturn's thrust by 1967, this was no reason to reject the concept; however, it represented a problem that would persist until 1969: payloads that exceeded the available thrust. North American recommended that any follow-up study be limited to the Apollo plus a dependent module, since this had the greatest applicability to all three mission proposals. The findings were welcomed at Headquarters, where the funding picture for post-Apollo programs remained unclear. 
The company was asked to continue its investigation in 1964, concentrating on the technical problems of extending the life of Apollo subsystems.36 Several schemes called for a larger manned orbiting laboratory that would support four to six men for a year with ample room for experiments. Like the minimum vehicle, the medium-sized laboratory was usually a zero-gravity station that could be adapted to provide artificial gravity. Langley's Manned Orbiting Research Laboratory, a study begun in late 1962, was probably the best-known example of this type: a four-man canister 4 meters in diameter and 7 meters long containing its own life-support systems. Although the laboratory itself would have to be developed, launch vehicles and ferry craft were proven hardware. A Saturn IB or the Air Force's Titan III could launch the laboratory, and Gemini spacecraft would carry the crews. Another advantage was simplicity: the module would be launched in its final configuration, with no requirement for assembly or deployment in orbit. Use of the Gemini spacecraft meant there would be no new operational problems to solve. Even so, the initial cost was unfavorable and Headquarters considered the complicated program of crew rotation a disadvantage.37 Large station concepts, like MSC's Project Olympus, generally required a Saturn V booster and separately launched crew-ferry and logistics spacecraft. Crew size would vary from 12 to 24, and the station would have a five-year life span. Proposed large laboratories ranged from 46 to 61 meters in diameter, and typically contained 1400 cubic meters of space. Most provided for continuous rotation to create artificial gravity, with non-rotating central hubs for docking and zero-gravity work. Such concepts represented a space station in the traditional sense of the term, but entailed quite an increase in cost and development time.38 Despite the interest in Apollo as an interim laboratory, Houston was more enthusiastic about a large space station. In June 1963, MSC contracted for two studies, one by Douglas Aircraft Company for a zero-gravity station and one with Lockheed for a rotating station. Study specifications called for a Saturn V booster, a hangar to enclose a 12-man ferry craft, and a 24-man crew. Douglas produced a cylindrical design 31 meters long with pressurized compartments for living quarters and recreation, a command center, a laboratory that included a one-man centrifuge to simulate gravity for short periods, and a hangar large enough to service four Apollos. The concept, submitted in February 1964, was judged to be within projected future capabilities, but the work was discontinued because there was no justification for a station of that size.39 Lockheed's concept stood a better chance of eventual adoption, since it provided artificial gravity-favored by MSC engineers, not simply for physiological reasons but for its greater efficiency. As one of them said, "For long periods of time [such as a trip to Mars], it might just be easier and more comfortable for man to live in an environment where he knew where the floor was, and where his pencil was going to be, and that sort of thing." Lockheed's station was a Y-shaped module with a central hub providing a zero-gravity station and a hangar for ferry and logistics spacecraft. Out along the radial arms, 48 men could live in varying levels of artificial gravity.40 While studies of medium and large stations continued, NASA began plans in 1964 to fly Extended Apollo as its first space laboratory. 
George Mueller's all-up testing decision in November 1963 increased the likelihood of surplus hardware by reducing the number of launches required in the moon program. Officials refused to predict how many flights might be eliminated, but 1964 plans assumed 10 or more excess Saturns. Dollar signs, however, had become more important than surplus hardware. Following two years of generous support, Congress reduced NASA's budget for fiscal 1964 from $5.7 to $5.1 billion. The usually optimistic von Braun told Heinz Koelle in August 1963, "I'm convinced that in view of NASA's overall funding situation, this space station thing will not get into high gear in the next few years. Minimum C-IB approach [Saturn IB and Extended Apollo] is the only thing we can afford at this time." The same uncertainty shaped NASA's planning the following year. In April 1964, Koelle told von Braun that Administrator James Webb had instructed NASA planners to provide management with "various alternative objectives and missions and their associated costs and consequences rather than detailed definition of a single specific long term program." Von Braun's wry response summed up NASA's dilemma: "Yes, that's the new line at Hq., so they can switch the tack as the Congressional winds change."41 At the FY 1965 budget hearings in February 1964, testimony concerning advanced manned missions spoke of gradual evolution from Apollo-Saturn hardware to more advanced spacecraft. NASA had not made up its mind about a post-Apollo space station. Two months later, however, Michael Yarymovych, director for earth-orbital-mission studies, spelled out the agency's plans to the First Space Congress meeting at Cocoa Beach, Florida. Extended Apollo, he said, would be an essential element of an expanding earth-orbital program, first as a laboratory and later as a logistics system. Some time in the future, NASA would select a more sophisticated space station from among the medium and large concepts under consideration. Mueller gave credence to his remarks the following month by placing Yarymovych on special assignment to increase Apollo system capabilities.42 Meanwhile, a project had appeared that was to become Skylab's chief competitor for the next five years: an Air Force orbiting laboratory. For a decade after Sputnik, the U.S. Air Force and NASA vied for roles in space. The initial advantage lay with the civilian agency, for the Space Act of 1958 declared that "activities in space should be devoted to peaceful purposes." In line with this policy, the civilian Mercury project was chosen over the Air Force's "Man in Space Soonest" as America's first manned space program.43 But the Space Act also gave DoD responsibility for military operations and development of weapon systems; consequently the Air Force sponsored studies over the next three years to define space bombers, manned spy-satellites, interceptors, and a command and control center. In congressional briefings after the 1960 elections, USAF spokesmen stressed the theme that "military space, defined as space out to 10 Earth diameters, is the battleground of the future."44 For all its efforts, however, the Air Force could not convince its civilian superiors that space was the next battleground. When Congress added $86 million to the Air Force budget for its manned space glider, Dyna-Soar, Secretary of Defense Robert S. McNamara refused to spend the money. 
DoD's director of defense research and development testified to a congressional committee, "there is no definable need at this time, or military requirement at this time" for a manned military space program. It was wise to advance American space technology, since military uses might appear; but "NASA can develop much of it or even most of it." Budget requests in 1962 reflected the Air Force's loss of position. NASA's $3.7 billion authorization was three times what the Air Force got for space activities; three years earlier the two had been almost equal.45 Throughout the Cold War, Russian advances proved the most effective stimuli for American actions; so again in August 1962 a Soviet space spectacular strengthened the Air Force argument for a space role. Russia placed two spacecraft into similar orbits for the first time. Vostok 3 and 4 closed to within 6 1/2 kilometers, and some American reports spoke of a rendezvous and docking. Air Force supporters saw military implications in the Soviet feat, prompting McNamara to reexamine Air Force plans. Critics questioned the effectiveness of NASA-USAF communication on technical and managerial problems. In response, James Webb created a new NASA post, deputy associate administrator for defense affairs, and named Adm. Walter F. Boone (USN, ret.) to it in November 1962. In the meantime, congressional demands for a crash program had subsided, partly because successful NASA launchesvi bolstered confidence in America's civilian programs.46 The Cuban missile crisis occupied the Pentagon's attention through much of the fall, but when space roles were again considered, McNamara showed a surprising change of attitude. Early in 1962 Air Force officials had begun talking about a "Blue Gemini" program, a plan to use NASA's Gemini hardware in early training missions for rendezvous and support of a military space station. Some NASA officials welcomed the idea as a way to enlarge the Gemini program and secure DoD funds. But when Webb and Seamans sought to expand the Air Force's participation in December 1962, McNamara proposed that his department assume responsibility for all America's manned spaceflight programs. NASA officials successfully rebuffed this bid for control, but did agree, at McNamara's insistence, that neither agency would start a new manned program in near-earth orbit without the other's approval.47 The issue remained alive for months. At one point the Air Force attempted to gain control over NASA's long-range planning. An agreement was finally reached in September protecting NASA's right to conduct advanced space-station studies but also providing for better liaison through the Aeronautics and Astronautics Coordinating Board (the principal means for formal liaison between the two agencies). The preamble to the agreement expressed the view that, as far as practicable, the two agencies should combine their requirements in a common space-station.48 McNamara's efforts for a joint space-station were prompted in part by Air Force unhappiness with Gemini. Talk of a "Blue Gemini" faded in 1963 and Dyna-Soar lost much of its appeal. 
If NASA held to its schedules, Gemini would fly two years before the space glider could make its first solo flight. On 10 December Secretary McNamara terminated the Dyna-Soar project, transferring a part of its funds to a new project, a Manned Orbiting Laboratory (MOL).49 With MOL the Air Force hoped to establish a military role for man in space; but since the program met no specific defense needs, it had to be accomplished at minimum cost. Accordingly, the Air Force planned to use proven hardware: the Titan IIIC launch vehicle, originally developed for the Dyna-Soar, and a modified Gemini spacecraft. Only the system's third major component, the laboratory, and its test equipment would be new. The Titan could lift 5700 kilograms in addition to the spacecraft; about two-thirds of this would go to the laboratory, the rest to test equipment. Initial plans provided 30 cubic meters of space in the laboratory, roughly the volume of a medium-sized house trailer. Laboratory and spacecraft were to be launched together; when the payload reached orbit, two crewmen would move from the Gemini into the laboratory for a month's occupancy. Air Force officials projected a cost of $1.5 billion for four flights, the first in 1968.50 The MOL decision raised immediate questions about the NASA-DoD pact on cooperative development of an orbital station. Although some outsiders considered the Pentagon's decision a repudiation of the Webb-McNamara agreement, both NASA and DoD described MOL as a single military project rather than a broad space program. They agreed not to construe it as the National Space Station, a separate program then under joint study; and when NASA and DoD established a National Space Station Planning Subpanel in March 1964 (as an adjunct of the Aeronautics and Astronautics Coordinating Board), its task was to recommend a station that would follow MOL. Air Force press releases implied that McNamara's approval gave primary responsibility for space stations to the military, while NASA officials insisted that the military program complemented its own post-Apollo plans. Nevertheless, concern that the two programs might appear too similar prompted engineers at Langley and MSC to rework their designs to look less like MOL.51 Actually, McNamara's announcement did not constitute program approval, and for the next 20 months MOL struggled for recognition and adequate funding. Planning went ahead in 1964 and some contracts were let, but the deliberate approach to MOL reflected political realities. In September Congressman Olin Teague (Dem., Tex.), chairman of the House Subcommittee on Manned Space Flight and of the Subcommittee on NASA Oversight, recommended that DoD adapt Apollo to its needs. Shortly after the 1964 election, Senate space committee chairman Clinton Anderson (Dem., N.M.) told the president that he opposed MOL; he believed the government could save more than a billion dollars in the next five years by canceling the Air Force project and applying its funds to an Extended Apollo station. Despite rumors of MOL's impending cancellation, the FY 1966 budget proposal included a tentative commitment of $150 million.52 The Bureau of the Budget, reluctant to approve two programs that seemed likely to overlap, allocated funds to MOL in December with the understanding that McNamara would hold the money pending further studies and another review in May. DoD would continue to define military experiments, while NASA identified Apollo configurations that might satisfy military requirements. 
A joint study would consider MOL's utility for non-military missions. A NASA-DoD news release on 25 January 1965 said that overlapping programs must be avoided. For the next few years both agencies would use hardware and facilities "already available or now under active development" for their manned spaceflight programs-at least "to the maximum degree possible."53 In February a NASA committee undertook a three-month study to determine Apollo's potential as an earth-orbiting laboratory and define key scientific experiments for a post-Apollo earth-orbital flight program. Although the group had worked closely with an Air Force team, the committee's recommendations apparently had little effect on MOL, the basic concept for which was unaltered by the review. More important, the study helped NASA clarify its own post-Apollo plans.54 Since late 1964, advocates of a military space program had increased their support for MOL, the House Military Operations Subcommittee recommending in June that DoD begin full-scale development without further delay. Two weeks later a member of the House Committee on Science and Astronautics urged a crash program to launch the first MOL within 18 months. Russian and American advances with the Voskhod and Gemini flights-multi-manned missions and space walks-made a military role more plausible. On 25 August 1965, MOL finally received President Johnson's blessing.55 Asked if the Air Force had clearly established a role for man in space, a Pentagon spokesman indicated that the chances seemed good enough to warrant evaluating man's ability "much more thoroughly than we're able to do on the ground." NASA could not provide the answers because the Gemini spacecraft was too cramped. One newsman wanted to know why the Air Force had abandoned Apollo; the reply was that Apollo's lunar capabilities were in many ways much more than MOL needed. If hindsight suggests that parochial interests were a factor, the Air Force nevertheless had good reasons to shun Apollo. The lunar landing remained America's chief commitment in space. Until the goal was accomplished, an Air Force program using Apollo would surely take second place.56 In early 1964 NASA undertook yet another detailed examination of its plans, this time at the request of the White House. Lyndon Johnson had played an important role in the U.S. space program since his days as the Senate majority leader. Noting that post-Apollo programs were likely to prove costly and complex, the president requested a statement of future space objectives and the research and development programs that supported them.57 Webb handed the assignment to an ad hoc Future Programs Task Group. After five months of work, the group made no startling proposals. Their report recognized that Gemini and Apollo were making heavy demands on financial and human resources and urged NASA to concentrate on those programs while deferring "large new mission commitments for further study and analysis." By capitalizing on the "size, versatility, and efficiency" of the Saturn and Apollo, the U.S. should be able to maintain space preeminence well into the 1970s. Early definition of an intermediate set of missions using proven hardware was recommended. Then, a relatively small commitment of funds within the next year would enable NASA to fly worthwhile Extended Apollo missions by 1968. 
Finally, long-range planning should be continued for space stations and manned flights to Mars in the 1970s.58 The report apparently satisfied Webb, who used it extensively in subsequent congressional hearings. It should also have pleased Robert Seamans, since he was anxious to extend the Apollo capability beyond the lunar landing. Others in and outside of NASA found fault with the document. The Senate space committee described the report as "somewhat obsolete," containing "less information than expected in terms of future planning." Committee members faulted its omission of essential details and recommended a 50% cut in Extended Apollo funding, arguing that enough studies had already been conducted. Elsewhere on Capitol Hill, NASA supporters called for specific recommendations. Within the space agency, some officials had hoped for a more ambitious declaration, perhaps a recommendation for a Mars landing as the next manned project. At Huntsville, a future projects official concluded that the plan offered no real challenge to NASA (and particularly to Marshall) once Apollo was accomplished.59 In thinking of future missions, NASA officials were aware of how little experience had been gained in manned flight. The longest Mercury mission had lasted less than 35 hours. Webb and Seamans insisted before congressional committees that the results of the longer Gemini flights might affect future planning, and a decision on any major new program should, in any event, be delayed until after the lunar landing. The matter of funding weighed even more heavily against starting a new program. NASA budgets had reached a plateau at $5.2 billion in fiscal 1964, an amount just sufficient for Gemini and Apollo. Barring an increase in available money, new manned programs would have to wait for the downturn in Apollo spending after 1966. There was little support in the Johnson administration or Congress to increase NASA's budget; indeed, Great Society programs and the Vietnam war were pushing in the opposite direction. The Air Force's space program was another problem, since some members of Congress and the Budget Bureau favored MOL as the country's first space laboratory.60 Equally compelling reasons favored an early start of Extended Apollo. A follow-on program, even one using Saturn and Apollo hardware, would require three to four years' lead time. Unless a new program started in 1965 or early 1966, the hiatus between the lunar landing program and its successor would adversely affect the 400 000-member Apollo team. Already, skilled design engineers were nearing the end of their tasks. The problem was particularly worrisome to Marshall, for Saturn IB-Apollo flights would end early in 1968. In the fall of 1964, a Future Projects Group appointed by von Braun began biweekly meetings to consider Marshall's future. In Washington, George Mueller pondered ways of keeping the Apollo team intact. By 1968 or 1969, when the U.S. Ianded on the moon, the nation's aerospace establishment would be able to produce and fly 8 Apollos and 12 Saturns per year; but Mueller faced a cruel paradox: the buildup of the Apollo industrial base left him no money to employ it effectively after the lunar landing.61 Until mid-1965 Extended Apollo was classified as advanced study planning; that summer Mueller moved it into the second phase of project development, project definition. A Saturn-Apollo Applications Program Office was established alongside the Gemini and Apollo offices at NASA Headquarters. Maj. Gen. 
David Jones, an Air Force officer on temporary duty with NASA, headed the new office; John H. Disher became deputy director, a post he would fill for the next eight years.62 Little fanfare attended the opening on 6 August 1965. Apollo and Gemini held the spotlight, but establishment of the program office was a significant milestone nonetheless. Behind lay six years of space-station studies and three years of post-Apollo planning. Ahead loomed several large problems: winning fiscal support from the Johnson administration and Congress, defining new relationships between NASA centers, and coordinating Apollo Applications with Apollo. Mueller had advanced the new program's cause in spite of these uncertainties, confident in the worth of Extended Apollo studies and motivated by the needs of his Apollo team. In the trying years ahead, the Apollo Applications Program (AAP) would need all the confidence and motivation it could muster.
i All three of the Skylab scientist-astronauts were in this first group, selected on 27 June 1965.
ii The instrument unit was the electronic nerve center of inflight rocket control and was located between the booster's uppermost stage and the spacecraft.
iii The Saturn IB or "uprated Saturn 1" was a two-stage rocket like its predecessor but with an improved and enlarged second stage.
iv In direct flight the vehicle travels from the earth to the moon by the shortest route, brakes, and lands; it returns the same way. This requires taking off with all the stages and fuel needed for the round trip, dictating a very large booster. In lunar-orbit rendezvous two spacecraft are sent to the moon: a landing vehicle and an earth-return vehicle. While the former lands, the latter stays in orbit awaiting the lander's return; when they have rejoined, the lander is discarded and the crew comes home in the return ship. Von Braun and his group adopted earth-orbit rendezvous as doctrine.
v Molecular sieves contain a highly absorbent mineral, usually a zeolite (a potassium aluminosilicate), whose structure is a 3-dimensional lattice with regularly spaced channels of molecular dimensions; the channels comprise up to half the volume of the material. Molecules (such as carbon dioxide) small enough to enter these channels are absorbed, and can later be driven off by heating, regenerating the zeolite for further use.
vi Mariner 2 was launched toward Venus on 27 August 1962; in October came two Explorer launches and the Mercury flight of Walter M. Schirra; on 16 November NASA conducted its third successful Saturn I test flight.
http://history.nasa.gov/SP-4208/ch1.htm
13
15
The Fair Use Doctrine is a part of the copyright law that limits the rights of the copyright holder to permit appropriate usage for educational purposes. Fair Use is the use, including copying, of copyrighted material, without the holder's permission, for certain purposes. Those purposes include teaching, preparation for teaching, scholarship, research, and news reporting. The nature of fair use is ambiguous; the legal definition of fair use is only a set of guidelines. It does not articulate explicit rules or exact measures for educators and scholars. Compliance with fair use guidelines is the responsibility of anyone involved in the creation, reproduction or distribution of such products used in educational activities for the institution. To determine if your "use" (copying, performing, etc.) of a copyrighted work is legitimate, consider all four of the following factors:
- The purpose and character of the use - e.g., nonprofit, educational
- The nature of the original copyrighted work, especially whether it is creative (like a novel) or factual (like news reporting). Creative works generally are more protected.
- The amount and substantiality of the portion used in relation to the work as a whole.
- The effect of the use on the potential market value of the work.
Wheaton College has defined guidelines for fair use in the following categories:
- copying for personal use
- copying for classroom use
- copying for reserve use
- the use of multimedia
- the use of computer/internet material
- interlibrary loan requests for copies
- writing with expired copyright may be reproduced
- most U.S. Government publications may be copied
- when duplication amounts to a "fair use" of the material
- most single-copy reproductions for one's personal use.
Number of copies (article or book chapter) permitted: Multiple copies can only be made if the following conditions are also met:
- Brevity. That is, the work is only one short poem or less than 250 words of a longer poem; one complete article, story or essay of less than 2,500 words or less than 10% of a larger work of prose; or a single chart, graph, cartoon, picture, etc.
- Spontaneity. The inspiration and decision to use the work are so close in time that it would be unreasonable to expect a timely reply to a request for permission to make multiple copies.
- Cumulative Effect. This basically means that the copying of this one work is not a part of a larger amount of multiple copying, especially of works of one author or from one volume.
- Copyright notice. Each copy includes a notice of copyright.
A teacher may make one copy, for each student, of a chapter from a book; a periodical/newspaper article; a short story, essay or poem; a chart, graph, diagram, cartoon, or a picture from a book, periodical or newspaper. This is permitted if:
- There is only one copy for each student;
- The material includes a copyright notice on the first page;
- The students are not charged a fee beyond the actual cost of copying.
Creating anthologies of photocopied materials: Guidelines prepared by author and publisher do not permit this unless copyright permission is granted. They state, "Copying shall not be used to create or to replace or substitute for anthologies, compilations or collective works." Several court cases have upheld this principle, including cases against New York University, University of Texas, Kinko's and Michigan Document Services, Inc. 
The copying of workbooks, exercises, standardized tests, test booklets, answer sheets and similar consumable material is not permitted. A music instructor can make copies of excerpts of sheet music or other printed works, provided that the excerpts do not constitute a "performable unit" such as a whole song, section, movement or aria.
Educators may use print, images, Internet sites, motion picture media and sound recordings - in both analog and digital formats. A digital copy is the same as a hard copy in terms of fair use. If a use is fair, the source of the content in question may be a recorded broadcast, a borrowed piece of media, a rented movie, or a teacher's personal copy of a newspaper or a DVD. Labels on commercial media products stating that they are "licensed for home (or private or educational or noncommercial) use only" do not affect the instructor's right to make fair use of the contents.
Fair use rights extend to the portions of copyrighted works that are considered necessary to achieve educational goals - and at times even to small or short works in their entirety. The fairness of a use depends, in part, on whether the user took more than was needed to achieve his or her legitimate purpose. Material selected should be pertinent to the project or topic, using only what is required for the educational purpose for which it is being made. There are no cut-and-dried rules that can be relied upon in making this determination (such as 10 percent of the work being quoted, or 400 words of text, etc.). Fair use is situational; context is important. Transformativeness, of significant consequence in fair use law, can entail revising material or putting material in a new context, or both. Educational uses that add significant academic value to referenced media objects will often be considered fair. In all cases, educators should provide proper attribution. Furthermore, when material is accessible in digital formats there should be security against third-party access and downloads.
- Copyrighted material may be employed in media lessons: Under fair use, teachers can select instructive material from copyrighted sources and make it available to students in class, workshops, informal mentoring and instructional settings, and on school-related Web sites.
- Copyrighted material may be used to prepare curriculum materials: Educators can integrate copyrighted material into curriculum materials including books, workbooks, podcasts, DVD compilations, videos, and Web sites.
- Sharing media curriculum materials: Teachers should be able to share effective examples of instruction, including resource materials and lessons. If curriculum developers are utilizing fair use when they produce materials, then their work should be seen and used, given that fair use applies to commercial materials as well.
- Student use of copyrighted materials in their own academic and creative work: Learners are free to include, revise, and re-present existing media objects in their own classroom assignments. Students' use of copyrighted material should not be a substitute for creative endeavors. Their use of a copyrighted work should alter or repurpose the original. Material included under fair use should be credited.
- Developing audiences for student work: Sharing students' work through the Internet, restricted to the college network, is apt to be considered fair use. But to distribute their work more broadly to the public, or to incorporate it as part of a personal portfolio, permission should be sought.
A single recording of a performance of copyrighted music may be made by a student for evaluation or rehearsal purposes, and the educational institution or individual teacher may keep a copy. In addition, a single copy of a sound recording of copyrighted music owned by an educational institution or an individual teacher (such as a tape, disc or cassette) may be made for the purpose of constructing aural exercises or examinations, and the educational institution or individual teacher can keep a copy. Students and educators may use copyrighted music for a variety of purposes, but cannot rely on fair use when their goal is to establish a mood, or when they make use of popular songs just to take advantage of their popularity.
Recording public television programs: Non-profit educational institutions may record television programs at the request of a teacher. The tape may not be altered in any way. The tape may be shown only in a place devoted to instruction. A tape may be shown to several classes if appropriate.
Purchasing or renting from a local video store for use in class: Ownership of a copy of a film or video does not confer the right to show the work. The performance rights granted by a license carry certain restrictions that should be followed. However, the use of these tapes -- which are generally licensed for "Home Use Only" -- is considered a fair use in a face-to-face teaching situation. A face-to-face teaching situation implies a classroom setting with only the instructor and students present. Furthermore, the activity must be part of the established curriculum. It does not extend to showing tapes for entertainment, or to students or others not in the class.
Copying a rental video for later use: This would clearly infringe on both the copyright and the license granted to the rental store.
Copying a college-owned video for Course Reserve: A duplicate copy can be made only after permission to copy has been obtained from the copyright holder. In some cases it may be easier simply to purchase a second copy.
Showing a video to a group or club outside of the classroom: Some film and video distributors will offer the rental or purchase of videos with "public performance rights" for a higher fee. The public performance right is what is needed to show a video in a non-teaching situation. Most of the instructional videos in the Library's collection have been purchased with the necessary public performance rights.
Showing a rental video in a residence hall lounge: Although experts disagree, this usage is clearly not a classroom situation, but some say it is comparable to home or private use and therefore legitimate. Others say that the somewhat public nature of a residence hall lounge would make this a public performance. If the group of people viewing the video is restricted to acquaintances of the student renter, it is probably acceptable. If the showing is publicized and/or presented with an "anyone welcome" attitude, it is probably not a legitimate use.
Copying a preview video before its return to the vendor or distributor: Preview videos may not be copied. However, the Library Acquisitions office may be able to arrange an extension of the preview period.
Copying a video (no longer available for acquisition) for preservation purposes: Permission to copy a video title in the Library's collection must still be obtained from the copyright holder, even if the video is no longer available for acquisition.
Copyright protection for World Wide Web pages: Copyright law applies to materials found on the Internet to the same extent it applies to material in traditional formats. The stylistic and content elements of a web page, and the overall design of a web page, are protected by copyright from the moment of creation.
Internet postings and "public domain": Considerable controversy surrounds this concept, and copyright law certainly does not recognize the principle of implied license. Works enter the public domain only through the permission of the creator or the expiration of copyright.
Copying a portion of another person's Web page: Educators and students must credit the sources and display the copyright notice and copyright ownership information. A full bibliographic description, including author, title, publisher, and place and date of publication, must identify the source.
Using a digital image, a scanned photo or a graphic illustration in a published work and placing it on your web page: A photograph or illustration may be used in its entirety. These digital images may be displayed on the institution's secure electronic network, to students enrolled in a course, by the educator, for classroom use. Scanning a graphic or photo that is old enough for the copyright to have expired (generally over 75 years old) or that is in the public domain would be permissible.
Linking to anything on the Web: The Digital Millennium Copyright Act (PL 105-304, Section 512) provides a "safe harbor" for Online Service Providers (OSP), such as libraries, stating that they will not be held liable for linking to sites. Links are a form of citation. You may make links to other Web locations from your own Web site. But bringing material from another site into your own site is not allowed.
Copying someone else's list of links: Using a few links from a list that someone else has created should not pose a problem. The creation of a large list of links is potentially copyrightable, so it could only be copied in its entirety under the conditions of fair use.
Copying a newsgroup message (electronic mail, discussion lists, and Web blogs): The author holds copyright in e-mail. Such messages are definitely copyrighted and have the same protection as any written work, but making one print copy for your own personal/educational use would most likely constitute fair use. These messages are not in the public domain.
Sharing someone else's message in a listserv or newsgroup: Forwarding of e-mail without permission is not permitted. This would most likely constitute unauthorized distribution, unless it was a reposting of the message to the original group of people (i.e. in responding to a newsgroup, listserv or regular email message).
Online course readings: There are four means to place a course reading online: electronic reserve, onCourse, a Website, or Blackboard courseware. We have prepared guidelines for you to make an e-reserve request. If readings are placed on onCourse, your Website, or Blackboard, first determine whether the use of the material falls within the fair use guidelines. Consult the Copyright Decision Map before seeking permission. If the analysis determines that permission is needed, then it is the responsibility of the instructor to obtain it. We offer suggestions for this procedure online.
Transfers from analog to digital formats are allowed if:
- the digital format is not available;
- only the necessary portion is copied;
- the digitized copy is not shared with other institutions;
- digital copies are not made of the digital copies.
Computer programs (software):
- The library may loan one licensed copy of a program, with the required copyright warning affixed, for non-profit use. But the patron may not keep a copy of the program installed on her computer.
- The library may loan a copy of application software, which was purchased for a non-profit purpose, if the required warning is affixed to the package.
- The library may make a program available, via a campus-wide computer system, to the campus community to access simultaneously, provided that the library obtains a network version of the program and a site license permitting simultaneous access.
License agreements for journals, databases, and computer programs may govern the uses of some materials. Users, libraries and educational institutions have a right to expect that the terms of licenses will not restrict fair use or other lawful library or educational uses.
The Interlibrary Loan Service includes the making and sending of copies found in books, journals and other copyrighted material. Section 108 of the Copyright Act permits this copying under certain conditions. The law allows the Library to reproduce no more than one copy and send, to another library, portions of copyrighted material, provided that the copies are not made for commercial advantage; the quantities of copied items received by the borrowing library do not substitute for a periodical subscription or the purchase of a work; the library's collections are open to the public; and the copy includes a notice of copyright. Furthermore, the copy must become the property of the user, and the library must have had no notice that the copy will be used for anything other than "private study, scholarship or research."
The Copyright Act does not provide explicit, quantitative guidelines for interlibrary loan, so the National Commission on New Technological Uses of Copyrighted Works (CONTU) developed more specific guidelines. The CONTU guidelines suggest that the borrowing library adhere to the "rule of five": no more than five articles per calendar year from a single periodical title published within the past five calendar years, and, for other materials, no more than five copies from a work during a calendar year. Periodicals older than five years are not addressed by the CONTU guidelines, but the copyright term is still in effect in this case. Requests must be accompanied by a statement that the borrowing library is complying with the CONTU guidelines. The borrowing library must keep records of its activities for three years. Wallace Library tracks patron requests and, once the guidelines are exceeded, the library pays royalty fees.
Section 108 does not apply to musical works; graphic, pictorial, or sculptural works; motion pictures; or other audiovisual works, except those dealing with the news. Digital collections are managed through license agreements.
WEB RESOURCES FOR FAIR USE
- Copyright Decision Map
- Center for Social Media: Code of Best Practices in Fair Use for Media Literacy Education
Questions about fair use? Catalog & Metadata Librarian, x3716 or [email protected]
The creator of this page provides the Wheaton College campus with copyright information and guidelines, but it is not offered as professional legal advice.
http://wheatoncollege.edu/library/library-information/copyright/fair-use/
Multiple Bonds and Molecular Shapes
In a double bond, two electron pairs are shared between a pair of atomic nuclei. Despite the fact that the two electron pairs repel each other, they must remain between the nuclei, and so they cannot avoid each other. Therefore, for purposes of predicting molecular geometry, the two electron pairs in a double bond behave as one. They will, however, be somewhat "fatter" than a single electron-pair bond. For the same reason the three electron pairs in a triple bond behave as an "extra-fat" bond.
As an example of the multiple-bond rules, consider hydrogen cyanide, HCN. The Lewis structure is H—C≡N (with a lone pair on N). Treating the triple bond as if it were a single "fat" electron pair, we predict a linear molecule with an H―C―N angle of 180°. This is confirmed experimentally.
Another example is formaldehyde, CH2O, whose Lewis structure is H2C=O (with two lone pairs on O). Since no lone pairs are present on C, the two H's and the O should be arranged trigonally, with all four atoms in the same plane. Also, because the "fatness" of the double bond squeezes the C—H bond pairs together, we expect the H―C―H angle to be slightly less than 120°. Experimentally it is found to have the value of 117°.
EXAMPLE 1: Predict the shapes of the two molecules (a) nitrosyl chloride, NOCl, and (b) carbon dioxide, CO2.
Solution:
a) We must first construct a skeleton structure and then a Lewis diagram. Since N has a valence of 3, O a valence of 2, and Cl is monovalent, a probable structure for NOCl has N as the central atom, O—N—Cl. Completing the Lewis diagram, we find O=N—Cl, with one lone pair on N and lone pairs completing the octets of O and Cl. Since N has two bonds and one lone pair, the molecule must be angular. The O—N—Cl angle should be about 120°. Since the "fat" lone pair would act to reduce this angle while the "fat" double bond would tend to increase it, it is impossible to predict at this level of argument whether the angle will be slightly larger or smaller than 120°.
b) The Lewis structure of CO2 was considered in the previous chapter and found to be O=C=O. Since C has no lone pairs in its valence shell and each double bond acts as a fat bond pair, we conclude that the two O atoms are separated by 180° and that the molecule is linear.
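The counting rule used throughout this section (treat a double or triple bond as one "fat" electron domain, then add lone pairs) can be written as a short procedure. The sketch below is only an illustration of that counting logic in Python; the function name, the geometry table, and the example calls are assumptions made for demonstration, not part of the original text or of any chemistry library.

# Minimal sketch of the multiple-bond rule described above: every bond to a
# neighbouring atom counts as ONE electron domain, whether it is single,
# double, or triple, and each lone pair counts as one more domain.
DOMAIN_GEOMETRY = {2: "linear", 3: "trigonal planar", 4: "tetrahedral"}

def predict_shape(bonded_atoms, lone_pairs):
    """Return (electron-domain geometry, molecular shape) for a central atom."""
    domains = bonded_atoms + lone_pairs
    geometry = DOMAIN_GEOMETRY.get(domains, "not covered in this section")
    # Lone pairs occupy a position but are not "seen" in the molecular shape.
    if lone_pairs == 0:
        shape = geometry
    elif (domains, lone_pairs) == (3, 1):
        shape = "angular (bent)"
    elif (domains, lone_pairs) == (4, 1):
        shape = "trigonal pyramidal"
    elif (domains, lone_pairs) == (4, 2):
        shape = "angular (bent)"
    else:
        shape = geometry
    return geometry, shape

# Worked examples matching the text:
print(predict_shape(2, 0))  # HCN, central C: linear, H-C-N = 180 degrees
print(predict_shape(3, 0))  # CH2O, central C: trigonal planar, H-C-H just under 120 degrees
print(predict_shape(2, 1))  # NOCl, central N: angular, O-N-Cl about 120 degrees
print(predict_shape(2, 0))  # CO2, central C: linear

The qualitative corrections discussed in the text (a "fat" double bond opening an angle slightly, a lone pair closing it) are deliberately left out; the sketch reproduces only the domain-counting step.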
http://chempaths.chemeddl.org/services/chempaths/?q=book/General%20Chemistry%20Textbook/Further%20Aspects%20of%20Covalent%20Bonding/1341/multiple-bonds-and-molecula
Taxes are classified as direct taxes and indirect taxes, but the distinction between the two types is not entirely clear, and for a long time economists interpreted it in different ways. For instance, one group of economists considered taxes on production as direct taxes and those on consumption as indirect taxes. J.S. Mill distinguished the two types in terms of the ability to shift the tax: if the person on whom a tax is imposed pays it himself, it is called a direct tax; if he is able to shift it to somebody else who ultimately pays it, it is called an indirect tax. For example, income tax is paid by a person because it is levied on the income earned by him, so it is a direct tax. On the other hand, the sales tax imposed on the seller is shifted to the buyer. Nowadays the distinction between direct and indirect taxes is explained with reference to the basis of assessment rather than the point of assessment. Hence, taxes assessed on the basis of income are called direct taxes and those assessed on the basis of expenditure are called indirect taxes. However, even this classification is not free from difficulties. For instance, when one man's income is treated as another man's expenditure, a tax on one man's income may become a tax on another man's expenditure. Hence, to date there has been no fully satisfactory distinction between direct and indirect taxes. In practice the distinction is retained mainly for the purpose of grouping the different taxes.
Merits of Direct Tax
- Direct taxes are based on the principle of ability to pay, and so they help to distribute the tax burden equitably.
- As the tax is imposed on each individual, for example on the basis of his income, he is certain about the amount of tax payable by him. Hence, the direct tax satisfies the canon of certainty.
- Direct taxes are also highly flexible. The revenue from them can be increased or decreased depending upon the needs of the government. For example, the government can simply raise the rate of tax to get more revenue and bring down the rate to reduce the revenue.
- Taxpayers are more interested in the ways in which the tax revenue is spent by the government. They feel proud of participating in public projects by paying tax.
Demerits of Direct Tax
- Direct taxes, like income tax, are considered a tax on the honesty of the people. Those who evade or avoid them are rarely prosecuted, so there is little incentive to pay the tax.
- There is no logical basis for levying or determining the tax. As a result, political considerations outweigh economic and other considerations. For example, a communist government may impose very stiff tax rates while a socialist government may not do so. Hence, there is ample scope for arbitrariness in the imposition of the tax.
- From the viewpoint of tax collection, the cost of collecting direct taxes is very high compared with that of indirect taxes. For example, income tax has to be collected from every person who should pay tax. Hence, a very elaborate administrative machinery is required, which simply increases the cost of tax collection.
- One more difficulty is that not all taxpaying individuals are aware of the provisions of income tax. The provisions are so complicated that unless an individual is clear about them, he may end up paying more tax. In the case of corporate tax, every effort is made to minimize the tax burden by taking advantage of loopholes in the tax laws.
- Another demerit of direct taxes is that, in case of any dispute, it takes a long time for the common public to secure justice and still more time to get back any excess tax paid.
The evaluation of the merits and demerits of direct taxes indicates that the problems experienced relate more to the administrative aspect than to the economic aspect, and efforts have been made to address that administrative side.
Merits of Indirect Tax
- Indirect taxes are imposed at the point of consumption, and so it is very easy to collect them.
- The cost of collection of indirect taxes is almost nil, as every person pays the tax as he buys the commodity on which the tax is imposed.
- It is very simple and easily understandable, as only a fixed percentage of the sale price is collected as tax.
- A significant merit of indirect taxes is that they cannot be evaded or avoided, as the only alternative to paying them is not to consume.
- Like direct taxes, indirect taxes are also highly flexible. They can be altered to suit the government's need for funds.
- Another important merit is that even the poorest in a country will contribute towards the cost of public services.
- Indirect tax is the ideal way to discourage consumption of luxurious and unwanted goods.
Demerits of Indirect Tax
- The fundamental defect of this type of tax is that it does not conform to the principle of ability to pay, as it affects every individual in the same way irrespective of his economic position.
- The revenue from indirect taxes cannot be certain. This is because a tax imposed on a commodity with highly elastic demand will bring down the demand for the commodity and, along with it, the tax revenue (for example, if a tax raises the price of such a good by 10 per cent but purchases fall by 30 per cent, the quantity being taxed shrinks faster than the rate rises, and revenue falls). On the other hand, a tax on a good with inelastic demand can fetch the desired revenue. A main difficulty is that the elasticity of demand for a good is influenced by several factors, and so the tax revenue may be uncertain.
- As indirect taxes are usually a fraction of the price paid, taxpayers hardly feel the payment of taxes. Hence, they show little interest in knowing how the tax revenue is spent.
- Yet another problem of indirect taxes is that very stiff rates encourage black marketing, smuggling and other illegal trading practices.
- Sometimes the indirect tax levied on a commodity will vary from state to state, causing a lot of hardship for the tax administrators and encouraging people to buy goods in the state where the tax is lower and sell them in the state where the tax is higher. This might affect the businessmen in the latter state.
- Though the cost of collection of indirect taxes is low, the records to be maintained and inspected are voluminous, involving enormous time and energy of the tax administrators. This gives wide scope for corruption and malpractice among officials.
Comparison of Direct and Indirect Taxes
Having discussed the merits and demerits of both direct and indirect taxes, it can be seen that indirect taxes are superior to direct taxes in several respects. For example, indirect taxes can be selectively imposed on goods of a harmful nature to discourage their consumption. This selectivity is not possible under a direct tax. Between the two types, direct taxes severely affect the incentive to work and save, whereas indirect taxes have no such direct impact. In a developing country, an increased dose of indirect taxes is considered better for reducing income inequality.
Though this may mean taxing the poor as well, in modern times, with the overall improvement in the standard of living, the poor should also gradually be brought into the tax net. Moreover, in a country where there is large-scale tax evasion, tax avoidance, black marketing, smuggling, etc., indirect taxes are the best instruments to put down such evil practices. This superiority of indirect taxes over direct taxes does not mean that direct taxes should be abolished. A balance should be maintained between the two types of taxes so as to discourage and prevent any attempt to evade or avoid tax. The various problems associated with each of these two types of taxes should be seriously studied in order to overcome them. In particular, the administrative problems can be overcome only by exposing officials to modern techniques of tax collection, giving them attractive incentives and rewards for honest work, and encouraging them to suggest modifications to improve the effectiveness of the taxes. Credit: Tax Management-CU
http://www.mbaknol.com/tax-management/direct-and-indirect-taxes/
Economic literacy, according to the 1997 California History-Social Science Framework, is defined in the following manner. To develop economic literacy, students must:
Understand the basic economic problems confronting all societies. Basic to all economic decision making is the problem of scarcity. Scarcity requires that all individuals and societies make choices about how to use their productive resources. Students need to understand this basic problem confronting all societies and to examine the ways in which economic systems seek to resolve the three basic economic problems of choice (determining what, how, and for whom to produce) created by scarcity.
Understand comparative economic systems. Beginning in the elementary school, students should be introduced to the basic processes through which market economies function and to the growing network of markets and prices that reflect shifting supply and demand conditions in a market economy. In later years, students should be able to compare the origins and differentiating characteristics of traditional, command, market, and "mixed" economic systems. Students should understand the mechanisms through which each system functions in regulating the distribution of scarce resources in the production of desired goods and services, and they should analyze their relationships to the social and political systems of the societies in which they function.
Understand the basic economic goals, performance, and problems of our society. Students need to be able to analyze the basic economic goals of their society; that is, freedom of choice, efficiency, equity, full employment, price stability, growth, and security. Students should also recognize the existence of trade-offs among these goals. They need to develop analytical skills to assess economic issues and proposed governmental policies in light of these goals. They also need to know how to explain or describe the performance of the nation's economy. Finally, students need opportunities to examine some of the local, national, and global problems of the nation's mixed economy, including (1) inflationary and deflationary pressures and their effects on workers' real earnings; (2) the underemployment of labor; (3) the persistence of poverty in a generally productive economy; (4) the rate of growth of worker productivity, and hence of material output; and (5) the successes and failures of governmental programs.
Understand the international economic system. Students need to understand (1) the organization and importance of the international economic system; (2) the distribution of wealth and resources on a global scale; (3) the struggle of the "developing nations" to attain economic independence and a better standard of living for their citizens; (4) the role of the transnational corporation in changing rules of exchange; and (5) the influence of political events in the international economic order.
http://www.cceesandiego.org/index.php?option=com_content&view=article&id=26&Itemid=67
Schizophrenia is a long-term mental health condition that causes a range of different psychological symptoms, including:
- hallucinations - hearing or seeing things that do not exist
- delusions - unusual beliefs not based on reality which often contradict the evidence
- muddled thoughts based on the hallucinations or delusions
- changes in behaviour
Doctors often describe schizophrenia as a psychotic illness. This means sometimes a person may not be able to distinguish their own thoughts and ideas from reality. Read more about the symptoms of schizophrenia.
Why does schizophrenia happen?
The exact cause of schizophrenia is unknown. However, most experts believe the condition is caused by a combination of genetic and environmental factors. It is thought certain things make you more vulnerable to developing schizophrenia, and certain situations can trigger the condition. Read more about the causes of schizophrenia.
Who is affected?
Schizophrenia is one of the most common serious mental health conditions. About 1 in 100 people will experience schizophrenia in their lifetime, with many continuing to lead normal lives. Schizophrenia is most often diagnosed between the ages of 15 and 35. Men and women are equally affected.
There is no single test for schizophrenia. It is most often diagnosed after an assessment by a mental health care professional, such as a psychiatrist. It is important that schizophrenia is diagnosed as early as possible, as the chances of recovery improve the earlier it is treated. Read more about diagnosing schizophrenia.
How is schizophrenia treated?
Schizophrenia is usually treated with a combination of medication and therapy appropriate to each individual. In most cases, this will be antipsychotic medicines and cognitive behavioural therapy (CBT). People with schizophrenia will usually receive help from a community mental health team (CMHT), which will offer day-to-day support and treatment. Many people recover from schizophrenia, although they may have periods when symptoms return (relapses). Support and treatment can help reduce the impact of the condition on your life. Read more about treating schizophrenia.
Living with schizophrenia
If schizophrenia is well managed, it is possible to reduce the chances of severe relapses. This can include:
- recognising signs of an acute episode
- taking medication as prescribed
- talking to others about the condition
There are many charities and support groups offering help and advice on living with schizophrenia. Most people find it comforting to talk to others with a similar condition. Read more about living with schizophrenia.
Changes in thinking and behaviour are the most obvious signs of schizophrenia, but people can experience the symptoms in different ways. The symptoms of schizophrenia are usually classified into one of two categories: positive or negative.
- Positive symptoms represent a change in behaviour or thoughts, such as hallucinations or delusions.
- Negative symptoms represent a withdrawal or lack of function which you would usually expect to see in a healthy person. For example, people with schizophrenia often appear emotionless, flat and apathetic.
The condition may develop slowly.
The first signs of schizophrenia, such as becoming socially withdrawn and unresponsive or experiencing changes in sleeping patterns, can be hard to identify. This is because the first symptoms often develop during adolescence and changes can be mistaken for an adolescent 'phase'.
People often have episodes of schizophrenia, during which their symptoms are particularly severe, followed by periods where they experience few or no positive symptoms. This is known as acute schizophrenia. If you are experiencing symptoms of schizophrenia, see your GP as soon as possible. The earlier schizophrenia is treated, the more successful the outcome tends to be. Read more information about how schizophrenia is diagnosed.
Positive symptoms of schizophrenia
A hallucination is when a person experiences a sensation but there is nothing or nobody there to account for it. A hallucination can involve any of the senses, but the most common is hearing voices. Hallucinations are very real to the person experiencing them, even though people around them cannot hear the voices or experience the sensations. Research using brain-scanning equipment shows changes in the speech area of the brain in people with schizophrenia when they hear voices. These studies show the experience of hearing voices as a real one, as if the brain mistakes thoughts for real voices. Some people describe the voices they hear as friendly and pleasant, but more often they are rude, critical, abusive or annoying. The voices might describe activities taking place, discuss the hearer's thoughts and behaviour, give instructions or talk directly to the person. Voices may come from different places or one place in particular, such as the television.
A delusion is a belief held with complete conviction, even though it is based on a mistaken, strange or unrealistic view. It may affect the way people behave. Delusions can begin suddenly or may develop over weeks or months. Some people develop a delusional idea to explain a hallucination they are having. For example, if they have heard voices describing their actions, they may have a delusion that someone is monitoring their actions. Someone experiencing a paranoid delusion may believe they are being harassed or persecuted. They may believe they are being chased, followed, watched, plotted against or poisoned, often by a family member or friend. Some people who experience delusions find different meanings in everyday events or occurrences. They may believe people on TV or in newspaper articles are communicating messages to them alone, or that there are hidden messages in the colours of cars passing in the street.
Confused thoughts (thought disorder)
People experiencing psychosis often have trouble keeping track of their thoughts and conversations. Some people find it hard to concentrate and will drift from one idea to another. They may have trouble reading newspaper articles or watching a TV programme. People sometimes describe their thoughts as 'misty' or 'hazy' when this is happening to them. Thoughts and speech may become jumbled or confused, making conversation difficult and hard for other people to understand.
Changes in behaviour and thoughts
Behaviour may become more disorganised and unpredictable, and appearance or dress may seem unusual to others. People with schizophrenia may behave inappropriately or become extremely agitated and shout or swear for no reason.
Some people describe their thoughts as being controlled by someone else, say their thoughts are not their own, or feel that thoughts have been planted in their mind by someone else. Another recognised feeling is that thoughts are disappearing, as though someone is removing them from their mind. Some people feel their body is being taken over and someone else is directing their movements and actions.
Negative symptoms of schizophrenia
The negative symptoms of schizophrenia can often appear several years before somebody experiences their first acute schizophrenic episode. These initial negative symptoms are often referred to as the prodromal period of schizophrenia. Symptoms during the prodromal period usually appear gradually and slowly get worse. They include becoming more socially withdrawn and experiencing an increasing lack of care about your appearance and personal hygiene. It can be difficult to tell whether the symptoms are part of the development of schizophrenia or caused by something else. Negative symptoms experienced by people living with schizophrenia include:
- losing interest and motivation in life and activities, including relationships and sex
- lack of concentration, not wanting to leave the house, and changes in sleeping patterns
- being less likely to initiate conversations and feeling uncomfortable with people, or feeling there is nothing to say
The negative symptoms of schizophrenia can often lead to relationship problems with friends and family, because they can sometimes be mistaken for deliberate laziness or rudeness.
The exact causes of schizophrenia are unknown, but research suggests that a combination of physical, genetic, psychological and environmental factors can make people more likely to develop the condition. Current thinking is that some people may be prone to schizophrenia, and a stressful or emotional life event might trigger a psychotic episode. However, it is not known why some people develop symptoms while others do not. Things that increase the chances of schizophrenia developing include the following.
Schizophrenia tends to run in families, but no individual gene is responsible. It is more likely that different combinations of genes make people more vulnerable to the condition. However, having these genes does not necessarily mean you will develop schizophrenia. Evidence that the disorder is partly inherited comes from studies of identical twins brought up separately. They were compared with non-identical twins raised separately and with the general public. For identical twins raised separately, if one twin develops schizophrenia, the other twin has a one in two chance of developing it. In non-identical twins, who share only half of each other's genetic make-up, when one twin develops schizophrenia, the other twin has a one in seven chance of developing the condition. While this is higher than in the general population (where the chance is about one in 100), it suggests genes are not the only factor influencing the development of schizophrenia.
Many studies of people with schizophrenia have shown there are subtle differences in the structure of their brains or small changes in the distribution or number of brain cells. These changes are not seen in everyone with schizophrenia and can occur in people who do not have a mental illness. They suggest schizophrenia may partly be a disorder of the brain.
Neurotransmitters are chemicals that carry messages between brain cells. There is a connection between neurotransmitters and schizophrenia because drugs that alter the levels of neurotransmitters in the brain are known to relieve some of the symptoms of schizophrenia. Research suggests schizophrenia may be caused by a change in the level of two neurotransmitters, dopamine and serotonin. Some studies indicate an imbalance between the two may be the basis of the problem. Others have found a change in the body's sensitivity to the neurotransmitters is part of the cause of schizophrenia.
Pregnancy and birth complications
Although the effect of pregnancy and birth complications is very small, research has shown the following conditions may make a person more likely to develop schizophrenia in later life:
- bleeding during pregnancy, gestational diabetes or pre-eclampsia
- abnormal growth of a baby while in the womb, including low birth weight or reduced head circumference
- exposure to a virus while in the womb
- complications during birth, such as a lack of oxygen (asphyxia) and emergency caesarean section
Triggers are things that can cause schizophrenia to develop in people who are at risk. These include stressful life events and drug misuse. The main psychological triggers of schizophrenia are stressful life events, such as a bereavement, losing your job or home, a divorce or the end of a relationship, or physical, sexual, emotional or racial abuse. These kinds of experiences, though stressful, do not cause schizophrenia, but can trigger its development in someone already vulnerable to it. Drugs do not directly cause schizophrenia, but studies have shown drug misuse increases the risk of developing schizophrenia or a similar illness. Certain drugs, particularly cannabis, cocaine, LSD or amphetamines, may trigger some symptoms of schizophrenia, especially in people who are susceptible. Using amphetamines or cocaine can lead to psychosis and can cause a relapse in people recovering from an earlier episode. Three major studies have shown that teenagers under 15 who use cannabis regularly, especially 'skunk' and other more potent forms of the drug, are up to four times more likely to develop schizophrenia by the age of 26.
There is no single test for schizophrenia. The condition is usually diagnosed after assessment by a specialist in mental health. If you are concerned you may be developing symptoms of schizophrenia, see your GP as soon as possible. The earlier schizophrenia is treated, the more successful the outcome tends to be. Your GP will ask about your symptoms and check they are not the result of other causes, such as recreational drug use.
Community mental health team (CMHT)
If a diagnosis of schizophrenia is suspected, your GP will probably refer you to your local community mental health team (CMHT). CMHTs are made up of different mental health professionals who support people with complex mental health conditions. A member of the CMHT, usually a psychologist or psychiatrist, will carry out a more detailed assessment of your symptoms. They will also want to know your personal history and current circumstances. To make a diagnosis, most mental healthcare professionals use a 'diagnostic checklist', where the presence of certain symptoms and signs indicates a person has schizophrenia.
Schizophrenia can usually be diagnosed if:
- You have at least two of the following symptoms: delusions, hallucinations, disordered thoughts or behaviour, or the presence of negative symptoms, such as a flattening of emotions.
- Your symptoms have had a significant impact on your ability to work, study or perform daily tasks.
- You have experienced symptoms for more than six months.
- All other possible causes, such as recreational drug use or depression, have been ruled out.
Sometimes it might not be clear whether someone has schizophrenia. If you have other symptoms at the same time, a psychiatrist may have reason to believe you have a related mental illness. There are several related mental illnesses similar to schizophrenia. Your psychiatrist will ask how your illness has affected you so they can confidently confirm you have schizophrenia and not another mental illness, such as:
- Bipolar disorder (manic depression). People with bipolar disorder swing from periods of mania (elevated moods and extremely active, excited behaviour) to periods of deep depression. Some people with bipolar disorder also hear voices or experience other kinds of hallucinations or may have delusions.
- Schizoaffective disorder. Schizoaffective disorder is often described as a form of schizophrenia because its symptoms are similar to those of schizophrenia and bipolar disorder. But schizoaffective disorder is a mental illness in its own right. It may occur just once in a person's life or may recur intermittently, often when triggered by stress.
Getting help for someone else
Due to their delusional thought patterns, people with schizophrenia may be reluctant to visit their GP if they believe there is nothing wrong with them. It is likely that someone who has had acute schizophrenic episodes in the past will have been assigned a care co-ordinator. If this is the case, contact the person's care co-ordinator to express your concerns. If someone is having an acute schizophrenic episode for the first time, it may be necessary for a friend, relative or other loved one to persuade them to visit their GP. In the case of a rapidly worsening schizophrenic episode, you may need to go to the accident and emergency (A&E) department, where a duty psychiatrist will be available. If a person who is having an acute schizophrenic episode refuses to seek help and it is believed they present a risk to themselves or others, their nearest relative can request that a mental health assessment be carried out. The social services department of your local authority can advise how to do this. In severe cases of schizophrenia, people can be compulsorily detained in hospital for assessment and treatment under the Mental Health Act (2007).
If you (or a friend or relative) are diagnosed with schizophrenia, you may feel anxious about what will happen. You may be worried about the stigma attached to the condition, or feel frightened and withdrawn. It is important to remember that a diagnosis can be a positive step towards getting good, straightforward information about the illness and the kinds of treatment and services available.
Schizophrenia is usually treated with an individually tailored combination of therapy and medication.
Community mental health teams
Most people with schizophrenia are treated by community mental health teams (CMHTs). The goal of the CMHT is to provide day-to-day support and treatment while ensuring you have as much independence as possible.
A CMHT can be made up of and provide access to:
- social workers
- community mental health nurses (a nurse with specialist training in mental health conditions)
- counsellors and psychotherapists
- psychologists and psychiatrists (the psychiatrist is usually the senior clinician in the team)
Care programme approach (CPA)
People with complex mental health conditions, such as schizophrenia, are usually entered into a treatment process known as a care programme approach (CPA). A CPA is essentially a way of ensuring you receive the right treatment for your needs. There are four stages to a CPA.
- Assessment - your health and social needs are assessed.
- Care plan - a care plan is created to meet your health and social needs.
- Appointment of a care co-ordinator - a care co-ordinator, sometimes known as a keyworker, is usually a social worker or nurse and is your first point of contact with other members of the CMHT.
- Reviews - your treatment will be regularly reviewed and, if needed, changes to the care plan can be agreed.
Not everyone uses the CPA. Some people may be cared for by their GP and others may be under the care of a specialist. You will work together with your healthcare team to develop a care plan. Your care co-ordinator will be responsible for making sure all members of your healthcare team, including your GP, have a copy of your care plan. The care plan may involve an advance statement or crisis plan, which can be followed in an emergency.
People who have serious psychotic symptoms as a result of an acute schizophrenic episode may require a more intensive level of care than a CMHT can provide. These episodes are usually dealt with by antipsychotic medication (see below) and special care.
Crisis resolution teams (CRT)
One treatment option is to contact a crisis resolution team (CRT). CRTs treat people with serious mental health conditions who are currently experiencing an acute and severe psychiatric crisis. Without the involvement of the CRT, these people would require treatment in hospital. The CRT will aim to treat a person in the least restrictive environment possible, ideally in or near the person's home. This can be in your own home, in a dedicated crisis residential home or hostel, or in a day care centre. CRTs are also responsible for planning aftercare once the crisis has passed, to prevent a further crisis from occurring. Your care co-ordinator should be able to provide you and your friends or family with contact information in the event of a crisis.
Voluntary and compulsory detention
More serious, acute schizophrenic episodes may require admission to a psychiatric ward at a hospital or clinic. You can admit yourself voluntarily to hospital if your psychiatrist agrees it is necessary. People can also be compulsorily detained at a hospital under the Mental Health Act (2007). However, this is rare. It is only possible for someone to be compulsorily detained at a hospital if they have a severe mental disorder, such as schizophrenia, and if detention is necessary:
- in the interests of the person's own health
- in the interests of the person's own safety
- to protect others
People with schizophrenia who are compulsorily detained may need to be kept in locked wards. All people being treated in hospital will stay only as long as is absolutely necessary to receive appropriate treatment and arrange aftercare. An independent panel will regularly review your case and your progress.
Once they feel you are no longer a danger to yourself and others, you will be discharged from hospital. However, your care team may recommend you remain in hospital voluntarily.
If it is felt there is a significant risk of future acute schizophrenic episodes occurring, you may want to write an advance statement. An advance statement is a series of written instructions about what you would like your family or friends to do in case you experience another acute schizophrenic episode. You may also want to include contact details for your care co-ordinator.
Antipsychotics are usually recommended as the initial treatment for the symptoms of an acute schizophrenic episode. Antipsychotics work by blocking the effect of the chemical dopamine on the brain. They can usually reduce feelings of anxiety or aggression within a few hours of use, but may take several days or weeks to reduce other symptoms, such as hallucinations or delusional thoughts. Antipsychotics can be taken orally (as a pill) or given as an injection (known as a 'depot'). Several 'slow release' antipsychotics are available. These require you to have one injection every two to four weeks. You may only need antipsychotics until your acute schizophrenic episode has passed. However, most people take medication for one or two years after their first psychotic episode to prevent further acute schizophrenic episodes occurring, and for longer if the illness is recurrent.
There are two main types of antipsychotics:
- Typical antipsychotics are the first generation of antipsychotics, developed during the 1950s.
- Atypical antipsychotics are a newer generation of antipsychotics, developed during the 1990s.
Atypical antipsychotics are usually recommended as a first choice because of the sorts of side effects associated with their use. However, they are not suitable or effective for everyone. Both typical and atypical antipsychotics can cause side effects, although not everyone will experience them and their severity will differ from person to person. The side effects of typical antipsychotics include:
- muscle twitches
- muscle spasms
Side effects of both typical and atypical antipsychotics include:
- weight gain, particularly with some atypical antipsychotics
- blurred vision
- lack of sex drive
- dry mouth
Tell your care co-ordinator or GP if your side effects become severe. There may be an alternative antipsychotic you can take, or additional medicines that will help you deal with the side effects. Do not stop taking your antipsychotics without first consulting your care co-ordinator, psychiatrist or GP. If you do, you could have a relapse of symptoms. See our medicines guide for schizophrenia.
Psychological treatments can help people with schizophrenia cope better with the symptoms of hallucinations or delusions. They can also help treat some of the negative symptoms of schizophrenia, such as apathy or a lack of enjoyment. Common psychological treatments include the following.
Cognitive behavioural therapy (CBT)
Cognitive behavioural therapy (CBT) aims to help you identify the thinking patterns that are causing you to have unwanted feelings and behaviour, and to learn to replace this thinking with more realistic and useful thoughts. For example, you may be taught to recognise examples of delusional thinking in yourself. You may then receive help and advice about how to avoid acting on these thoughts. Most people will require 8-20 sessions of CBT over the space of 6-12 months.
CBT sessions usually last for about an hour. Your GP or care co-ordinator should be able to arrange a referral to a CBT therapist.
Many people with schizophrenia rely on family members for their care and support. While most family members are happy to help, caring for somebody with schizophrenia can place a strain on any family. Family therapy is a way of helping you and your family cope better with your condition. Family therapy involves a series of informal meetings over a period of around six months. Meetings may include:
- discussing information about schizophrenia
- exploring ways of supporting somebody with schizophrenia
- deciding how to solve practical problems that can be caused by the symptoms of schizophrenia
If you think you and your family could benefit from family therapy, speak to your care co-ordinator or GP.
Arts therapies are designed to promote creative expression. Working with an arts therapist in a small group or individually can allow you to express your experiences with schizophrenia. Some people find expressing things in a non-verbal way through the arts can provide a new experience of schizophrenia and help them develop new ways of relating to others. Arts therapies have been shown to alleviate the negative symptoms of schizophrenia in some people. NICE recommends that arts therapies are provided by an arts therapist registered with the Health and Care Professions Council who has experience of working with people with schizophrenia.
Most people with schizophrenia make a recovery, although many will experience the occasional return of symptoms (relapses). As well as monitoring your mental health, your healthcare team and GP should monitor your physical health. A healthy lifestyle, including a balanced diet with lots of fruits and vegetables and regular exercise, is good for you and can reduce your risk of developing cardiovascular disease or diabetes. Avoid too much stress and get a proper amount of sleep. You should have a check-up at least once a year to monitor your risk of developing cardiovascular disease or diabetes. This will include recording your weight, checking your blood pressure and any appropriate blood tests.
Rates of smoking in people with schizophrenia are three times higher than in the general population. If you are a smoker, you are at a higher risk of developing cancer, heart disease and stroke. Stopping smoking has both short- and long-term health benefits. Research has shown that you are up to four times more likely to quit smoking if you use NHS support as well as stop-smoking medicines, such as patches, gum or inhalators. Ask your doctor about this or go to the NHS Smokefree website.
Who is available to help me?
In the course of your treatment for schizophrenia, you will be involved with many different services. Some are accessed through referral from your GP, others through your local authority. These services may include the following:
- Community mental health teams (CMHTs) provide the main part of local specialist mental health services and offer assessment, treatment and social care to people living with schizophrenia and other mental illnesses.
- Early intervention teams provide early identification and treatment for people with the first symptoms of psychosis. Your GP may be able to refer you directly to an early intervention team.
- Crisis services allow people to be treated at home, instead of in hospital, for an acute episode of illness.
They are specialist mental health teams that help with crises that occur outside normal office hours.
- Acute day hospitals are an alternative to inpatient care in a hospital, where you can visit every day or as often as necessary.
- Assertive outreach teams deliver intensive treatment and rehabilitation in the community for people with severe mental health problems. They provide rapid help in a crisis situation. Staff often visit people at home, act as advocates and liaise with other services, such as your GP or social services. They can also help with practical problems, such as helping to find housing and work, and daily tasks, such as shopping and cooking.
- Advocates are trained and experienced workers who help people communicate their needs or wishes, get impartial information, and represent their views to other people. Advocates can be based in your hospital or mental health support groups, or you can find an independent advocate to act on your behalf, if you wish.
Employment and financial support
Avoid too much stress, including work-related stress. If you are employed, you may be able to work shorter hours or in a more flexible way. Under the Equality Act 2010, all employers must make reasonable adjustments for people with disabilities, including people diagnosed with schizophrenia or other mental illnesses. Several organisations provide support, training and advice for people with schizophrenia who wish to continue working. Your community mental health team is a good first point of contact to find out what services and support are available for you. Mental health charities such as Mind or Rethink are also an excellent source of information on training and employment. If you are unable to work as a result of your mental illness, you are entitled to financial support, such as Incapacity Benefit.
Talk to others
Many people find it helpful to meet other people with the same experiences, for mutual support and to share ideas. It is also an important reminder that you are not alone. Charities and support groups allow individuals and families to share experiences and coping strategies, campaign for better services and provide support.
Even if you do not have a job or are unable to work, it is still important to go out, do everyday things and provide a structure to your week. Many people regularly go to a day hospital, day centre or community mental health centre. These offer a range of activities that enable you to get active again and spend some time in the company of other people. Employment schemes provide training to help you develop your work skills and support you back into work; they often have contacts with local employers. Supported housing, such as a bedsit or flat where there is someone around who is trained to support you, can help you deal with day-to-day problems.
What can family, friends and partners do to help?
Friends, relatives and partners have a vital role in helping people with schizophrenia recover and make a relapse less likely. It is very important not to blame the person with schizophrenia or tell them to "pull themselves together", or to blame other people. When dealing with a friend or loved one's mental illness, it is important to stay positive and supportive.
As well as supporting the person with schizophrenia, you may want to get support to cope with your own feelings. Several voluntary organisations provide help and support for carers. Friends and family should try to understand what schizophrenia is, how it affects people, and how best they can help. They can provide emotional and practical support, and can encourage people to seek appropriate support and treatment. As part of the treatment, you may be offered family therapy. This can provide information and support for the person with schizophrenia and their family. Friends and family can play a major role by monitoring the person's mental state, watching out for any signs of relapse, and encouraging them to take their medication and attend medical appointments. If you are the nearest relative of a person who has schizophrenia, you have certain rights that can be used to protect the patient's interests. These include requesting that the local social services authority ask an approved mental health professional to consider whether the person with schizophrenia should be detained in hospital. Want to know more?
Depression and suicide
Many people with schizophrenia experience periods of depression. Do not ignore these symptoms. If depression is not treated, it can worsen and lead to suicidal thoughts. Studies have shown that people with schizophrenia have a higher chance of committing suicide. If you have been feeling particularly down over the last month and no longer take pleasure in the things you used to enjoy, you may be depressed. See your GP for advice and treatment. Immediately report any suicidal thoughts to your GP or care co-ordinator.
The warning signs of suicide
The warning signs that people with depression and schizophrenia may be considering suicide include:
- making final arrangements, such as giving away possessions, making a will or saying goodbye to friends
- talking about death or suicide. This may be a direct statement such as, "I wish I was dead," or indirect phrases such as "I think that dead people must be happier than us" or "Wouldn't it be nice to go to sleep and never wake up?"
- self-harm, such as cutting their arms or legs or burning themselves with cigarettes
- a sudden lifting of mood, which could mean that a person has decided to commit suicide and feels better because of their decision
Helping a suicidal friend or relative
If you see any of these warning signs:
- Get professional help for the person, such as from a crisis resolution team (CRT) or the duty psychiatrist at your local A&E department.
- Let them know that they are not alone and that you care about them.
- Offer your support in finding other solutions to their problems.
If you feel that there is an immediate danger of the person committing suicide, stay with them or have someone else stay with them and remove all available means of suicide, such as sharp objects and medication. Want to know more?
Stuart, 43, was diagnosed with paranoid schizophrenia when he was 31. After a difficult period of coping with depression, anxiety and paranoia, Stuart feels his illness is under control, thanks to a very effective antipsychotic drug. His goal is to climb Mount Everest, having already conquered base camp. "In August 1991, I was on holiday in Moscow taking part in a march against communism.
It was a very stressful time as hardline communists were attempting a coup against Mikhail Gorbachev, then president of the Soviet Union. "That night, in my hotel room, I got a phone call at about 2am. A very angry Russian man was shouting and swearing down the line at me, asking why I was involving myself in their business. I put the phone down and my heart started to pound. I began to get quite scared and paranoid. "About eight days later, I arrived back in London. I felt I was being followed by the KGB. From there, fears of persecution and depression gradually built up. I got so stressed. About a month after returning from Moscow, I was unable to work and my doctor signed me off. "I remember having my first psychotic attack, which was absolutely terrifying. I think it was brought on by sheer stress and anxiety. I was lying on my bed and I suddenly felt pressure on the top of my head, and found myself in total darkness. It was like I'd been sucked into my own mind and had lost all sense of reality. I screamed out loud, then suddenly found myself back in my bedroom again with this really strange sensation round my head. "I didn't have a clue what was going on. I decided to move away from London to Devon, to try to escape persecution from the KGB. I thought nobody would find me there. "In 1996, I moved to Dorchester. I saw my local GP and was referred immediately to the psychiatric team, where I was diagnosed with schizophrenia. The diagnosis was a relief. Yet all I knew about schizophrenia was what I'd read in the papers, that it was related to violence. "I did some research and got in touch with the mental health charity Rethink. I met one of Rethink's volunteers, Paul. He is the kindest man I've ever met in my life. I could tell Paul my deepest thoughts and fears and completely trust him. He never judged me at all. "After doctors gave me various medicines, some with unpleasant side effects, I was prescribed a drug that worked for me. It was one of the newer, atypical antipsychotics. I'm now on an extremely low dose of this drug and I don't really have any symptoms of schizophrenia anymore. I feel it's completely under control. "In 2003, I won a Winston Churchill Memorial Trust travel fellowship. I went to Everest for the first time and trekked to base camp. It was symbolic of my own journey with schizophrenia and conquering my own mountains. I want to climb Everest in the future. I think I can do it. I want to do something to inspire people and to show people that recovery is possible."
Delusions and voices have been a daily feature of Richard's life for more than ten years. Despite this, he recently completed a master's degree in broadcast journalism and successfully runs his own business. "When I was about 21, I had a bad experience with hallucinogenic mushrooms, after which I started having delusions and hallucinations. Voices in my head would say unkind things and I had suspicious thoughts that felt like they came from outside me. I was diagnosed with paranoid schizophrenia shortly afterwards and the thoughts and voices have been with me ever since. "A lot of the time the thoughts and voices are like another layer of interaction with people and the world. It's as if there are two coexisting realities.
If I'm listening to the radio, for instance, the rational part of me knows that the programme is being transmitted to lots of listeners and that it is a one-way form of communication. My delusional thinking, however, makes me believe that the radio can project what I say out loud to the people making the show and all the listeners. "My delusions will also make me think that a lot of the discussion in the programme has a special meaning or relevance to me. For example, the host of a show might mention that they are going to the dentist soon. If I happen to have a dental appointment in the near future, then it can seem like the presenter has just dropped that into the conversation as a hidden message. They aren’t going to the dentist, but they want me to understand that they know I will be. "In truth, when something like that happens it is, of course, just a coincidence, but there's a part of my thinking for which it becomes another reality. "I've come to accept that they are an ongoing part of my life, but there are times when it is hard to deal with. Out shopping, it sometimes seems people are looking at me in a sinister way because they don’t like something about me. The truth is they're probably noticing my clothes or are just looking in my direction. "Nonetheless it can get me down, to the point where I won’t go out of the house. In the past it has made me feel depressed, even suicidal. At times like that, it helps to have friends around who can either tell me to stop thinking rubbish or, if needs be, help me work through my delusions and do some reality checking. "I had some cognitive behavioural therapy when I first got these symptoms. It was helpful because it gave me another way to work through negative emotions and keep on top of things that could be disabling. I also take medication and have decided that I always will. "The media consultancy company I've just set up keeps me busy. That’s important too, because when I have lots of work on, it helps me keep focused, rather than drift off with my delusions."
http://www.thefamilygp.com/nhstopic/Schizophrenia.htm
13
27
The Our Region theme focuses on regions through a series of five activities. Students focus on the production and distribution of goods and services in a state or region. They learn about the importance of human, capital, and natural resources to the operation of a business. They learn about the flow of money through the economy and that businesses and industries are interdependent. They solve basic business problems in a regional economy.
Activity One: What are Regions and Resources?
Students distinguish economic regions in the United States. They examine natural, human, and capital resources available in different regions. They learn that businesses need resources to produce and sell a product.
Objectives: The students will define region, resource, business, and entrepreneur; identify resources as natural, human, and capital; and locate a business of their choosing in a region.
Concepts: business, capital resources, entrepreneur, goods, human resources, natural resources, products, region, services
Skills: following directions, making choices, map interpretation, reading, understanding symbols
Activity Two: Exploring Resources
Students examine regions of resources in the United States. They identify resources businesses use to make their products. They learn about the importance of location to a business.
Objectives: The students will analyze resources in different regions, list the resources required to produce a good or service, and determine a location for their business based on resources.
Concepts: business, capital resources, human resources, natural resources, products, region
Skills: conducting research, comparing data, following directions, making choices, teamwork
Activity Three: Resources on the Move
Students recognize that businesses find resources throughout different regions. They discover ways businesses must work together to create a product.
Objectives: The students will identify the resources involved in producing a product, define economy and specialization, and recognize economic interdependence within a region and among regions.
Concepts: business, capital resources, economy, goods, human resources, interdependence, natural resources, product, region, services, specialization
Skills: conducting research, following directions, map reading, organizing resources
Activity Four: Where's the Money?
Students identify how resources relate to business income and expenses. They complete calculations to demonstrate how a business determines its profit or loss. Students learn a five-step decision-making process and solve simple business problems.
Objectives: The students will define income, expenses, profit, and loss; demonstrate how a business tracks income and expenses; and solve simple business problems.
Concepts: advantage, business, decision, disadvantage, economy, expense, financial report, income, loss, product, profit, resources
Skills: comparing, following directions, making decisions, math computation, problem solving, teamwork
Activity Five: The Bottom Line
Students play a game that illustrates the flow of money in and out of a business. They calculate profits and losses and learn the importance of loans. Students search a region for the resources they need to make a product.
Objectives: The students will understand the importance of cash flow to businesses, record business income and expenses, calculate profit and loss, and recognize the role of loans in business.
Concepts: business, decisions, expenses, government, income, loss, opportunity cost, profit, taxes
Skills: building consensus, following directions, listening critically, mathematical computation, predicting results, selecting and applying information, teamwork
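Activities Four and Five both come down to the same arithmetic: a business adds up its income, adds up its expenses, and the difference is its profit or loss. The short Python sketch below is one illustrative way to walk through that calculation; the business, the resource categories used as expense labels, and all of the figures are invented for demonstration and are not taken from the programme materials.
```python
# Illustrative profit-and-loss calculation in the spirit of Activities Four and Five.
# The business and all figures are invented for demonstration.

income = {"product sales": 500.00, "delivery fees": 50.00}
expenses = {
    "natural resources (materials)": 120.00,
    "human resources (wages)": 200.00,
    "capital resources (equipment rental)": 80.00,
    "loan interest": 15.00,
}

total_income = sum(income.values())
total_expenses = sum(expenses.values())
result = total_income - total_expenses  # positive means profit, negative means loss

print(f"Total income:   ${total_income:,.2f}")
print(f"Total expenses: ${total_expenses:,.2f}")
if result >= 0:
    print(f"Profit:         ${result:,.2f}")
else:
    print(f"Loss:           ${-result:,.2f}")
```
Grouping the expense labels by natural, human, and capital resources simply mirrors the resource categories used throughout the theme.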
http://www.jacentex.org/site/index.php?option=com_content&view=article&id=85&Itemid=214
13
54
representative money or modern money, the following types of money remained in operation:
(1) Full Bodied Money: Money whose face value is equal to its real (intrinsic) value is called full bodied money. All the forms of money used under the commodity standard, such as wool, boats, sheep, cows, goats and arrows, had equal monetary and non-monetary values. Likewise, the silver one-rupee coin used in the Subcontinent before 1857 was full bodied money, because the value of the metal in such a rupee was also equal to Rs. 1. A similar situation held for the gold coins issued under the gold standard.
(2) Representative Full Bodied Money: Money in the form of coins or paper that lacks value of its own may be regarded as representative full bodied money if it is backed by an equal amount of gold or silver. In such a case the gold or silver is held by the government, and paper representing those commodities is issued for circulation. For example, from 1900 to 1930 the US government issued gold certificates. These certificates were backed by gold held by the US Treasury: by returning a certificate, the holder could claim gold equal to 100% of its face value, which meant that gold and such certificates were equally good. Like gold certificates, the US government also issued silver certificates, which were convertible at the official price of 1.29 dollars per ounce of silver. Representative money lowered transaction costs, and the costs of minting and melting gold and silver into coins were also avoided.
(3) Credit Money or Fiat Money: Money whose face value is more than its intrinsic value is called credit money. In other words, money whose monetary value exceeds its non-monetary value is given the name of credit or fiat money. Thus all the coins and paper currency notes in circulation in a country today represent credit money. In addition to official currency, the cheques of commercial banks also represent credit money, because the face value of a cheque is far more than its value as paper. Credit money, the major component of the present-day money supply, can be divided into two parts: (i) credit money issued by the government or central bank, and (ii) credit money issued by the commercial banks of a country.
(i) Credit Money Issued by Government and Central Bank: The coins issued by the government and central bank have a higher face value than the value of the metal they contain. Accordingly such coins represent credit money, as in the case of the one-rupee, two-rupee and five-rupee coins in Pakistan. Coins which are not full bodied are called token coins. In addition to coins, governments and central banks also issue credit money in the form of paper currency, as in the case of all currency notes from Rs. 10 to Rs. 5000 in Pakistan. Paper credit money which is guaranteed to be convertible into standard money is called convertible paper currency (the case of all currency notes from Rs. 10 to Rs. 5000), while paper credit money which is not guaranteed to be convertible into standard money is called inconvertible paper money (the case of the one-rupee, two-rupee or five-rupee coins in Pakistan). It must be remembered that the coins and currency notes issued by the government and central bank are known as legal money. Legal money with which payments and receipts can be made without any limit is called unlimited legal money, while legal money with which payments and receipts can be made only up to a specific limit is called limited legal money.
(ii) Credit Money Issued by Commercial Banks: The money issued by governments
and central banks suffers from two drawbacks: (a) it is inadequate to meet the rising needs of the present time, and (b) it is troublesome to transfer when large amounts are involved. A need was therefore felt for a form of money that could be used quickly for transactions and would not be bulky. Accordingly, commercial banks introduced the cheque as a medium of exchange. Cheques themselves are not money; rather, the money consists of the amounts held with the banks, and cheques are the means of transferring that money. In addition to cheques, commercial banks have introduced a variety of other monetary instruments such as drafts, call deposit receipts and credit cards.
All the above discussion shows that the major part of money today consists of credit money such as coins, paper currency and cheques. As far as the backward countries are concerned, legal money still dominates, while in the rich countries the credit money issued by commercial banks has attained extraordinary importance. In such countries people make extensive use of credit cards for transactions. The credit card, known as 'plastic money', is the latest form of money. With the spread of computers and advanced technology, the payment system operated through banks is becoming easier and easier. Accordingly, it is said that as electronic means of payment become popular, all the paper formalities regarding the clearance of cheques will come to an end. In this connection the most important development is electronic money, or e-money, which takes the following forms:
(a) Debit Cards: Debit cards are like credit cards. With these cards, the holders transfer amounts from their own accounts to the stores from which they purchase goods. A payment at a superstore can be made with a credit card, but the same can also be done with a debit card, where the amount to be paid is debited from the buyer's account simply by pressing a button on the electronic machine placed in the store. Many big companies such as VISA and MasterCard issue debit cards. Moreover, many commercial banks have also issued ATM cards to their account holders, and these cards too are used for payments.
(b) Stored Value Cards: These are like credit cards and debit cards; however, they are loaded with a specific amount. Such cards are normally used for small payments that consumers can anticipate well in advance. The most important of them is the smart card, which has its own computer chip filled with digital cash from the bank account of the card holder. These cards are used in Australia, Canada, Colombia, Denmark, France, Italy, Singapore, Spain and the U.K.; however, their use is less common in the US.
(c) Electronic Cash (E-Cash): E-cash is a type of electronic money with which goods and services can be purchased over the internet. Buyers who are account holders keep the record of their accounts on their personal computers, and amounts are transferred from the buyer's computer to the seller's computer. These amounts move from the buyer's account to the seller's account even before the sale and purchase of the goods. This system was first introduced by a Dutch company.
(d) Electronic Cheques: Internet users make the payments of their bills over the internet rather than by cheque.
For example, if a person has to pay his telephone bill, he authorises the amount to the company concerned from his personal computer over the internet, and the company then shifts that amount from the customer's account to its own account; all of this is processed through the internet. This system of transaction is easy and cheap, and it will become more popular in the coming days. The cost of transferring money by this method is about one-third of that of paper cheques. Although electronic money is beneficial, it also comes with a number of problems.
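The distinction drawn earlier in this section between full bodied money (face value equal to intrinsic value) and credit or fiat money (face value greater than intrinsic value) reduces to a single comparison. The Python sketch below illustrates that rule only; the examples and their assumed intrinsic values are rough, invented figures rather than official data.
```python
# Classify money by comparing its face (monetary) value with its intrinsic
# (non-monetary) value, following the definitions of full bodied and credit
# (fiat) money given above. All values are invented for illustration.

def classify(face_value, intrinsic_value):
    return "full bodied money" if intrinsic_value >= face_value else "credit (fiat) money"

examples = [
    ("old silver one-rupee coin", 1.00, 1.00),     # metal roughly worth its face value
    ("Rs. 5000 currency note",    5000.00, 0.50),  # paper worth far less than face value
    ("Rs. 5 token coin",          5.00, 0.80),
]

for name, face, intrinsic in examples:
    print(f"{name}: {classify(face, intrinsic)}")
```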
http://www.economicsconcepts.com/representative_money_or_modern_money.htm
13
25
The Compromise of 1867, which created the Dual Monarchy, gave the Hungarian government more control of its domestic affairs than it had possessed at any time since the Battle of Mohacs. However, the new government faced severe economic problems and the growing restiveness of ethnic minorities. World War I led to the disintegration of Austria-Hungary, and in the aftermath of the war, a series of governments--including a communist regime--assumed power in Buda and Pest (in 1872 the cities of Buda and Pest united to become Budapest). Constitutional and Legal Framework Once again a Habsburg emperor became king of Hungary, but the compromise strictly limited his power over the country's internal affairs, and the Hungarian government assumed control over its domestic affairs. The Hungarian government consisted of a prime minister and cabinet appointed by the emperor but responsible to a bicameral parliament elected by a narrow franchise. Joint Austro-Hungarian affairs were managed through "common" ministries of foreign affairs, defense, and finance. The respective ministers were responsible to delegations representing separate Austrian and Hungarian parliaments. Although the "common" ministry of defense administered the imperial and royal armies, the emperor acted as their commander in chief, and German remained the language of command in the military as a whole. The compromise designated that commercial and monetary policy, tariffs, the railroad, and indirect taxation were "common" concerns to be negotiated every ten years. The compromise also returned Transylvania, Vojvodina, and the military frontier to Hungary's jurisdiction. At Franz Joseph's insistence, Hungary and Croatia reached a similar compromise in 1868, giving the Croats a special status in Hungary. The agreement granted the Croats autonomy over their internal affairs. The Croatian ban would now be nominated by the Hungarian prime minister and appointed by the king. Areas of "common" concern to Hungarians and Croats included finance, currency matters, commercial policy, the post office, and the railroad. Croatian became the official language of Croatia's government, and Croatian representatives discussing "common" affairs before the Hungarian diet were permitted to speak Croatian. The Nationalities Law enacted in 1868 defined Hungary as a single nation comprising different nationalities whose members enjoyed equal rights in all areas except language. Although non-Hungarian languages could be used in local government, churches, and schools, Hungarian became the official language of the central government and universities. Many Hungarians thought the act too generous, while minority-group leaders rejected it as inadequate. Slovaks in northern Hungary, Romanians in Transylvania, and Serbs in Vojvodina all wanted more autonomy, and unrest followed the act's passage. The government took no further action concerning nationalities, and discontent fermented. Anti-Semitism appeared in Hungary early in the century as a result of fear of economic competition. In 1840 a partial emancipation of the Jews allowed them to live anywhere except certain depressed mining cities. The Jewish Emancipation Act of 1868 gave Jews equality before the law and effectively eliminated all bars to their participation in the economy; nevertheless, informal barriers kept Jews from careers in politics and public life. Rise of the Liberal Party Franz Joseph appointed Gyula Andrassy--a member of Deak's party--prime minister in 1867. 
His government strongly favored the Compromise of 1867 and followed a laissez-faire economic policy. Guilds were abolished, workers were permitted to bargain for wages, and the government attempted to improve education and construct roads and railroads. Between 1850 and 1875, Hungary's farms prospered: grain prices were high, and exports tripled. But Hungary's economy accumulated capital too slowly, and the government relied heavily on foreign credits. In addition, the national and local bureaucracies began to grow immediately after the compromise became effective. Soon the cost of the bureaucracy outpaced the country's tax revenues, and the national debt soared. After an economic downturn in the mid-1870s, Deak's party succumbed to charges of financial mismanagement and scandal. As a result of these economic problems, Kalman Tisza's Liberal Party, created in 1875, gained power in 1875. Tisza assembled a bureaucratic political machine that maintained control through corruption and manipulation of a woefully unrepresentative electoral system. In addition, Tisza's government had to withstand both dissatisfied nationalities and Hungarians who thought Tisza too submissive to the Austrians. The Liberals argued that the Dual Monarchy improved Hungary's economic position and enhanced its influence in European politics. Tisza's government raised taxes, balanced the budget within several years of coming to power, and completed large road, railroad, and waterway projects. Commerce and industry expanded quickly. After 1880 the government abandoned its laissez-faire economic policies and encouraged industry with loans, subsidies, government contracts, tax exemptions, and other measures. The number of Hungarians who earned their living in industry doubled to 24.2 percent of the population between 1890 and 1910, while the number dependent on agriculture dropped from 82 to 62 percent. However, the 1880s and 1890s were depression years for the peasantry. Rail and steamship transport gave North American farmers access to European markets, and Europe's grain prices fell by 50 percent. Large landowners fought the downturn by seeking trade protection and other political remedies; the lesser nobles, whose farms failed in great numbers, sought positions in the still-burgeoning bureaucracy. By contrast, the peasantry resorted to subsistence farming and worked as laborers to earn money. Hungary's population rose from 13 million to 20 million between 1850 and 1910. After 1867 Hungary's feudal society gave way to a more complex society that included the magnates, lesser nobles, middle class, working class, and peasantry. However, the magnates continued to wield great influence through several conservative parties because of their massive wealth and dominant position in the upper chamber of the diet. They fought modernization and sought both closer ties with Vienna and a restoration of Hungary's traditional social structure and institutions, arguing that agriculture should remain the mission of the nobility. They won protection from the market by reestablishment of a system of entail and also pushed for restriction of middle-class profiteering and restoration of corporal punishment. The Roman Catholic Church was a major ally of the magnates. Some lesser-noble landowners survived the agrarian depression of the late nineteenth century and continued farming. Many others turned to the bureaucracy or to the professions. 
In the mid-1800s, Hungary's middle class consisted of a small number of German and Jewish merchants and workshop owners who employed a few craftsmen. By the turn of the century, however, the middle class had grown in size and complexity and had become predominantly Jewish. In fact, Jews created the modern economy that supported Tisza's bureaucratic machine. In return, Tisza not only denounced anti-Semitism but also used his political machine to check the growth of an anti-Semitic party. In 1896 his successors passed legislation securing the Jews' final emancipation. By 1910 about 900,000 Jews made up approximately 5 percent of the population and about 23 percent of Budapest's citizenry. Jews accounted for 54 percent of commercial business owners, 85 percent of financial institution directors and owners, and 62 percent of all employees in commerce. The rise of a working class came naturally with industrial development. By 1900 Hungary's mines and industries employed nearly 1.2 million people, representing 13 percent of the population. The government favored low wages to keep Hungarian products competitive on foreign markets and to prevent impoverished peasants from flocking to the city to find work. The government recognized the right to strike in 1884, but labor came under strong political pressure. In 1890 the Social Democratic Party was established and secretly formed alliances with the trade unions. The party soon enlisted one-third of Budapest's workers. By 1900 the party and union rolls listed more than 200,000 hard-core members, making it the largest secular organization the country had ever known. The diet passed laws to improve the lives of industrial workers, including providing medical and accident insurance, but it refused to extend them voting rights, arguing that broadening the franchise would give too many non-Hungarians the vote and threaten Hungarian domination. After the Compromise of 1867, the Hungarian government also launched an education reform in an effort to create a skilled, literate labor force. As a result, the literacy rate had climbed to 80 percent by 1910. Literacy raised the expectations of workers in agriculture and industry and made them ripe for participation in movements for political and social change. The plight of the peasantry worsened drastically during the depression at the end of the nineteenth century. The rural population grew, and the size of the peasants' farm plots shrank as land was divided up by successive generations. By 1900 almost half of the country's landowners were scratching out a living from plots too small to meet basic needs, and many farm workers had no land at all. Many peasants chose to emigrate, and their departure rate reached approximately 50,000 annually in the 1870s and about 200,000 annually by 1907. The peasantry's share of the population dropped from 72.5 percent in 1890 to 68.4 percent in 1900. The countryside also was characterized by unrest, to which the government reacted by sending in troops, banning all farm-labor organizations, and passing other repressive legislation. In the late nineteenth century, the Liberal Party passed laws that enhanced the government's power at the expense of the Roman Catholic Church. The parliament won the right to veto clerical appointments, and it reduced the church's nearly total domination of Hungary's education institutions. Additional laws eliminated the church's authority over a number of civil matters and, in the process, introduced civil marriage and divorce procedures. 
The Liberal Party also worked with some success to create a unified, Magyarized state. Ignoring the Nationalities Law, they enacted laws that required the Hungarian language to be used in local government and increased the number of school subjects taught in that language. After 1890 the government succeeded in Magyarizing educated Slovaks, Germans, Croats, and Romanians and co-opting them into the bureaucracy, thus robbing the minority nationalities of an educated elite. Most minorities never learned to speak Hungarian, but the education system made them aware of their political rights, and their discontent with Magyarization mounted. Bureaucratic pressures and heightened fears of territorial claims against Hungary after the creation of new nation-states in the Balkans forced Tisza to outlaw "national agitation" and to use electoral legerdemain to deprive the minorities of representation. Nevertheless, in 1901 Romanian and Slovak national parties emerged undaunted by incidents of electoral violence and police repression. SOURCE: Area Handbook of the US Library of Congress
http://motherearthtravel.com/history/hungary/history-7.htm
13
64
Understanding Climate Change: A Beginner's Guide to the UN Framework Convention and its Kyoto Protocol First act: the Convention A giant asteroid could hit the earth! Something else could happen! The global temperature could rise! Wake up! The last several decades have been a time of international soul-searching about the environment. What are we doing to our planet? More and more, we are realizing that the Industrial Revolution has changed forever the relationship between humanity and nature. There is real concern that by the middle or the end of the 21st century human activities will have changed the basic conditions that have allowed life to thrive on earth. The 1992 United Nations Framework Convention on Climate Change is one of a series of recent agreements through which countries around the world are banding together to meet this challenge. Other treaties deal with such matters as pollution of the oceans, dryland degradation, damage to the ozone layer, and the rapid extinction of plant and animal species. The Climate Change Convention focuses on something particularly disturbing: we are changing the way energy from the sun interacts with and escapes from our planet's atmosphere. By doing that, we risk altering the global climate. Among the expected consequences are an increase in the average temperature of the earth's surface and shifts in world-wide weather patterns. Other -- unforeseen -- effects cannot be ruled out. We have a few problems to face up to. Problem No. 1 (the big problem): Scientists see a real risk that the climate will change rapidly and dramatically over the coming decades and centuries. Can we handle it? A giant asteroid did hit the earth -- about 65 million years ago. Splat. Scientists speculate that the collision threw so much dust into the atmosphere that the world was dark for three years. Sunlight was greatly reduced, so many plants could not grow, temperatures fell, the food chain collapsed, and many species, including the largest ever to walk the earth, died off. That, at least, is the prevailing theory of why the dinosaurs became extinct. Even those who weren't actually hit by the asteroid paid the ultimate price. The catastrophe that befell the dinosaurs is only one illustration, if dramatic, of how changes in climate can make or break a species. According to another theory, human beings evolved when a drying trend some 10 million years ago was followed around three million years ago by a sharp drop in world temperature. The ape-like higher primates in the Great Rift Valley of Africa were used to sheltering in trees, but, under this long-term climate shift, the trees were replaced with grassland. The 'apes' found themselves on an empty plain much colder and drier than what they were used to, and extremely vulnerable to predators. Extinction was a real possibility, and the primates appear to have responded with two evolutionary jumps -- first to creatures who could walk upright over long distances, with hands free for carrying children and food; and then to creatures with much larger brains, who used tools and were omnivorous (could eat both plants and meat). This second, large-brained creature is generally considered to be the first human. Shifts in climate have shaped human destiny ever since, and people have largely responded by adapting, migrating, and growing smarter. During a later series of ice ages, sea levels dropped and humans moved across land bridges from Asia to the Americas and the Pacific islands. 
Many subsequent migrations, many innovations, many catastrophes have followed. Some can be traced to smaller climatic fluctuations, such as a few decades or centuries of slightly higher or lower temperatures, or extended droughts. Best known is the Little Ice Age that struck Europe in the early Middle Ages, bringing famines, uprisings, and the withdrawal of northern colonies in Iceland and Greenland. People have suffered under the whims of climate for millennia, responding with their wits, unable to influence these large events. Until now. Ironically, we humans have been so remarkably successful as a species that we may have backed ourselves into a corner. Our numbers have grown to the point where we have less room for large-scale migration should a major climate shift call for it. And the products of our large brains -- our industries, transport, and other activities -- have led to something unheard of in the past. Previously the global climate changed human beings. Now human beings seem to be changing the global climate. The results are uncertain, but if current predictions prove correct, the climatic changes over the coming century will be larger than any since the dawn of human civilization. The principal change to date is in the earth's atmosphere. The giant asteroid that felled the dinosaurs threw large clouds of dust into the air, but we are causing something just as profound if more subtle. We have changed, and are continuing to change, the balance of gases that form the atmosphere. This is especially true of such key "greenhouse gases" as carbon dioxide (CO2), methane (CH4), and nitrous oxide (N2O). (Water vapour is the most important greenhouse gas, but human activities do not affect it directly.) These naturally occurring gases make up less than one tenth of one per cent of the total atmosphere, which consists mostly of oxygen (21 per cent) and nitrogen (78 per cent). But greenhouse gases are vital because they act like a blanket around the earth. Without this natural blanket the earth's surface would be some 30°C colder than it is today. The problem is that human activity is making the blanket "thicker". For example, when we burn coal, oil, and natural gas we spew huge amounts of carbon dioxide into the air. When we destroy forests the carbon stored in the trees escapes to the atmosphere. Other basic activities, such as raising cattle and planting rice, emit methane, nitrous oxide, and other greenhouse gases. If emissions continue to grow at current rates, it is almost certain that atmospheric levels of carbon dioxide will double from pre-industrial levels during the 21st century. If no steps are taken to slow greenhouse gas emissions, it is quite possible that levels will triple by the year 2100. The most direct result, says the scientific consensus, is likely to be a "global warming" of 1 to 3.5°C over the next 100 years. That is in addition to an apparent temperature increase of around half a degree Centigrade since the pre-industrial period before 1850, at least some of which may be due to past greenhouse gas emissions. Just how this would affect us is hard to predict because the global climate is a very complicated system. If one key aspect -- such as the average global temperature -- is altered, the ramifications ripple outward. Uncertain effects pile onto uncertain effects. For example, wind and rainfall patterns that have prevailed for hundreds or thousands of years, and on which millions of people depend, may change. 
Sea-levels may rise and threaten islands and low-lying coastal areas. In a world that is increasingly crowded and under stress -- a world that has enough problems already -- these extra pressures could lead directly to more famines and other catastrophes. While scientists are scrambling to understand more clearly the effects of our greenhouse gas emissions, countries around the globe have joined together to confront the problem. How the Convention responds -- It recognizes that there is a problem. That's a significant step. It is not easy for the nations of the world to agree on a common course of action, especially one that tackles a problem whose consequences are uncertain and which will be more important for our grandchildren than for the present generation. Still, the Convention was negotiated in a little over two years, and over 175 states have ratified and so are legally bound by it. The treaty took effect on 21 March 1994. -- It sets an "ultimate objective" of stabilizing "greenhouse gas concentrations in the atmosphere at a level that would prevent dangerous anthropogenic (human-induced) interference with the climate system." The objective does not specify what these concentrations should be, only that they be at a level that is not dangerous. This acknowledges that there is currently no scientific certainty about what a dangerous level would be. Scientists believe it will take about another decade (and the next generation of supercomputers) before today's uncertainties (or many of them) are significantly reduced. The Convention's objective thus remains meaningful no matter how the science evolves. -- It directs that "such a level should be achieved within a time-frame sufficient to allow ecosystems to adapt naturally to climate change, to ensure that food production is not threatened and to enable economic development to proceed in a sustainable manner." This highlights the main concerns about food production -- probably the most climate-sensitive human activity -- and economic development. It also suggests (as most climatologists believe) that some change is inevitable and that adaptive as well as preventive measures are called for. Again, this leaves room for interpretation in the light of scientific findings and the trade-offs and risks that the global community is willing to accept. Problem No. 2: If the consequences of a problem are uncertain, do you ignore the problem or do you do something about it anyway? Climate change is a threat to mankind. But no one is certain about its future effects or their severity. Responding to the threat is expected to be complicated and difficult. There is even some remaining disagreement over whether any problem exists at all: while many people worry that the effects will be extremely serious, others still argue that scientists cannot prove that what they suspect will happen will actually happen. In addition, it is not clear who (in the various regions of the world) will suffer most. Yet if the nations of the world wait until the consequences and victims are clear, it will probably be too late to act. What should we do? The truth is that in most scientific circles the issue is no longer whether or not climate change is a potentially serious problem. Rather, it is how the problem will develop, what its effects will be, and how these effects can best be detected. Computer models of something as complicated as the planet's climate system are not far enough advanced yet to give clear and unambiguous answers. 
Nevertheless, while the when, where, and how remain uncertain, the big picture painted by these climate models cries out for attention. -- Regional rain patterns may change. At the global level, the evapo-transpiration cycle is expected to speed up. This means that it would rain more, but the rain would evaporate faster, leaving soils drier during critical parts of the growing season. New or worsening droughts, especially in poorer countries, could reduce supplies of clean, fresh water to the point where there are major threats to public health. Because they still lack confidence in regional scenarios, scientists are uncertain about which areas of the world risk becoming wetter and which drier. But with global water resources already under severe strain from rapid population growth and expanding economic activity, the danger is clear. -- Climate and agricultural zones may shift towards the poles. In the mid-latitude regions the shift is expected to be 150 to 550 kilometres for a warming of 1-3.5°C. Increased summer dryness may reduce mid-latitude crop yields, and it is possible that today's leading grain-producing areas (such as the Great Plains of the United States) would experience more frequent droughts and heat waves. The poleward edges of the mid-latitude agricultural zones -- northern Canada, Scandinavia, Russia, and Japan in the northern hemisphere, and southern Chile and Argentina in the southern hemisphere -- might benefit from higher temperatures. However, in some areas rugged terrain and poor soil would prevent these countries from compensating for reduced yields in today's more productive areas. -- Melting glaciers and the thermal expansion of sea water may raise sea levels, threatening low-lying coastal areas and small islands. The global mean sea level has already risen by around 10 to 15 centimetres during the past century, and global warming is expected to cause a further rise of 15 to 95 cm by the year 2100 (with a "best estimate" of 50 cm). The most vulnerable land would be the unprotected, densely populated coastal regions of some of the world's poorest countries. Bangladesh, whose coast is already prone to devastating floods, would be a likely victim, as would many small island states such as the Maldives. These scenarios are alarming enough to raise concern, but too uncertain for easy decisions by governments. The picture is fuzzy. Some governments, beleaguered by other problems and responsibilities and bills to pay, have understandably been tempted to do nothing at all. Maybe the threat will go away. Or someone else will deal with it. Maybe another giant asteroid will hit the earth. Who knows? How the Convention responds -- It establishes a framework and a process for agreeing to specific actions -- later. The diplomats who wrote the Framework Convention on Climate Change saw it as a launching pad for potential further action in the future. They recognized that it would not be possible in the year 1992 for the world's governments to agree on a detailed blueprint for tackling climate change. But by establishing a framework of general principles and institutions, and by setting up a process through which governments meet regularly, they got things started. A key benefit of this approach is that it allowed countries to begin discussing the issue even before they all fully agreed that it is, in fact, a problem. Even skeptical countries have felt it is worthwhile participating. (Or, to put it another way, they would have felt uneasy about being left out.) 
This created legitimacy for the issue, and a sort of international peer pressure to take the subject seriously. The Convention is designed to allow countries to weaken or strengthen the treaty in response to new scientific developments. For example, they can agree to take more specific actions (such as reducing emissions of greenhouse gases by a certain amount) by adopting "amendments" or "protocols" to the Convention. This is what happened in 1997 with the adoption of the Kyoto Protocol. The treaty promotes action in spite of uncertainty on the basis of a recent development in international law and diplomacy called the "precautionary principle." Under traditional international law, an activity generally has not been restricted or prohibited unless a direct causal link between the activity and a particular damage can be shown. But many environmental problems, such as damage to the ozone layer and pollution of the oceans, cannot be confronted if final proof of cause and effect is required. In response, the international community has gradually come to accept the precautionary principle, under which activities that threaten serious or irreversible damage can be restricted or even prohibited before there is absolute scientific certainty about their effects. -- The Convention takes preliminary steps that clearly make sense for the time being. Countries ratifying the Convention -- called "Parties to the Convention" in diplomatic jargon -- agree to take climate change into account in such matters as agriculture, energy, natural resources, and activities involving sea-coasts. They agree to develop national programmes to slow climate change. The Convention encourages them to share technology and to cooperate in other ways to reduce greenhouse gas emissions, especially from energy, transport, industry, agriculture, forestry, and waste management, which together produce nearly all greenhouse gas emissions attributable to human activity. -- The Convention encourages scientific research on climate change. It calls for data gathering, research, and climate observation, and it creates a "subsidiary body" for "scientific and technological advice" to help governments decide what to do next. Each country that is a Party to the Convention must also develop a greenhouse gas "inventory" listing its national sources (such as factories and transport) and "sinks" (forests and other natural ecosystems that absorb greenhouse gases from the atmosphere). These inventories must be updated regularly and made public. The information they provide on which activities emit how much of each gas is essential for monitoring changes in emissions and determining the effects of measures taken to control emissions. Problem No. 3: It's not fair. If a giant asteroid hits the earth, that's nobody's fault. The same cannot be said for global warming. There is a fundamental unfairness to the climate change problem that chafes at the already uneasy relations between the rich and poor nations of the world. Countries with high standards of living are mostly (if unwittingly) responsible for the rise in greenhouse gases. These early industrializers -- Europe, North America, Japan, and a few others -- created their wealth in part by pumping into the atmosphere vast amounts of greenhouse gases long before the likely consequences were understood. Developing countries now fear being told that they should curtail their own fledgling industrial activities -- that the atmosphere's safety margin is all used up. 
Because energy-related emissions are the leading cause of climate change, there will be growing pressure on all countries to reduce the amounts of coal and oil they use. There also will be pressure (and incentives) to adopt advanced technologies so that less damage is inflicted in the future. Buying such technologies can be costly. Countries in the early stages of industrialization -- countries struggling hard to give their citizens better lives -- don't want these additional burdens. Economic development is difficult enough already. If they agreed to cut back on burning the fossil fuels that are the cheapest, most convenient, and most useful for industry, how could they make any progress? There are other injustices to the climate change problem. The countries to suffer the most if the predicted consequences come about -- if agricultural zones shift or sea levels rise or rainfall patterns change -- will probably be in the developing world. These nations simply do not have the scientific or economic resources, or the social safety nets, to cope with disruptions in climate. Also, in many of these countries rapid population growth has pushed many millions of people onto marginal land -- the sort of land that can change most drastically due to variations in climate. How the Convention responds -- It puts the lion's share of the responsibility for battling climate change -- and the lion's share of the bill -- on the rich countries. The Convention tries to make sure that any sacrifices made in protecting our shared atmosphere will be shared fairly among countries -- in accordance with their "common but differentiated responsibilities and respective capabilities and their social and economic conditions". It notes that the largest share of historical and current emissions originates in developed countries. Its first basic principle is that these countries should take the lead in combating climate change and its adverse impacts. Specific commitments in the treaty relating to financial and technological transfers apply only to very richest countries, essentially the members of the Organization for Economic Cooperation and Development (OECD). They agree to support climate change activities in developing countries by providing financial support above and beyond any financial assistance they already provide to these countries. Specific commitments concerning efforts to limit greenhouse gas emissions and enhance natural sinks apply to the OECD countries as well as to 12 "economies in transition" (Central and Eastern Europe and the former Soviet Union). Under the Convention, the OECD and transition countries are expected to try to return by the year 2000 to the greenhouse gas emission levels they had in 1990. -- The Convention recognizes that poorer nations have a right to economic development. It notes that the share of global emissions of greenhouse gases originating in developing countries will grow as these countries expand their industries to improve social and economic conditions for their citizens. -- It acknowledges the vulnerability of poorer countries to the effects of climate change. One of the Convention's basic principles is that the specific needs and circumstances of developing countries should be given "full consideration" in any actions taken. This applies in particular to those whose fragile ecosystems are highly vulnerable to the impacts of climate change. 
The Convention also recognizes that states which depend on income from coal and oil would face difficulties if energy demand changes. Problem No. 4: If the whole world starts consuming more and living the good life, can the planet stand the strain? As the human population continues to grow, the demands human beings place on the environment increase. The demands are becoming all the greater because these rapidly increasing numbers of people also want to live better lives. More and better food, more and cleaner water, more electricity, refrigerators, automobiles, houses and apartments, land on which to put houses and apartments . . . Already there are severe problems supplying enough fresh water to the world's billions. Burgeoning populations are draining the water from rivers and lakes, and vast underground aquifers are steadily being depleted. What will people do when these natural "tanks" are empty? There are also problems growing and distributing enough food -- widespread hunger in many parts of the world attests to that. There are other danger signals. The global fish harvest has declined sharply; as large as the oceans are, the most valuable species have been effectively fished out. Global warming is a particularly ominous example of humanity's insatiable appetite for natural resources. During the last century we have dug up and burned massive stores of coal, oil, and natural gas that took millions of years to accumulate. Our ability to burn up fossil fuels at a rate that is much, much faster than the rate at which they were created has upset the natural balance of the carbon cycle. The threat of climate change arises because one of the only ways the atmosphere -- also a natural resource -- can respond to the vast quantities of carbon being liberated from beneath the earth's surface is to warm up. Meanwhile, human expectations are not tapering off. They are increasing. The countries of the industrialized "North" have 20 per cent of the world's people but use about 80 per cent of the world's resources. By global standards, they live extremely well. It's nice living the good life, but if everyone consumed as much as the North Americans and Western Europeans consume -- and billions of people aspire to do just that -- there probably would not be enough clean water and other vital natural resources to go around. How will we meet these growing expectations when the world is already under so much stress? How the Convention responds -- It supports the concept of "sustainable development." Somehow, mankind must learn how to alleviate poverty for huge and growing numbers of people without destroying the natural environment on which all human life depends. Somehow a way has to be found to develop economically in a fashion that is sustainable over a long period of time. The buzzword for this challenge among environmentalists and international bureaucrats is "sustainable development". The trick will be to find methods for living well while using critical natural resources at a rate no faster than that at which they are replaced. Unfortunately, the international community is a lot farther along in defining the problems posed by sustainable development than it is in figuring out how to solve them. -- The Convention calls for developing and sharing environmentally sound technologies and know-how. Technology will clearly play a major role in dealing with climate change. If we can find practical ways to use cleaner sources of energy, such as solar power, we can reduce the consumption of coal and oil. 
Technology can make industrial processes more efficient, water purification more viable, and agriculture more productive for the same amount of resources invested. Such technology must be made widely available -- it must somehow be shared by richer and more scientifically advanced countries with poorer countries that have great need of it. -- The Convention emphasizes the need to educate people about climate change. Today's children and future generations must learn to look at the world in a different way than it was looked at by most people during the 20th century. This is both an old and a new idea. Many (but not all!) pre-industrial cultures lived in balance with nature. Now scientific research is telling us to do much the same thing. Economic development is no longer a case of "bigger is better" -- bigger cars, bigger houses, bigger harvests of fish, bigger doses of oil and coal. We must no longer think of human progress as a matter of imposing ourselves on the natural environment. The world -- the climate and all living things -- is a closed system; what we do has consequences that eventually come back to affect us. Tomorrow's children -- and today's adults, for that matter -- will have to learn to think about the effects of their actions on the climate. When they make decisions as members of governments and businesses, and as they go about their private lives, they will have to take the climate into account. In other words, human behaviour will have to change -- probably the sooner the better. But such things are difficult to prescribe and predict. People will need stronger signals and incentives if they are to do more for the good of the global climate. That leads to... Second act: the Protocol The 1992 Convention was a good start. But as the years passed, and the scientific evidence continued to accumulate, people naturally asked, "what's next"? In 1997, governments responded to growing public pressure by adopting the Kyoto Protocol. A protocol is an international agreement that stands on its own but is linked to an existing treaty. This means that the climate protocol shares the concerns and principles set out in the climate convention. It then builds on these by adding new commitments -- which are stronger and far more complex and detailed than those in the Convention. This complexity is a reflection of the enormous challenges posed by the control of greenhouse gas emissions. It is also a result of the diverse political and economic interests that had to be balanced in order to reach an agreement. Billion-dollar industries will be reshaped; some will profit from the transition to a climate-friendly economy, others will not. Because the Kyoto Protocol will affect virtually all major sectors of the economy, it is considered to be the most far-reaching agreement on environment and sustainable development ever adopted. This is a sign that the international community is willing to face reality and start taking concrete actions to minimize the risk of climate change. The Protocol's negotiators were able to take this important step forward only after facing up to some tough questions. Problem No 5: Emissions are still growing. Isn't it time to take some serious action? Three years after the Climate Change Convention was adopted at the Rio Earth Summit, the Intergovernmental Panel on Climate Change (IPCC) published its second major assessment of climate change research. 
Written and reviewed by some 2,000 scientists and experts, the report was soon famous for concluding that the climate may have already started responding to past emissions. It also confirmed the availability of many cost-effective strategies for reducing greenhouse gas emissions. Meanwhile, although emissions in some countries stabilized, emissions levels continued to rise around the world. More and more people came to accept that only a firm and binding commitment by developed countries to reduce greenhouse gases could send a signal strong enough to convince businesses, communities, and individuals to change their ways. Finally, there was the practical matter that the year 2000 was fast approaching, and with it the Convention's non-binding "aim" for industrialized countries -- to return emissions to 1990 levels by the year 2000 -- would expire. Clearly, new steps were needed. How the Protocol responds -- It sets legally binding targets and timetables for cutting developed country emissions. The Convention encouraged these countries to stabilize emissions; the Protocol will commit them to reducing their collective emissions by at least 5%. Each country's emissions levels will be calculated as an average of the years 2008-2012; these five years are known as the first commitment period. Governments must make "demonstrable progress" towards this goal by the year 2005. These arrangements will be periodically reviewed. The first review is likely to take place in the middle of the first decade of the new century. At this time the Parties will take "appropriate action" on the basis of the best available scientific, technical, and socio-economic information. Talks on targets for the second commitment period must start by 2005. The Protocol will only become legally binding when at least 55 countries, including developed countries accounting for at least 55% of developed countries' 1990 CO2 emissions, have ratified it. This should happen some time after the year 2000. -- The Protocol addresses the six main greenhouse gases. These gases are to be combined in a "basket", so that reductions in each gas are credited towards a single target number. This is complicated by the fact that, for example, a kilo of methane has a stronger effect on the climate than does a kilo of carbon dioxide. Cuts in individual gases are therefore translated into "CO2 equivalents" that can be added up to produce one figure. Cuts in the three major gases -- carbon dioxide, methane, and nitrous oxide -- will be measured against a base year of 1990 (with exceptions for some countries with economies in transition). Cuts in the three long-lived industrial gases -- hydrofluorocarbons (HFCs), perfluorocarbons (PFCs), and sulphur hexafluoride (SF6) -- can be measured against either a 1990 or 1995 baseline. Carbon dioxide is by far the most important gas in the basket. It accounted for over four fifths of total greenhouse gas emissions from developed countries in 1995, with fuel combustion accounting for all but a few per cent of this amount. Fortunately, CO2 emissions from fuel are relatively easy to measure and monitor. Deforestation is the second largest source of carbon dioxide emissions in developed countries. Under the Protocol, targets can be met in part by improving the ability of forests and other natural sinks to absorb carbon dioxide from the atmosphere. Calculating the amount absorbed, however, is methodologically complex. Governments must still agree on a common approach.
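To illustrate the "basket" arithmetic described above, here is a minimal sketch in Python. It is not part of the Protocol's formal accounting rules: the emissions figures are invented, and the global warming potential (GWP) values shown are assumed to be the commonly cited 100-year figures from the IPCC's Second Assessment Report.

    # Illustrative only: converting a "basket" of greenhouse gases into one
    # CO2-equivalent figure. GWP values are assumptions (IPCC Second Assessment
    # Report, 100-year horizon); the emissions numbers are invented.
    GWP_100 = {
        "CO2": 1,        # carbon dioxide (reference gas)
        "CH4": 21,       # methane
        "N2O": 310,      # nitrous oxide
        "SF6": 23900,    # sulphur hexafluoride
    }

    def co2_equivalent(emissions_tonnes):
        """Weight each gas by its GWP and sum to a single CO2-equivalent total."""
        return sum(GWP_100[gas] * tonnes for gas, tonnes in emissions_tonnes.items())

    # Hypothetical national emissions, in tonnes of each gas:
    emissions = {"CO2": 500_000_000, "CH4": 2_000_000, "N2O": 100_000, "SF6": 50}
    print(f"{co2_equivalent(emissions):,} tonnes CO2-equivalent")

In this picture, a cut in any gas in the basket lowers the single CO2-equivalent total, and it is that total which is measured against a country's national target.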
The second most important gas covered by the Protocol is methane. Methane is released by rice cultivation, domesticated animals such as cattle, and the disposal and treatment of garbage and human wastes. Methane emissions are generally stable or declining in the developed countries, and their control does not seem to pose as great a challenge as carbon dioxide. Nitrous oxide is emitted mostly as a result of fertilizer use. As with methane, emissions from developed countries are stable or declining. Nitrous oxide and methane emissions are also similar in being relatively difficult to measure. One major group of greenhouse gases that the Protocol does not cover is the chlorofluorocarbons (CFCs). This is because CFCs are being phased out under the 1987 Montreal Protocol on Substances that Deplete the Ozone Layer. Thanks to this agreement, atmospheric concentrations of many CFCs are stabilizing and expected to decline over the coming decades. However, the Protocol does address three long-lived and potent greenhouse gases that, like CFCs, have been created by industry for specialized applications. The use of HFCs and PFCs threatens to rise dramatically, in part because they are being adopted as ozone-safe replacements for CFCs. Governments are now working to make sure that the incentives and controls for ozone depletion and global warming are compatible. The third man-made gas, sulphur hexafluoride, is used as an electric insulator, heat conductor, and freezing agent. Molecule for molecule, its global warming potential is thought to be 23,900 times greater than that of carbon dioxide. -- The Protocol recognizes that emissions cuts must be credible and verifiable. Ensuring that governments comply with their targets will be essential to the Protocol's success. Each country will need an effective national system for estimating emissions and confirming reductions. Standardized guidelines must be crafted to make figures comparable from one country to the next and the whole process transparent. The Protocol allows governments that cut emissions more than they are required to by their national target to "bank" the "excess" as credits for future commitment periods. But what happens if a country's emissions are higher than what is permitted by its target? Non-compliance provisions still need to be developed. Clearly, though, the best approach both politically and environmentally will be to start by helping governments to comply rather than emphasizing punitive or confrontational measures. Problem No. 6: How can we make our behaviour and our economies more climate-friendly? Minimizing greenhouse gas emissions will require policymakers to take some tough decisions. Every time a subsidy is added or removed, and every time a regulation or reform is put in place, somebody says "ouch". Even though the economy as a whole stands to benefit from well-designed, market-oriented policies for reducing emissions, action -- or inaction -- by government always helps create winners and losers in the marketplace. The challenge for policymakers is to design policies that fully engage the energies of civil society. Their goal must be to open the floodgates of industrial creativity. Experience shows that companies often respond rapidly and positively to incentives and pressures. Given the right policy environment, the business sector will roll out low-emissions technologies and services faster than many now believe possible. Schools, community groups, the media, families, and consumers also have a crucial role to play.
Individuals can make a real difference by changing their habits and making thoughtful purchases and investments. If consumers are convinced that the rules of the game are changing, they will start taking the myriad small decisions that, when added together, can have a dramatic impact on emissions. If large segments of society are willing to make these changes, we can expect an early transition to more energy-efficient, technologically innovative, and environmentally sustainable societies. The trick is getting started. How the Protocol responds -- It highlights effective domestic policies and measures for reducing emissions. National governments can build a fiscal and policy framework that discourages emissions. They can phase out counter-productive subsidies on carbon-intensive activities, and they can introduce energy-efficiency and other regulatory standards that promote the best current and future technologies. Taxes, tradable emissions permits, information programmes, and voluntary programmes can all contribute. Local and urban governments -- which often have direct responsibility for transport, housing, and other greenhouse gas-emitting sectors of the economy -- can also play a role. They can start designing and building better public transport systems and creating incentives for people to use them rather than private automobiles. They can tighten construction codes so that new houses and office buildings will be heated or cooled with less fuel. Meanwhile, industrial companies need to start shifting to new technologies that use fossil fuels and raw materials more efficiently. Wherever possible they should switch to renewable energy sources such as wind and solar power. They should also redesign products such as refrigerators, automobiles, cement mixes, and fertilizers so that they produce lower greenhouse gas emissions. Farmers should look to technologies and methods that reduce the methane emitted by livestock and rice fields. Individual citizens, too, must cut their use of fossil fuels -- take public transport more often, switch off the lights in empty rooms -- and be less wasteful of all natural resources. The Protocol also flags the importance of conducting research into innovative technologies, limiting methane emissions from waste management and energy systems, and protecting forests and other carbon sinks. -- The Protocol encourages governments to work together. Policymakers can learn from one another and share ideas and experiences. They may choose to go further, coordinating national policies in order to have more impact in a globalized marketplace. Governments should also consider the effects of their climate policies on others, notably developing countries, and seek to minimize any negative economic consequences. Problem No. 7: How should we divide up the work -- while sharing the burden fairly? The Climate Change Convention calls on the rich countries to take the initiative in controlling emissions. In line with this, the Kyoto Protocol sets emission targets for the industrialized countries only -- although it also recognizes that developing countries have a role to play. Agreeing how to share the responsibility for cutting emissions amongst the 40 or so developed countries was a major challenge. Lumping all developed countries into one big group risks ignoring the many differences between them. Each country is unique, with its own mix of energy resources and price levels, population density, regulatory traditions, and political culture.
For example, the countries of Western Europe tend to have lower per capita emissions than do countries such as Australia, Canada, and the US. Western Europe's emissions levels have generally stabilized since 1990 -- the base year for measuring emissions -- while other developed countries have seen their emissions rise. Japan made great strides in energy efficiency in the 1980s, while countries such as Norway and New Zealand have relatively low emissions because they rely on hydropower or nuclear energy. Meanwhile, the energy-intensive countries of Central and Eastern Europe and the former Soviet Union have seen emissions fall dramatically since 1990 due to their transition to market economies. These differing national profiles make it difficult to agree on a one-size-fits-all solution. How the Protocol responds -- It assigns a national target to each country. In the end, it was not possible to agree in Kyoto on a uniform target for all countries. The resulting individual targets were not based on any rigorous or objective formula. Rather, they were the outcome of political negotiation and compromise. The overall 5% target for developed countries is to be met through cuts of 8% in the European Union (EU), Switzerland, and most Central and East European states; 7% in the US; and 6% in Canada, Hungary, Japan, and Poland. New Zealand, Russia, and Ukraine are to stabilize their emissions, while Norway may increase emissions by up to 1%, Australia by up to 8%, and Iceland by up to 10%. The EU has made its own internal agreement to meet its 8% target by distributing different rates to its member states, just as the entire developed group's 5% target was shared out. These targets range from a 28% reduction for Luxembourg and 21% cuts for Denmark and Germany to a 25% increase for Greece and a 27% increase for Portugal. -- The Protocol offers additional flexibility to the countries with economies in transition. In particular, they have more leeway in choosing the base year against which emissions reductions are to be measured. They also do not share the commitment of the richer developed countries to provide "new and additional financial resources" and facilitate technology transfer for developing country Parties. -- It also reconfirms the broader commitments of all countries -- developed and developing. Under the Convention, both developed and developing countries agree to take measures to limit emissions and adapt to future climate change impacts; submit information on their national climate change programmes and emissions levels; facilitate technology transfer; cooperate on scientific and technical research; and promote public awareness, education, and training. These commitments are reaffirmed in the Protocol, which also sets out ways of advancing their implementation. The issue of emissions targets for developing countries, and the broader question of how commitments should evolve in the future given continuing growth in global emissions, has generated a great deal of intense debate. A proposal that the Protocol should establish a procedure whereby developing countries could take on voluntary commitments to limit (that is, reduce the rate of increase in) their emissions was not accepted in Kyoto. Many developing countries resist formal commitments, even if voluntary, that would put an upper limit on their emissions, noting that their per capita emissions are still low compared to those of developed countries.
Once developed countries start to convincingly demonstrate that they are taking effective actions to achieve their emissions targets, the debate on how new countries might eventually be brought into the structure of specific commitments may be revived. This is in keeping with the step-by-step approach of the intergovernmental climate regime. The Kyoto Protocol is not an end result, and can be strengthened and built on in the future. What's more, although developing countries are not currently subject to any specific timetables and targets, they are expected to take measures to limit the growth rate of their emissions and to report on actions they are taking to address climate change. There is a good deal of evidence that many developing countries are indeed taking steps that should help their emissions grow at a slower rate than their economic output. This is particularly true in the field of energy. Problem No. 8: I don't want to spend more money on this than is absolutely necessary! People are keen to combat climate change because they fear it may be destructive and costly. At the same time, they naturally want to buy their "climate insurance" at the lowest price possible. Fortunately, the costs of climate change policies can be minimized through "no regrets" strategies. Such strategies make economic and environmental sense whether or not the world is moving towards rapid climate change. For example, boosting energy efficiency not only reduces greenhouse gas emissions but lowers the cost of energy, thus making industries and countries more competitive in international markets; it also eases the health and environmental costs of urban air pollution. At the same time, the precautionary principle and the expected net damages from climate change justify adopting policies that do entail some costs. Calculating the costs of climate change policies is not easy. How quickly power plants and other infrastructure are replaced by newer and cleaner equipment, how interest rate trends affect corporate planning and investment, and the way businesses and consumers respond to climate change policies are just a few of the variables to consider. Costs can also vary from place to place. In general, the costs of improving energy efficiency should be lower in countries that are the most energy inefficient. Countries in the early stages of industrialization may offer cheaper opportunities for installing modern environmentally friendly technologies than do countries whose industrial plant is already developed. And so on. How the Protocol responds -- The Protocol innovates by giving Parties credit for reducing emissions in other countries. It establishes three "mechanisms" for obtaining these credits. The idea is that countries that find it particularly expensive to reduce emissions at home can pay for cheaper emissions cuts elsewhere. The global economic efficiency of reducing emissions is increased while the overall 5% reduction target is still met. The Protocol stipulates, however, that credit for making reductions elsewhere must be supplementary to domestic emissions cuts. Governments must still decide just how the three mechanisms for doing this will function. The rules they adopt will strongly influence the costs of meeting emissions targets. They will also determine the environmental credibility of the mechanisms -- that is, their ability to contribute to the Protocol's aims rather than opening up "loopholes" in emissions commitments. 
-- An emissions trading regime will allow industrialized countries to buy and sell emissions credits amongst themselves. Countries that limit or reduce emissions more than is required by their agreed target will be able to sell the excess emissions credits to countries that find it more difficult or more expensive to meet their own targets. Trades will need to be approved by all of the involved parties. Beyond this, however, the rules have not yet been decided on. Some observers are concerned that the Kyoto targets of some countries are so low that they can be met with minimal effort. These countries could then sell large quantities of emission credits (known as "hot air"), reducing pressure on other industrialized countries to make domestic cuts. Governments are debating the best way to ensure that emissions trading does not undermine incentives for countries to cut their own domestic emissions. -- Joint implementation (JI) projects will offer "emissions reduction units" for financing projects in other developed countries. A joint implementation project could work like this: Country A faces high costs for reducing domestic emissions, so it invests in low-emissions technologies for a new power plant in Country B (very likely an economy in transition). Country A gets credit for reducing emissions (at a lower cost than it could achieve domestically), Country B receives foreign investment and advanced technologies, and global greenhouse gas emissions are reduced: a "win-win-win" scenario. Not only governments, but businesses and other private organizations will be able to participate directly in these projects. Some aspects of this approach have already been tested under the Convention through a voluntary programme for "Activities Implemented Jointly". Reporting rules, a monitoring system, institutions, and project guidelines must still be adopted. Not only must this infrastructure establish the system's credibility, but it must ensure that JI projects transfer appropriate and current technology, avoid adverse social and environmental impacts, and avoid distorting the local market. -- A Clean Development Mechanism will provide credit for financing emissions-reducing or emissions-avoiding projects in developing countries. This promises to be an important new avenue through which governments and private corporations will transfer clean technologies and promote sustainable development. Credit will be earned in the form of "certified emissions reductions". Whereas joint implementation and emissions trading merely shift around the pieces of the industrial countries' overall 5% target, the CDM involves emissions in developing countries (which do not have targets). This in effect increases the overall emissions cap. Verification is therefore particularly important for this mechanism. The Protocol already details some of the ground rules. The CDM will be governed by the Parties through an Executive Board, and reductions will be certified by one or more independent organizations. To be certified, a deal must be approved by all involved parties, demonstrate a measurable and long-term ability to reduce emissions, and promise reductions that would be additional to any that would otherwise occur. A share of the proceeds from CDM projects will be used to cover administrative expenses and to help the most vulnerable developing countries meet the costs of adapting to climate change impacts. Again, the operational guidelines must still be worked out.
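The arithmetic behind targets, banking, and trading can also be pictured with a short sketch in Python. This is a simplified illustration under stated assumptions -- the figures are invented, and it ignores the detailed accounting rules that, as noted above, governments still have to agree: a country's allowed emissions for 2008-2012 are taken as its 1990 baseline adjusted by its target and summed over the five years, and anything below that amount is treated as a surplus it might bank or sell.

    # Minimal sketch of first-commitment-period accounting (illustrative only;
    # figures are invented and the actual trading/banking rules were still
    # being negotiated when this booklet was written).
    def allowed_emissions(base_year_1990, target_change, years=5):
        # target_change examples: -0.08 for an 8% cut, 0.0 for stabilization,
        # +0.10 for a permitted 10% increase
        return base_year_1990 * (1 + target_change) * years

    def surplus_or_shortfall(base_year_1990, target_change, actual_2008_2012):
        """Positive: a surplus that could be banked or traded as credits.
        Negative: emissions above the country's assigned amount."""
        return allowed_emissions(base_year_1990, target_change) - sum(actual_2008_2012)

    # Hypothetical country: 100 Mt CO2-equivalent in 1990, an 8% reduction target,
    # and actual emissions of 88-92 Mt in each year of 2008-2012.
    result = surplus_or_shortfall(100.0, -0.08, [92, 91, 90, 89, 88])
    print(f"Surplus (+) or shortfall (-): {result:.1f} Mt CO2-equivalent")

Seen this way, emissions trading simply moves such surpluses from one country to another, banking carries them forward into a later commitment period, and joint implementation and the CDM generate comparable credits from projects undertaken abroad.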
Conclusion: The 21st century and beyond Climate change would have lasting consequences. One giant asteroid came along 65 million years ago, and that was it for the dinosaurs. In facing up to man-made climate change, human beings are going to have to think in terms of decades and centuries. The job is just beginning. Many of the effects of climate shifts will not be apparent for two or three generations. In the future, everyone may be hearing about -- and living with -- this problem. The Framework Convention takes this into account. It establishes institutions to support efforts to carry out long-term commitments and to monitor long-term efforts to minimize -- and adjust to -- climate change. The Conference of the Parties, in which all states that have ratified the treaty are represented, is the Convention's supreme body. It met for the first time in 1995 and continues to meet on a regular basis to promote and review the implementation of the Convention. The Conference of the Parties is assisted by two subsidiary bodies (or committees), one for scientific and technological advice and the other for implementation. It can establish other bodies as well, whether temporary or permanent, to help it with its work. It can also strengthen the Convention, as it did in Kyoto in 1997. The Protocol's five per cent cut may seem a modest start, but given the rise in emissions that would otherwise be expected -- and remember that emissions in a number of developed countries have risen steadily since the 1990 base year -- many countries are going to have to make a significant effort to meet their commitment. The Kyoto Protocol makes an important promise: to reduce greenhouse gases in developed countries by the end of the first decade of the new century. It should be judged a success if it arrests and reverses the 200-year trend of rising emissions in the industrialized world and hastens the transition to a climate-friendly global economy. BOX: What is the greenhouse effect? In the long term, the earth must shed energy into space at the same rate at which it absorbs energy from the sun. Solar energy arrives in the form of short-wavelength radiation. Some of this radiation is reflected away by the earth's surface and atmosphere. Most of it, however, passes straight through the atmosphere to warm the earth's surface. The earth gets rid of this energy (sends it back out into space) in the form of long-wavelength, infra-red radiation. Most of the infra-red radiation emitted upwards by the earth's surface is absorbed in the atmosphere by water vapour, carbon dioxide, and the other naturally occurring "greenhouse gases". These gases prevent energy from passing directly from the surface out into space. Instead, many interacting processes (including radiation, air currents, evaporation, cloud-formation, and rainfall) transport the energy high into the atmosphere. From there it can radiate into space. This slower, more indirect process is fortunate for us, because if the surface of the earth could radiate energy into space unhindered, the earth would be a cold, lifeless place -- a bleak and barren planet rather like Mars. By increasing the atmosphere's ability to absorb infra-red energy, our greenhouse gas emissions are disturbing the way the climate maintains this balance between incoming and outgoing energy.
A doubling of the concentration of long-lived greenhouse gases (which is projected to occur early in the next century) would, if nothing else changed, reduce the rate at which the planet can shed energy into space by about 2 per cent. Energy cannot simply accumulate. The climate somehow will have to adjust to get rid of the extra energy -- and while 2 per cent may not sound like much, over the entire earth that amounts to trapping the energy content of some 3 million tons of oil every minute. Scientists point out that we are altering the energy "engine" that drives the climate system. Something has to change to absorb the shock. Published by the United Nations Environment Programme (UNEP) and the Climate Change Secretariat (UNFCCC). Revised in September 1999. This booklet is intended for public information purposes only and is not an official document. Permission is granted to reproduce or translate the contents giving appropriate credit. For more information, please contact: United Nations Environment Programme, Information Unit for Conventions (UNEP/IUC), International Environment House, Geneva, C.P. 356, 1219 Châtelaine, Switzerland, or [email protected]; or the Climate Change Secretariat, PO Box 260124, D-53153 Bonn, Germany.
http://unfccc.int/cop5/convkp/begconkp.html
Historical Sketch of the California Indians
In 1836, Texas revolted against the Republic of Mexico and declared its independence. While there was strong support for annexation into the United States, this was delayed by the growing antagonism between slave and non-slave states. Annexation came, however, in December 1845. At this point, the United States attempted to negotiate a purchase of New Mexico and California from Mexico; but this was unsuccessful. In 1846, the United States declared war on Mexico. While the war was controversial at home, the US proceeded vigorously and, by September 1847, United States troops entered Mexico City and occupied it. A peace settlement was finally secured in the form of the Treaty of Guadalupe Hidalgo, on February 2, 1848. Mexico ceded all of its territories in the Southwest, including Alta California. An important condition of this treaty was that Mexican citizens who chose to remain in the ceded territories would become American citizens and their rights would be respected. Since Mexican citizenship had been granted to indigenous people in the treaty with Spain, this included all indigenous people within these territories. In point of fact, the United States military had already seized control of Alta California in 1846 and had established its base of operations in Monterey. After Mexico's cession of territories, Anglo-Europeans and Americans moved quickly to promote California statehood, which was officially granted by 1850. California entered the Union as a non-slave state but, as we will see, only Black Africans were considered relevant to the Union's slavery issue. Gold was discovered at Sutter's mill, in Coloma, by James Marshall, in January 1848. Sutter and Marshall moved rapidly to secure their claim on the gold-bearing territories and they did this by attempting to negotiate a treaty with the local Indians, the Nisenan Maidu. This was one of the first treaties attempted with Indians in California. However, when Sutter filed this treaty with the new American military government, in Monterey, he was told that "the United States government did not recognize the right of Indians to lease, sell, or rent their lands." By the time that California had become a state, in September 1850, the rush for gold had brought hundreds of thousands of people into the territory and California Indians, for the first time ever, had become a minority. Also, for the first time ever, the entire population of Indians was threatened. This was no infiltration from the Pacific Coast inland; it was a pervasive, aggressive appropriation of the entire territory and all of its resources. As easy gold strikes were depleted, people turned to farming, ranching, or logging. A multitude of projects blossomed. The intruding population of gold-seeking miners was hostile toward the Indians, except where they could secure Indian labor for their mines. The wave of Gold-Rush immigration brought the usual burden of European diseases, to which the indigenous population had no immunity, but it also brought environmental degradation on an unprecedented scale. Rivers that had provided a clear entry to spawning salmon since time immemorial were becoming so choked with debris that the salmon were dying off without reproducing. Ranching and lumber operations soon added to the degradation. The environmental impact on the state was overwhelming.
While the pre-mission population of 310,000 indigenous people had dropped to 200,000 during the mission period and dropped to 150,000 or fewer by the end of the Mexican period, it plummeted to less than 30,000 in the twenty years of Gold-Rush California, to 1870. Meanwhile, the population of non-indigenous people, still a minority in 1848, had shot to 700,000 by 1870. In the aftermath, many California tribes were declared extinct and almost none had successfully preserved their cultural ways of life. For most, even the retention of a cultural memory, for traditional purposes and social order, was close to impossible. (Recall that these were oral histories, completely dependent upon survival of old masters and training of young people who would maintain the traditions.) As constituted, the State of California made no recognition of Indians as citizens with civil rights; nor did the new state treat Indians in any way as sovereign people; indeed, a majority of Whites in the State hoped for the early removal of the Indian population. The attitude of California citizens and governors was shaped by a combination of early Spanish assumptions and American beliefs imported from the East, where the concept of "removal" had dominated policy for many decades. But "removal" had always before meant removal-to-the-west, to "Indian Territory." In California, there was nothing to the west! To the State of California, there were really only two concerns regarding the indigenous people of the state -- protection of white settlers and miners from attack and loss of property, and regulation of Indians as a labor force. In April of 1850, the State's first legislature passed An Act for the Government and Protection of Indians. While the State carefully prohibited slavery of any form, it embraced a system in which any "able-bodied Indians were liable to arrest on the complaint of any resident if they could not support themselves or were found loitering or strolling about or were leading an immoral or profligate course of life. If it was determined by proper authority that an Indian was a vagrant, he or she could be hired out within twenty-four hours for the highest price for any term not exceeding four months." (Rawls, 86) In effect, all Indians, including (perhaps, especially) children, faced indentured servitude effected through a simple procedure of arrest and assignment through any local justice-of-the-peace. Once indentured, the term limitation was easily (and always) exceeded. The result was a profitable "slave trade" in able-bodied Indian men, women, and children throughout Northern California. Children were readily bought and sold, for household work; and women were purchased for both household work and sexual liaisons. The Federal government, however, had a long-standing relationship with Indians throughout the growing United States, and it accepted some degree of responsibility for their safety and well-being. Since the Mexican administration of Alta California had at least technically extended citizenship to all Indians, this placed the Federal government in a position of opposition to State policy. In consequence, State and Federal relationships with Indians were always at odds, and the Federal role of protection was always difficult to realize across the great distance separating California from Washington. Several Indian sub-agencies had already been created through the military administration of California prior to statehood.
However, in 1851, new Federal agents were sent to make treaties with the Indians. Three agents were selected for California; these were Redick McKee, George W. Barbour, and O. M. Wozencraft. Unfortunately, there was no appreciation, in the East, for the number of tribes and tribelets resident in California or for the multitude of languages spoken there; little of the previous Federal experience with Indians was relevant to making treaties in California. And furthermore, the treaties attempted an innovation in Federal Indian policy. Rather than being "peace treaties" that attempted to guarantee safe and non-hostile removal to other lands, these treaties attempted to locate reservations of land within the State itself. While the three agents "successfully" negotiated eighteen treaties with groups of California Indians, including substantial reservation lands for their occupation and more substantial surrender of traditional lands for white settlement, later studies have shown that their lack of experience with California Indians was telling. Heizer and Kroeber, a century later, reported that of the 139 signatory groups, 67 are identifiable as tribelets, 45 are merely village names, 14 are duplicates of names heard and spelled somewhat differently without the commissioners being aware of the fact, and 13 are either unidentifiable or personal names. Completed early in 1852, the treaties went to the United States Senate for ratification in July and ratification was denied, based on the overwhelming strength of opposition coming from the State of California itself. The treaties had set aside eighteen reserves of land for the exclusive use of the Indian groups (a total of 11,700 square miles) and had promised various kinds of Federal aid (school, farming instruction and equipment, seed, cloth, etc.) as well as specific rights to maintain traditional hunting and fishing practices. In proportion, the amount of land surrendered to White occupation and use was huge; but Californians were quick to argue that the land reserved was too much and too good for use of indigenous people. The persistent view of Californians was that indigenous people possessed no culture worthy of any claim to habitable land and that they should be disposed of in any convenient way. The Indian tribes were never informed of the Senate's decision against ratification and an unusual injunction of secrecy kept the treaty documents out of public scrutiny until 1905. With no legal treaties, however, the Federal government was still left with the problem of protecting the indigenous population of California; and it was becoming extremely clear that the Indians would be exterminated if nothing was done. The Federal solution, taken by Congress in 1853 and 1855, was to establish seven military reservations where Indians could be placed, isolated from contact with Whites, fed, and trained to become farmers and stock growers. The first of these was located at Tejon Pass in 1853, and the last was located in San Diego County as the Mission Indian reservation, in 1887. Among these was a reservation in Hoopa Valley, established in 1864, under the guise of a "treaty of peace and friendship between the United States Government and the Hoopa, South Fork, Redwood, and Grouse Creek Indians." But the Hoopa military reservation was actually established as a part of the same Congressional program and was not respected by the Federal government as a treatied reservation granting sovereignty to the Hupa people. 
No treaties were ever successfully negotiated and ratified between the Federal government and the indigenous people of California, though the Federal government continued to struggle with the problem of protecting Indian rights. In 1928, after the failed treaties had finally been uncovered, an act of Congress allowed Indians to sue the Federal government for the lost compensation involved in the 18 unratified treaties. The basis of compensation would be the reservation lands promised, not the vast amount of lands surrendered. The suit was prosecuted by the Attorney General for the State of California and was settled in 1944 with a total award of $17,053,941. However, the Federal government claimed to have spent $12,029,099 for the protection of California Indians and deducted this amount from the award. In 1950 Congress authorized a payment of $150 to each person "on the corrected and updated roster of California Indians prepared under the original provisions of the act." In many ways, it would seem that the Indians' plight was given up to fate by the early 1860s and the Federal system of establishing reservations withdrew into a skeleton operation that was mainly aimed at merely incarcerating Indians "for their own good." By the 1870s, the Indian population of California had almost hit bottom and the majority of Anglo-European Californians no longer viewed Indians as a problem. Indians had very largely disappeared from view. Ironically, the great Indian Wars of the Plains were just beginning and would continue until 1890. The Rush to California had left the majority of American Indians isolated on the Prairies and Plains, in the old "Indian Territory." The dissection of land west of the Mississippi was triggered by the Homestead Act of 1862 and was further accelerated by gold discoveries in South Dakota and Colorado. As the reserves of Indian land in the West were concentrated and carved apart and as Indians were moved from one place to another to meet the convenience of American farmers and cattlemen, the American public finally began to take note of the Indians' fate. On the eve of the beginnings of serious anthropological study and reconstruction of indigenous cultures, a new social and political era began for American Indians. It was the era of reform movements. Reformers accepted reservation life as a fact and, equally, accepted the fact that only reservations in relatively desolate areas and on useless land would be tolerable to the majority of White settlers. It was becoming obvious that American Indians would not be able to survive in their traditional lifeways. Environmental degradation in California had proceeded so far that hunting and gathering had become impossible in most areas of the state. Reformers naturally assumed that the only route to long-term survival of Native Americans was through cultural assimilation and, in their minds, this meant the destruction of tribal authority and culture. This position remained the official policy of the Bureau of Indian Affairs well into the Twentieth Century. Cultural assimilation meant teaching Indians to embrace Christianity and to become farmers who could raise more than needed for subsistence and could, consequently, sell their over-production on the open market for a profit. Ultimately, this meant destruction of tribal authority and sovereignty; Indians were to become individual citizens of the United States.
Christian missionaries moved into Indian enclaves and onto the reservations; indeed, the Christian denominations competed with each other to win Native American souls to their own beliefs. At the same time and very much in the understanding that assimilation was possible only for the very young, the BIA oversaw creation of Indian schools. Most of these were boarding schools which provided separation of Indian children from their families. Carlisle Indian Industrial School, founded in 1879 by Richard Pratt, a staunch advocate of immediate assimilation, set the process in motion. Carlisle was followed by boarding schools at Santa Fe, Carson, and Phoenix, all in 1890, as well as others, later on. Calling these institutions "reforms" remains ironic since, for a society whose own Constitution provided for freedom of speech and religion, the BIA was actually overseeing a massive experiment in brainwashing in which Native American religions were deemed illegal and in which little children were forbidden to speak their own languages. The boarding school project became a scandalous incarceration of at least one-fourth of the Indian children who grew up from the 1890s to the 1930s, depriving them of family relationships and warmth as well as access to their cultural heritage. Others conceived of the route to assimilation as a political process that would inevitably require termination of tribal sovereignty and deliverance of Native Americans to the laws of the United States. As early as 1871, an amendment to the annual Indian Appropriations Bill legally revoked tribal sovereignty and placed Native Americans under jurisdiction of the United States, blocking further treatment as independent nations. The Federal government could now simply legislate for Native American communities; and the implicit message was that Indians must make swift progress toward participating in this process by becoming citizens. It is within this framework that one must view the General Allotment Act of 1887, also called the Dawes Severalty Act after its sponsor, Senator Henry L. Dawes of Massachusetts. While the Dawes Act allowed the President to move slowly in selecting Native American reservations that were ripe for allotment, its practical implications were ominous. Each adult head of family was allotted 160 acres; single adults were allotted 80 acres; and single minors were allotted 40 acres. While Indians could choose their own land, the President had the right to assign it within four years. Individuals who accepted allotment became citizens and fell under the laws of the state in which they resided. The allotment was given within a twenty-five-year trust which prevented sale. While the Dawes Act seemed to provide exactly the conveyance from tribalism to citizenship that reformers had wanted, it also provided the legal key to acquisition of Indian land that White settlers wanted. The final provision of the Dawes Act was that "surplus land," all remaining reservation land that had not been allotted to individual Indians living on the reservation, could be sold to settlers. In 1881, Indians had owned 155,632,312 acres of land on reservations. As allotment proceeded and surplus land was sold out to non-Indians, this figure was reduced to 104,319,349 acres, in 1890, and 77,865,373 acres, in 1900. While, initially, Indians were prohibited from selling or leasing their allotted lands, these prohibitions eroded away rapidly.
They had been put in place in order to guarantee that allotted Indians would move toward self-sufficiency through agriculture or cattle grazing. However, much of the land was useless or Indians were poorly prepared. It was in their own immediate interest, and definitely in the interests of White farmers, ranchers, or settlers, to lease or sell their land. Hence, even more Indian land disappeared, in the interests of immediate survival. By 1907, even the Five Civilized Tribes, initially exempted from allotment, had passed through the process, lost most of their promised Indian Territory, and become citizens of the newly created State of Oklahoma. In California, allotment had a far smaller effect since the number of reservations was small in the 1880s. In fact, the problem was quite the opposite; California Indians lacked treaties and reservations and, for that matter, much attention from the Federal government. Many of California's Indians, especially in the southern portion of the state, were living in small bands, attempting to survive through agriculture, residing on public or private lands either by permission, habit, or neglect. When Americans took an interest in the land, the Indians were simply evicted, usually by force and with no regard for any improvements they had made. No single person is more important to Indian reform in California than Helen Hunt Jackson. Born in 1830 to an academic family at Amherst College, Massachusetts, and only recently remarried to a Colorado banker, William Jackson, Helen Hunt Jackson became a passionate advocate for the Indians, in 1879, when she learned about the plight of the Ponca Indians in South Dakota. In 1881, her book A Century of Dishonor told the sad story of the Ponca, and Jackson used it to lobby actively for reformed Federal Indian policies. With the successful completion of these efforts and an invitation to write about California in Century Magazine, Jackson arrived in Los Angeles in 1881. She traveled widely in Southern California and what she found there was the wreckage of the Mission Indians, remnants of all the Southern California tribes who had been missionized, secularized, and then abandoned to mere survival. In 1882, Helen Hunt Jackson and Abbot Kinney were appointed special Federal agents and were assigned the task of visiting Mission Indians with the purpose of locating lands in the public domain that could be designated as reservations for them. It was during this period that Jackson wrote her protest novel, Ramona, dramatizing the tragic treatment of the Mission Indians. Jackson and Kinney filed a powerful report with the Commissioner of Indian Affairs. This represented a detailed study of Indians in the three southernmost counties and especially undertook an appraisal of the dismal condition of their claims to the lands that they had pastured and farmed since the era of Mexican secularization. Their final report and recommendations were submitted to U. S. Indian Commissioner Hiram Price in January 1884; however, legislation based on their recommendations failed to pass Congress. While Helen Hunt Jackson died, in 1885, her efforts were carried onward by a collection of reform groups, including the Women's National Indian Association, the Indian Rights Association, and the Lake Mohonk Conference. Thanks to their persistence and the long process of re-initiating legislation annually, the Act for the Relief of the Mission Indians in the State of California was finally passed in January 1891.
Congress passed enabling legislation early in 1892. The actual tasks of surveying and exchanging land and the granting of legal titles lingered on into the Twentieth Century. One final twist of White-Indian relations throughout the period lies in a movement of the 1920s to transfer authority and responsibility out of Federal hands and into State hands. The movement became focused and intensified after passage of the California Indian Jurisdiction Act in 1928, and the transfer from Federal trust status to State jurisdiction became known, ironically, as "termination." Over the thirty years, from 1928 to 1958, there were aggressive attempts made by both Federal and State agencies to terminate California Indian reservations and rancherias. In 1958, Congress passed the California Indian "rancheria bill" which allowed forty-one rancherias to voluntarily leave Federal trust status and enter the status of "fee patent lands" under state jurisdiction. Only about five other rancherias have voluntarily terminated since 1958. In all, about half of California's reservations and rancherias remain under Federal trusts.
Rawls, James J. Indians of California: The Changing Image (University of Oklahoma Press, 1984).
Hurtado, Albert L. Indian Survival on the California Frontier (Yale University Press, 1988).
Forbes, Jack. Native Americans of California and Nevada (Naturegraph Publishers, 1982).
Heizer, Robert F. (ed.). Federal Concern about Conditions of California Indians, 1853-1913 (Ballena Press, 1979).
Prucha, Francis Paul. Documents of United States Indian Policy (University of Nebraska Press, 1990).
http://mojavedesert.net/california-indian-history/04.html
Mexico
The Mexican wars of independence (1810-21) left a legacy of economic stagnation that persisted until the 1870s. Political instability and foreign invasion deterred foreign investment, risk-taking, and innovation. Most available capital left with its Spanish owners following independence. Instead of investing in productive enterprises and thereby spurring economic growth, many wealthy Mexicans converted their assets into tangible, secure, and often unproductive property. The seeds of economic modernization were laid under the restored Republic (1867-76) (see The Restoration, 1867-76, ch. 1). President Benito Juárez (1855-72) sought to attract foreign capital to finance Mexico's economic modernization. His government revised the tax and tariff structure to revitalize the mining industry, and it improved the transportation and communications infrastructure to allow fuller exploitation of the country's natural resources. The government let contracts for construction of a new rail line northward to the United States, and it completed the commercially vital Mexico City-Veracruz railroad, begun in 1837. Protected by high tariffs, Mexico's textile industry doubled its production of processed items between 1854 and 1877. But overall, manufacturing grew only modestly, and economic stagnation continued. During the Porfiriato (1876-1910), however, Mexico underwent rapid and sustained growth, and laid the foundations for a modern economy. Taking "order and progress" as his watchwords, President José de la Cruz Porfirio Díaz established the rule of law, political stability, and social peace, which brought the increased capital investment that would finance national development and modernization. Rural banditry was suppressed, communications and transportation facilities were modernized, and local customs duties that had hindered domestic trade were abolished. Revolution and Aftermath The Mexican Revolution (1910-20) severely disrupted the Mexican economy, erasing many of the gains achieved during the Porfiriato. The labor force declined sharply, with the economically active share of the population falling from 35 percent in 1910 to 31 percent in 1930. Between 1910 and 1921, the population suffered an overall net decline of 360,000 people. The livestock supply was severely depleted, as thousands of cattle were lost to the depredations of rival militias. Cotton, coffee, and sugarcane went unharvested as workers abandoned the fields either to join or flee the fighting. The result was a precipitous drop in agricultural output. The disruption of communications and rail transportation made distribution unreliable, prompting further reductions in the production of perishable goods. As agricultural and manufacturing output declined, black markets flourished in the major cities. The banking system was shattered, public credit disappeared, and the currency was destroyed. The mining sector suffered huge losses, with gold production falling some 80 percent between 1910 and 1916, and silver and copper output each declining 65 percent. The Great Depression The Great Depression brought Mexico a sharp drop in national income and internal demand after 1929, challenging the country's ability to fulfill its constitutional mandate to promote social equity. Still, Mexico did not feel the effects of the Great Depression as directly as some other countries did. In the early 1930s, manufacturing and other sectors serving the domestic economy began a slow recovery.
The upturn was facilitated by several key structural reforms, notably the railroad nationalization of 1929 and 1930, the nationalization of the petroleum industry in 1938, and the acceleration of land reform, first under President Emilio Portes Gil (1928-30) and then under President Lázaro Cárdenas (1934-40) in the late 1930s. To foster industrial expansion, the administration of Manuel Ávila Camacho (1940-46) in 1941 reorganized the National Finance Bank (Nacional Financiera--Nafinsa), which had originally been created in 1934 as an investment bank. During the 1930s, agricultural production also rose steadily, and urban employment expanded in response to rising domestic demand. The government offered tax incentives for production directed toward the home market. Import-substitution industrialization (see Glossary) began to make a slow advance during the 1930s, although it was not yet official government policy. Postwar Economic Growth Mexico's inward-looking development strategy produced sustained economic growth of 3 to 4 percent and modest 3 percent inflation annually from the 1940s until the late 1960s. The government fostered the development of consumer goods industries directed toward domestic markets by imposing high protective tariffs and other barriers to imports. The share of imports subject to licensing requirements rose from 28 percent in 1956 to an average of more than 60 percent during the 1960s and about 70 percent in the 1970s. Industry accounted for 22 percent of total output in 1950, 24 percent in 1960, and 29 percent in 1970. The share of total output arising from agriculture and other primary activities declined during the same period, while services stayed constant. The government promoted industrial expansion through public investment in agricultural, energy, and transportation infrastructure. Cities grew rapidly during these years, reflecting the shift of employment from agriculture to industry and services. The urban population increased at a high rate after 1940 (see Urban Society, ch. 2). Growth of the urban labor force exceeded even the growth rate of industrial employment, with surplus workers taking low-paying service jobs. In the years following World War II, President Miguel Alemán Valdés's (1946-52) full-scale import-substitution program stimulated output by boosting internal demand. The government raised import controls on consumer goods but relaxed them on capital goods, which it purchased with international reserves accumulated during the war. The government progressively overvalued the peso to reduce the costs of imported capital goods and expand productive capacity, and it spent heavily on infrastructure. By 1950 Mexico's road network had expanded to 21,000 kilometers, of which some 13,600 were paved. Mexico's strong economic performance continued into the 1960s, when GDP growth averaged about 7 percent overall and about 3 percent per capita. Consumer price inflation averaged only 3 percent annually. Manufacturing remained the country's dominant growth sector, expanding 7 percent annually and attracting considerable foreign investment. Mining grew at an annual rate of nearly 4 percent, trade at 6 percent, and agriculture at 3 percent. By 1970 Mexico had diversified its export base and become largely self-sufficient in food crops, steel, and most consumer goods. Although its imports remained high, most were capital goods used to expand domestic production.
Deterioration in the 1970s Although the Mexican economy maintained its rapid growth during most of the 1970s, it was progressively undermined by fiscal mismanagement and a resulting sharp deterioration of the investment climate. The GDP grew more than 6 percent annually during the administration of President Luis Echeverría Álvarez (1970-76), and at about a 6 percent rate during that of his successor, José López Portillo y Pacheco (1976-82). But economic activity fluctuated wildly during the decade, with spurts of rapid growth followed by sharp depressions in 1976 and 1982. Fiscal profligacy combined with the 1973 oil shock to exacerbate inflation and upset the balance of payments. Moreover, President Echeverría's leftist rhetoric and actions--such as abetting illegal land seizures by peasants--eroded investor confidence and alienated the private sector. The balance of payments disequilibrium became unmanageable as capital flight intensified, forcing the government in 1976 to devalue the peso by 45 percent. The action ended Mexico's twenty-year fixed exchange rate. Although significant oil discoveries in 1976 allowed a temporary recovery, the windfall from petroleum sales also allowed continuation of Echeverría's destructive fiscal policies. In the mid-1970s, Mexico went from being a net importer of oil and petroleum products to a significant exporter. Oil and petrochemicals became the economy's most dynamic growth sector. Rising oil income allowed the government to continue its expansionary fiscal policy, partially financed by higher foreign borrowing. Between 1978 and 1981, the economy grew more than 8 percent annually, as the government spent heavily on energy, transportation, and basic industries. Manufacturing output expanded modestly during these years, growing by 9 percent in 1978, 9 percent in 1979, and 6 percent in 1980. This renewed growth rested on shaky foundations. Mexico's external indebtedness mounted, and the peso became increasingly overvalued, hurting nonoil exports in the late 1970s and forcing a second peso devaluation in 1980. Production of basic food crops stagnated, forcing Mexico in the early 1980s to become a net importer of foodstuffs. The portion of import categories subject to controls rose from 20 percent of the total in 1977 to 24 percent in 1979. The government raised tariffs concurrently to shield domestic producers from foreign competition, further hampering the modernization and competitiveness of Mexican industry. 1982 Crisis and Recovery The macroeconomic policies of the 1970s left Mexico's economy highly vulnerable to external conditions. These turned sharply against Mexico in the early 1980s, and caused the worst recession since the 1930s. By mid-1981, Mexico was beset by falling oil prices, higher world interest rates, rising inflation, a chronically overvalued peso, and a deteriorating balance of payments that spurred massive capital flight. This disequilibrium, along with the virtual disappearance of Mexico's international reserves--by the end of 1982 they were insufficient to cover three weeks' imports--forced the government to devalue the peso three times during 1982. The devaluation further fueled inflation and prevented short-term recovery. The devaluations depressed real wages and increased the private sector's burden in servicing its dollar-denominated debt. Interest payments on long-term debt alone were equal to 28 percent of export revenue. 
Cut off from additional credit, the government declared an involuntary moratorium on debt payments in August 1982, and the following month it announced the nationalization of Mexico's private banking system. By late 1982, incoming President Miguel de la Madrid had to reduce public spending drastically, stimulate exports, and foster economic growth to balance the national accounts. Recovery was extremely slow to materialize, however. The economy stagnated throughout the 1980s as a result of continuing negative terms of trade, high domestic interest rates, and scarce credit. Widespread fears that the government might fail to achieve fiscal balance and have to expand the money supply and raise taxes deterred private investment and encouraged massive capital flight that further increased inflationary pressures. The resulting reduction in domestic savings impeded growth, as did the government's rapid and drastic reductions in public investment and its raising of real domestic interest rates to deter capital flight. Mexico's GDP grew at an average rate of just 0.1 percent per year between 1983 and 1988, while inflation stayed extremely high (see table 7, Appendix). Public consumption grew at an average annual rate of less than 2 percent, and private consumption not at all. Total investment fell at an average annual rate of 4 percent and public investment at an 11 percent pace. Throughout the 1980s, the productive sectors of the economy contributed a decreasing share to GDP, while the services sectors expanded their share, reflecting the rapid growth of the informal economy. De la Madrid's stabilization strategy imposed high social costs: real disposable income per capita fell 5 percent each year between 1983 and 1988. High levels of unemployment and underemployment, especially in rural areas, stimulated migration to Mexico City and to the United States. By 1988 inflation was at last under control, fiscal and monetary discipline attained, relative price adjustment achieved, structural reform in trade and public-sector management underway, and the preconditions for recovery in place. But these positive developments were inadequate to attract foreign investment and return capital in sufficient quantities for sustained recovery. A shift in development strategy became necessary, predicated on the need to generate a net capital inflow. In April 1989, President Carlos Salinas de Gortari announced his government's national development plan for 1989-94, which called for annual GDP growth of 6 percent and an inflation rate similar to those of Mexico's main trading partners. Salinas planned to achieve this sustained growth by boosting the investment share of GDP and by encouraging private investment through denationalization of state enterprises and deregulation of the economy. His first priority was to reduce Mexico's external debt; in mid-1989 the government reached agreement with its commercial bank creditors to reduce its medium- and long-term debt. The following year, Salinas took his next step toward higher capital inflows by lowering domestic borrowing costs, reprivatizing the banking system, and broaching the idea of a free-trade agreement with the United States. These announcements were soon followed by increased levels of capital repatriation and foreign investment. After rising impressively during the early years of Salinas's presidency, the growth rate of real GDP began to slow during the early 1990s.
During 1993 the economy grew by a negligible amount, but growth rebounded to almost 4 percent during 1994, as fiscal and monetary policy were relaxed and foreign investment was bolstered by United States ratification of the North American Free Trade Agreement (NAFTA). In 1994 the commerce and services sectors accounted for 22 percent of Mexico's total GDP. Manufacturing followed at 20 percent; transport and communications at 10 percent; agriculture, forestry, and fishing at 8 percent; construction at 5 percent; mining at 2 percent; and electricity, gas, and water at 2 percent (see fig. 9). Some two-thirds of GDP in 1994 (67 percent) was spent on private consumption, 11 percent on public consumption, and 22 percent on fixed investment. During 1994 private consumption rose by 4 percent, public consumption by 2 percent, public investment by 9 percent, and private investment by 8 percent. However, the collapse of the new peso in December 1994 and the ensuing economic crisis caused the economy to contract by an estimated 7 percent during 1995. Investment and consumption both fell sharply, the latter by some 10 percent. Agriculture, livestock, and fishing contracted by 4 percent; mining by 1 percent; manufacturing by 6 percent; construction by 22 percent; and transport, storage, and communications by 2 percent. The only sector to register positive growth was utilities, which expanded by 3 percent. By 1996 Mexican government and independent analysts saw signs that the country had begun to emerge from its economic recession. The economy contracted by a modest 1 percent during the first quarter of 1996. The Mexican government reported strong growth of 7 percent for the second quarter, and the Union Bank of Switzerland forecast economic growth of 4 percent for all of 1996. Source: U.S. Library of Congress
http://countrystudies.us/mexico/65.htm
The concept of sustainable development was popularized by the World Commission on Environment and Development in its report "Our Common Future," published in 1987. The Commission defined sustainable development as development that meets the needs of the present without compromising the ability of future generations to meet their own needs. This definition is the one most often cited, but the World Commission also made the following observations:
- Sustainable development requires that overriding priority be given to meeting basic human needs, especially those of the poor, and recognition of the limitations associated with technology and social organizations that impact the capacity of the environment to meet both present and future needs.
- Sustainable development requires the integration of economic and ecological considerations in decision-making.
- Governments must make key national, economic, and sector-specific agencies directly responsible for ensuring that their policies and activities support development that is economically and ecologically sustainable.
- No single blueprint exists for sustainable development, because conditions vary among countries. Each country will have to create its own approach to reflect its needs.
- No quick-fix solutions exist. The journey towards sustainable development is often as important as the end product.
- The outcome will not always leave everyone better off. There will be winners and losers, making achievement of sustainable development difficult.
Key Principles of Sustainable Development
Regarding water management, sustainable development has focused attention on four principles. First, fresh water should be regarded as a finite and vulnerable resource. Effective management links both land and water across the whole of a catchment or groundwater aquifer, and therefore requires a holistic approach in which social and economic development is linked to protection of natural ecosystems. Second, water development and management should be based on a participatory approach, involving users, planners, and policymakers at all levels. This also means that decisions should be taken at the lowest (most basic) appropriate level via open public consultation with, and involvement of, users. Third, because women play a central role globally in the provision, management, and safeguarding of water, they should have more opportunity to participate in planning and managing of water resources. Fourth, water has significant economic value, and thus should be recognized as an economic good. However, it is also essential to recognize the basic right of all humans to have access to safe, drinkable water and sanitation. Pricing water as an economic good will discourage wasteful and environmentally damaging uses of water by encouraging conservation and protection of water. Yet current policies and practices often do not reflect these four principles of sustainable water management. The basic human needs for drinking water and sanitation are not met for many people in various countries. During the 1990s, one billion people lacked an assured supply of good quality water, and 1.7 billion people had no adequate sanitation. Water-related diseases caused about 8 percent of all illnesses in developing countries, affecting two billion people each year. Moreover, most countries do not treat water as an economic good.
And in many countries, water management is fragmented among many sectors and institutions, making it difficult to manage water holistically. Fragmentation also makes it difficult to integrate environmental, economic, and social considerations, or to link water quality to health, the environment, and economic development. Management often over-relies on centralized administration, with few opportunities for local people to participate in planning, management, and implementation. The World Water Council, with headquarters in Marseille, France, was established in 1996 to provide global leadership for sustainable water management. The council promotes a holistic and participatory approach, combining development of new sources of water supply with economic incentives, especially pricing, to encourage water conservation and to discourage wasteful water use practices. By 2001, the council had led preparation of global
Different Perspectives on Sustainable Development
Given the above observations from the World Commission, it is not surprising that many different interpretations have emerged for sustainable development. In developed countries, the main interest has focused upon integrating environmental and economic considerations into decisions about development. Particular emphasis has been given to intergenerational equity, or how to ensure that decisions taken today do not have unreasonably negative effects on future generations. For developed countries, there has been concern that in striving to avoid environmental degradation, decisions do not jeopardize economic competitiveness at a global scale. The perspective of developing countries has been different, with priority usually being on how to meet basic needs of present citizens. Thus, the focus has been on intragenerational equity (i.e., fair treatment for the present generation), in the belief that people whose basic needs are not met will not worry about long-term environmental degradation. Furthermore, to ensure meeting basic needs, developing countries often give priority to achieving economic development. These countries are resentful when developed countries argue they should forego the economic benefits, for example, from cutting down rainforests or damming rivers for hydroelectricity. At the 1992 Earth Summit in Rio de Janeiro, these different interpretations led to major disagreements between representatives of developed and developing countries.
Pros and Cons of Sustainable Development
In the debate over water management approaches, some view sustainable development as a vague and ambiguous concept, leading people to define it to suit their own interests—either economic development or environmental protection. Some suggest that its emphasis on achieving balance between economic development and environmental protection overlooks the importance of ensuring sensitivity to the social and cultural attributes of societies. Others argue that sustainable development imposes the values of Western capitalist systems, and therefore reject it on ideological grounds. Yet supporters of sustainable development argue that ambiguity provides desirable flexibility to customize strategies to reflect the needs and conditions of different countries and societies. Furthermore, its attention to the importance of protecting the environment is viewed as an essential counterbalance to a pattern of decision-making that often gives overriding precedence to economic benefits, regardless of environmental and social costs.
As the World Commission on Environment and Development observed, sustainable development is not a magic formula to guarantee economic prosperity, ecological integrity, and cultural sensitivity. However, it has become a powerful concept, triggering much debate and discussion about the implications of development decisions related to water and other resources, and it has led to much more attention to what constitutes an appropriate balance between economic and environmental considerations.
Gleick, Peter H. "The Changing Water Paradigm: A Look at Twenty-first Century Water Resources Development." Water International 25 (2000): 127–138.
Keating, Michael. The Earth Summit's Agenda for Change: A Plain Language Version of Agenda 21 and the Other Rio Agreements. Geneva, Switzerland: The Centre for Our Common Future, 1993.
Loucks, David P., and John S. Gladwell, eds. "Sustainability Criteria for Water Resource Systems." IHP International Hydrology Series. Cambridge, U.K.: Cambridge University Press.
Serageldin, Ismail. Toward Sustainable Management of Water Resources. Washington, D.C.: The World Bank, 1995.
World Commission on Environment and Development. Our Common Future. New York: Oxford University Press, 1987.
Young, Gordon J., James C. I. Dooge, and John C. Rodda, eds. Global Water Resources Issues. Cambridge, U.K.: Cambridge University Press, 1994.
WORLD SUMMIT ON SUSTAINABLE DEVELOPMENT
In 2002, the city of Johannesburg, South Africa, hosted a 10-year follow-up to the 1992 Earth Summit held in Rio de Janeiro, Brazil. The purpose of this international meeting was to bring together major groups, governments, and the United Nations to take action for sustainable development and to review progress since the 1992 Earth Summit. Known as the World Summit on Sustainable Development, the 2002 Summit focused on five key areas: water and sanitation, energy, health, agriculture, and biodiversity. At the meeting, negotiators for 191 countries agreed to the 71-page Summit Plan of Action, intended to set the world's environmental agenda for the next 10 years. The 2002 action plan included goals for reducing by half the proportion of people without access to proper sanitation by 2015, and similarly reducing by half the proportion of people without access to clean drinking water. More information about this Summit is available at <http://www.waterdome.net>.
WATER GOALS FOR 2025
The main publication from the 1992 Earth Summit in Rio de Janeiro, Brazil, emphasized that fresh water is essential for a variety of activities: drinking, sanitation, agriculture, inland fisheries, industry, transportation, hydroelectricity generation, urban development, recreation, and other endeavors. The prime goal set in 1992 was to ensure that all humans have access to adequate and good quality water and sanitation. The year 2025 was set as a realistic target date to meet those criteria. Various approaches will be required, including:
- protection of the integrity of aquatic ecosystems by anticipating, preventing, and attacking causes of environmental degradation;
- effective water pollution prevention policies and programs;
- mandatory environmental assessment of proposed water projects; and
- full-cost pricing, after ensuring that basic human needs are satisfied.
http://www.waterencyclopedia.com/St-Ts/Sustainable-Development.html
In January 1921, the total sum due was decided by an Inter-Allied Reparations Commission and was set at 269 billion gold marks (2,790 gold marks equalled 1 kilogram of pure gold), about £23.6 billion or about $32 billion (roughly equivalent to $393.6 billion in 2005 US dollars). This was a sum that many economists deemed excessive. Later that year, the amount was reduced to 132 billion marks, which still seemed astronomical to most German observers, both because of the amount itself and because of the terms, which would have required Germany to pay until 1984. However, the Wall Street Crash of 1929 and the onset of the Great Depression resulted in calls for a moratorium. On June 20, 1931, realizing that Austria and Germany were on the brink of financial collapse, President Hoover proposed a one-year world moratorium on reparations and inter-governmental debt payments. Britain quickly accepted this proposal, but it met with stiff resistance and seventeen days of delay from André Tardieu of France. During this delay the deteriorating situation in Germany and renewed fears of hyperinflation resulted in a countrywide run on the banks, draining some $300,000,000. All banks in Germany were for a time closed. The worsening economic distress within Germany resulted in the Lausanne Conference, which voted to cancel reparations. By this time Germany had paid one eighth of the sum required under the Treaty of Versailles. However, the Lausanne agreement was contingent upon the United States agreeing to also defer payment of the war debt owed them by the Western European governments. The plan ultimately failed not because of the U.S. Congress's refusal to go along but because it became irrelevant upon Hitler's rise to power. Within Germany the general public largely saw the reparations as a betrayal. Expecting a treaty based on the widely propagandized Fourteen Points, many groups in Germany, such as Jews, Communists, and Social Democrats, had agitated for peace with the Allies because of their religious or intellectual convictions. With the revealing of the actual treaty terms, these groups within Germany felt betrayed and at the same time became targets of great distrust. The idea became common that these groups had the entire time been aware of the terms of the Versailles treaty, but had secretly colluded with the Allies for personal gain. This phenomenon later became known as the "Stab in the back legend" or Dolchstosslegende, literally "Dagger stab legend". Economists such as Keynes asserted that payment of the reparations would have been economically impossible. However, according to William R. Keylor in "Versailles and International Diplomacy", 'An increase in taxation and reduction in consumption in the Weimar Republic would have yielded the requisite export surplus to generate the foreign exchange needed to service the reparation debt.' However, this export surplus and the corresponding trade deficit for those collecting reparations could have created a politically difficult situation. Indeed, this was one of the causes of the UK General Strike of 1926. It has been argued by some that it is a fallacy to consider the reparations as the primary source of Germany's economic condition from 1919 to 1939. This perspective argues that Germany paid only a small portion of the reparations and that the hyperinflation of the early 1920s was a result of the political and economic instability of the Weimar Republic.
In fact, the occupation of the Ruhr by the French (which began when Germany failed to supply a required delivery of telegraph poles) did more damage to the economy than the reparations payments. Another fallacy is that these reparations were the single cause of the economic condition that saw Hitler's rise to power. Germany was in fact doing remarkably well after its hyperinflation of 1923, and was once more one of the world's largest economies. The economy continued to perform reasonably well until the foreign investments funding the economy, and the loans funding reparations payments, were suddenly withdrawn with the Stock Market Crash of 1929. This collapse was magnified by the volume of loans provided to German companies by US lenders. Even the reduced payments of the Dawes plan were primarily financed through a large volume of international loans. From 1924 onward German officials were "virtually flooded with loan offers by foreigners." When these debts suddenly came due it was as if years of reparations payments were compressed into a few short weeks. Also of note are the ideas of A. J. P. Taylor from his book The Origins of the Second World War, in which he claims that the settlement had been too indecisive: it was harsh enough to be seen as punitive, without being crippling enough to prevent Germany regaining its superpower status, and can thus be blamed for the rise of the Reich under Hitler within decades. Infrastructure damage caused by the retreating German troops was also cited. In her book, Peacemakers: The Paris Peace Conference of 1919 and Its Attempt to End War, Margaret MacMillan described the significance of the claims for France and Belgium: "From the start, France and Belgium argued that claims for direct damage should receive priority in any distribution of reparations. In the heavily industrialized north of France, the Germans had shipped out what they wanted for their own use and destroyed much of the rest. Even as German forces were retreating in 1918, they found time to blow up France's most important coal mine". Belgium, however, received none of the financial reparations promised under the treaty, as France and Britain failed to redistribute any of the payments they received. The monetary reparations also helped to create mass hyperinflation in the German economy, which then led to political upheaval. This led Hitler to the forefront of German politics. After Germany's defeat in World War II, an international conference decided (1953) that Germany would pay the remaining debt only after the country was reunified. Nonetheless, as a show of continued self-effacement, West Germany paid off the principal by 1980. In 1995, after reunification, the new German government announced it would resume payments of the reparations. Germany will finish paying off the Americans in 2010.
http://www.reference.com/browse/wiki/World_War_I_reparations
A hearing impairment or hearing loss is a full or partial decrease in the ability to detect or understand sounds. Caused by a wide range of biological and environmental factors, loss of hearing can happen to any organism that perceives sound. Sound waves vary in amplitude and in frequency. Amplitude is the sound wave's peak pressure variation. Frequency is the number of cycles per second of a sinusoidal component of a sound wave. Loss of the ability to detect some frequencies, or to detect low-amplitude sounds that an organism naturally detects, is a hearing impairment. Hearing sensitivity is indicated by the quietest sound that an individual can detect, called the hearing threshold. In the case of people and some animals, this threshold can be accurately measured by a behavioral audiogram. A record is made of the quietest sound that consistently prompts a response from the listener. The test is carried out for sounds of different frequencies. There are also electro-physiological tests that can be performed without requiring a behavioral response. Normal hearing thresholds are not the same for all frequencies in any species of animal. If different frequencies of sound are played at the same amplitude, some will be loud, and others quiet or even completely inaudible. Generally, if the gain or amplitude is increased, a sound is more likely to be perceived. Ordinarily, when animals use sound to communicate, hearing in that type of animal is most sensitive for the frequencies produced by calls, or, in the case of humans, speech. This tuning of hearing exists at many levels of the auditory system, all the way from the physical characteristics of the ear to the nerves and tracts that convey the nerve impulses of the auditory portion of the brain. A hearing impairment exists when an individual is not sensitive to the sounds normally heard by its kind. In human beings, the term hearing impairment is usually reserved for people who have relative insensitivity to sound in the speech frequencies. The severity of a hearing impairment is categorized according to how much louder a sound must be made over the usual levels before the listener can detect it. In profound deafness, even the loudest sounds that can be produced by the instrument used to measure hearing (audiometer) may not be detected. There is another aspect to hearing that involves the quality of a sound rather than amplitude. In people, that aspect is usually measured by tests of speech discrimination. Basically, these tests require that the sound is not only detected but understood. There are very rare types of hearing impairments which affect discrimination alone.
Types of hearing loss
Hearing loss can be classified as conductive or sensorineural. Conductive hearing loss occurs when sound is not normally conducted through the outer or middle ear or both. Since sound can be picked up by a normally sensitive inner ear even if the ear canal, ear drum, and ear ossicles are not working, conductive hearing loss is often only mild and is never worse than a moderate impairment. Hearing thresholds will not rise above 55-60 dB from outer or middle ear problems alone. Generally, with pure conductive hearing loss, the quality of hearing (speech discrimination) is good, as long as the sound is amplified loud enough to be easily heard.
A conductive loss can be caused by any of the following:
- Ear canal obstruction
- Middle ear abnormalities: tympanic membrane, ossicles
- Inner ear abnormalities: superior canal dehiscence syndrome
A sensorineural hearing loss is due to insensitivity of the inner ear, the cochlea, or to impairment of function in the auditory nervous system. It can be mild, moderate, severe, or profound, to the point of total deafness. Sensorineural hearing loss is classified as a disability under the ADA, and a person who is unable to work because of it may be eligible for disability payments. The great majority of human sensorineural hearing loss is caused by abnormalities in the hair cells of the organ of Corti in the cochlea. There are also very unusual sensorineural hearing impairments that involve the VIIIth cranial nerve, the vestibulocochlear nerve, or the auditory portions of the brain. In the rarest of these sorts of hearing loss, only the auditory centers of the brain are affected. In this situation, central hearing loss, sounds may be heard at normal thresholds, but the quality of the sound perceived is so poor that speech cannot be understood. Most sensory hearing loss is due to poor hair cell function. The hair cells may be abnormal at birth, or damaged during the lifetime of an individual. There are both external causes of damage, like noise trauma and infection, and intrinsic abnormalities, like deafness genes. Sensorineural hearing loss that results from abnormalities of the central auditory system in the brain is called central hearing impairment. Since the auditory pathways cross back and forth on both sides of the brain, deafness from a central cause is unusual.
Long-term exposure to environmental noise
Populations of people living near airports or freeways are exposed to levels of noise typically in the 65 to 75 dB(A) range. If lifestyles include significant outdoor or open window conditions, these exposures over time can degrade hearing. The U.S. EPA and various states have set noise standards to protect people from these adverse health risks. The EPA has identified the level of 70 dB(A) for 24-hour exposure as the level necessary to protect the public from hearing loss and other disruptive effects from noise, such as sleep disturbance, stress-related problems, learning detriment, etc. (EPA, 1974). Noise-induced hearing loss (NIHL) typically is centered at 3000, 4000, or 6000 Hz. As noise damage progresses, damage starts affecting lower and higher frequencies. On an audiogram, the resulting configuration has a distinctive notch, sometimes referred to as a "noise notch." As aging and other effects contribute to higher frequency loss (6-8 kHz on an audiogram), this notch may be obscured and entirely disappear. Louder sounds cause damage in a shorter period of time. Estimation of a "safe" duration of exposure is possible using an exchange rate of 3 dB. As 3 dB represents a doubling of intensity of sound, duration of exposure must be cut in half to maintain the same energy dose. For example, the "safe" daily exposure amount at 85 dB(A), known as an exposure action value, is 8 hours, while the "safe" exposure at 91 dB(A) is only 2 hours (National Institute for Occupational Safety and Health, 1998). Note that for some people, sound may be damaging at even lower levels than 85 dB(A). Exposures to other ototoxins (such as pesticides, some medications including chemotherapy, solvents, etc.) can lead to greater susceptibility to noise damage, as well as causing their own damage. This is called a synergistic interaction.
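To make the exchange-rate arithmetic above concrete, the following minimal Python sketch (the function name and its defaults are illustrative assumptions, not part of any standard) computes the permissible daily exposure time from a sound level, a reference criterion, and an exchange rate. With the 85 dB(A), 8-hour reference and 3 dB exchange rate described above, it reproduces the figures quoted in the text: 8 hours at 85 dB(A), 2 hours at 91 dB(A), and roughly half a minute at 115 dB(A).

# Hypothetical sketch: permissible noise-exposure duration under an exchange-rate rule.
# Assumes an 85 dB(A), 8-hour reference and a 3 dB exchange rate by default, as in the text.
def permissible_hours(level_dba, reference_level=85.0, reference_hours=8.0, exchange_rate=3.0):
    """Return the 'safe' daily exposure duration in hours for a given sound level.

    Every `exchange_rate` dB above the reference level halves the allowed duration;
    every `exchange_rate` dB below it doubles the allowed duration.
    """
    doublings = (level_dba - reference_level) / exchange_rate
    return reference_hours / (2.0 ** doublings)

if __name__ == "__main__":
    for level in (85, 88, 91, 100, 115):
        hours = permissible_hours(level)
        print(f"{level} dB(A): {hours:.4f} h (~{hours * 60:.1f} min)")
    # 85 -> 8 h, 91 -> 2 h, 115 -> about 0.0078 h (roughly 28 seconds).

Under a 5 dB exchange rate of the kind discussed next, and assuming the 90 dB(A), 8-hour criterion that agencies using that rate typically pair with it, permissible_hours(115, 90.0, 8.0, 5.0) gives 0.25 hours, i.e. the 15 minutes cited below.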
Some American health and safety agencies (such as OSHA and MSHA) use an exchange rate of 5 dB. While this exchange rate is simpler to use, it drastically underestimates the damage caused by very loud noise. For example, at 115 dB, a 3 dB exchange rate would limit exposure to about half a minute; the 5 dB exchange rate allows 15 minutes. While OSHA, MSHA, and FRA provide guidelines to limit noise exposure on the job, there is essentially no regulation or enforcement of sound output for recreational sources and environments, such as sports arenas, musical venues, bars, etc. This lack of regulation resulted from the defunding of ONAC, the EPA's Office of Noise Abatement and Control, in the early 1980s. ONAC was established in 1972 by the Noise Control Act and charged with working to assess and reduce environmental noise. Although the Office still exists, it has not been assigned new funding. Most people in the United States are unaware of the presence of environmental sound at damaging levels, or of the level at which sound becomes harmful. Common sources of damaging noise levels include car stereos, children's toys, transportation, crowds, lawn and maintenance equipment, power tools, gun use, and even hair dryers. Noise damage is cumulative; all sources of damage must be considered to assess risk. If one is exposed to loud sound (including music) at high levels or for extended durations (85 dB(A) or greater), then hearing impairment will occur. Sound levels increase with proximity; as the source is brought closer to the ear, the sound level increases. This is why music is more likely to cause damage at the same output when listened to through headphones, as the headphones are in closer proximity to the ear drum than a loudspeaker. With the invention of in-ear headphones, these dangers are increased. Hearing loss can be inherited. Both dominant genes and recessive genes exist which can cause mild to profound impairment. If a family has a dominant gene for deafness it will persist across generations because it will manifest itself in the offspring even if it is inherited from only one parent. If a family has genetic hearing impairment caused by a recessive gene, it will not always be apparent, as it has to be passed on to offspring from both parents. Dominant and recessive hearing impairment can be syndromic or nonsyndromic. Recent gene mapping has identified dozens of nonsyndromic dominant (DFNA#) and recessive (DFNB#) forms of deafness.
- The most common type of congenital hearing impairment in developed countries is DFNB1, also known as Connexin 26 deafness or GJB2-related deafness.
- The most common dominant syndromic forms of hearing impairment include Stickler syndrome and Waardenburg syndrome.
- The most common recessive syndromic forms of hearing impairment are Pendred syndrome, large vestibular aqueduct syndrome, and Usher syndrome.
- The congenital defect microtia can cause full or partial deafness depending upon the severity of the deformity and whether or not certain parts of the inner or middle ear are corrected.
Disease or illness
- Measles may result in auditory nerve damage.
- Meningitis may damage the auditory nerve or the cochlea.
- Autoimmune disease has only recently been recognized as a potential cause for cochlear damage. Although probably rare, it is possible for autoimmune processes to target the cochlea specifically, without symptoms affecting other organs. Wegener's granulomatosis is one of the autoimmune conditions that may precipitate hearing loss.
- Mumps (epidemic parotitis) may result in profound sensorineural hearing loss (90 dB or more), unilateral (one ear) or bilateral (both ears).
- Presbycusis is a progressive hearing impairment accompanying age, typically affecting sensitivity to higher frequencies (above about 2 kHz).
- Adenoids that do not disappear by adolescence may continue to grow and may obstruct the Eustachian tube, causing conductive hearing impairment and nasal infections that can spread to the middle ear.
- AIDS and AIDS-related complex (ARC) patients frequently experience auditory system anomalies.
- HIV (and subsequent opportunistic infections) may directly affect the cochlea and central auditory system.
- Chlamydia may cause hearing loss in newborns to whom the disease has been passed at birth.
- Fetal alcohol syndrome is reported to cause hearing loss in up to 64% of infants born to alcoholic mothers, from the ototoxic effect on the developing fetus plus malnutrition during pregnancy from the excess alcohol intake.
- Premature birth results in sensorineural hearing loss approximately 5% of the time.
- Syphilis is commonly transmitted from pregnant women to their fetuses, and about a third of the infected children will eventually become deaf.
- Otosclerosis is a hardening of the stapes (or stirrup) in the middle ear and causes conductive hearing loss.
- Superior canal dehiscence, a gap in the bone cover above the inner ear, can lead to low-frequency conductive hearing loss, autophony, and vertigo.
Some medications cause irreversible damage to the ear, and are limited in their use for this reason. The most important group is the aminoglycosides (main member gentamicin). Various other medications may reversibly affect hearing. This includes some diuretics, aspirin and NSAIDs, and macrolide antibiotics. Extremely heavy hydrocodone (Vicodin or Lorcet) abuse is known to cause hearing impairment. Commentators have speculated that radio talk show host Rush Limbaugh's hearing loss was at least in part caused by his admitted addiction to narcotic pain killers, in particular Vicodin and OxyContin.
- There can be damage either to the ear itself or to the brain centers that process the aural information conveyed by the ears.
- People who sustain head injury are especially vulnerable to hearing loss or tinnitus, either temporary or permanent.
- Exposure to very loud noise (90 dB or more, such as jet engines at close range) can cause progressive hearing loss. Exposure to a single event of extremely loud noise (such as explosions) can also cause temporary or permanent hearing loss. A typical source of acoustic trauma is an excessively loud music concert.
Quantification of hearing loss
The severity of hearing loss is measured by the degree of loudness, in decibels, that a sound must attain before being detected by an individual. Hearing loss may be ranked as mild, moderate, severe, or profound. It is quite common for someone to have more than one degree of hearing loss (e.g., mild sloping to severe). The following list shows the rankings and their corresponding decibel ranges:
- Mild: for adults, between 25 and 40 dB; for children, between 20 and 40 dB
- Moderate: between 41 and 55 dB
- Moderately severe: between 56 and 70 dB
- Severe: between 71 and 90 dB
- Profound: 90 dB or greater
The quietest sound one can hear at different frequencies is plotted on an audiogram to reflect one's ability to hear at different frequencies.
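As a companion to the rankings above, here is a minimal sketch (the function name and the handling of values below the mild cut-off are illustrative assumptions) that maps a single measured hearing threshold in dB to one of the severity labels listed above, using the adult ranges by default.

# Hypothetical sketch: map a hearing threshold (in dB) to the severity rankings above.
# The text's "severe" and "profound" ranges meet at 90 dB; this sketch assigns exactly
# 90 dB to "profound". Values below the mild cut-off are labeled "within normal limits".
def classify_severity(threshold_db, adult=True):
    mild_floor = 25 if adult else 20  # children are classified as mild from 20 dB
    if threshold_db < mild_floor:
        return "within normal limits"
    if threshold_db <= 40:
        return "mild"
    if threshold_db <= 55:
        return "moderate"
    if threshold_db <= 70:
        return "moderately severe"
    if threshold_db < 90:
        return "severe"
    return "profound"

if __name__ == "__main__":
    for db in (15, 30, 50, 65, 80, 95):
        print(db, "dB ->", classify_severity(db))

Because it is common to have different degrees of loss at different frequencies, such a label is usually applied per frequency, or to an average of several thresholds, rather than to a single number.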
The range of normal human hearing (from the softest audible sound to the loudest comfortable sound) is so great that the audiogram must be plotted using a logarithmic scale. This large normal range, and the different amounts of hearing loss at different frequencies, make it virtually impossible to accurately describe the amount of hearing loss in simple terms such as percentages or the rankings above. The effectiveness of measuring hearing loss as a percentage is debatable, and the practice has been compared to measuring weight in inches. In specific legal situations, however, where decibels of loss are converted via a recognized legal formula, one can derive a standardized "percentage of hearing loss" which is suitable for legal purposes only. Another method for determining hearing loss is the Hearing in Noise Test (HINT). HINT technology was developed by the House Ear Institute, and is intended to measure an ability to understand speech in quiet and noisy environments. Unlike pure-tone tests, where only one ear is tested at a time, HINT evaluates hearing using both ears simultaneously (binaural), as binaural hearing is essential for communication in noisy environments, and for sound localization. Some types of hearing impairment can be treated with custom hearing aids. In addition to hearing aids, there exist cochlear implants of increasing complexity and effectiveness. These are useful in treating mild to profound hearing impairment when the onset follows the acquisition of language, and in some cases in children whose hearing loss came before language was acquired. Recent research shows variations in efficacy, but some promising studies show that if implanted at a very young age, some profoundly impaired children can acquire effective hearing and speech. Future strategies include hair cell regeneration. A 2005 study achieved successful regrowth of cochlear cells in guinea pigs. It is important to note, however, that the regrowth of cochlear hair cells does not imply the restoration of hearing sensitivity, as the sensory cells may or may not make connections with the neurons that carry signals from hair cells to the brain. A 2008 study showed in mice that gene therapy targeting Atoh1 can cause hair cell regrowth and attract neuronal processes. It is expected that a similar treatment will ameliorate hearing in humans.
http://www.otolaryngology.com/hearingloss.html
Political structure: Military alliance
Historical era: World War II
- Anti-Comintern Pact: 25 November 1936
- Pact of Steel: 22 May 1939
- Tripartite Pact: 27 September 1940
- Dissolved: 2 September 1945
The Axis powers (German: Achsenmächte, Italian: Potenze dell'Asse, Japanese: 枢軸国 Sūjikukoku), also known as the Axis alliance, Axis nations, Axis countries, or just the Axis, was the alignment of nations that fought in the Second World War against the Allied forces. The Axis promoted the alliance as a part of a revolutionary process aimed at breaking the hegemony of plutocratic-capitalist Western powers and defending civilization from communism. The Axis grew out of the Anti-Comintern Pact, an anti-communist treaty signed by Germany and Japan in 1936. Italy joined the Pact in 1937. The "Rome–Berlin Axis" became a military alliance in 1939 under the Pact of Steel, with the Tripartite Pact of 1940 leading to the integration of the military aims of Germany and its two treaty-bound allies. At their zenith during World War II, the Axis powers presided over empires that occupied large parts of Europe, Africa, Asia, and the islands of the Pacific Ocean. The war ended in 1945 with the defeat of the Axis powers and the dissolution of the alliance. As with the Allies, membership of the Axis was fluid, with nations joining and leaving the alliance over the course of the war.
Origins and creation
The term "axis" is believed to have been first coined by Hungary's fascist prime minister Gyula Gömbös, who advocated an alliance of Germany, Hungary, and Italy. He worked as an intermediary between Germany and Italy to lessen differences between the two countries to achieve such an alliance. Gömbös' sudden death in 1936 while negotiating with Germany in Munich and the arrival of Kálmán Darányi, a non-fascist successor to him, ended Hungary's initial involvement in pursuing a trilateral axis. The lessening of differences between Germany and Italy led to the formation of a bilateral axis.
Initial proposals of a German-Italian alliance
Italy under Duce Benito Mussolini had pursued a strategic alliance with Germany against France since the early 1920s. Prior to becoming head of government in Italy, Mussolini, as leader of the Italian Fascist movement, had advocated alliance with recently defeated Germany after the Paris Peace Conference of 1919 settled World War I. He believed that Italy could expand its influence in Europe by allying with Germany against France. In early 1923, as a goodwill gesture to Germany, Italy secretly delivered weapons to Germany for use in the German Army that had faced major disarmament under the provisions of the Treaty of Versailles. In September 1923, Mussolini offered German Chancellor Gustav Stresemann a "common policy": he sought German military support against potential French military intervention over Italy's diplomatic dispute with Yugoslavia over Fiume, should an Italian seizure of Fiume result in war between Italy and Yugoslavia. The German ambassador to Italy in 1924 reported that Mussolini saw a nationalist Germany as an essential ally to Italy against France, and hoped to tap into the desire within the German army and the German political right for a war of revenge against France. Italy since the 1920s had identified the year 1935 as a crucial date for preparing for a war against France, as 1935 was the year when Germany's obligations to the Treaty of Versailles were scheduled to expire.
However, Mussolini at this time stressed one important condition that Italy must pursue in an alliance with Germany: that Italy "must ... tow them, not be towed by them". Italian foreign minister Dino Grandi in the early 1930s stressed the importance of "decisive weight", involving Italy's relations between France and Germany, in which he recognized that Italy was not yet a major power, but perceived that Italy did have strong enough influence to alter the political situation in Europe by placing the weight of its support onto one side or another. However, Grandi stressed that Italy must seek to avoid becoming a "slave of the rule of three" in order to pursue its interests, arguing that although substantial Italo-French tensions existed, Italy would not unconditionally commit itself to an alliance with Germany, just as it would not unconditionally commit itself to an alliance with France over conceivable Italo-German tensions. Grandi's attempts to maintain a diplomatic balance between France and Germany were challenged in 1932 by pressure from the French, who had begun to prepare an alliance with Britain and the United States against the then-potential threat of a revanchist Germany. The French government warned Italy that it had to choose whether to be on the side of the pro-Versailles powers or the side of the anti-Versailles revanchists. Grandi responded by stating that Italy would be willing to offer France support against Germany if France gave Italy its mandate over Cameroon and allowed Italy a free hand in Ethiopia. France refused Italy's proposed exchange for support, as it believed Italy's demands were unacceptable and the threat from Germany was not yet immediate. On 23 October 1932, Mussolini declared support for a Four Power Directorate that included Britain, France, Germany, and Italy, which would bring about an orderly treaty revision outside of what he considered the outmoded League of Nations. The proposed Directorate was pragmatically designed to reduce French hegemony in continental Europe and to reduce tensions between the great powers in the short term, buying Italy time to avoid being pressured into a specific war alliance while still benefiting from diplomatic deals on treaty revisions.
Danube alliance, dispute over Austria
In 1932, Gyula Gömbös and the Party of National Unity rose to power in Hungary, and immediately sought alliance with Italy. Gömbös sought to revise Hungary's post-Treaty of Trianon borders, but knew that Hungary alone was not capable of challenging the Little Entente powers, and so sought an alliance with Austria and Italy. Mussolini was elated by Gömbös' offer of alliance with Italy, and both Mussolini and Gömbös cooperated in seeking to win over Austrian Chancellor Engelbert Dollfuss to joining an economic tripartite agreement with Italy and Hungary. During the meeting between Gömbös and Mussolini in Rome on 10 November 1932, the question of Austrian sovereignty came up in regard to the anticipated rise of the Nazi Party to power in Germany. Mussolini was worried over Nazi ambitions towards Austria, and indicated that at least in the short term he was committed to maintaining Austria as a sovereign state. Italy was concerned that a Germany which absorbed Austria would lay claim to the German-populated territories of the South Tyrol (also known as Alto Adige) within Italy, which bordered Austria at the Brenner Pass.
Gömbös responded to Mussolini by saying that as the Austrians primarily identified as Germans, the Anschluss of Austria to Germany was inevitable, and advised that it would be better for Italy to have a friendly Germany on the border of the Brenner Pass than a hostile Germany bent on entering the Adriatic. Mussolini replied by expressing hope that the Anschluss could be postponed as long as possible, until the breakout of a European war that he estimated would begin in 1938. In 1933, Adolf Hitler and the Nazi Party came to power in Germany. The first diplomatic visitor to meet Hitler was Gömbös. In a letter sent within a day of Hitler being appointed Chancellor, Gömbös told the Hungarian ambassador to Germany to remind Hitler "that ten years ago, on the basis of our common principles and ideology, we were in contact via Dr. Scheubner-Richter". Gömbös also told the Hungarian ambassador to inform Hitler of Hungary's intentions "for the two countries to cooperate in foreign and economic policy". Hitler as Nazi leader had advocated an alliance between Germany and Italy since the 1920s. Shortly after being appointed Chancellor, Hitler sent a personal message to Mussolini, declaring "admiration and homage" to him as well as declaring his anticipation of the prospects of German-Italian friendship and even alliance. Hitler was aware that Italy held concerns over potential German land claims on South Tyrol, and assured Mussolini that Germany was not interested in South Tyrol. Hitler in Mein Kampf had declared that South Tyrol was a non-issue considering the advantages that would be gained from a German-Italian alliance. After Hitler's rise to power, the Four Power Directorate proposal by Italy had been looked at with interest by Britain, but Hitler was not committed to it, and Mussolini urged Hitler to consider the diplomatic advantages Germany would gain by breaking out of isolation by entering the Directorate and avoiding an immediate armed conflict. The Four Power Directorate proposal stipulated that Germany would no longer be required to have limited arms and would be granted the right to rearm in stages under foreign supervision. Hitler completely rejected the idea of controlled rearmament under foreign supervision. Mussolini did not trust Hitler's intentions regarding Anschluss, nor Hitler's promise of no territorial claims on South Tyrol. Mussolini informed Hitler that he was satisfied with the presence of the anti-Marxist government of Dollfuss in Austria, and warned Hitler that he was adamantly opposed to Anschluss. Hitler responded contemptuously to Mussolini that he intended "to throw Dollfuss into the sea". With this disagreement over Austria, relations between Hitler and Mussolini steadily became more distant. Hitler attempted to break the impasse with Italy over Austria by sending Hermann Göring to negotiate with Mussolini in 1933 to convince Mussolini to press the Austrian government to appoint members of Austria's Nazis to the government. Göring claimed that Nazi domination of Austria was inevitable and that Italy should accept this, as well as repeating to Mussolini Hitler's promise to "regard the question of the South Tyrol frontier as finally liquidated by the peace treaties". In response to Göring's visit with Mussolini, Dollfuss immediately went to Italy to counter any German diplomatic headway.
Dollfuss claimed that his government was actively challenging Marxists in Austria and that once the Marxists were defeated there, support for Austria's Nazis would decline. In 1934, Hitler and Mussolini met in person for the first time, in Venice. The meeting did not proceed amicably: Hitler demanded that Mussolini compromise on Austria by pressuring Dollfuss to appoint Austrian Nazis to his cabinet, a demand Mussolini flatly refused. In response, Hitler promised that he would accept Austria's independence for the time being, saying that due to the internal tensions in Germany (referring to sections of the Nazi SA that Hitler would soon kill in the Night of the Long Knives), Germany could not afford to provoke Italy. Several weeks after the Venice meeting between Hitler and Mussolini, on 25 July 1934, Austrian Nazis assassinated Dollfuss. Mussolini was outraged, as he held Hitler directly responsible for the assassination, which violated Hitler's promise, made only weeks earlier, to respect Austrian independence. Mussolini responded forcefully to the assassination of Dollfuss by rapidly deploying several army divisions and air squadrons to the Brenner Pass, and warned that a German move against Austria would result in war between Germany and Italy. Hitler responded by both denying responsibility for the Austrian Nazis' assassination of Dollfuss and issuing orders to dissolve all ties between the German Nazi Party and its Austrian branch, which Germany claimed was responsible for the political crisis. Italy effectively abandoned diplomatic relations with Germany and turned to France to challenge Germany's intransigence, signing a Franco-Italian accord to protect Austrian independence. French and Italian military staff discussed possible military cooperation involving a war with Germany should Hitler dare to attack Austria. As late as May 1935, Mussolini spoke of his desire to destroy Hitler. Relations between Germany and Italy recovered due to Hitler's support of Italy's invasion of Ethiopia in 1935, while other countries condemned the invasion and advocated sanctions against Italy.
Development of German-Italian-Japanese alliance
Interest between Germany and Japan in forming an alliance began when Japanese diplomat Oshima Hiroshi visited Joachim von Ribbentrop in Berlin in 1935. Oshima informed von Ribbentrop of Japan's interest in forming a German-Japanese alliance against the Soviet Union. Von Ribbentrop expanded on Oshima's proposal by advocating that the alliance be based in a political context of a pact to oppose the Comintern. The proposed pact was met with mixed reviews in Japan, with a faction of ultra-nationalists within the government supporting the pact while the Japanese Navy and the Japanese Foreign Ministry staunchly opposed it. There was great concern in the Japanese government that such a pact with Germany could alienate Britain, endangering years of a beneficial Anglo-Japanese accord that had allowed Japan to ascend in the international community in the first place. The response to the pact was similarly divided in Germany; while the proposed pact was popular amongst the upper echelons of the Nazi Party, it was opposed by many in the German Foreign Ministry, the German Army, and the German business community who held financial interests in China, a country toward which Japan was hostile. Italy, upon learning of the German-Japanese negotiations, also began to take an interest in forming an alliance with Japan.
Italy had hoped that, due to Japan's long-term close relations with Britain, an Italo-Japanese alliance could pressure Britain into adopting a more accommodating stance towards Italy in the Mediterranean. In the summer of 1936, Italian foreign minister Galeazzo Ciano informed the Japanese ambassador to Italy, Sugimura Yotaro, "I have heard that a Japanese-German agreement concerning the Soviet Union has been reached, and I think it would be natural for a similar agreement to be made between Italy and Japan". Initially Japan's attitude towards Italy's proposal was generally dismissive, viewing a German-Japanese alliance against the Soviet Union as imperative and an Italo-Japanese alliance as secondary, since Japan anticipated that an Italo-Japanese alliance would antagonize Britain, which had condemned Italy's invasion of Ethiopia. Japan's attitude towards Italy altered in 1937, after the League of Nations condemned Japan for aggression in China and Japan faced international isolation, while Italy remained favourable to Japan. As a result of Italy's support for Japan against international condemnation, Japan took a more positive attitude towards Italy and offered proposals for a non-aggression or neutrality pact with Italy. The "Axis powers" formally took the name after the Tripartite Pact was signed by Germany, Italy, and Japan on 27 September 1940, in Berlin. The pact was subsequently joined by Hungary (20 November 1940), Romania (23 November 1940), Slovakia (24 November 1940), and Bulgaria (1 March 1941).
Economic resources
The total Axis population in 1938 was 258.9 million, while the total Allied population (excluding the Soviet Union and the United States, which later joined the Allies) was 689.7 million. Thus the Allied powers at that time outnumbered the Axis powers in terms of population by 2.7 to 1. The leading Axis states had the following domestic populations: Germany (including recently annexed Austria, with a population of 6.8 million) had 75.5 million, Japan (excluding its colonies) had a population of 71.9 million, and Italy (excluding its colonies) had 43.4 million. The United Kingdom (excluding its colonies) had a domestic population of 47.5 million and France (excluding its colonies) had 42 million. The wartime gross domestic product (GDP) of the Axis powers combined was $911 billion at its highest in 1941, in international dollars at 1990 prices. The total GDP of the Allied powers in 1941 was $1,798 billion – with the United States alone providing $1,094 billion, more GDP than all the Axis powers combined. The burden of the war upon the economies of the participating countries has been measured through the percentage of gross national product (GNP) devoted to military expenditures. Nearly one-quarter of Germany's GNP was committed to the war effort in 1939, and this rose to three-quarters of GNP in 1944, prior to the collapse of the economy. In 1939, Japan committed 22 percent of its GNP to its war effort in China; this rose to three-quarters of Japan's GNP in 1944. Italy did not mobilize its economy; its GNP committed to the war effort remained at prewar levels. Italy and Japan lacked industrial capacity; their economies were small and dependent on international trade, external sources of fuel, and other industrial resources. As a result, Italian and Japanese mobilization remained low, even by 1943. Among the three major Axis powers – Germany, Italy, and Japan – Japan had the lowest per capita income, while Germany and Italy had an income level comparable to that of the United Kingdom.
Major powers
War justifications
Führer Adolf Hitler in 1941 described the outbreak of World War II as the fault of the intervention of Western powers against Germany during its war with Poland, describing it as the result of "the European and American warmongers". Hitler denied accusations by the Allies that he wanted a world war, and invoked anti-Semitic claims that the war was wanted and provoked by politicians either of Jewish origin or associated with Jewish interests. However, Hitler clearly had designs for Germany to become the dominant and leading state in the world, as shown by his intention to transform Germany's capital, Berlin, into the Welthauptstadt ("World Capital"), renamed Germania. The German government also justified its actions by claiming that Germany inevitably needed to expand territorially because it was facing an overpopulation crisis, which Hitler described: "We are overpopulated and cannot feed ourselves from our own resources". Thus expansion was justified as an inevitable necessity to provide lebensraum ("living space") for the German nation, end the country's overpopulation within its existing confined territory, and provide the resources necessary to its people's well-being. Since the 1920s, the Nazi Party had publicly promoted the expansion of Germany into territories held by the Soviet Union. On the issue of Germany's war with Poland, which provoked Allied intervention against Germany, Germany claimed that it had sought to resolve its dispute with Poland over its German minorities, particularly within the densely German-populated "Polish Corridor", by a 1934 agreement between Germany and Poland whereby Poland would end its assimilationist policies towards Germans in Poland; however, Germany later complained that Poland was not upholding the agreement. In 1937, Germany condemned Poland for violating the minorities agreement, but proposed a resolution whereby Germany would reciprocally accept the Polish demand that Germany abandon assimilation of its Polish minority if Poland upheld its agreement to abandon assimilation of Germans. Germany's proposal was met with resistance in Poland, particularly by the Polish Western Union (PZZ) and the National Democratic party, with Poland agreeing only to a watered-down version of the Joint Declaration on Minorities on 5 November 1937. On the same day, Hitler declared his intention to prepare for a war to destroy Poland. Germany used legal precedents to justify its intervention against Poland, arguing that its annexation of the German-majority Free City of Danzig (led by a local Nazi government that sought incorporation into Germany) in 1939 was justified because Poland had repeatedly violated the sovereignty of Danzig. Germany cited as one such violation Poland's dispatch of additional troops into the city in 1933, in violation of the treaty limit on the number of Polish troops admissible to Danzig. After Poland had agreed only to a watered-down agreement to guarantee that its German minorities would not be assimilated, Hitler decided that the time had come to prepare for a war with Poland that would forcibly implement lebensraum, destroying Poland to allow for German settlement in its territories.
Although Germany had prepared for war with Poland in 1939, Hitler still sought to use diplomatic means, along with the threat of military action, to pressure Poland into concessions to Germany involving the annexation of Danzig without Polish opposition, and he believed that Germany could gain concessions from Poland without provoking a war with Britain or France. Hitler believed that Britain's guarantee of military support to Poland was a bluff, especially given a German-Soviet agreement in which both countries recognized their mutual interests involving Poland. The Soviet Union had held diplomatic grievances with Poland since the Soviet-Polish War of 1919–1921, in which the Soviets had been pressured to cede Western Belarus and Western Ukraine to Poland after intense fighting over those territories, and the Soviet Union sought to regain them. Hitler believed that a conflict with Poland would be an isolated conflict, as Britain would not engage in a war with both Germany and the Soviet Union. Poland rejected the German demand for negotiations on the issue of the proposed German annexation of Danzig, and Germany in response prepared a general mobilization on the morning of 30 August 1939.

Hitler thought that the British would accept Germany's demands and pressure Poland to agree to them. At midnight on 30 August 1939, German foreign minister Joachim Ribbentrop was expecting the arrival of the British ambassador Nevile Henderson as well as a Polish plenipotentiary to negotiate terms with Germany. Only Henderson arrived, and he informed Ribbentrop that no Polish plenipotentiary was coming. Ribbentrop became extremely upset and demanded the immediate arrival of a Polish diplomat, informing Henderson that the situation was "damned serious!", and read out to Henderson Germany's demands: that Poland accept Germany annexing Danzig, that Poland grant Germany the right to connect East Prussia to mainland Germany via an extraterritorial highway and railway passing through the Polish Corridor, and that a plebiscite determine whether the Polish Corridor (with a German majority population) should remain within Poland or be transferred to Germany.

Germany justified its invasion of the Low Countries of Belgium, Luxembourg, and the Netherlands in May 1940 by claiming that it suspected that Britain and France were preparing to use the Low Countries to launch an invasion of the industrial Ruhr region of Germany. When war between Germany and Britain and France appeared likely in May 1939, Hitler declared that the Netherlands and Belgium would need to be occupied, saying: "Dutch and Belgian air bases must be occupied ... Declarations of neutrality must be ignored". In a conference with Germany's military leaders on 23 November 1939, Hitler declared that "We have an Achilles heel, the Ruhr", and said that "If England and France push through Belgium and Holland into the Ruhr, we shall be in the greatest danger", and thus claimed that Belgium and the Netherlands had to be occupied by Germany to protect Germany from a British-French offensive against the Ruhr, irrespective of their claims to neutrality.

Germany's justification of its invasion of the Soviet Union in 1941 involved lebensraum and anti-communism.
Hitler in his early years as Nazi leader had claimed that he would be willing to accept friendly relations with Russia on the tactical condition that Russia agree to return to the borders established by the German-Russian peace agreement of the Treaty of Brest-Litovsk, signed by Vladimir Lenin of the Russian Soviet Federated Socialist Republic in 1918, which gave large territories held by Russia to German control in exchange for peace. In 1921 Hitler had commended the Treaty of Brest-Litovsk as opening the possibility for the restoration of relations between Germany and Russia, saying:

"Through the peace with Russia the sustenance of Germany as well as the provision of work were to have been secured by the acquisition of land and soil, by access to raw materials, and by friendly relations between the two lands." —Adolf Hitler, 1921

From 1921 to 1922 Hitler evoked rhetoric both of the achievement of lebensraum involving the acceptance of a territorially reduced Russia and of supporting Russian nationals in overthrowing the Bolshevik government and establishing a new Russian government. However, Hitler's attitudes changed by the end of 1922, when he came to support an alliance of Germany with Britain to destroy Russia. Hitler later declared how far into Russia he intended to expand Germany:

"Asia, what a disquieting reservoir of men! The safety of Europe will not be assured until we have driven Asia back behind the Urals. No organized Russian state must be allowed to exist west of that line." —Adolf Hitler

Policy for lebensraum planned mass expansion of Germany eastwards to the Ural Mountains. Hitler planned that the "surplus" Russian population living west of the Urals would be deported to the east of the Urals.

After Germany invaded the Soviet Union in 1941, the Nazi regime's stance towards an independent, territorially reduced Russia was affected by pressure, beginning in 1942, from the German Army on Hitler to endorse a Russian national liberation army led by Andrey Vlasov that officially sought to overthrow Joseph Stalin and the communist regime and establish a new Russian state. Initially the proposal to support an anti-communist Russian army was met with outright rejection by Hitler; however, by 1944, as Germany faced mounting losses on the Eastern Front, Vlasov's forces were recognized by Germany as an ally, particularly by Reichsführer-SS Heinrich Himmler.

After the Japanese attack on Pearl Harbor and the outbreak of war between Japan and the United States, Germany supported its ally Japan by declaring war on the US. During the war Germany denounced the Atlantic Charter and the Lend-Lease Act, which the US adopted to support the Allied powers prior to its entry into the alliance, as imperialism directed at dominating and exploiting countries outside of the continental Americas. Hitler denounced American President Roosevelt's invoking of the term "freedom" to describe US actions in the war, and claimed that the American meaning of "freedom" was the freedom for democracy to exploit the world and the freedom for plutocrats within such a democracy to exploit the masses.

At the end of World War I, German citizens felt that their country had been humiliated as a result of the Treaty of Versailles, which forced Germany to pay enormous reparations and forfeit German-populated territories and all its colonies. The pressure of the reparations on the German economy led to hyperinflation during the early 1920s.
In 1923 the French occupied the Ruhr region when Germany defaulted on its reparations payments. Although Germany began to improve economically in the mid-1920s, the Great Depression created more economic hardship and a rise in political forces that advocated radical solutions to Germany's woes. The Nazis, under Adolf Hitler, promoted the nationalist stab-in-the-back legend stating that Germany had been betrayed by Jews and Communists. The party promised to rebuild Germany as a major power and create a Greater Germany that would include Alsace-Lorraine, Austria, Sudetenland, and other German-populated territories in Europe. The Nazis also aimed to occupy and colonize non-German territories in Poland, the Baltic states, and the Soviet Union, as part of the Nazi policy of seeking Lebensraum ("living space") in eastern Europe.

Germany renounced the Versailles treaty and remilitarized the Rhineland in March 1936. Germany had already resumed conscription and announced the existence of a German air force in 1935. Germany annexed Austria in 1938, the Sudetenland from Czechoslovakia, and the Memel territory from Lithuania in 1939. Germany then invaded the rest of Czechoslovakia in 1939, creating the Protectorate of Bohemia and Moravia and the country of Slovakia. On 23 August 1939, Germany and the Soviet Union signed the Molotov-Ribbentrop Pact, which contained a secret protocol dividing eastern Europe into spheres of influence. Germany's invasion of its part of Poland under the Pact eight days later triggered the beginning of World War II. By the end of 1941, Germany occupied a large part of Europe and its military forces were fighting the Soviet Union, nearly capturing Moscow. However, crushing defeats at the Battle of Stalingrad and the Battle of Kursk devastated the German armed forces. This, combined with Western Allied landings in France and Italy, led to a three-front war that depleted Germany's armed forces and resulted in Germany's defeat in 1945.

There was substantial internal opposition within the German military to the Nazi regime's aggressive strategy of rearmament and foreign policy in the 1930s. From 1936 to 1938, Germany's top four military leaders, Ludwig Beck, Werner von Blomberg, Werner von Fritsch, and Walther von Reichenau, were all in opposition to the Nazi regime's rearmament strategy and its foreign policy. They criticized the hurried nature of rearmament, the lack of planning, Germany's insufficient resources to carry out a war, the dangerous implications of Hitler's foreign policy, and the increasing subordination of the army to the Nazi Party's rules. These four military leaders were outspoken and public in their opposition to these tendencies. The Nazi regime responded with contempt to the military leaders' opposition, and Nazi members fabricated crass scandals against the two top army leaders, von Blomberg and von Fritsch, including a false accusation of homosexuality against von Fritsch, in order to pressure them to resign. Though the scandals were started by lower-ranking Nazi members, Hitler took advantage of them by forcing von Blomberg and von Fritsch to resign and replacing them with opportunists who were subservient and loyal to him. Shortly afterwards, on 4 February 1938, Hitler announced that he was taking personal command over Germany's military through the new High Command of the Armed Forces, with the Führer as its head.
Opposition within the military to the Nazi regime's aggressive foreign policy became so strong from 1936 to 1938 that overthrowing the Nazi regime was discussed within the upper echelons of the military and among the remaining non-Nazi members of the German government. Minister of Economics Hjalmar Schacht met with Beck in 1936, declaring that he was considering an overthrow of the Nazi regime and inquiring what the German military's stance would be on supporting such an overthrow. Beck was lukewarm to the idea, and responded that if a coup against the Nazi regime began with support at the civilian level, the military would not oppose it. Schacht considered this promise by Beck to be inadequate, because he knew that without the support of the army any coup attempt would be crushed by the Gestapo and the SS. However, by 1938 Beck had become a firm opponent of the Nazi regime out of his opposition to Hitler's military plans of 1937–38, which told the military to prepare for the possibility of a world war as a result of German annexation plans for Austria and Czechoslovakia.

Colonies and dependencies

In Europe
Belgium was initially under a military occupation authority from 1940 to 1944; however, Belgium and its Germanic population were intended to be incorporated into the planned Greater Germanic Reich. This was initiated by the creation of Reichskommissariat Belgien, an authority run directly by the German government that sought the incorporation of the territory into the planned Germanic Reich. However, Belgium was occupied by Allied forces in 1944.

The Protectorate of Bohemia and Moravia was a protectorate and dependency considered an autonomous region within the sovereign territory of Germany. The General Government was the name given to the territories of occupied Poland that were not directly annexed into German provinces, but, like Bohemia and Moravia, it was a dependency and autonomous region within the sovereign territory of Germany.

Reichskommissariat Niederlande was an occupation authority and territory established in the Netherlands in 1940, designated as a colony to be incorporated into the planned Greater Germanic Reich. Reichskommissariat Norwegen was established in Norway in 1940. Like the Reichskommissariats in Belgium and the Netherlands, its Germanic peoples were to be incorporated into the Greater Germanic Reich. In Norway, the Quisling regime, headed by Vidkun Quisling, was installed by the Germans as a client regime during the occupation, while King Haakon VII and the legal government were in exile. Quisling encouraged Norwegians to serve as volunteers in the Waffen-SS, collaborated in the deportation of Jews, and was responsible for the executions of members of the Norwegian resistance movement. About 45,000 Norwegian collaborators joined the pro-Nazi party Nasjonal Samling (National Union), and some police units helped arrest many of Norway's Jews. However, Norway was one of the first countries where resistance during World War II was widespread before the turning point of the war in 1943. After the war, Quisling and other collaborators were executed. Quisling's name has become an international eponym for traitor.

Reichskommissariat Ostland was established in the Baltic region in 1941.
Unlike the western Reichskommissariats, which sought the incorporation of their majority Germanic peoples, Ostland was designated for settlement by Germans who would displace the majority non-Germanic peoples living there, as part of lebensraum. Reichskommissariat Ukraine was established in Ukraine in 1941. Like Ostland, it was slated for settlement by Germans.

War justifications
The Japanese government justified its actions by claiming that it was seeking to unite East Asia under Japanese leadership in a Greater East Asia Co-Prosperity Sphere that would free East Asians from domination and rule by clients of Western imperialism, and particularly American imperialism. Japan invoked themes of Pan-Asianism and said that the Asian people needed to be free from Anglo-American influence.

The United States opposed the Japanese war in China and recognized Chiang Kai-Shek's Nationalist Government as the legitimate government of China. As a result, the United States sought to bring the Japanese war effort to a complete halt by imposing a full embargo on all trade between the United States and Japan. Japan was dependent on the United States for 80 percent of its petroleum, so the embargo resulted in an economic and military crisis for Japan, as Japan could not continue its war effort against China without access to petroleum. In order to maintain its military campaign in China despite the major loss of petroleum trade with the United States, Japan saw the best means of securing an alternative source of petroleum in the petroleum-rich and natural-resources-rich Southeast Asia. The threat of such retaliation by Japan to the total trade embargo was known to the American government; American Secretary of State Cordell Hull, who was negotiating with the Japanese to avoid a war, feared that the total embargo would provoke a Japanese attack on the Dutch East Indies. Japan identified the American Pacific fleet based at Pearl Harbor as the principal threat to its designs to invade and capture Southeast Asia. Thus Japan initiated the attack on Pearl Harbor on 7 December 1941 as a means to inhibit an American response to the invasion of Southeast Asia, to buy time to allow Japan to consolidate itself with these resources to engage in a total war against the United States, and to force the United States to accept Japan's acquisitions.

The Empire of Japan, a constitutional monarchy ruled by Hirohito, was the principal Axis power in Asia and the Pacific. The Japanese constitution prescribed that "the Emperor is the head of the Empire, combining in Himself the rights of sovereignty, and exercises them, according to the provisions of the present Constitution" (article 4) and that "The Emperor has the supreme command of the Army and the Navy" (article 11). Under the emperor were a political cabinet and the Imperial General Headquarters, with two chiefs of staff. At its height, Japan's Greater East Asia Co-Prosperity Sphere included Manchuria, Inner Mongolia, large parts of China, Malaya, French Indochina, the Dutch East Indies, the Philippines, Burma, some of India, and various islands in the central Pacific.

As a result of the internal discord and economic downturn of the 1920s, militaristic elements set Japan on a path of expansionism. As the Japanese home islands lacked the natural resources needed for growth, Japan planned to establish hegemony in Asia and become self-sufficient by acquiring territories with abundant natural resources.
Japan's expansionist policies alienated it from other countries in the League of Nations and by the mid-1930s brought it closer to Germany and Italy, which had both pursued similar expansionist policies. Cooperation between Japan and Germany began with the Anti-Comintern Pact, in which the two countries agreed to ally to challenge any attack by the Soviet Union.

Japan entered into conflict with China in 1937. The Japanese invasion and occupation of parts of China resulted in numerous atrocities against civilians, such as the Nanking massacre and the Three Alls Policy. The Japanese also fought skirmishes with Soviet–Mongolian forces in Manchukuo in 1938 and 1939. Japan sought to avoid war with the Soviet Union by signing a non-aggression pact with it in 1941.

Japan's military leaders were divided over Japan's diplomatic relationships with Germany and Italy and the attitude towards the United States. The Imperial Japanese Army was in favour of war with the United States, while the Imperial Japanese Navy was generally strongly opposed. When Prime Minister of Japan General Hideki Tojo refused American demands that Japan withdraw its military forces from China, a confrontation became more likely. War with the United States was being discussed within the Japanese government by 1940. Commander of the Combined Fleet Admiral Isoroku Yamamoto was outspoken in his opposition, especially after the signing of the Tripartite Pact, saying on 14 October 1940: "To fight the United States is like fighting the whole world. But it has been decided. So I will fight the best I can. Doubtless I shall die on board Nagato [his flagship]. Meanwhile Tokyo will be burnt to the ground three times. Konoe and others will be torn to pieces by the revengeful people, I [shouldn't] wonder." In October and November 1940, Yamamoto communicated with Navy Minister Oikawa, stating, "Unlike the pre-Tripartite days, great determination is required to make certain that we avoid the danger of going to war."

With the European powers focused on the war in Europe, Japan sought to acquire their colonies. In 1940 Japan responded to the German invasion of France by occupying French Indochina. The Vichy France regime, a de facto ally of Germany, accepted the takeover. The Allied forces did not respond with war. However, the United States instituted an embargo against Japan in 1941 because of the continuing war in China. This cut off Japan's supply of scrap metal and oil needed for industry, trade, and the war effort. To isolate the American forces stationed in the Philippines and to reduce American naval power, the Imperial General Headquarters ordered an attack on the U.S. naval base at Pearl Harbor, Hawaii, on 7 December 1941. They also invaded Malaya and Hong Kong. After initially achieving a series of victories, Japanese forces were driven back towards the home islands by 1943. The Pacific War lasted until the atomic bombings of Hiroshima and Nagasaki in 1945. The Soviets formally declared war in August 1945 and engaged Japanese forces in Manchuria and northeast China.

Colonies and dependencies

In Asia
Korea was a Japanese protectorate and dependency formally established by the Japan–Korea Treaty of 1910. The South Pacific Mandate comprised territories granted to Japan in 1919 in the peace agreements of World War I, which designated to Japan the German South Pacific islands. Japan received these as a reward from the Allies of World War I, as Japan had been allied against Germany.
Taiwan, then known as Formosa, was a Japanese dependency established in 1895.

War justifications
Duce Benito Mussolini described Italy's declaration of war against the Western Allies of Britain and France in June 1940 as follows: "We are going to war against the plutocratic and reactionary democracies of the West who have invariably hindered the progress and often threatened the very existence of the Italian people ...". Italy condemned the Western powers for enacting sanctions on Italy in 1935 for its actions in the Second Italo-Ethiopian War, which Italy claimed was a response to an act of Ethiopian aggression against tribesmen in Italian Eritrea in the Walwal incident of 1934. In 1938 Mussolini and foreign minister Ciano issued demands for concessions by France, particularly regarding the French colonial possessions of Djibouti, Tunisia and the French-run Suez Canal. Italy demanded a sphere of influence in the Suez Canal in Egypt, specifically demanding that the French-dominated Suez Canal Company accept an Italian representative on its board of directors. Italy opposed the French monopoly over the Suez Canal because, under the French-dominated Suez Canal Company, all Italian merchant traffic to its colony of Italian East Africa was forced to pay tolls upon entering the canal.

Like Germany, Italy also justified its actions by claiming that it needed to expand territorially to provide spazio vitale ("vital space") for the Italian nation. Italy justified its intervention against Greece in October 1940 on the allegation that Greece was being used by Britain against Italy; Mussolini informed Hitler of this, saying: "Greece is one of the main points of English maritime strategy in the Mediterranean". Italy justified its intervention against Yugoslavia in 1941 by appealing both to Italian irredentist claims and to the fact that Albanian, Croatian, and Vardar Macedonian separatists did not wish to be part of Yugoslavia. Croatian separatism soared after the assassination of Croatian political leaders in the Yugoslav parliament in 1928, including Stjepan Radić, and Italy endorsed the Croatian separatist Ante Pavelić and his fascist Ustaše movement, which was based and trained in Italy with the Fascist regime's support prior to the intervention against Yugoslavia.

In the late 19th century, after Italian unification, a nationalist movement had grown around the concept of Italia irredenta, which advocated the incorporation into Italy of Italian-speaking areas under foreign rule. There was a desire to annex Dalmatian territories, which had formerly been ruled by the Venetians, and which consequently had Italian-speaking elites. The intention of the Fascist regime was to create a "New Roman Empire" in which Italy would dominate the Mediterranean. In 1935–1936 Italy invaded and annexed Ethiopia and the Fascist government proclaimed the creation of the "Italian Empire". Protests by the League of Nations, especially the British, who had interests in that area, led to no serious action. Italy later faced diplomatic isolation from several countries. In 1937 Italy left the League of Nations and joined the Anti-Comintern Pact, which had been signed by Germany and Japan the preceding year. In March/April 1939 Italian troops invaded and annexed Albania. Germany and Italy signed the Pact of Steel on May 22. Italy entered World War II on 10 June 1940. In September 1940 Germany, Italy, and Japan signed the Tripartite Pact.
Italy was ill-prepared for war, despite the fact that it had been continuously involved in conflict since 1935, first with Ethiopia in 1935–1936 and then in the Spanish Civil War on the side of Francisco Franco's Nationalists. Military planning was deficient, as the Italian government had not decided which theatre would be the most important. Power over the military was over-centralized under Mussolini's direct control; he personally undertook to direct the ministry of war, the navy, and the air force. The navy did not have any aircraft carriers to provide air cover for amphibious assaults in the Mediterranean, as the Fascist regime believed that the air bases on the Italian Peninsula would be able to perform this task. Italy's army had outmoded artillery, and the armoured units used outdated formations not suited to modern warfare. Diversion of funds to the air force and navy to prepare for overseas operations meant less money was available for the army; the standard rifle was a design that dated back to 1891. The Fascist government failed to learn from mistakes made in Ethiopia and Spain; it ignored the implications of the Italian Fascist volunteer soldiers being routed at the Battle of Guadalajara in the Spanish Civil War. Military exercises by the army in the Po Valley in August 1939 disappointed onlookers, including King Victor Emmanuel III. Mussolini, angered by Italy's military unpreparedness, dismissed Alberto Pariani as Chief of Staff of the Italian military in 1939.

Italy's only strategic natural resource was an abundance of aluminum. Petroleum, iron, copper, nickel, chrome, and rubber all had to be imported. The Fascist government's economic policy of autarky and a recourse to synthetic materials was not able to meet the demand. Prior to entering the war, the Fascist government sought to gain control over resources in the Balkans, particularly oil from Romania. The agreement between Germany and the Soviet Union to invade and partition Poland between them led Hungary (which bordered the Soviet Union after Poland's partition) and Romania to view a Soviet invasion as an immediate threat, and both countries appealed to Italy for support beginning in September 1939. Italy, then still officially neutral, responded to the appeals by the Hungarian and Romanian governments for protection from the Soviet Union by proposing a Danube-Balkan neutrals bloc. The proposed bloc was designed to increase Italian influence in the Balkans; it met resistance from France, Germany, and the Soviet Union, which did not want to lose their influence in the Balkans, while Britain, which still hoped that Italy would not enter the war on Germany's side, supported the neutral bloc. The efforts to form the bloc failed by November 1939, after Turkey made an agreement that it would protect Allied Mediterranean territory, along with Greece and Romania. Mussolini refused to heed warnings from his minister of exchange and currency, Felice Guarneri, who said that Italy's actions in Ethiopia and Spain meant the nation was on the verge of bankruptcy. By 1939 military expenditures by Britain and France far exceeded what Italy could afford.
After entering the war in 1940, Italy was slated to be granted a series of territorial concessions from France, which Hitler had agreed to with Italian foreign minister Ciano, including the Italian annexation of claimed territories in southeastern France, a military occupation of southeastern France up to the river Rhone, and the French colonies of Tunisia and Djibouti. However, on 22 June 1940, Mussolini suddenly informed Hitler that Italy was abandoning its claims "in the Rhone, Corsica, Tunisia, and Djibouti", instead requesting a demilitarized zone along the French border, and on 24 June Italy agreed to an armistice with the Vichy regime to that effect. Later, on 7 July 1940, the Italian government changed its decision, and Ciano attempted to reach an agreement with Hitler to have Nice, Corsica, Tunisia, and Djibouti transferred to Italy; Hitler adamantly rejected any new settlement or separate French-Italian peace agreement for the time being, prior to the defeat of Britain in the war. However, Italy continued to press Germany for the incorporation of Nice, Corsica, and Tunisia into Italy, with Mussolini sending a letter to Hitler in October 1940 informing him that, as the 850,000 Italians living under France's current borders formed the largest minority community, ceding these territories to Italy would be beneficial to both Germany and Italy, as it would reduce France's population from 35 million to 34 million and forestall any possibility of resumed French ambitions for expansion or hegemony in Europe. In December 1940, with the proposed Operation Attila, Germany considered the possibility of invading and occupying the unoccupied territories of Vichy France, including occupying Corsica and capturing the Vichy French fleet for use by Germany. An invasion of Vichy France by Germany and Italy eventually took place with Case Anton in November 1942.

In 1940, in response to an agreement by Romanian Conducător Ion Antonescu to accept German "training troops" being sent to Romania, both Mussolini and Stalin in the Soviet Union were angered by Germany's expanding sphere of influence into Romania, especially because neither was informed in advance of the action, in spite of German agreements with Italy and the Soviet Union at that time. Mussolini, in a conversation with Ciano, responded to Hitler's deployment of troops into Romania, saying: "Hitler always faces me with accomplished facts. Now I'll pay him back by his same currency. He'll learn from the papers that I have occupied Greece. So the balance will be re-established." However, Mussolini later decided to inform Hitler in advance of Italy's designs on Greece. Upon hearing of Italy's intervention against Greece, Hitler was deeply concerned, saying that the Greeks were not bad soldiers and that Italy might not win its war with Greece, and he did not want Germany to become embroiled in a Balkan conflict.

By 1941, Italy's attempts to run a campaign autonomous from Germany's collapsed as a result of multiple defeats in Greece, North Africa, and Eastern Africa, and the country became dependent on and effectively subordinate to Germany. After the German-led invasion and occupation of Yugoslavia and Greece, both of which had been targets of Italy's war aims, Italy was forced to accept German dominance in the two occupied countries.
Furthermore, by 1941 German forces in North Africa under Erwin Rommel effectively took charge of the military effort to oust Allied forces from the Italian colony of Libya, and German forces were stationed in Sicily in that year. In response to Italian military failures and dependence on German military assistance, the German government viewed Italy with contempt as an unreliable ally and no longer took any serious consideration of Italian interests. Germany's contempt for Italy as an ally was demonstrated that year when Italy was pressured to send 350,000 "guest workers" to Germany, who were used as forced labour. While Hitler was deeply disappointed with the Italian military's performance, he maintained overall favourable relations with Italy because of his personal friendship with and admiration of Mussolini.

By mid-1941 Mussolini was left bewildered and recognized both that Italy's war objectives had failed and that Italy was completely subordinate to and dependent on Germany. Mussolini henceforth believed that Italy was left with no choice in such a subordinate status other than to follow Germany in its war and hope for a German victory. However, Germany supported Italian propaganda calling for the creation of a "Latin Bloc" of Italy, Vichy France, Spain, and Portugal to ally with Germany against the threat of communism, and after the German invasion of the Soviet Union the prospect of a Latin Bloc seemed plausible. From 1940 to 1941, Francisco Franco of Spain had endorsed a Latin Bloc of Italy, Vichy France, Spain and Portugal in order to balance these countries' power against that of Germany; however, the discussions failed to yield an agreement.

After the invasion and occupation of Yugoslavia, Italy annexed numerous Adriatic islands and a portion of Dalmatia that was formed into the Italian Governorship of Dalmatia, including territory from the provinces of Split, Zadar, and Kotor. Though Italy initially had larger territorial aims extending from the Velebit mountains to the Albanian Alps, Mussolini decided against annexing further territories due to a number of factors, including that Italy already held the economically valuable portion of that territory, that the northern Adriatic coast had no important railways or roads, and that a larger annexation would have placed hundreds of thousands of Slavs hostile to Italy within its national borders. Mussolini and foreign minister Ciano demanded that the Yugoslav region of Slovenia be directly annexed into Italy; however, in negotiations with German foreign minister Ribbentrop in April 1941, Ribbentrop insisted on Hitler's demand that Germany be allocated eastern Slovenia while Italy would be allocated western Slovenia. Italy conceded to this German demand, and Slovenia was partitioned between Germany and Italy.

Internal opposition by Italians to the war and the Fascist regime accelerated by 1942, though significant opposition to the war had existed at the outset in 1940, as police reports indicated that many Italians were secretly listening to the BBC rather than Italian media. Underground Catholic, Communist, and socialist newspapers began to become prominent by 1942. By January 1943, King Victor Emmanuel III was persuaded by the Minister of the Royal Household, the Duke of Acquarone, that Mussolini had to be removed from office. On 25 July 1943, King Victor Emmanuel III dismissed Mussolini, placed him under arrest, and began secret negotiations with the Allies.
An armistice was signed on 8 September 1943, and Italy joined the Allies as a co-belligerent. On 12 September 1943, Mussolini was rescued by the Germans in Operation Oak and placed in charge of a puppet state called the Italian Social Republic (Repubblica Sociale Italiana/RSI, or Repubblica di Salò) in northern Italy. He was killed by Communist partisans on 28 April 1945.

Colonies and dependencies

In Europe
Albania was an Italian protectorate and dependency from 1939 to 1943. In spite of Albania's long-standing protection and alliance with Italy, on 7 April 1939 Italian troops invaded Albania, five months before the start of the Second World War. Following the invasion, Albania became a protectorate under Italy, with King Victor Emmanuel III of Italy being awarded the crown of Albania, and an Italian governor controlled the country. Albanian troops under Italian control were sent to participate in the Italian invasion of Greece and the Axis occupation of Yugoslavia. Following Yugoslavia's defeat, Kosovo was annexed to Albania by the Italians. When the Fascist regime of Italy fell, in September 1943 Albania fell under German occupation.

The Dodecanese Islands were an Italian dependency from 1912 to 1943. Montenegro was an Italian protectorate and dependency from 1941 to 1943 that was under the control of an Italian military governor.

In Africa
Italian East Africa was an Italian colony existing from 1936 to 1943. Prior to the invasion and annexation of Ethiopia into this united colony in 1936, Italy had held two colonies, Eritrea and Somalia, since the 1880s. Libya was an Italian colony existing from 1912 to 1943. The northern portion of Libya was directly incorporated into Italy in 1939; however, the region remained united as a colony under a colonial governor.

Self-governing sovereign dominions or protectorates

Croatia (Independent State of Croatia)
The Independent State of Croatia (NDH), established in 1941, was officially a self-governing sovereign protectorate under King Tomislav II, an Italian monarch from Italy's House of Savoy. The NDH was also under strong German influence, and after Italy capitulated in 1943, the NDH was no longer a monarchy and became a German client state.

Military-contributing minor powers

Hungary
Political instability plagued Hungary until Miklós Horthy, a Hungarian nobleman and Austro-Hungarian naval officer, became regent in 1920. Hungarian nationalists desired to recover territories lost through the Trianon Treaty. The country drew closer to Germany and Italy largely because of a shared desire to revise the peace settlements made after World War I. Many people sympathized with the anti-Semitic policy of the Nazi regime. Due to its pro-German stance, Hungary received favourable territorial settlements when Germany annexed Czechoslovakia in 1938–1939 and received Northern Transylvania from Romania via the Vienna Awards of 1940. Hungary permitted German troops to transit through its territory during the invasion of Yugoslavia, and Hungarian forces took part in the invasion. Parts of Yugoslavia were annexed to Hungary; the United Kingdom immediately broke off diplomatic relations in response. Although Hungary did not initially participate in the German invasion of the Soviet Union, it declared war on the Soviet Union on 27 June 1941. Over 500,000 soldiers served on the Eastern Front. All five of Hungary's field armies ultimately participated in the war against the Soviet Union; a significant contribution was made by the Hungarian Second Army.
On 25 November 1941, Hungary was one of thirteen signatories to the revived Anti-Comintern Pact. Hungarian troops, like their Axis counterparts, were involved in numerous actions against the Soviets. By the end of 1943, the Soviets had gained the upper hand and the Germans were retreating. The Hungarian Second Army was destroyed in fighting on the Voronezh Front, on the banks of the Don River. In 1944, with Soviet troops advancing toward Hungary, Horthy attempted to reach an armistice with the Allies. However, the Germans replaced the existing regime with a new one. After fierce fighting, Budapest was taken by the Soviets. A number of pro-German Hungarians retreated to Italy and Germany, where they fought until the end of the war.

Romania
When war erupted in Europe in 1939, the Kingdom of Romania was pro-British and allied to the Poles. Following the invasion of Poland by Germany and the Soviet Union, and the German conquest of France and the Low Countries, Romania found itself increasingly isolated; meanwhile, pro-German and pro-Fascist elements began to grow. The August 1939 Molotov–Ribbentrop Pact between Germany and the Soviet Union contained a secret protocol assigning Bessarabia to the Soviet sphere of influence. On June 28, 1940, the Soviet Union occupied and annexed Bessarabia, as well as Northern Bukovina and the Hertza region. On 30 August 1940, Germany forced Romania to cede Northern Transylvania to Hungary as a result of the second Vienna Award. Southern Dobruja was ceded to Bulgaria in September 1940. In an effort to appease the Fascist elements within the country and obtain German protection, King Carol II appointed General Ion Antonescu as Prime Minister on September 6, 1940. Two days later, Antonescu forced the king to abdicate and installed the king's young son Michael (Mihai) on the throne, then declared himself Conducător ("Leader") with dictatorial powers. Under King Michael I and the military government of Antonescu, Romania signed the Tripartite Pact on November 23, 1940. German troops entered the country in 1941 and used it as a platform for the invasions of Yugoslavia and the Soviet Union. Romania was a key supplier of resources, especially oil and grain. Romania joined the German-led invasion of the Soviet Union on June 22, 1941; nearly 800,000 Romanian soldiers fought on the Eastern front. Areas that had been annexed by the Soviets were reincorporated into Romania, along with the newly established Transnistria Governorate. After suffering devastating losses at Stalingrad, Romanian officials began secretly negotiating peace conditions with the Allies. By 1943, the tide began to turn. The Soviets pushed further west, retaking Ukraine and eventually launching an unsuccessful invasion of eastern Romania in the spring of 1944. Foreseeing the fall of Nazi Germany, Romania switched sides during King Michael's Coup on August 23, 1944. Romanian troops then fought alongside the Soviet Army until the end of the war, reaching as far as Czechoslovakia and Austria.

Bulgaria
The Kingdom of Bulgaria was ruled by Tsar Boris III when it signed the Tripartite Pact on 1 March 1941. Bulgaria had been on the losing side in the First World War and sought a return of lost ethnically and historically Bulgarian territories, specifically in Macedonia and Thrace. During the 1930s, because of traditional right-wing elements, Bulgaria drew closer to Nazi Germany.
In 1940 Germany pressured Romania to sign the Treaty of Craiova, returning to Bulgaria the region of Southern Dobrudja, which it had lost in 1913. The Germans also promised Bulgaria, in case it joined the Axis, an enlargement of its territory to the borders specified in the Treaty of San Stefano. Bulgaria participated in the Axis invasion of Yugoslavia and Greece by letting German troops attack from its territory, and it sent troops to Greece on April 20. As a reward, the Axis powers allowed Bulgaria to occupy parts of both countries: southern and south-eastern Yugoslavia (Vardar Banovina) and north-eastern Greece (parts of Greek Macedonia and Greek Thrace). The Bulgarian forces in these areas spent the following years fighting various nationalist groups and resistance movements. Despite German pressure, Bulgaria did not take part in the Axis invasion of the Soviet Union and never actually declared war on the Soviet Union. The Bulgarian Navy was nonetheless involved in a number of skirmishes with the Soviet Black Sea Fleet, which attacked Bulgarian shipping. Following the Japanese attack on Pearl Harbor in December 1941, the Bulgarian government declared war on the Western Allies. This action remained largely symbolic (at least from the Bulgarian perspective) until August 1943, when Bulgarian air defense and air force units attacked Allied bombers returning, heavily damaged, from a mission over the Romanian oil refineries. This turned into a disaster for the citizens of Sofia and other major Bulgarian cities, which were heavily bombed by the Allies in the winter of 1943–1944. On 2 September 1944, as the Red Army approached the Bulgarian border, a new Bulgarian government came to power and sought peace with the Allies, expelled the few remaining German troops, and declared neutrality. These measures, however, did not prevent the Soviet Union from declaring war on Bulgaria on 5 September, and on 8 September the Red Army marched into the country, meeting no resistance. This was followed by the coup d'état of 9 September 1944, which brought to power a government of the pro-Soviet Fatherland Front. After this, the Bulgarian army (as part of the Red Army's Third Ukrainian Front) fought the Germans in Yugoslavia and Hungary, sustaining numerous casualties. Despite this, the Paris Peace Treaty treated Bulgaria as one of the defeated countries. Bulgaria was allowed to keep Southern Dobrudja, but had to give up all claims to Greek and Yugoslav territory. 150,000 ethnic Bulgarians were expelled from Greek Thrace alone.

Various countries fought side by side with the Axis powers for a common cause. These countries were not signatories of the Tripartite Pact and thus not formal members of the Axis.

Thailand
Japanese forces invaded Thailand's territory on the morning of 8 December 1941, the day after the attack on Pearl Harbor. Only hours after the invasion, prime minister Field Marshal Phibunsongkhram ordered the cessation of resistance against the Japanese. On 21 December 1941, a military alliance with Japan was signed, and on 25 January 1942, Sang Phathanothai read over the radio Thailand's formal declaration of war on the United Kingdom and the United States. The Thai ambassador to the United States, Mom Rajawongse Seni Pramoj, did not deliver his copy of the declaration of war. Therefore, although the British reciprocated by declaring war on Thailand and considered it a hostile country, the United States did not. On 21 March, the Thais and Japanese also agreed that Shan State and Kayah State were to be under Thai control.
The rest of Burma was to be under Japanese control. On 10 May 1942, the Thai Phayap Army entered Burma's eastern Shan State, which had been claimed by Siamese kingdoms. Three Thai infantry divisions and one cavalry division, spearheaded by armoured reconnaissance groups and supported by the air force, engaged the retreating Chinese 93rd Division. Kengtung, the main objective, was captured on 27 May. Renewed offensives in June and November evicted the Chinese into Yunnan. The area containing the Shan States and Kayah State was annexed by Thailand in 1942. The areas were ceded back to Burma in 1946.

The Free Thai Movement ("Seri Thai") was established during these first few months. Parallel Free Thai organizations were also established in the United Kingdom. Queen Ramphaiphanni was the nominal head of the British-based organization, and Pridi Phanomyong, the regent, headed its largest contingent, which was operating within Thailand. Aided by elements of the military, secret airfields and training camps were established, while Office of Strategic Services and Force 136 agents slipped in and out of the country. As the war dragged on, the Thai population came to resent the Japanese presence. In June 1944, Phibun was overthrown in a coup d'état. The new civilian government under Khuang Aphaiwong attempted to aid the resistance while maintaining cordial relations with the Japanese. After the war, U.S. influence prevented Thailand from being treated as an Axis country, but the British demanded three million tons of rice as reparations and the return of areas annexed from Malaya during the war. Thailand also returned the portions of British Burma and French Indochina that had been annexed. Phibun and a number of his associates were put on trial on charges of having committed war crimes and of collaborating with the Axis powers. However, the charges were dropped due to intense public pressure. Public opinion was favourable to Phibun, since he was thought to have done his best to protect Thai interests.

Finland
Although Finland never signed the Tripartite Pact and legally (de jure) was not a part of the Axis, it was Axis-aligned in its fight against the Soviet Union. Finland signed the revived Anti-Comintern Pact of November 1941. The August 1939 Molotov-Ribbentrop Pact between Germany and the Soviet Union contained a secret protocol dividing much of eastern Europe and assigning Finland to the Soviet sphere of influence. After unsuccessfully attempting to force territorial and other concessions on the Finns, the Soviet Union invaded Finland in November 1939 in the Winter War, intending to establish a communist puppet government in Finland. The conflict threatened Germany's iron-ore supplies and offered the prospect of Allied interference in the region. Despite Finnish resistance, a peace treaty was signed in March 1940, wherein Finland ceded some key territory to the Soviet Union, including the Karelian Isthmus, containing Finland's second-largest city, Viipuri, and the critical defensive structure of the Mannerheim Line. After this war, Finland sought protection and support from the United Kingdom and neutral Sweden, but was thwarted by Soviet and German actions. This resulted in Finland being drawn closer to Germany, first with the intent of enlisting German support as a counterweight to thwart continuing Soviet pressure, and later to help regain lost territories.
In the opening days of Operation Barbarossa, Germany's invasion of the Soviet Union, Finland permitted German planes returning from mine-dropping runs over Kronstadt and the Neva River to refuel at Finnish airfields before returning to bases in East Prussia. In retaliation, the Soviet Union launched a major air offensive against Finnish airfields and towns, which resulted in a Finnish declaration of war against the Soviet Union on 25 June 1941. The Finnish conflict with the Soviet Union is generally referred to as the Continuation War. Finland's main objective was to regain territory lost to the Soviet Union in the Winter War. However, on 10 July 1941, Field Marshal Carl Gustaf Emil Mannerheim issued an Order of the Day that contained a formulation understood internationally as a Finnish territorial interest in Russian Karelia.

Diplomatic relations between the United Kingdom and Finland were severed on 1 August 1941, after the British bombed German forces in the Finnish village and port of Petsamo. The United Kingdom repeatedly called on Finland to cease its offensive against the Soviet Union, and declared war on Finland on 6 December 1941, although no other military operations followed. War was never declared between Finland and the United States, though relations between the two countries were severed in 1944 as a result of the Ryti-Ribbentrop Agreement.

Finland maintained command of its armed forces and pursued its war objectives independently of Germany. Germans and Finns did work closely together during Operation Silverfox, a joint offensive against Murmansk. Finland refused German requests to participate actively in the Siege of Leningrad, and also granted asylum to Jews, while Jewish soldiers continued to serve in its army. The relationship between Finland and Germany more closely resembled an alliance during the six weeks of the Ryti-Ribbentrop Agreement, which was presented as a German condition for help with munitions and air support, as the Soviet offensive coordinated with D-Day threatened Finland with complete occupation. The agreement, signed by President Risto Ryti but never ratified by the Finnish Parliament, bound Finland not to seek a separate peace. After the Soviet offensives were fought to a standstill, Ryti's successor as president, Marshal Mannerheim, dismissed the agreement and opened secret negotiations with the Soviets, which resulted in a ceasefire on 4 September and the Moscow Armistice on 19 September 1944. Under the terms of the armistice, Finland was obliged to expel German troops from Finnish territory, which resulted in the Lapland War. Finland signed a peace treaty with the Allied powers in 1947.

San Marino
San Marino, ruled by the Sammarinese Fascist Party (PFS) since 1923, was closely allied to Italy. On 17 September 1940, San Marino declared war on Britain; Britain did not reciprocate. A day earlier, San Marino had restored diplomatic relations with Germany, as it had not attended the 1919 Paris Peace Conference. San Marino's 1,000-man army remained garrisoned within the country. The war declaration was intended for propaganda purposes, to further isolate and demoralize Britain. Three days after the fall of Mussolini, the PFS was removed from power and the new government declared neutrality in the conflict. The Fascists regained power on 1 April 1944, but kept neutrality intact. On 26 June, the Royal Air Force accidentally bombed the country, killing 63 people. Germany used this tragedy in propaganda about Allied aggression against a neutral country.
Retreating Axis forces occupied San Marino on 17 September, but were forced out by the Allies in less than three days. The Allied occupation removed the Fascists from power, and San Marino declared war on Germany on 21 September. The newly elected government banned the Fascists on 16 November.

Iraq
Anti-British sentiments were widespread in Iraq prior to 1941. Seizing power on 1 April 1941, the nationalist government of Prime Minister Rashid Ali repudiated the Anglo-Iraqi Treaty of 1930 and demanded that the British abandon their military bases and withdraw from the country. Ali sought support from Germany and Italy in expelling British forces from Iraq. On 9 May 1941, Mohammad Amin al-Husayni, the Mufti of Jerusalem and an associate of Ali, declared holy war against the British and called on Arabs throughout the Middle East to rise up against British rule. On 25 May 1941, the Germans stepped up offensive operations in the Middle East. Hitler issued Order 30: "The Arab Freedom Movement in the Middle East is our natural ally against England. In this connection special importance is attached to the liberation of Iraq ... I have therefore decided to move forward in the Middle East by supporting Iraq." Hostilities between the Iraqi and British forces had begun on 2 May 1941, with heavy fighting at the RAF air base in Habbaniyah. The Germans and Italians dispatched aircraft and aircrew to Iraq utilizing Vichy French bases in Syria, which would later provoke fighting between Allied and Vichy French forces in Syria. The Germans planned to coordinate a combined German-Italian offensive against the British in Egypt, Palestine, and Iraq. Iraqi military resistance ended by 31 May 1941. Rashid Ali and the Mufti of Jerusalem fled to Iran, then Turkey, Italy, and finally Germany, where Ali was welcomed by Hitler as head of the Iraqi government-in-exile in Berlin. In propaganda broadcasts from Berlin, the Mufti continued to call on Arabs to rise up against the British and aid German and Italian forces. He also helped recruit Muslim volunteers in the Balkans for the Waffen-SS.

Client states
The Empire of Japan created a number of client states in the areas occupied by its military, beginning with the creation of Manchukuo in 1932. These puppet states achieved varying degrees of international recognition.

Manchukuo (Manchuria)
Manchukuo, in the northeast region of China, had been a Japanese puppet state in Manchuria since the 1930s. It was nominally ruled by Puyi, the last emperor of the Qing Dynasty, but was in fact controlled by the Japanese military, in particular the Kwantung Army. While Manchukuo ostensibly was a state for ethnic Manchus, the region had a Han Chinese majority. Following the Japanese invasion of Manchuria in 1931, the independence of Manchukuo was proclaimed on 18 February 1932, with Puyi as head of state. He was proclaimed the Emperor of Manchukuo a year later. The new Manchu nation was recognized by 23 of the League of Nations' 80 members. Germany, Italy, and the Soviet Union were among the major powers that recognised Manchukuo. Other countries that recognized the state were the Dominican Republic, Costa Rica, El Salvador, and Vatican City. Manchukuo was also recognised by the other Japanese allies and puppet states, including Mengjiang, the Burmese government of Ba Maw, Thailand, the Wang Jingwei regime, and the Indian government of Subhas Chandra Bose. The League of Nations later declared in 1934 that Manchuria lawfully remained a part of China.
This precipitated Japanese withdrawal from the League. The Manchukuoan state ceased to exist after the Soviet invasion of Manchuria in 1945.

Mengjiang (Inner Mongolia)
Mengjiang was a Japanese puppet state in Inner Mongolia. It was nominally ruled by Prince Demchugdongrub, a Mongol nobleman descended from Genghis Khan, but was in fact controlled by the Japanese military. Mengjiang's independence was proclaimed on 18 February 1936, following the Japanese occupation of the region. The Inner Mongolians had several grievances against the central Chinese government in Nanking, including its policy of allowing unlimited migration of Han Chinese to the region. Several of the young princes of Inner Mongolia began to agitate for greater freedom from the central government, and it was through these men that the Japanese saw their best chance of exploiting Pan-Mongol nationalism and eventually seizing control of Outer Mongolia from the Soviet Union. Japan created Mengjiang to exploit tensions between ethnic Mongolians and the central government of China, which in theory ruled Inner Mongolia. When the various puppet governments of China were unified under the Wang Jingwei government in March 1940, Mengjiang retained its separate identity as an autonomous federation. Although under the firm control of the Japanese Imperial Army, which occupied its territory, Prince Demchugdongrub had his own independent army. Mengjiang ceased to exist in 1945 following Japan's defeat in World War II. As Soviet forces advanced into Inner Mongolia, they met limited resistance from small detachments of Mongolian cavalry, which, like the rest of the army, were quickly overwhelmed.

Reorganized National Government of China
During the Second Sino-Japanese War, Japan advanced from its bases in Manchuria to occupy much of East and Central China. Several Japanese puppet states were organized in areas occupied by the Japanese Army, including the Provisional Government of the Republic of China at Beijing, which was formed in 1937, and the Reformed Government of the Republic of China at Nanjing, which was formed in 1938. These governments were merged into the Reorganized National Government of China at Nanjing on 29 March 1940. Wang Jingwei became head of state. The government was to be run along the same lines as the Nationalist regime and adopted its symbols. The Nanjing Government had no real power; its main role was to act as a propaganda tool for the Japanese. The Nanjing Government concluded agreements with Japan and Manchukuo, authorising Japanese occupation of China and recognising the independence of Manchukuo under Japanese protection. The Nanjing Government signed the Anti-Comintern Pact of 1941 and declared war on the United States and the United Kingdom on 9 January 1943. The government had a strained relationship with the Japanese from the beginning. Wang's insistence on his regime being the true Nationalist government of China and on replicating all the symbols of the Kuomintang led to frequent conflicts with the Japanese, the most prominent being the issue of the regime's flag, which was identical to that of the Republic of China. The worsening situation for Japan from 1943 onwards meant that the Nanking Army was given a more substantial role in the defence of occupied China than the Japanese had initially envisaged. The army was almost continuously employed against the communist New Fourth Army. Wang Jingwei died on 10 November 1944, and was succeeded by his deputy, Chen Gongbo.
Chen had little influence; the real power behind the regime was Zhou Fohai, the mayor of Shanghai. Wang's death dispelled what little legitimacy the regime had. The regime struggled on for another year, keeping up the outward display of a fascist state. On 9 September 1945, following the defeat of Japan, the area was surrendered to General He Yingqin, a nationalist general loyal to Chiang Kai-shek. The Nanking Army generals quickly declared their allegiance to the Generalissimo, and were subsequently ordered to resist Communist attempts to fill the vacuum left by the Japanese surrender. Chen Gongbo was tried and executed in 1946.

Philippines (Second Republic)

After the surrender of the Filipino and American forces on the Bataan Peninsula and Corregidor Island, the Japanese established a puppet state in the Philippines in 1942. The following year, the Philippine National Assembly declared the Philippines an independent republic and elected José Laurel as its president. There was never widespread civilian support for the state, largely because of the general anti-Japanese sentiment stemming from atrocities committed by the Imperial Japanese Army. The Second Philippine Republic ended with the Japanese surrender in 1945, and Laurel was arrested and charged with treason by the US government. He was granted amnesty by President Manuel Roxas, and remained active in politics, ultimately winning a seat in the post-war Senate.

India (Provisional Government of Free India)

The Arzi Hukumat-e-Azad Hind, the Provisional Government of Free India, was a government in exile led by Subhas Chandra Bose, an Indian nationalist who rejected Mohandas K. Gandhi's nonviolent methods for achieving independence. One of the most prominent leaders of the Indian independence movement of the time and a former president of the Indian National Congress, Bose was arrested by British authorities at the outset of the Second World War. In January 1941 he escaped from house arrest, eventually reaching Germany. In 1942 he arrived in Singapore, base of the Indian National Army, which was made up largely of Indian prisoners of war and Indian residents of Southeast Asia who joined on their own initiative. Bose and local leader A. M. Sahay received ideological support from Mitsuru Toyama, chief of the Dark Ocean Society, along with Japanese Army advisers. Other Indian thinkers in favour of the Axis cause were Asit Krishna Mukherji, a friend of Bose, and Mukherji's wife, Savitri Devi, a French writer who admired Hitler. Bose was helped by Rash Behari Bose, founder of the Indian Independence League in Japan. Bose declared India's independence on 21 October 1943. The Japanese Army assigned to the Indian National Army a number of military advisors, among them Hideo Iwakuro and Saburo Isoda. The provisional government formally controlled the Andaman and Nicobar Islands; the islands had fallen to the Japanese and were handed over by Japan in November 1943. The government created its own currency, postage stamps, and national anthem. The government would last two more years, until 18 August 1945, when it officially became defunct. During its existence it received recognition from nine governments: Germany, Japan, Italy, Croatia, Manchukuo, China (under the Nanking Government of Wang Jingwei), Thailand, Burma (under the regime of Burmese nationalist leader Ba Maw), and the Philippines under de facto (and later de jure) president José Laurel.
Vietnam (Empire of Vietnam)

The Empire of Vietnam was a short-lived Japanese puppet state that lasted from 11 March to 23 August 1945. When the Japanese seized control of French Indochina, they allowed Vichy French administrators to remain in nominal control. This French rule ended on 9 March 1945, when the Japanese officially took control of the government. Soon after, Emperor Bảo Đại voided the 1884 treaty with France and Trần Trọng Kim, a historian, became prime minister.

Cambodia

The Kingdom of Cambodia was a short-lived Japanese puppet state that lasted from 9 March 1945 to 15 August 1945. The Japanese entered Cambodia in mid-1941, but allowed Vichy French officials to remain in administrative posts. The Japanese calls for an "Asia for the Asiatics" won over many Cambodian nationalists. This policy changed during the last months of the war. The Japanese wanted to gain local support, so they dissolved French colonial rule and pressured Cambodia to declare its independence within the Greater East Asia Co-Prosperity Sphere. Four days later, King Sihanouk declared Kampuchea (the original Khmer pronunciation of Cambodia) independent. Co-editor of the Nagaravatta, Son Ngoc Thanh, returned from Tokyo in May and was appointed foreign minister. On the date of the Japanese surrender, a new government was proclaimed with Son Ngoc Thanh as prime minister. When the Allies occupied Phnom Penh in October, Son Ngoc Thanh was arrested for collaborating with the Japanese and was exiled to France. Some of his supporters went to northwestern Cambodia, which had been under Thai control since the French-Thai War of 1940, where they banded together as one faction in the Khmer Issarak movement, originally formed with Thai encouragement in the 1940s.

Laos

Fears of Thai irredentism led to the formation of the first Lao nationalist organization, the Movement for National Renovation, in January 1941. The group was led by Prince Phetxarāt and supported by local French officials, though not by the Vichy authorities in Hanoi. This group wrote the current Lao national anthem and designed the current Lao flag, while paradoxically pledging support for France. The country declared its independence in 1945. The liberation of France in 1944, bringing Charles de Gaulle to power, meant the end of the alliance between Japan and the Vichy French administration in Indochina. The Japanese had no intention of allowing the Gaullists to take over, and in late 1944 they staged a military coup in Hanoi. Some French units fled over the mountains to Laos, pursued by the Japanese, who occupied Viang Chan in March 1945 and Luang Phrabāng in April. King Sīsavāngvong was detained by the Japanese, but his son Crown Prince Savāngvatthanā called on all Lao to assist the French, and many Lao died fighting against the Japanese occupiers. Prince Phetxarāt opposed this position. He thought that Lao independence could be gained by siding with the Japanese, who made him Prime Minister of Luang Phrabāng, though not of Laos as a whole. The country was in chaos, and Phetxarāt's government had no real authority. Another Lao group, the Lao Sēri (Free Lao), received unofficial support from the Free Thai movement in the Isan region.

Burma (Ba Maw regime)

The Japanese Army and Burmese nationalists, led by Aung San, seized control of Burma from the United Kingdom during 1942. A State of Burma was formed on 1 August 1943 under the Burmese nationalist leader Ba Maw.
The Ba Maw regime established the Burma Defence Army (later renamed the Burma National Army), which was commanded by Aung San.

Italy occupied several nations and set up client states in those regions to carry out administrative tasks and maintain order.

Albania (under Italian control)

Politically and economically dominated by Italy from its creation in 1913, Albania was occupied by Italian military forces in 1939 as King Zog fled the country with his family. The Albanian parliament voted to offer the Albanian throne to the King of Italy, resulting in a personal union between the two countries. The Albanian army, having been trained by Italian advisors, was reinforced by 100,000 Italian troops. A Fascist militia was organized, drawing its strength principally from Albanians of Italian descent. Albania served as the staging area for the Italian invasions of Greece and Yugoslavia. Albania annexed Kosovo in 1941 when Yugoslavia was dissolved, achieving the dream of a Greater Albania. Albanian troops were dispatched to the Eastern Front to fight the Soviets as part of the Italian Eighth Army. Albania declared war on the United States in 1941.

Montenegro

Montenegro, a former kingdom which was merged into Serbia to form Yugoslavia after the First World War, had long-standing ties to Italy. When Yugoslavia came under Axis occupation, Montenegrin nationalists jumped at the opportunity to create a new Montenegro. Sekula Drljević and the core of the Montenegrin Federalist Party formed the Provisional Administrative Committee of Montenegro on 12 July 1941, and at the Saint Peter's Congress proclaimed the "Kingdom of Montenegro" under the protection of Italy. The country served Italy as part of its goal of fragmenting the former Kingdom of Yugoslavia and expanding the Italian Empire throughout the Adriatic. The country was caught up in the rebellion of the Yugoslav Army in the Fatherland, and Drljević was expelled from Montenegro in October 1941. The country came under direct Italian control. With the Italian capitulation of 1943, Montenegro came directly under the control of Nazi Germany. In 1944 Drljević formed a pro-Ustaše Montenegrin State Council in exile, based in the Independent State of Croatia, with the aim of restoring rule over Montenegro. The Montenegrin People's Army was formed out of various Montenegrin nationalist troops. By then the Partisans had already liberated most of Montenegro, which became a federal state of the new Democratic Federal Yugoslavia. Montenegro endured intense air bombing by the Allied air forces in 1944.

Monaco

The Principality of Monaco was officially neutral during the war. The population of the country was largely of Italian descent and sympathized with Italy. Its prince was a close friend of the Vichy French leader, Marshal Philippe Pétain, an Axis collaborator. A fascist regime was established under the nominal rule of the prince when the Italian Fourth Army occupied the country on 10 November 1942 as part of Case Anton. Monaco's military forces, consisting primarily of police and palace guards, collaborated with the Italians during the occupation. German troops occupied Monaco in 1943, and Monaco was liberated by Allied forces in 1944.

The collaborationist administrations of German-occupied countries in Europe had varying degrees of autonomy, and not all of them qualified as fully recognized sovereign states. The General Government in occupied Poland did not qualify as a legitimate Polish government and was essentially a German administration.
In occupied Norway, the National Government headed by Vidkun Quisling – whose name came to symbolize pro-Axis collaboration in several languages – was subordinate to the Reichskommissariat Norwegen. It was never allowed to have any armed forces, be a recognized military partner, or have autonomy of any kind. In the occupied Netherlands, Anton Mussert was given the symbolic title of "Führer of the Netherlands' people". His National Socialist Movement formed a cabinet assisting the German administration, but was never recognized as a real Dutch government.

Slovakia (Tiso regime)

Slovakia had been closely aligned with Germany almost immediately from its declaration of independence from Czechoslovakia on 14 March 1939. Slovakia entered into a treaty of protection with Germany on 23 March 1939. Slovak troops joined the German invasion of Poland, having an interest in Spiš and Orava. Those two regions, along with Cieszyn Silesia, had been disputed between Poland and Czechoslovakia since 1918. The Poles fully annexed them following the Munich Agreement. After the invasion of Poland, Slovakia reclaimed control of those territories. Slovakia invaded Poland alongside German forces, contributing 50,000 men at this stage of the war. Slovakia declared war on the Soviet Union in 1941 and signed the revived Anti-Comintern Pact of 1941. Slovak troops fought on Germany's Eastern Front, furnishing Germany with two divisions totaling 80,000 men. Slovakia declared war on the United Kingdom and the United States in 1942. Slovakia was spared German military occupation until the Slovak National Uprising, which began on 29 August 1944 and was almost immediately crushed by the Waffen-SS and Slovak troops loyal to Jozef Tiso. After the war, Tiso was executed and Slovakia was rejoined with Czechoslovakia. The border with Poland was shifted back to its pre-war state. Slovakia and the Czech Republic finally separated into independent states in 1993.

Croatia (Independent State of Croatia)

On 10 April 1941, the Independent State of Croatia (Nezavisna Država Hrvatska, or NDH) was declared to be a member of the Axis, co-signing the Tripartite Pact. The NDH remained a member of the Axis until the end of the Second World War, its forces fighting for Germany even after the NDH had been overrun by the Yugoslav Partisans. On 16 April 1941, Ante Pavelić, a Croatian nationalist and one of the founders of the Ustaša – Croatian Liberation Movement, was proclaimed Poglavnik (leader) of the new regime. Initially the Ustaše had been heavily influenced by Italy; the movement was actively supported by Mussolini's Fascist regime, which gave it training grounds to prepare for war against Yugoslavia, as well as accepting Pavelić as an exile and allowing him to reside in Rome. Italy intended to use the movement to destroy Yugoslavia, which would allow Italy to expand its power through the Adriatic. Hitler did not want to engage in a war in the Balkans until the Soviet Union was defeated. The Italian occupation of Greece was not going well; Mussolini wanted Germany to invade Yugoslavia to save the Italian forces in Greece. Hitler reluctantly agreed; Yugoslavia was invaded and the Independent State of Croatia was created. Pavelić led a delegation to Rome and offered the crown of Croatia to an Italian prince of the House of Savoy, who was crowned Tomislav II, King of Croatia, Prince of Bosnia and Herzegovina, Voivode of Dalmatia, Tuzla and Knin, Prince of Cisterna and of Belriguardo, Marquess of Voghera, and Count of Ponderano.
The next day, Pavelić signed the Contracts of Rome with Mussolini, ceding Dalmatia to Italy and fixing the permanent borders between the NDH and Italy. Italian armed forces were allowed to control all of the coastline of the NDH, effectively giving Italy total control of the Adriatic coastline. However, German influence grew strong soon after the NDH was founded. After the King of Italy ousted Mussolini from power and Italy capitulated, the NDH came completely under German influence. The platform of the Ustaše movement proclaimed that Croatians had been oppressed by the Serb-dominated Kingdom of Yugoslavia, and that Croatians deserved to have an independent nation after years of domination by foreign empires. The Ustaše perceived Serbs to be racially inferior to Croats and saw them as infiltrators who were occupying Croatian lands. They saw the extermination of Serbs as necessary to racially purify Croatia. While Croatia was part of Yugoslavia, many Croatian nationalists violently opposed the Serb-dominated Yugoslav monarchy, and, together with the Internal Macedonian Revolutionary Organization, assassinated Alexander I of Yugoslavia. The regime enjoyed support amongst radical Croatian nationalists. Ustaše forces fought against Serbian Chetnik and communist Yugoslav Partisan guerrillas throughout the war. Upon coming to power, Pavelić formed the Croatian Home Guard (Hrvatsko domobranstvo) as the official military force of the NDH. Originally authorized at 16,000 men, it grew to a peak fighting force of 130,000. The Croatian Home Guard included an air force and navy, although its navy was restricted in size by the Contracts of Rome. In addition to the Croatian Home Guard, Pavelić was also the supreme commander of the Ustaše militia, although all NDH military units were generally under the command of the German or Italian formations in their area of operations. Many Croats volunteered for the German Waffen-SS. The Ustaše government declared war on the Soviet Union, signed the Anti-Comintern Pact of 1941, and sent troops to Germany's Eastern Front. Ustaše militia were garrisoned in the Balkans, battling the Chetniks and communist Partisans. The Ustaše government applied racial laws to Serbs, Jews, and Roma, and after June 1941 deported them to the Jasenovac concentration camp or to German camps in Poland. The racial laws were enforced by the Ustaše militia. The exact number of victims of the Ustaše regime is uncertain due to the destruction of documents and the varying figures given by historians; estimates range from 56,000–97,000 to 700,000 or more. The Ustaše never had widespread support among the population of the NDH. Their own estimates put the number of sympathizers, even in the early phase, at around 40,000 out of a total population of 7 million. However, they were able to rely on the passive acceptance of much of the Croat population of the NDH.

Serbia (Government of National Salvation)

In April 1941 Germany invaded and occupied Yugoslavia. On 30 April a pro-German Serbian administration was formed under Milan Aćimović. Forces loyal to the Yugoslav government-in-exile organized a resistance movement on Ravna Gora mountain on 13 May 1941, under the command of Colonel Dragoljub "Draža" Mihailović. In 1941, after the invasion of the Soviet Union, a guerrilla campaign against the Germans and Italians was also launched by the communist Partisans under Josip Broz Tito. The uprising became a serious concern for the Germans, as most of their forces were deployed to Russia; only three divisions were in the country.
On 13 August, 546 Serbs, including some of the country's most prominent and influential leaders, issued an appeal to the Serbian nation that condemned the Partisan and royalist resistance as unpatriotic. Two weeks after the appeal, with the Partisan and royalist insurgency beginning to gain momentum, 75 prominent Serbs convened a meeting in Belgrade and formed a Government of National Salvation under Serbian General Milan Nedić to replace the existing Serbian administration. Nedić, a former Yugoslav army general and minister of defence, agreed to take the position of prime minister only after the Germans let him know that otherwise the rest of Serbia would be divided between the Independent State of Croatia, Bulgaria, Hungary, and Greater Albania. On 29 August the German authorities installed General Nedić and his government in power. The Germans were short of police and military forces in Serbia, and came to rely on poorly armed Serbian formations to maintain order. Those forces were not able to defeat the royalist forces, and for most of the war large parts of Serbia were under the control of the Yugoslav Army in the Fatherland. Much of the administration aided the resistance movement, and some officials, such as Colonel Milan Kalabić of the Serbian State Guard, were shot by the Gestapo as British agents and supporters of the royalist forces. Because of the large-scale resistance taking place on Serbian soil, Germany imposed a brutal regime of reprisals, shooting 100 Serbs for every German soldier killed and 50 for every German soldier wounded. Large-scale shootings took place in the Serbian towns of Kraljevo and Kragujevac during October 1941. Nedić's forces included the Serbian State Guard and the Serbian Volunteer Corps, which initially consisted largely of members of the Yugoslav National Movement "Zbor" (Jugoslovenski narodni pokret "Zbor", or ZBOR) party. Some of these formations wore the uniform of the Royal Yugoslav Army as well as helmets and uniforms purchased from Italy, while others were equipped by Germany, mostly with obsolete equipment taken from occupied European states such as Belgium. German forces conducted mass killings of the Serbian Jews, who mostly lived in Belgrade and Šabac. By the spring of 1942 most of the Serbian Jews had been killed by the SS, SD, and Gestapo in the Sajmište concentration camp (on the territory of the Independent State of Croatia) and at Jajinci near Belgrade. By June 1942 the Germans proclaimed Belgrade to be Judenfrei.

Italy (Italian Social Republic)

Mussolini had been removed from office and arrested by King Victor Emmanuel III on 25 July 1943. After the Italian armistice, in a raid led by German paratrooper Otto Skorzeny, Mussolini was rescued from arrest. Once restored to power, Mussolini declared that Italy was a republic and that he was the new head of state. He was subject to German control for the duration of the war.

Albania (under German control)

After the Italian armistice, a void of power opened up in Albania. The Italian occupying forces could do nothing, as the National Liberation Movement took control of the south and the National Front (Balli Kombëtar) took control of the north. Albanians in the Italian army joined the guerrilla forces. In September 1943 the guerrillas moved to take the capital of Tirana, but German paratroopers dropped into the city. Soon after the fighting, the German High Command announced that it would recognize the independence of a greater Albania. The Germans organized an Albanian government, police, and military together with the Balli Kombëtar.
The Germans did not exert heavy control over Albania's administration, but instead attempted to gain popular appeal by giving the Albanians what they wanted. Several Balli Kombëtar leaders held positions in the regime. The joint forces incorporated Kosovo, western Macedonia, southern Montenegro, and Preševo into the Albanian state. A High Council of Regency was created to carry out the functions of a head of state, while the government was headed mainly by Albanian conservative politicians. Albania was the only European country occupied by the Axis powers that ended World War II with a larger Jewish population than before the war. The Albanian government had refused to hand over its Jewish population. It provided Jewish families with forged documents and helped them disperse among the Albanian population. Albania was completely liberated on 29 November 1944.

Hungary (Szálasi regime)

Relations between Germany and the regency of Miklós Horthy collapsed in Hungary in 1944. Horthy was forced to abdicate after German armed forces held his son hostage as part of Operation Panzerfaust. Hungary was reorganized following Horthy's abdication in December 1944 into a totalitarian fascist regime called the Government of National Unity, led by Ferenc Szálasi. He had been Prime Minister of Hungary since October 1944 and was leader of the anti-Semitic fascist Arrow Cross Party. In power, his government was a Quisling regime with little authority other than to obey Germany's orders. Days after the government took power, the capital of Budapest was surrounded by the Soviet Red Army. German and fascist Hungarian forces tried to hold off the Soviet advance but failed. In March 1945, Szálasi fled to Germany to run the state in exile, until the surrender of Germany in May 1945.

Macedonia

Ivan Mihailov, leader of the Internal Macedonian Revolutionary Organization (IMRO), wanted to solve the Macedonian Question by creating a pro-Bulgarian state on the territory of the region of Macedonia in the Kingdom of Yugoslavia. Romania left the Axis and declared war on Germany on 23 August 1944, and the Soviets declared war on Bulgaria on 5 September. While these events were taking place, Mihailov came out of hiding in the Independent State of Croatia and traveled to re-occupied Skopje. The Germans gave Mihailov the green light to create a Macedonian state. Negotiations were undertaken with the Bulgarian government. Contact was made with Hristo Tatarchev in Resen, who offered Mihailov the presidency. Bulgaria switched sides on 8 September, and on the 9th the Fatherland Front staged a coup and deposed the monarchy. Mihailov refused the leadership and fled to Italy. Spiro Kitanchev took Mihailov's place and became Premier of Macedonia. He cooperated with the pro-Bulgarian authorities, the Wehrmacht, the Bulgarian Army, and the Yugoslav Partisans for the rest of September and October. In the middle of November, the communists won control over the region.

Joint German-Italian puppet states

Greece (Hellenic State)

Following the German invasion of Greece and the flight of the Greek government to Crete and then Egypt, the Hellenic State was formed in May 1941 as a puppet state of both Italy and Germany. Initially, Italy had wished to annex Greece, but was pressured by Germany to avoid civil unrest such as had occurred in Bulgarian-annexed areas. The result was that Italy accepted the creation of a puppet regime with the support of Germany. Italy had been assured by Hitler of a primary role in Greece.
Most of the country was held by Italian forces, but strategic locations (Central Macedonia, the islands of the northeastern Aegean, most of Crete, and parts of Attica) were held by the Germans, who seized most of the country's economic assets and effectively controlled the collaborationist government. The puppet regime never commanded any real authority, and did not gain the allegiance of the people. It was somewhat successful in preventing secessionist movements like the Principality of the Pindus from establishing themselves. By mid-1943, the Greek Resistance had liberated large parts of the mountainous interior ("Free Greece"), setting up a separate administration there. After the Italian armistice, the Italian occupation zone was taken over by the German armed forces, who remained in charge of the country until their withdrawal in autumn 1944. In some Aegean islands, German garrisons were left behind, and surrendered only after the end of the war.

Controversial cases

States listed in this section were not officially members of the Axis, but at some point during the war cooperated with one or more Axis members to a degree that makes their neutrality disputable.

Denmark

On 31 May 1939, Denmark and Germany signed a treaty of non-aggression, which did not contain any military obligations for either party. On 9 April 1940, citing the intended laying of mines in Norwegian and Danish waters as a pretext, Germany invaded both countries. King Christian X and the Danish government, worried about German bombings if they resisted occupation, accepted "protection by the Reich" in exchange for nominal independence under German military occupation. Three successive Prime Ministers, Thorvald Stauning, Vilhelm Buhl, and Erik Scavenius, maintained this samarbejdspolitik ("cooperation policy") of collaborating with Germany. Denmark coordinated its foreign policy with Germany, extending diplomatic recognition to Axis collaborator and puppet regimes and breaking diplomatic relations with the governments-in-exile formed by countries occupied by Germany. Denmark broke diplomatic relations with the Soviet Union and signed the Anti-Comintern Pact of 1941. In 1941 a Danish military corps, the Frikorps Danmark, was created at the initiative of the SS and the Danish Nazi Party to fight alongside the Wehrmacht on Germany's Eastern Front. The government's subsequent statement was widely interpreted as sanctioning the corps. Frikorps Danmark was open to members of the Royal Danish Army and to those who had completed their service within the last ten years. Between 4,000 and 10,000 Danish citizens joined the Frikorps Danmark, including 77 officers of the Royal Danish Army. An estimated 3,900 of these soldiers died fighting for Germany during the Second World War. Denmark transferred six torpedo boats to Germany in 1941, although the bulk of its navy remained under Danish command until the declaration of martial law in 1943. Denmark supplied agricultural and industrial products to Germany, as well as loans for armaments and fortifications. The German presence in Denmark, including the construction of the Danish part of the Atlantic Wall fortifications, was paid for from an account in Denmark's central bank, Nationalbanken. The Danish government had been promised that these costs would be repaid, but this never happened. The construction of the Atlantic Wall fortifications in Jutland cost 5 billion Danish kroner.
The Danish protectorate government lasted until 29 August 1943, when the cabinet resigned following a declaration of martial law by the occupying German military officials. From then on, Denmark effectively aligned itself with the Allies. German forces attacked Danish military bases, and 13 Danish soldiers were killed in the fighting. The Danish navy scuttled 32 of its larger ships to prevent their use by Germany. Germany seized 14 larger and 50 smaller vessels, and later raised and refitted 15 of the sunken vessels. During the scuttling of the Danish fleet, a number of vessels attempted an escape to Swedish waters, and 13 vessels succeeded, four of which were larger ships. By the autumn of 1944, these ships officially formed a Danish naval flotilla in exile. In 1943 Swedish authorities allowed 500 Danish soldiers in Sweden to train as police troops. By the autumn of 1944, Sweden raised this number to 4,800 and recognized the entire unit as a Danish military brigade in exile. Danish collaboration continued on an administrative level, with the Danish bureaucracy functioning under German command. Active resistance to the German occupation among the populace, virtually nonexistent before 1943, increased after the declaration of martial law. The intelligence operations of the Danish resistance were described as "second to none" by Field Marshal Bernard Law Montgomery after the liberation of Denmark.

France (Vichy government)

The German army entered Paris on 14 June 1940, during the Battle of France. Pétain became the last Prime Minister of the French Third Republic on 16 June 1940. He sued for peace with Germany, and on 22 June 1940 his government concluded an armistice with Hitler. Under the terms of the agreement, Germany occupied two-thirds of France, including Paris. Pétain was permitted to keep an "armistice army" of 100,000 men within the unoccupied southern zone. This number included neither the army based in the French colonial empire nor the French fleet. In French North Africa and French Equatorial Africa, the Vichy regime was permitted to maintain 127,000 men under arms after the colony of Gabon defected to the Free French. The French also maintained substantial garrisons in the French-mandated territory of Syria and Lebanon, the French colony of Madagascar, and in Djibouti. After the armistice, relations between the Vichy French and the British quickly deteriorated. Fearful that the powerful French fleet might fall into German hands, the British launched several naval attacks, the most notable of which was against the Algerian harbour of Mers el-Kebir on 3 July 1940. Though Churchill defended his controversial decision to attack the French fleet, the French people were less accepting. German propaganda trumpeted these attacks as an absolute betrayal of the French people by their former allies. France broke relations with the United Kingdom and considered declaring war. On 10 July 1940, Pétain was given emergency "full powers" by a majority vote of the French National Assembly. The following day, the Assembly's approval of the new constitution effectively created the French State (l'État Français), replacing the French Republic with a regime unofficially known as Vichy France, after the resort town of Vichy, where Pétain maintained his seat of government. The new government continued to be recognised as the lawful government of France by the United States until 1942. Racial laws were introduced in France and its colonies, and many French Jews were deported to Germany.
Albert Lebrun, the last President of the Republic, did not resign the presidential office when he moved to Vizille on 10 July 1940. On 25 April 1945, during Pétain's trial, Lebrun argued that he had thought he would be able to return to power after the fall of Germany, since he had not resigned. In September 1940, Vichy France allowed Japan to occupy French Indochina, a federation of French colonial possessions and protectorates roughly encompassing the territory of modern-day Vietnam, Laos, and Cambodia. The Vichy regime continued to administer the colony under Japanese military occupation. French Indochina was the base for the Japanese invasions of Thailand, Malaya, and Borneo. In 1945, under Japanese sponsorship, the Empire of Vietnam and the Kingdom of Cambodia were proclaimed as Japanese puppet states. French General Charles de Gaulle headquartered his Free French movement in London in a largely unsuccessful effort to win over the French colonial empire. On 26 September 1940, de Gaulle led an attack by Allied forces on the Vichy port of Dakar in French West Africa. Forces loyal to Pétain fired on de Gaulle and repulsed the attack after two days of heavy fighting. Public opinion in Vichy France was further outraged, and Vichy France drew closer to Germany. Vichy France assisted Iraq in the Anglo-Iraqi War of 1941, allowing Germany and Italy to utilize air bases in the French mandate of Syria to support the Iraqi revolt against the British. Allied forces responded by attacking Syria and Lebanon in 1941. In 1942 Allied forces attacked the French colony of Madagascar. There were considerable anti-communist movements in France, and as a result volunteers joined the German forces in their war against the Soviet Union. Almost 7,000 volunteers joined the anti-communist Légion des Volontaires Français (LVF) from 1941 to 1944, and some 7,500 formed the Division Charlemagne, a Waffen-SS unit, from 1944 to 1945. Both the LVF and the Division Charlemagne fought on the Eastern Front. Hitler never accepted that France could become a full military partner, and constantly prevented the buildup of Vichy's military strength. Vichy's collaboration with Germany was industrial as well as political, with French factories providing many vehicles to the German armed forces. In November 1942 Vichy French troops briefly but fiercely resisted the landing of Allied troops in French North Africa, but were unable to prevail. Admiral François Darlan negotiated a local ceasefire with the Allies. In response to the landings and Vichy's inability to defend itself, German troops occupied southern France and Tunisia, a French protectorate that formed part of French North Africa. The rump French army in mainland France was disbanded by the Germans. The Bey of Tunis formed a government friendly to the Germans. In mid-1943, former Vichy authorities in North Africa came to an agreement with the Free French and set up a temporary French government in Algiers, known as the French Committee of National Liberation (Comité Français de Libération Nationale, CFLN), initially led by Darlan. After Darlan's assassination, de Gaulle emerged as the French leader. The CFLN raised more troops and re-organized, re-trained, and re-equipped the French military under Allied supervision. While deprived of armed forces, the Vichy government continued to function in mainland France until the summer of 1944, but it had lost most of its territorial sovereignty and military assets, with the exception of the forces stationed in French Indochina.
In 1943 it founded the Milice, a paramilitary force which assisted the Germans in rounding up opponents and Jews, as well as fighting the French Resistance.

Soviet Union

Relations between the Soviet Union and the major Axis powers were generally hostile before 1938. In the Spanish Civil War, the Soviet Union gave military aid to the Second Spanish Republic against the Spanish Nationalist forces, which were assisted by Germany and Italy. However, the Nationalist forces were victorious. The Soviets suffered another political defeat when their ally Czechoslovakia was partitioned and partially annexed by Germany and Hungary via the Munich Agreement. In 1938 and 1939, the USSR fought and defeated Japan in two separate border wars, at Lake Khasan and Khalkhin Gol, the latter being a major Soviet victory. In 1939 the Soviet Union considered forming an alliance with either Britain and France or with Germany. The Molotov-Ribbentrop Pact of August 1939 between the Soviet Union and Germany included a secret protocol whereby the independent countries of Finland, Estonia, Latvia, Lithuania, Poland, and Romania were divided into spheres of interest of the parties. The Soviet Union had been forced to cede Western Belarus and Western Ukraine to Poland after losing the Soviet-Polish War of 1919–1921, and the Soviet Union sought to regain those territories. On 1 September, barely a week after the pact had been signed, Germany invaded Poland. The Soviet Union invaded Poland from the east on 17 September and on 28 September signed a secret treaty with Nazi Germany to coordinate the fight against Polish resistance. The Soviets targeted the intelligentsia, entrepreneurs, and officers, committing a string of atrocities that culminated in the Katyn massacre and mass deportations to Gulag camps in Siberia. Soon after that, the Soviet Union occupied the Baltic countries of Estonia, Latvia, and Lithuania, and annexed Bessarabia and Northern Bukovina from Romania. The Soviet Union attacked Finland on 30 November 1939, which started the Winter War. Finnish defences prevented an all-out invasion, resulting in an interim peace, but Finland was forced to cede strategically important border areas near Leningrad. The Soviet Union supported Germany in the war effort against Western Europe through the 1939 German-Soviet Commercial Agreement and the 1940 German-Soviet Commercial Agreement, with exports of raw materials (phosphates, chromium and iron ore, mineral oil, grain, cotton, and rubber). These and other export goods transported through Soviet and occupied Polish territories allowed Germany to circumvent the British naval blockade. In October and November 1940, German-Soviet talks about the Soviet Union potentially joining the Axis took place in Berlin. Joseph Stalin personally countered with a separate proposal in a letter in late November that contained several secret protocols, including that "the area south of Batum and Baku in the general direction of the Persian Gulf is recognized as the center of aspirations of the Soviet Union", referring to an area approximating present-day Iraq and Iran, and a Soviet claim to Bulgaria. Hitler never responded to Stalin's letter. Shortly thereafter, Hitler issued a secret directive on the eventual invasion of the Soviet Union. Germany then revived its Anti-Comintern Pact, enlisting many European and Asian countries in opposition to the Soviet Union.
The Soviet Union and Japan remained neutral towards each other for most of the war under the Soviet-Japanese Neutrality Pact. The Soviet Union ended the Soviet-Japanese Neutrality Pact by invading Manchukuo on 8 August 1945, in accordance with the agreements reached at the Yalta Conference with Roosevelt and Churchill.

Spain

Caudillo Francisco Franco's Spanish State gave moral, economic, and military assistance to the Axis powers, while nominally maintaining neutrality. Franco described Spain as a member of the Axis and signed the Anti-Comintern Pact of 1941 with Hitler and Mussolini. Members of the ruling Falange party in Spain held irredentist designs on Gibraltar. Falangists also supported Spanish colonial acquisition of Tangier, French Morocco, and northwestern French Algeria. Spain also held ambitions on former Spanish colonies in Latin America. In June 1940 the Spanish government approached Germany to propose an alliance in exchange for Germany recognizing Spain's territorial aims: the annexation of the Oran province of Algeria, the incorporation of all Morocco, the extension of Spanish Sahara southward to the twentieth parallel, and the incorporation of French Cameroons into Spanish Guinea. In 1940 Spain invaded and occupied the Tangier International Zone, maintaining its occupation until 1945. The occupation caused a dispute between Britain and Spain in November 1940; Spain agreed to protect British rights in the area and promised not to fortify it. The Spanish government secretly held expansionist plans towards Portugal that it made known to the German government. In a communiqué with Germany on 26 May 1942, Franco declared that Portugal should be annexed into Spain. Franco had won the Spanish Civil War with the help of Nazi Germany and Fascist Italy, which were both eager to establish another fascist state in Europe. Spain owed Germany over $212 million for supplies of matériel during the Spanish Civil War, and Italian combat troops had actually fought in Spain on the side of Franco's Nationalists. From 1940 to 1941, Franco had endorsed a Latin Bloc of Italy, Vichy France, Spain, and Portugal, with support from the Vatican, in order to balance those countries' power against that of Germany. Franco discussed the Latin Bloc alliance with Pétain of Vichy France in Montpellier, France, in 1940, and with Mussolini in Bordighera, Italy. When Germany invaded the Soviet Union in 1941, Franco immediately offered to form a unit of military volunteers to join the invasion. This was accepted by Hitler and, within two weeks, there were more than enough volunteers to form a division – the Blue Division (División Azul) under General Agustín Muñoz Grandes. The possibility of Spanish intervention in World War II was of concern to the United States, which investigated the activities of Spain's ruling Falange party in Latin America, especially Puerto Rico, where pro-Falange and pro-Franco sentiment was high, even amongst the ruling upper classes. The Falangists promoted the idea of supporting Spain's former colonies in fighting against American domination. Prior to the outbreak of war, support for Franco and the Falange was high in the Philippines. The Falange Exterior, the international department of the Falange, collaborated with Japanese forces against US forces in the Philippines.

Sweden

The official policy of Sweden before, during, and after World War II was neutrality. It had held this policy for over a century, since the end of the Napoleonic Wars.
However, Swedish neutrality during World War II has been much debated and challenged. In contrast to many other neutral countries, Sweden was not directly attacked during the war. It was subject to British and Nazi German naval blockades, which led to problems with the supply of food and fuels. From spring 1940 to summer 1941 Sweden and Finland were surrounded by Nazi Germany and the Soviet Union. This made it difficult to maintain the rights and duties of a neutral state under the Hague Convention. Sweden violated its neutrality obligations, as German troops were allowed to travel through Swedish territory between July 1940 and August 1943. Although this was allowed by the Hague Convention, Sweden has been criticized for exporting iron ore to Nazi Germany via the Baltic and the Norwegian port of Narvik. German dependence on Swedish iron ore shipments was the primary reason for Great Britain to launch Operation Wilfred and, together with France, the Norwegian Campaign in early April 1940. By early June 1940, the Norwegian Campaign stood as a failure for the Allies. Nazi Germany could obtain the Swedish iron ore supply it needed for war production, despite the British naval blockade, by forcibly securing access to Norwegian ports.

Yugoslavia

On 25 March 1941, fearing that Yugoslavia would be invaded otherwise, Prince Paul signed the Tripartite Pact with significant reservations. Unlike the other Axis powers, Yugoslavia was not obligated to provide military assistance, nor to provide its territory for the Axis to move military forces through during the war. Yugoslavia's inclusion in the Axis was not universally welcomed; Italy did not want Yugoslavia to be a partner in the Axis alliance because Italy had territorial claims on Yugoslavia. Germany, on the other hand, initially wanted Yugoslavia to participate in Germany's then-planned Operation Marita in Greece by providing military access for German forces to travel from Germany through Yugoslavia to Greece. Two days after Yugoslavia signed the pact, following demonstrations in the streets of Belgrade, Prince Paul was removed from office by a coup d'état. Seventeen-year-old Prince Peter was proclaimed to be of age and was declared king, though he was neither crowned nor anointed (a custom of the Serbian Orthodox Church). The new Yugoslavian government under King Peter II, still fearful of invasion, stated that it would remain bound by the Tripartite Pact. Hitler, however, suspected that the British were behind the coup against Prince Paul and vowed to invade the country. The German invasion began on 6 April 1941. The Royal Yugoslav Army was thoroughly defeated in less than two weeks, and an unconditional surrender was signed in Belgrade on 17 April. King Peter II and much of the Yugoslavian government had left the country because they did not want to cooperate with the Axis. While Yugoslavia was no longer capable of being a member of the Axis, several Axis-aligned puppet states emerged after the kingdom was dissolved. Local governments were set up in Serbia, Croatia, and Montenegro. The remainder of Yugoslavia was divided among the other Axis powers. Germany annexed parts of the Drava Banovina. Italy annexed south-western Drava Banovina and coastal parts of Croatia (Dalmatia and the islands), and attached Kosovo to Albania (occupied since 1939). Hungary annexed several border territories of Vojvodina and Baranja. Bulgaria annexed Macedonia and parts of southern Serbia.
German, Japanese and Italian World War II cooperation

Germany's and Italy's declaration of war against the United States

On 7 December 1941, Japan attacked the naval base at Pearl Harbor, Hawaii. According to the stipulations of the Tripartite Pact, Nazi Germany was required to come to the defense of her allies only if they were attacked. Since Japan had made the first move, Germany and Italy were not obliged to aid her until the United States counterattacked. Nevertheless, Hitler ordered the Reichstag to formally declare war on the United States. Italy also declared war. Hitler made a speech in the Reichstag on 11 December, saying:

The fact that the Japanese Government, which has been negotiating for years with this man (President Roosevelt), has at last become tired of being mocked by him in such an unworthy way, fills us all, the German people, and all other decent people in the world, with deep satisfaction ... Germany and Italy have been finally compelled, in view of this, and in loyalty to the Tri-Partite Pact, to carry on the struggle against the U. S. A. and England jointly and side by side with Japan for the defense and thus for the maintenance of the liberty and independence of their nations and empires ... As a consequence of the further extension of President Roosevelt's policy, which is aimed at unrestricted world domination and dictatorship, the U. S. A. together with England have not hesitated from using any means to dispute the rights of the German, Italian and Japanese nations to the basis of their natural existence ... Not only because we are the ally of Japan, but also because Germany and Italy have enough insight and strength to comprehend that, in these historic times, the existence or non-existence of the nations, is being decided perhaps forever.

Historian Ian Kershaw suggests that this declaration of war against the United States was one of the most disastrous mistakes made by the Axis powers, as it allowed the United States to join the United Kingdom and the Soviet Union in the war against Germany without any limitation. The Americans played a key role in the strategic bombing of Germany and the invasion of the continent, ending German domination in Western Europe. The Germans were aware that the Americans had drawn up a series of war plans based on a plethora of scenarios, and expected war with the United States no later than 1943. Hitler later remarked: "You gave the right declaration of war. This method is the only proper one. Japan pursued it formerly and it corresponds with his own system, that is, to negotiate as long as possible. But if one sees that the other is interested only in putting one off, in shaming and humiliating one, and is not willing to come to an agreement, then one should strike as hard as possible, and not waste time declaring war."

See also

- Axis leaders of World War II
- Axis of evil
- Axis power negotiations on the division of Asia during World War II
- Axis victory in World War II
- Expansion operations and planning of the Axis Powers
- Foreign relations of the Axis of World War II
- Greater Germanic Reich
- Imperial Italy
- Greater Japanese Empire
- Hakkō ichiu
- List of pro-Axis leaders and governments or direct control in occupied territories
- New Order (Nazism)
- Participants in World War II
- Zweites Buch
Woking; London: George Allen and Unwin. - Steinberg, David Joel (2000) . The Philippines: A Singular and A Plural Place. Boulder Hill, Colorado; Oxford: Westview Press. ISBN 978-0-8133-3755-5. - Walters, Guy (2009). Hunting Evil: The Nazi War Criminals Who Escaped and the Quest to Bring Them to Justice. New York: Broadway Books. - Wettig, Gerhard (2008). Stalin and the Cold War in Europe. Landham, Md: Rowman & Littlefield. ISBN 978-0-7425-5542-6. - Wylie, Neville (2002). European Neutrals and Non-Belligerents During the Second World War. Cambridge; New York: Cambridge University Press. ISBN 978-0-521-64358-0. - Halsall, Paul (1997). "The Molotov-Ribbentrop Pact, 1939". New York: Fordham University. Retrieved 2012-03-22. Further reading - Dear, Ian C. B.; Foot, Michael; Richard Daniell (eds.) (2005). The Oxford Companion to World War II. Oxford University Press. ISBN 0-19-280670-X. - Kirschbaum, Stanislav (1995). A History of Slovakia: The Struggle for Survival. New York: St. Martin's Press. ISBN 0-312-10403-0. - Roberts, Geoffrey (1992). "Infamous Encounter? The Merekalov-Weizsacker Meeting of 17 April 1939". The Historical Journal (Cambridge University Press) 35 (4): 921–926. doi:10.1017/S0018246X00026224. JSTOR 2639445. - Weinberg, Gerhard L. (2005). A World at Arms: A Global History of World War II (2nd ed.). NY: Cambridge University Press. ISBN 978-0-521-85316-3. |Look up Axis Powers in Wiktionary, the free dictionary.| - Axis History Factbook - Full text of The Tripartite Pact - Silent movie of the signing of The Tripartite Pact |Wikimedia Commons has media related to: Axis powers|
http://en.wikipedia.org/wiki/Rome-Berlin_Axis
13
16
Romania in World War II

Following the outbreak of World War II on 1 September 1939, the Kingdom of Romania under King Carol II officially adopted a position of neutrality. However, the rapidly changing situation in Europe during 1940, as well as domestic political upheaval, undermined this stance. Fascist political forces such as the Iron Guard rose in popularity and power, urging an alliance with Nazi Germany and its allies. As the military fortunes of Romania's two main guarantors of territorial integrity — France and Britain — crumbled in the Fall of France, the government of Romania turned to Germany in hopes of a similar guarantee, unaware that Germany, then the dominant power in Europe, had already consented to Soviet territorial claims in a secret protocol of the Molotov-Ribbentrop Pact, signed in 1939. In summer 1940, a series of territorial disputes was resolved unfavorably to Romania, resulting in the loss of most of the territory gained in the wake of World War I. This caused the popularity of Romania's government to plummet, further reinforcing the fascist and military factions, who eventually staged a coup that turned the country into a fascist dictatorship under Marshal Ion Antonescu. The new regime firmly set the country on a course towards the Axis camp, officially joining the Axis Powers on 23 November 1940. "When it's a question of action against the Slavs, you can always count on Romania," Antonescu stated ten days before the start of Operation Barbarossa. As a member of the Axis, Romania joined the invasion of the Soviet Union on 22 June 1941, providing equipment and oil to Nazi Germany and committing more troops to the Eastern Front than all of Germany's other allies combined. Romanian forces played a large role in the fighting in Ukraine, Bessarabia, Stalingrad, and elsewhere. Romanian troops were responsible for the persecution and massacre of 280,000 to 380,000 Jews inside and outside of Romania, though most Jews living within Romanian borders survived the harsh conditions. After the tide of war turned against the Axis, Romania was bombed by the Allies from 1943 onwards and invaded by advancing Soviet armies in 1944. With popular support for Romania's participation in the war faltering and the German-Romanian fronts collapsing under the Soviet onslaught, King Michael of Romania led a coup d'état that deposed the Antonescu regime and put Romania on the side of the Allies for the remainder of the war. Despite this late association with the winning side, Greater Romania was largely dismantled, losing territory to Bulgaria and the Soviet Union but regaining Northern Transylvania from Hungary. Approximately 370,000 Romanian soldiers were killed during the conflict.

On 13 April 1939, France and the United Kingdom pledged to guarantee the independence of the Kingdom of Romania. Negotiations with the Soviet Union concerning a similar guarantee collapsed when Romania refused to allow the Red Army to cross its frontiers. On 23 August 1939 Germany and the Soviet Union signed the Molotov-Ribbentrop Pact. Among other things, this pact recognized the Soviet "interest" in Bessarabia (which had been ruled by the Russian Empire from 1812 to 1918), paired with an explicit indication that Germany had no interest in the area.
Eight days later, Nazi Germany invaded the Second Polish Republic. Expecting military aid from Britain and France, Poland chose not to execute its alliance with Romania in order to be able to use the Romanian Bridgehead. Romania officially remained neutral and, under pressure from the Soviet Union and Germany, interned the fleeing Polish government after its members had crossed the Polish-Romanian border on 17 September, forcing them to relegate their authority to what became the Polish government-in-exile. After the assassination of Prime Minister Armand Călinescu on 21 September King Carol II tried to maintain neutrality for several months more, but the surrender of the Third French Republic and the retreat of British forces from continental Europe rendered the assurances that both countries had made to Romania meaningless. In 1940, Romania's territorial gains made following World War I were largely undone. In July, after a Soviet ultimatum, Romania agreed to give up Bessarabia and Northern Bukovina. Two thirds of Bessarabia were combined with a small part of the Soviet Union to form the Moldavian Soviet Socialist Republic. The rest (Northern Bukovina, northern half of the Hotin county and Budjak) was apportioned to the Ukrainian Soviet Socialist Republic. Shortly thereafter, on 30 August, under the Second Vienna Award, Germany and Italy mediated a compromise between Romania and the Kingdom of Hungary: Hungary received a region referred to as "Northern Transylvania", while "Southern Transylvania" remained part of Romania. Hungary had lost all of Transylvania after World War I in the Treaty of Trianon. On 7 September, under the Treaty of Craiova, the "Quadrilateral" (the southern part of Dobrudja), under pressure from Germany, was ceded to Bulgaria (from which it had been taken at the end of the Second Balkan War in 1913). Despite the relatively recent acquisition of these territories, Romanians had seen them as historically belonging to Romania, and the fact that so much land was lost without a fight shattered the underpinnings of King Carol's power. On 4 July Ion Gigurtu formed the first Romanian government to include an Iron Guardist minister, Horia Sima. Sima was a particularly virulent anti-Semite who had become the nominal leader of the movement after the death of Corneliu Codreanu. He was one of the few prominent far-Right leaders to survive the bloody infighting and government suppression of the preceding years. Antonescu comes to power In the immediate wake of the loss of Northern Transylvania, on 4 September the Iron Guard (led by Horia Sima) and General (later Marshal) Ion Antonescu united to form a "National Legionary State" government, which forced the abdication of Carol II in favor of his 19-year-old son Michael. Carol and his mistress Magda Lupescu went into exile, and Romania, despite the unfavorable outcome of recent territorial disputes, leaned strongly toward the Axis. As part of the deal, the Iron Guard became the sole legal party in Romania. Antonescu became the Iron Guard's honorary leader, while Sima became deputy premier. In power, the Iron Guard stiffened the already harsh anti-Semitic legislation, enacted legislation directed against minority businessmen, tempered at times by the willingness of officials to take bribes, and wreaked vengeance upon its enemies. On 8 October Nazi troops began crossing into Romania. They soon numbered over 500,000. On 23 November Romania joined the Axis Powers. 
On 27 November, 64 former dignitaries or officials were executed by Iron Guard in Jilava prison while awaiting trial (see Jilava Massacre). Later that day, historian and former prime minister Nicolae Iorga and economist Virgil Madgearu, a former government minister, were assassinated. The cohabitation between the Iron Guard and Antonescu was never an easy one. On 20 January 1941, the Iron Guard attempted a coup, combined with a pogrom against the Jews of Bucharest. Within four days, Antonescu had successfully suppressed the coup. The Iron Guard was forced out of the government. Sima and many other legionnaires took refuge in Germany; others were imprisoned. Antonescu abolished the National Legionary State, in its stead declaring Romania a "National and Social State." The war on the Eastern Front On 22 June 1941 Germany launched Operation Barbarossa, attacking the Soviet Union on a wide front. Romania joined in the offensive, with Romanian troops crossing the River Prut. After recovering Bessarabia and Bukovina (Operation München), Romanian units fought side by side with the Germans onward to Odessa, Sevastopol, Stalingrad and the Caucasus. The Romanian contribution of troops was enormous. The total number of troops involved in the Romanian Third Army and the Romanian Fourth Army was second only to Nazi Germany itself. The Romanian Army had a total of 686,258 men under arms in the summer of 1941 and a total of 1,224,691 men in the summer of 1944. The number of Romanian troops sent to fight in Russia exceeded that of all of Germany's other allies combined. A Country Study by the U.S. Federal Research Division of the Library of Congress attributes this to a "morbid competition with Hungary to curry Hitler's favor... [in hope of]... regaining northern Transylvania." Romania instituted a civil government in occupied Soviet lands immediately east of the Dniester. After the Battle of Odessa, this included the city of Odessa. Romanian armies advanced far into the Soviet Union during 1941 and 1942 before being involved in the disaster at the Battle of Stalingrad in the winter of 1942-1943. Romania's most important general, Petre Dumitrescu, was commander of the Romanian Third Army at Stalingrad. In November 1942, the German Sixth Army was briefly put at Dumitrescu's disposal during a German attempt to relieve the Romanian Third Army following the devastating Soviet Operation Uranus. Prior to the Soviet counteroffensive at Stalingrad, the Antonescu government considered a war with Hungary over Transylvania an inevitability after the expected victory over the Soviet Union. Although it was the most dedicated ally of Germany, Romania's turning to the Allied side in August 1944 was rewarded with Northern Transylvania, which had been granted to Hungary in 1940 after the Second Vienna Award. War comes to Romania Air raids Throughout the Antonescu years, Romania supplied Nazi Germany and the Axis armies with oil, grain, and industrial products. Also, numerous train stations in the country, such as Gara de Nord in Bucharest, served as transit points for troops departing for the Eastern Front. Consequently, by 1943 Romania became a target of Allied aerial bombardment. One of the most notable air bombardments was Operation Tidal Wave — the attack on the oil fields of Ploieşti on 1 August 1943. Bucharest was subjected to intense Allied bombardment on 4 and 15 April 1944, and the Luftwaffe itself bombed the city on 24 and 25 August after the country switched sides. 
Ground offensive In February 1943, with the decisive Soviet counteroffensive at Stalingrad, it was growing clear that the tide of the war was turning against the Axis Powers. By 1944, the Romanian economy was in tatters because of the expenses of the war, and destructive Allied air bombing throughout Romania, including the capital, Bucharest. In addition, most of the products sent to Germany were provided without monetary compensation. As a result of these "uncompensated exports", inflation in Romania skyrocketed, causing widespread discontent among the Romanian population, even among groups and individuals who had once enthusiastically supported the Germans and the war. In April–May 1944, the Romanian forces led by General Mihai Racoviţǎ, together with elements of the German Eighth Army were responsible for defending northern Romania during the initial Soviet attempt to invade Romania, and took part in the Battles of Târgu Frumos. This first Soviet attacks were held back by Axis defensive lines in northern Romania. The Jassy–Kishinev Offensive, launched on 20 August 1944, resulted in a quick and decisive Soviet breakthrough, collapsing the German-Romanian front in the region. Soviet forces captured Târgu Frumos and Iaşi on 21 August and Chişinău on 24 August 1944. The royal coup On 23 August 1944, just as the Red Army was penetrating the Moldavian front, King Michael I of Romania led a successful coup with support from opposition politicians and the army. Michael I, who was initially considered to be not much more than a figurehead, was able to successfully depose the Antonescu dictatorship. The King then offered a non-confrontational retreat to German ambassador Manfred von Killinger. But the Germans considered the coup "reversible" and attempted to turn the situation around by military force. The Romanian First, Second (forming), and what little was left of the Third and the Fourth Armies (one corps) were under orders from the King to defend Romania against any German attacks. King Michael offered to put the Romanian Army, which at that point had a strength of nearly 1,000,000 men, on the side of the Allies. This resulted in a split of the country between those that still supported Germany and its armies and those that supported the new government, the latter often forming partisan groups and gradually gaining the most support. To the Germans the situation was very precarious as Romanian units had been integrated in the Axis defensive lines: not knowing which units were still loyal to the Axis cause and which ones joined the Soviets or discontinued fighting altogether, defensive lines could suddenly collapse. In a radio broadcast to the Romanian nation and army on the night of 23 August King Michael issued a cease-fire, proclaimed Romania's loyalty to the Allies, announced the acceptance of an armistice (to be signed on September 12) offered by Great Britain, the United States, and the USSR, and declared war on Germany. The coup accelerated the Red Army's advance into Romania, but did not avert a rapid Soviet occupation and capture of about 130,000 Romanian soldiers, who were transported to the Soviet Union where many perished in prison camps. The armistice was signed three weeks later on 12 September 1944, on terms virtually dictated by the Soviet Union. 
Under the terms of the armistice, Romania announced its unconditional surrender to the USSR and was placed under the occupation of the Allied forces, with the Soviet Union as their representative, in control of media, communication, post, and civil administration behind the front. It has been suggested that the coup may have shortened World War II by up to six months, thus saving hundreds of thousands of lives. Some attribute the postponement of a formal Allied recognition of the de facto change of orientation until 12 September (the date the armistice was signed in Moscow) to the complexities of the negotiations between the USSR and the UK. During the Moscow Conference in October 1944, Winston Churchill, Prime Minister of the United Kingdom, proposed an agreement to Soviet leader Joseph Stalin on how to split up Eastern Europe into spheres of influence after the war. The Soviet Union was offered a 90% share of influence in Romania. The Armistice Agreement of 12 September stipulated in Article 18 that "An Allied Control Commission will be established which will undertake until the conclusion of peace the regulation of and control over the execution of the present terms under the general direction and orders of the Allied (Soviet) High Command, acting on behalf of the Allied Powers." The Annex to Article 18 made clear that "The Romanian Government and their organs shall fulfil all instructions of the Allied Control Commission arising out of the Armistice Agreement." The Agreement also stipulated that the Allied Control Commission would have its seat in Bucharest. In line with Article 14 of the Armistice Agreement, two Romanian People's Tribunals were set up to try suspected war criminals.

Campaign against the Axis

As the country declared war on Germany on the night of 23 August 1944, border clashes between Hungarian and Romanian troops erupted almost immediately. On 24 August German troops attempted to seize Bucharest and suppress Michael's coup, but were repelled by the city's defenses, which received some support from the United States Air Force. Other Wehrmacht units in the country suffered severe losses: remnants of the Sixth Army retreating west of the Prut River were cut off and destroyed by the Red Army, which was now advancing at an even greater speed, while Romanian units attacked German garrisons at the Ploieşti oilfields, forcing them to retreat to Hungary. The Romanian Army captured over 50,000 German prisoners around this time, who were later surrendered to the Soviets. In early September, Soviet and Romanian forces entered Transylvania and captured the towns of Braşov and Sibiu while advancing toward the Mureş River. Their main objective was Cluj (Cluj-Napoca), a city regarded as the historical capital of Transylvania. However, the Second Hungarian Army was present in the region, and together with the German Eighth Army engaged the Allied forces on 5 September in what was to become the Battle of Turda, which lasted until 8 October and resulted in heavy casualties for both sides. Also around this time, the Hungarian Army carried out its last independent offensive action of the war, penetrating Arad County in western Romania. Despite initial success, a number of ad-hoc Romanian cadet battalions managed to stop the Hungarian advance at the Battle of Păuliş, and soon a combined Romanian-Soviet counterattack overwhelmed the Hungarians, who gave ground and evacuated Arad itself on 21 September.
The Romanian Army ended the war fighting against the Wehrmacht alongside the Red Army in Transylvania, Hungary, Yugoslavia, Austria and Czechoslovakia, from August 1944 until the end of the war in Europe. In May 1945, the First and Fourth armies took part in the Prague Offensive. The Romanian Army incurred heavy casualties fighting Nazi Germany. Of some 538,000 Romanian soldiers who fought against the Axis in 1944-45, some 167,000 were killed, wounded or went missing (KIA, WIA, MIA).

|Mountains crossed||Rivers crossed||Liberated villages||From which towns||Losses of the enemy|
|Romania||1944-08-23||1944-10-25||>275,000 (525,702)||58,330||900||8||11,000 KIA, WIA|
|Czechoslovakia||1944-12-18||1945-05-12||248,430||66,495||10||4||1,722||31||22,803 KIA, WIA, POW|
|Austria||1945-04-10||1945-05-12||2,000||100||7||1||4,000 KIA, WIA, POW|
|LEGEND: KIA = Killed; MIA = Missing; WIA = Wounded; POW = Prisoners of war.|

Romania and the Holocaust

See also: Responsibility for the Holocaust (Romania), Antonescu and the Holocaust, Porajmos#Persecution in other Axis countries.

According to an international commission report released by the Romanian government in 2004, between 280,000 and 380,000 Jews in the territories of Bessarabia, Bukovina and Transnistria were systematically murdered by Antonescu's regime. Of the 25,000 Roma deported to concentration camps in Transnistria, 11,000 died. Though much of the killing was committed in the war zone by Romanian troops, there were also substantial persecutions behind the front line. During the Iaşi pogrom of June 1941, over 12,000 Jews were massacred or killed slowly in trains traveling back and forth across the countryside. Half of the 320,000 Jews living in Bessarabia, Bukovina, and Dorohoi district in Romania were murdered within months of the country's entry into the war in 1941. Even after the initial killings, Jews in Moldavia, Bukovina and Bessarabia were subject to frequent pogroms, and were concentrated into ghettos from which they were sent to concentration camps, including camps built and run by Romanians. The number of deaths in this area is not certain, but the lowest credible estimates run to about 250,000 Jews and 25,000 Roma in these eastern regions, while 120,000 of Transylvania's 150,000 Jews died at the hands of the Germans later in the war. Romanian soldiers also worked with the Einsatzkommandos, German killing squads, tasked with massacring Jews and Roma in conquered territories. Romanian troops were in large part responsible for the Odessa massacre, in which over 100,000 Jews were shot during the autumn of 1941. Nonetheless, most Jews living within the pre-Barbarossa borders survived the war, although they were subject to a wide range of harsh conditions, including forced labor, financial penalties, and discriminatory laws. Jewish property was nationalized. The report commissioned and accepted by the Romanian government in 2004 concluded: "Of all the allies of Nazi Germany, Romania bears responsibility for the deaths of more Jews than any country other than Germany itself. The murders committed in Iasi, Odessa, Bogdanovka, Domanovka, and Peciora, for example, were among the most hideous murders committed against Jews anywhere during the Holocaust. Romania committed genocide against the Jews. The survival of Jews in some parts of the country does not alter this reality."

Under the 1947 Treaty of Paris, the Allies did not acknowledge Romania as a co-belligerent nation.
Northern Transylvania was, once again, recognized as an integral part of Romania, but the border with the USSR was fixed at its state on January 1941, restoring the pre-Barbarossa status quo. Following the dissolution of the Soviet Union in 1991, these territories became part of Ukraine and the Republic of Moldova, respectively. In Romania proper, Soviet occupation following World War II facilitated the rise of the Communist Party as the main political force, leading ultimately to the forced abdication of the King and the establishment of a single-party people's republic in 1947. See also Further reading - Cristian Craciunoiu; Mark W. A. Axworthy; Cornel Scafes (1995). Third Axis Fourth Ally: Romanian Armed Forces in the European War, 1941-1945. London: Arms & Armour. p. 368. ISBN 1-85409-267-7. - David M. Glantz (2007). Red Storm over the Balkans: The Failed Soviet Invasion of Romania, Spring 1944 (Modern War Studies). Lawrence: University Press of Kansas. p. 448. ISBN 0-7006-1465-6. - Some passages in this article have been taken from the (public domain) U.S. Federal Research Division of the Library of Congress Country Study on Romania, sponsored by the U.S. Department of the Army, researched shortly before the 1989 fall of Romania's Communist regime and published shortly after. , accessed July 19, 2005. - Beevor, Anthony (1998). Stalingrad, page 20. - U.S. government Country study: Romania, c. 1990. - www.worldwar-2.net: World War II casualties list. Source: J. Lee Ready World War Two Nation by Nation. Arms and Armour, ISBN 1-85409-290-1 - Michael Alfred Peszke. The Polish underground army, the Western allies, and the failure of strategic unity in World War II, McFarland, 2005, ISBN 0-7864-2009-X - Axworthy, Mark; Scafes, Cornel; Craciunoiu, Cristian (editors) (1995). Third axis, Fourth Ally: Romanian Armed Forces In the European War 1941-1945. London: Arms & Armour Press. pp. 1–368. ISBN 963-389-606-1. - Country Studies: Romania, Chap. 23, Library of Congress - (Romanian) Delia Radu, "Serialul 'Ion Antonescu şi asumarea istoriei' (3)", BBC Romanian edition, August 1, 2008 - (Romanian) "The Dictatorship Has Ended and along with It All Oppression" - From The Proclamation to The Nation of King Michael I on The Night of August 23 1944, Curierul Naţional, August 7, 2004 - "King Proclaims Nation's Surrender and Wish to Help Allies", The New York Times, August 24, 1944 - (Romanian) Constantiniu, Florin, O istorie sinceră a poporului român ("An Honest History of the Romanian People"), Ed. Univers Enciclopedic, Bucureşti, 1997, ISBN 973-9243-07-X - European Navigator: The division of Europe - The Armistice Agreement with Romania - (Romanian) Florin Mihai, "Sărbătoarea Armatei Române", Jurnalul Naţional, October 25, 2007 - [verification needed] - [verification needed] - Third Axis Fourth Ally, p. 214 - (Romanian) Teroarea horthysto-fascistă în nord-vestul României, Bucureşti, 1985 - (Romanian) Romulus Dima, Contribuţia României la înfrângerea Germaniei fasciste, Bucureşti, 1982 - Armata Română în al Doilea Război Mondial/Romanian Army in World War II, Editura Meridiane, Bucureşti, 1995, ISBN 973-33-0329-1. - Ilie Fugaru, Romania clears doubts about Holocaust past, UPI, November 11, 2004 - International Commission on the Holocaust in Romania (November 11, 2004). "Executive Summary: Historical Findings and Recommendations" (PDF). Final Report of the International Commission on the Holocaust in Romania. Yad Vashem (The Holocaust Martyrs' and Heroes' Remembrance Authority). Retrieved 2012-05-17. 
Military and political history
- Axis History Factbook — Romania
- worldwar2.ro: Romanian Armed Forces in the Second World War
- Dan Reynolds. The Rifles of Romania 1878-1948
- Paul Paustovanu. The War in the East seen by the Romanian Veterans of Bukovina
- Rebecca Ann Haynes. 'A New Greater Romania'? Romanian Claims to the Serbian Banat in 1941
- Stefan Gheorge. Romania's economic arguments regarding the shortness of the Second World War
- Map of Romania's territorial changes during World War II
- "Final Report of the International Commission on the Holocaust in Romania" (PDF). Bucharest, Romania: International Commission on the Holocaust in Romania. November 2004. p. 89. Retrieved 2012-05-17.
- Murder of the Jews of Romania on the Yad Vashem website
- Holocaust in Romania from Holocaust Survivors and Remembrance Project: "Forget You Not"
- Roma Holocaust victims speak out
Title – Do Something about… Voting/Civic Engagement Lesson 5 – Social Capital
By – Do Something, Inc. / www.dosomething.org
Primary Subject – Social Studies
Secondary Subjects – Other
Grade Level – 9-12

Do Something about… Teen Voting/Civic Engagement

Students learn about social capital and how to use networking for civic action.

The following lesson is the fifth lesson of a 10-lesson Teen Voting/Civic Engagement Unit from Do Something, Inc. Other lessons in this unit are as follows:
- Lesson 1: What is Civic Action? Students learn about why people get involved in their communities.
- Lesson 2: Why Is Democracy So Demanding? Students will discuss the role of citizens in a democracy.
- Lesson 3: Representin' Students learn about the system of representation in a democracy.
- Lesson 4: How have people used elected offices to make changes? Students learn how holding a political office effects change.
- Lesson 5: Social Capital Students learn about social capital and how to use networking for civic action.
- Lesson 6: Politics, A Laughing Matter Students learn how cartoons and satire raise concerns about an issue.
- Lesson 7: How do organizers bring about change? Students learn about the strategies of unionizing and boycotting.
- Lesson 8: Why do I have to do jury duty? Students learn how jury duty is a type of civic engagement.
- Lesson 9: How can I use writing to lead others to action? Students learn how the written word is a method of civic action.
- Lesson 10: How can speaking engage others in my cause? Students learn how speeches can gather support for community change.

More student teen voting resources can be found at:
For more Service-Learning Curricula check out:

Lesson 5: Social Capital

Goal: Students will learn the concept of social capital and how social networks can be important for civic action.

Civics: Standard 10 - Understands the roles of voluntarism and organized groups in American social and political life

- Warm-up: Ask students to list the groups and organizations to which they and their parents belong. Some examples might include clubs, email organizations, religious groups, book clubs, etc.
- Discover: Introduce the concept of social capital to students. "The central premise of social capital is that social networks have value. Social capital refers to the collective value of all 'social networks' [who people know] and the inclinations that arise from these networks to do things for each other ['norms of reciprocity']." You may want to provide students with some examples of social capital, such as concerned neighbors watching over each other's property, mothers who watch each other's children at the playground, or email groups that help individuals research a topic. "Social capital can be found in friendship networks, neighborhoods, churches, schools, bridge clubs, civic associations, and even bars."
- Ask students to discuss and rate the amount of social capital in their school or community. Do they feel it is adequate?
- Have students choose one of the case studies of social capital from http://www.cpn.org/tools/dictionary/capital.html and investigate the effects of the social networks. Have students create a profile of the group and investigate how the group worked together to initiate change in their community. Why do people participate in this group? What benefits do they receive?
- Take Action: Have students think about ways of increasing social capital in their school or neighborhood.
Are there initiatives they could start to raise awareness of their Take Action topic? How can they measure the change? E-mail www.dosomething.org!
- How Pressure Sensors Work
- Sensor Technology
- Selection Criteria
- Performance Specifications
- Mechanical Considerations
- Electrical Specifications
- Environmental Considerations
- Special Requirements

How to Select a Pressure Sensor

Image Credit: GE | Kobold | Measurement Specialties

Pressure sensors include all sensors, transducers and elements that produce an electrical signal proportional to pressure or changes in pressure. The device reads the changes in pressure and then relays this data to recorders or switches.

How Pressure Sensors Work

Pressure instruments monitor the amount of pressure applied to a part of the process. There are several types of pressure instruments:

Sensors - Pressure sensors convert a measured pressure into an electrical output signal. They are typically simple devices that do not include a display or user interface.

Elements are the portions of a pressure instrument which are moved or temporarily deformed by the gas or liquid of the system to which the gauge is connected. These include the Bourdon tube, which is a sealed tube that deflects in response to applied pressure, as well as bellows, capsule elements and diaphragm elements. The basic pressure sensing element can be configured as a C-shaped Bourdon tube (A); a helical Bourdon tube (B); a flat diaphragm (C); a convoluted diaphragm (D); a capsule (E); or a set of bellows (F). Image Credit: sensorsmag

Transducers - Pressure transducers are pressure-sensing devices. They convert an applied pressure into an electrical signal. The output signal is generated by the primary sensing element, and the device maintains the natural characteristics of the sensing technology. A transducer always converts the non-electric pressure signal into an electrical signal; therefore, a transducer is always a sensor, but a sensor is not always a transducer. In industry the terms are often interchanged. There are several types of transducers, including:

Thin film sensors have an extremely thin layer of material deposited on a substrate by sputtering, chemical vapor deposition, or another technique. This technology incorporates a compact design with good temperature stability. A variety of materials are used in thin film technology, such as titanium nitride and polysilicon. These gauges are most suitable for long-term use and harsh measurement conditions.

Semiconductor strain gauge (described under the sensing technologies below).

There are numerous technologies by which pressure transducers and sensors function. Some of the most widely used technologies include:

Piston technology uses a sealed piston/cylinder to measure changes in pressure.

Mechanical deflection uses an elastic or flexible element to mechanically deflect with a change in pressure, for example a diaphragm, Bourdon tube, or bellows. Diaphragm Pressure Sensor. Image Credit: machinedesign.com

Piezoelectric pressure sensors measure dynamic and quasi-static pressures. The bi-directional transducers consist of metalized quartz or ceramic materials which have naturally occurring electrical properties. They are capable of converting stress into an electric potential and vice versa. The common modes of operation are charge mode, which generates a high-impedance charge output, and voltage mode, which uses an amplifier to convert the high-impedance charge into a low-impedance output voltage. These sensors can only be used for varying pressures. They are very rugged but require amplification circuitry and are susceptible to shock and vibration. Piezoelectric Pressure Transducer.
Image Credit: National Instruments

MicroElectroMechanical systems (MEMS) are typically micro systems manufactured by silicon surface micromachining for use in very small industrial or biological systems.

Vibrating element sensors use a resonating element technology, such as silicon resonance.

Variable capacitance pressure instruments use the capacitance change that results from the movement of a diaphragm element to measure pressure. Depending on the type of pressure, the capacitive transducer can be either an absolute, gauge, or differential pressure transducer. The device uses a thin diaphragm as one plate of a capacitor. The applied pressure causes the diaphragm to deflect and the capacitance to change. The deflection of the diaphragm causes a change in capacitance that is detected by a bridge circuit.

Design Tip: The electronics for signal conditioning should be located close to the sensing element to prevent errors due to stray capacitance.

The capacitance of two parallel plates is given by the following equation:

C = µA/d

µ = dielectric constant of the material between the plates
A = area of the plates
d = spacing between the plates

These pressure transducers are generally very stable, linear and accurate, but they are sensitive to high temperatures and are more complicated to set up than most pressure sensors. Capacitive absolute pressure sensors with a vacuum between the plates are ideal for preventing error by keeping the dielectric constant of the material constant.

- Strain gauges (strain-sensitive variable resistors) are bonded to parts of the structure that deform as the pressure changes. Four strain gauges are typically connected in a Wheatstone bridge circuit, which is used to make the measurement. When voltage is applied to two opposite corners of the bridge, an electrical output signal is developed proportional to the applied pressure. The output signal is collected at the remaining two corners of the bridge. Strain gauges are rugged, accurate, and stable, and they can operate in severe shock and vibration environments as well as in a variety of pressure media. Strain gauge pressure transducers come in several different varieties: the bonded strain gauge, the sputtered strain gauge, and the semiconductor strain gauge. Strain gauge pressure transducer. Image Credit: openticle.com

Semiconductor piezoresistive sensors are based on semiconductor technology. The change in resistance is due not only to a change in the length and width (as it is with a strain gauge) but also to a shift of electrical charges within the resistor. There are four piezoresistors within the diaphragm area on the sensor, connected in a bridge circuit. When the diaphragm is deflected, two resistors are subjected to tangential stress and two to radial stress. Piezoresistive semiconductor pressure sensors incorporate four piezoresistors in the diaphragm. Image Credit: sensorsmag

The output is described by the following equation:

Vout/Vcc = ΔR/R

Vcc = supply voltage
R = base resistance of the piezoresistor
ΔR = change in resistance with applied pressure, typically about 2.5% of R at full scale

These are very sensitive devices. The GlobalSpec SpecSearch database allows industrial buyers to select pressure sensors by performance specifications, mechanical considerations, electrical specifications, environmental considerations, and special requirements. The performance of the sensor is based on several factors intrinsic to the system in which the sensor will be used.
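To make the two output relations above concrete (the parallel-plate capacitance C = µA/d and the bridge relation Vout/Vcc = ΔR/R), here is a minimal numerical sketch in Python. The plate dimensions, gap, excitation voltage, and resistance-change figure are illustrative assumptions rather than values for any particular device, and the dielectric constant is treated as relative permittivity multiplied by the permittivity of free space.

```python
# Rough numerical sketch of the capacitive and piezoresistive relations above.
# All component values below are illustrative assumptions, not vendor data.

EPSILON_0 = 8.854e-12  # permittivity of free space, F/m


def parallel_plate_capacitance(area_m2, gap_m, relative_permittivity=1.0):
    """C = (dielectric constant) * A / d, with the dielectric constant taken as
    relative permittivity times the permittivity of free space."""
    return relative_permittivity * EPSILON_0 * area_m2 / gap_m


def bridge_output_v(excitation_v, delta_r_over_r):
    """Vout = Vcc * (dR / R), per the bridge relation above."""
    return excitation_v * delta_r_over_r


# Assumed 5 mm x 5 mm diaphragm electrode with a 25 um nominal gap.
c_nominal = parallel_plate_capacitance(area_m2=25e-6, gap_m=25e-6)
c_deflected = parallel_plate_capacitance(area_m2=25e-6, gap_m=24e-6)  # gap shrinks 1 um
print(f"Nominal capacitance: {c_nominal * 1e12:.2f} pF")
print(f"Capacitance change under pressure: {(c_deflected / c_nominal - 1) * 100:.1f} %")

# Assumed 10 V excitation and a 2.5% resistance change at full-scale pressure.
print(f"Bridge output at full scale: {bridge_output_v(10.0, 0.025) * 1000:.0f} mV")
```

Even a 1 µm gap change on an assumed 25 µm gap produces a capacitance change of a few percent, which is one reason the signal-conditioning electronics are kept close to the sensing element, as noted in the design tip above.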
These include maximum pressure, pressure reference, engineering units, accuracy required, and pressure conditions.

Static pressure is defined as P = F/A, where P is pressure, F is applied force and A is the area of application. This equation can be used for liquid and gas that is not flowing. Pressure in moving fluids can be calculated using the equation P1 = ρV0²/2, where ρ is fluid density and V0 is the fluid velocity. Impact pressure is the pressure a moving fluid exerts parallel to the flow direction. Dynamic pressure is more representative of "real-life" applications. Pressure acts in all directions in a fluid. Image Credit: schoolforchampions.com

Maximum Pressure Range is the maximum allowable pressure at which a system or piece of equipment is designed to operate safely. The extremes of this range should be determined in accordance with the expected pressure range the device must operate within. It is common practice that the working pressure should not exceed 75% of the device's maximum rated range. For example, if the device has a maximum rated range of 100 psi, then the working range should not exceed 75 psi.

Design Tip: Figure out what the anticipated pressure spikes will be and then pick a transducer rated 25% higher than the highest spike. An additional margin is suggested where "high cycling" may occur.

Absolute pressure sensors measure the pressure of a system relative to a perfect vacuum. These sensors incorporate sensing elements which are completely evacuated and sealed; the high pressure port is not present and input pressure is applied through the low port. The measurement is made in pounds per square inch absolute.

Differential pressure is measured by reading the difference between the inputs of two or more pressure levels. The sensor must have two separate pressure ports; the higher of the two pressures is applied through the high port and the lower through the low port. It is commonly measured in units of pounds per square inch. An example of a differential pressure application is filter monitoring; when the filter starts to clog, the flow resistance and therefore the pressure drop across the filter will increase. Bidirectional sensors are able to measure positive and negative pressure differences, i.e. p1 > p2 and p1 < p2. Unidirectional sensors only operate in the positive range, i.e. p1 > p2, and the higher pressure has to be applied to the pressure port defined as "high pressure".

Gauge sensors are the most common type of pressure sensor. The pressure is measured relative to ambient pressure, which is the atmospheric pressure at a given location. The average atmospheric pressure at sea level is 1013.25 mbar, but changes in weather and altitude directly influence the output of the pressure sensor. In this device, the input pressure is applied through the high port and the ambient pressure is applied through the open low port.

Vacuum sensors are gauge sensors used to measure pressures lower than the localized atmospheric pressure. A vacuum is a volume of space that is essentially empty of matter. Vacuum sensors are divided into different ranges of low, high and ultra-high vacuum.

Sealed gauge sensors measure pressure relative to one atmosphere at sea level (14.7 psi) regardless of local atmospheric pressure.

The same sensor can be used for all three types of pressure measurement; only the references differ. Image Credit: sensorsmag

Pressure is a measure of force per unit area. A variety of units are used depending on the application; a conversion table is below.
1 psi = 51.714 mmHg = 2.0359 in.Hg = 27.680 in.H2O = 6.8946 kPa
1 bar = 14.504 psi
1 atm = 14.696 psi

Accuracy is defined as the difference (error) between the true value and the indicated value, expressed as a percent of the span. It includes the combined deviations resulting from the method, observer, apparatus and environment. Accuracy is observed in three different areas: static, thermal, and total.

Static accuracy is the combined effect of linearity, hysteresis, and repeatability. It is expressed as +/- percentage of full scale output. The static error band is a good measure of the accuracy that can be expected at constant temperature.

Linearity is the deviation of a calibration curve from a specified straight line. One way to measure linearity is to use the least squares method, which gives a best fit straight line. The best straight line (BSL) is a line between two parallel lines that enclose all output vs. pressure values on the calibration curve.

Hysteresis is the maximum difference in output at any pressure within the specified range, when the value is first approached with increasing and then with decreasing pressure. Temperature hysteresis is the sensor's ability to give the same output at a given temperature before and after a temperature cycle. Image Credit: sensorsmag

Repeatability is the ability of a transducer to reproduce output readings when the same pressure is applied to the transducer repeatedly, under the same conditions and in the same direction.

Thermal accuracy describes how temperature affects the output. It is expressed as a percentage of full scale output or as a percentage of full scale per degree Celsius, degree Fahrenheit or kelvin.

Total accuracy is the combination of static and thermal accuracy. In cases where the accuracy differs between the middle span and the first and last quarters of the scale, the largest % error is reported. ASME B40.1 and DIN accuracy grades are frequently used:

Grade 4A (0.1% Full Scale)
Grade 3A (0.25% Full Scale)
Grade 2A (0.5% Full Scale)
Grade 1A (1% Full Scale)
Grade A (1% middle half, 2% first and last quarters)
Grade B (2% middle half, 3% first and last quarters)
Grade C (3% middle half, 4% first and last quarters)
Grade D (5% Full Scale)

Industrial buyers should consider the pressure conditions that the sensor will be exposed to and ask the following questions:

Over pressure: Will pressure ever exceed the maximum pressure? If so, by how much?
Burst pressure: The designed safety limit which should not be exceeded. If this pressure is exceeded it may lead to mechanical breach and permanent loss of pressure containment. Are additional safety features needed?
Dynamic loading: Dynamic loads can exceed expected static loads. Is the system experiencing dynamic pressure loading?
Fatigue loading: Will the system experience high cycle rates?

Vacuum Range is the span of pressures from the lowest vacuum pressure to the highest vacuum pressure (e.g., from 0 to 30 inches of mercury VAC).

Mechanical conditions of the device determine how the sensor operates within the system. Consideration should be given to the physical constraints of the system, the media into which the sensor will be incorporated, process connectors, and configurations of the system and sensor. Physical constraints depend on the system that the sensor will be incorporated into and should be considered when selecting a pressure sensor.
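The working-range guidance above (keep normal operation under roughly 75% of the rated range, and rate the transducer about 25% above the highest expected spike) can be captured in a small selection helper. The Python sketch below is only an illustration: the candidate rated ranges, the example operating pressure, and the spike value are assumed, and the bar-to-psi factor is the one quoted in the conversion list above.

```python
# Sketch of the working-range rules of thumb above: keep normal operation below
# ~75% of the rated range, and pick a rating at least 25% above the highest spike.
# The candidate ratings and example pressures are illustrative assumptions.

PSI_PER_BAR = 14.504  # conversion factor from the list above


def pick_rated_range_psi(max_operating_psi, max_spike_psi, candidate_ratings_psi):
    """Return the smallest candidate rating that satisfies both rules of thumb."""
    required = max(max_operating_psi / 0.75, max_spike_psi * 1.25)
    for rating in sorted(candidate_ratings_psi):
        if rating >= required:
            return rating
    raise ValueError("No candidate rating is large enough for this application")


# Example: a hypothetical process running at 4 bar with occasional spikes to 6 bar.
operating_psi = 4 * PSI_PER_BAR  # ~58 psi
spike_psi = 6 * PSI_PER_BAR      # ~87 psi
print(pick_rated_range_psi(operating_psi, spike_psi, [30, 60, 100, 150, 300, 500]))  # -> 150
```

In this example the spike criterion dominates, so the helper skips the 100 psi rating and returns 150 psi; an extra margin would still be advisable for high-cycling service, as the design tip notes.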
Understanding the media of the system is critical when selecting a pressure sensor. The media environments for the sensor could include:

Hydrogen and other gases, which are very compressible and completely fill any closed vessel in which they are placed.

Abrasive or corrosive liquids and gases such as hydrogen sulfide, hydrochloric acid, bleach, bromides, and waste water. Inconel X, phosphor bronze, beryllium copper and stainless steel are among the most corrosion-resistant materials used in pressure sensors. However, these materials require internal temperature compensation, in the form of a bi-metallic member, to offset the change in deflection of the sensor resulting from a change in temperature.

Radioactive systems, which should include highly sensitive sensors with explosion proof mechanisms.

The temperature of the media should also be considered when selecting a pressure sensor, to ensure the sensor can function in the range of the system.

Pressure port and process connection options generally come in male and female versions, and the standard connection depends on the application:

British Standard Pipe (BSP) - Large diameter pressure connectors are needed for lower pressure ranges.
National Pipe Thread (NPT) - Commonly used in automotive and aerospace industries.
Unified Fine Thread (UNF) - Commonly used in automotive and aerospace industries.
Metric Threads - Meet ISO specifications. They are denoted with an M and a number which is the outside diameter in millimeters.
Flush connectors - Used to provide a crevice-free interface, which is ideal for biotechnology, pharmaceutical or food process applications.
Dairy Pipe Standard - Used with hygienic pressure transmitters.
Autoclave Engineers - Used in high pressure applications.

Mechanical considerations include several application-driven device configurations:

Differential systems measure the pressure difference between two points.
Small diameter flow systems allow flowing liquid or gas to be measured as it moves through the system.
Flush diaphragms measure pressure in systems which have either completely flush or semi-flush exposed diaphragms to prevent buildup of material on the diaphragm and facilitate easy cleaning. Exposed diaphragm sensors are useful for measuring viscous fluids or media that are processed in a clean environment.
Replaceable diaphragms are easily replaceable within the system to ensure high accuracy.
Secondary containment houses the sensor to protect the device from environmental conditions.
Explosion proof sensors are used in hazardous conditions.

The electrical components of the pressure sensor are extremely important to consider and are specific to the application the sensor will be used in. Such specifications include electrical output, display, connections, signal conditioning and electrical features. Industrial buyers should consider the electrical output needed for seamless integration into the system controller.

Analog - The output voltage is a simple (usually linear) function of the measurement. Pressure sensors generally have an output of mV/V. Most sensors operate from 10 V to 32 V, unregulated supply. The device will also have internal regulators to provide a stabilized input to the electronic circuitry under varying supply voltages. Industrial sensors can have high-level voltage outputs of 0-5 VDC and 0-10 VDC. The output signal will lose amplitude and accuracy due to cable resistance when transmitting voltages over distances from a few inches up to about 30 ft, depending on the level.
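Since ratiometric mV/V outputs are common for bridge-based sensors, a short scaling example may help. This Python sketch assumes a hypothetical transducer with a 2 mV/V full-scale span, a 0-100 psi range, and 10 V excitation; none of these numbers come from the text, they are chosen only for illustration.

```python
# Converting a ratiometric bridge reading (mV) to pressure, assuming a hypothetical
# transducer with a 2 mV/V full-scale span over 0-100 psi and 10 V excitation.
# These numbers are illustrative assumptions, not values from a datasheet.


def pressure_from_mv(reading_mv, excitation_v=10.0, span_mv_per_v=2.0, full_scale_psi=100.0):
    """Scale a bridge output voltage to engineering units (psi)."""
    full_scale_mv = span_mv_per_v * excitation_v  # 20 mV at full scale here
    return (reading_mv / full_scale_mv) * full_scale_psi


print(pressure_from_mv(5.0))   # 5 mV with 10 V excitation -> 25.0 psi
print(pressure_from_mv(20.0))  # full-scale output          -> 100.0 psi
```

Because the output is ratiometric, the same reading scales with the actual excitation voltage, which is one reason long unamplified cable runs degrade accuracy as described above.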
Design Tip: A zero-based output signal, such as 0-5 VDC, does not offer constant feedback at zero pressure, because the controller cannot tell whether the system is operating normally or there is a problem. Analog current outputs or transmitters, such as 4-20 mA, are suitable for sending signals over long distances.

4-20 mA current: popular for long distances.
Frequency: The output signal is encoded via amplitude modulation (AM), frequency modulation (FM), or some other modulation scheme such as sine wave or pulse train; however, the signal is still analog in nature.
RS485 (MODbus): RS232 and RS485 are serial communication protocols that transmit data one bit at a time. RS232 provides a standard interface between data terminal and data communications equipment.
CANbus, J1939, CANopen: connect industrial devices such as limit switches, photoelectric cells, etc. to programmable logic controllers (PLCs) and personal computers (PCs).
FOUNDATION Fieldbus: a serial, all-digital, two-way communication system that serves as a local area network (LAN) for factory instrumentation and control devices.
Special Digital (TTL): devices that produce digital outputs other than standard serial or parallel signals. Examples include transistor-transistor logic (TTL) outputs.
Combination: includes analog and digital outputs.

The display is the interface the user interacts with to observe the pressure sensor reading.
- Analog Meter - The device has an analog meter or simple visual indicator.
- Digital - The device has a display for numerical values.
- Video - The device has a CRT, LCD or other multi-line display.

Connectors provide the electrical termination of the sensor. The use of connectors adds benefits to pressure sensor installation, such as easy removal from the system for recalibration or system maintenance.
- Connectors or integral cables connect the sensor to the rest of the system. Integral cables are used for submersed applications, such as on pumps, or for hose-down situations.
- Mating connectors and cable accessories are needed for applications where sealing the sensor is important. Threads are very common for low to medium pressures. National Pipe Threads (NPT) are tapered in nature and require some form of Teflon® tape or putty to seal the thread to a piece of equipment.
- Connector/cable orientation allows the sensor to be unplugged and the sensor un-threaded. High-vibration environments require an inline connector at the end of a length of wire to reduce the loading on the connector pins. Inline connectors increase the life of the sensor. When selecting a cable option, the outer jacket material and inner conductor insulators must be selected to match the application.
- Wiring codes and pin-outs

External zero and span potentiometers are used to compensate for stray current in the measuring circuit to prevent distortion.

DIN rail mount or in-line signal conditioning for mV/V units will amplify output signals. These devices are used for applications requiring high-level analog outputs and where the pressure transducer is exposed to conditions detrimental to internal signal conditioning, or where the required pressure transducer configuration will not accommodate an internal amplifier.

Wireless sensors allow the information to be transmitted via a wireless signal to the host. Typically, the wireless signal is a radio frequency (RF) signal.

Switch sensors change the output to a switch or relay closure to turn the system on or off with changes in pressure.
Temperature output devices provide temperature measurement outputs in addition to pressure. Negative pressure outputs are available with devices that provide differential pressure measurements. Alarm indicator devices have a built-in audible or visual alarm to warn operators of changes and/or danger in the system.

Frequency response identifies the highest frequency that the sensor will measure without distortion or attenuation. The sensor's frequency response should be 5-10 times the highest frequency component in the pressure signal. Sometimes this feature is given as a response time, and the relation is F_B = 1/(2πτ), where F_B is the frequency at which the response is reduced by 50% and τ is the time constant, i.e., the time for the output to rise to 63% of its final value following a step input change. A short worked example of this relation follows below.

The environment the sensor will operate in should be considered when selecting a pressure sensor. Environmental considerations such as temperature, indoor/outdoor use and use in hazardous locations can affect the accuracy of the sensor. Changes in temperature are directly related to changes in pressure. [Figure: a plot of the vapor pressure of water versus the water's temperature. Image credit: purdue.edu]

Operating temperature is important to consider: buyers should be aware of the ambient and media temperatures in the environment of the sensor. If the sensor is not compensated correctly, the reading can change drastically. Temperature compensation devices include built-in factors that prevent pressure measurement errors due to temperature changes. A material such as the nickel alloy Ni-Span C requires no internal temperature compensation because it is relatively insensitive to temperature. Electromagnetic and radio frequency interference (EMI/RFI) have also been identified as environmental conditions that affect the performance of safety-related electrical equipment.

An Ingress Protection (IP) or National Electrical Manufacturers Association (NEMA) rating may be required. IP ratings are used in Europe and cover three parameters: protection of the equipment, protection of personnel, and protection of the equipment against harmful penetration of water. IP does not specify degrees of protection against mechanical damage, explosions, moisture, corrosive vapors or vermin. The NEMA standard rates enclosures for the environments surrounding the electrical equipment, covering conditions such as corrosion, rust, icing, oil, and coolants. A full explanation of IP and NEMA standards can be found at Solid Applied Technologies. If the pressure sensor is being used outdoors, the sensor may be sealed or vented depending on the pressure range and the accuracy needed. Environmental exposure can also include animal and rodent tampering. If the pressure sensor is going to be used in a hazardous area, the class type and group type must be known in order for the product to comply with NEC or CEC codes in North America.

Some systems may require special calibration and approvals. Standard 11-point calibration means the sensor is calibrated at 11 pressure points spanning the full-scale range of the sensor; points such as 0%, 20%, 40%, 60%, 80%, 100%, 80%, 60%, 40%, 20%, 0% can be used, and going up and down the pressure range checks for hysteresis. Special calibration adds further calibration points. Pressure sensors may also need special approvals or certifications for operation in certain environments to protect the user and the environment, and specific testing, cleaning procedures and labeling may need to be implemented for particular applications.
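As a small worked example of the frequency-response relation above, the Python sketch below converts a response time constant into the corresponding bandwidth and checks it against the 5-10x rule of thumb. The 2 ms time constant and 25 Hz signal frequency are assumed example values.

```python
# Sketch: relating a sensor's response time constant to its usable bandwidth
# and checking it against the 5-10x rule of thumb described above.
# The example numbers (2 ms time constant, 25 Hz pressure signal) are assumptions.

import math

def bandwidth_from_time_constant(tau_s):
    """F_B = 1 / (2 * pi * tau): frequency at which the response has rolled off."""
    return 1.0 / (2.0 * math.pi * tau_s)

def is_fast_enough(tau_s, highest_signal_hz, margin=5.0):
    """True if the sensor bandwidth is at least `margin` times the signal frequency."""
    return bandwidth_from_time_constant(tau_s) >= margin * highest_signal_hz

tau = 0.002        # 2 ms time constant (63% rise following a step change)
f_signal = 25.0    # highest frequency component in the pressure signal, Hz

print(round(bandwidth_from_time_constant(tau), 1))   # ~79.6 Hz
print(is_fast_enough(tau, f_signal))                 # False: 79.6 Hz < 5 x 25 Hz
```

Under these assumed numbers the sensor would be too slow, so a shorter time constant (or a lower-frequency application) would be needed.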
Some additional considerations when installing or budgeting for a sensor in a system are: How accessible will the sensor be? How often will it need to be serviced?

Fluid level in a tank: A gauge pressure sensor can be used to measure the pressure at the bottom of a tank. Fluid level can be calculated using the relation h = P/(ρg), where h is the depth below the liquid surface, ρ is the liquid density and g is the acceleration of gravity.

Fluid flow: Placing an orifice plate in a pipe section produces a pressure drop that can be used to measure flow. This method is commonly used because it does not cause clogging and the pressure drop is small compared to many other flow meters. The relation is V0 = √(2(Ps − P0)/ρ). In some cases, differential pressures of only a few inches of water are measured in the presence of common-mode pressures of thousands of pounds per square inch. A worked sketch of both relations follows at the end of this section.

Automotive: A wide variety of pressure applications exist in the modern electronically controlled auto. Among the most important are:
- Manifold absolute pressure (MAP). Many engine control systems use the speed-density approach to intake air mass flow rate measurement. The mass flow rate must be known so that the optimum amount of fuel can be injected.
- Engine oil pressure. Engine lubrication requires pressures of 10-15 psig.
- Evaporative purge system leak detection. To reduce emissions, modern fuel systems are not vented to the atmosphere. Fumes resulting from temperature-induced pressure changes in the fuel tank are captured in a carbon canister and later recycled through the engine.
- Tire pressure. Recent development of the "run-flat" tire has prompted development of remote tire pressure measurement systems.

[Figure: tank level measurement as a differential pressure application. Image credit: FUTEK]

1. Diaphragm - A strain gauge diaphragm typically consists of a flat circular piece of uniform elastic material, manufactured in a variety of surface areas and thicknesses to optimize performance at lower and higher pressure ranges.
2. ASME - American Society of Mechanical Engineers

Related products and services: Digital pressure gauges use electronic components to convert applied pressure into usable signals; the gauge readout is a digital numerical display. Mechanical (analog) pressure gauges are mechanical devices that include bellows, Bourdon tube, capsule element and diaphragm element gauges. Pressure gauges are used for a variety of industrial and application-specific pressure monitoring tasks, including visual monitoring of air and gas pressure for compressors, vacuum equipment, process lines and specialty tank applications such as medical gas cylinders and fire extinguishers. Pressure instruments are used to measure, monitor, record, transmit or control pressure; these devices include pressure sensors, transducers, elements, and instruments. Pressure switches are actuated by a change in the pressure of a liquid or gas; they activate electromechanical or solid-state switches upon reaching a specific pressure level. Pressure transmitters translate the low-level output of a sensor or transducer into a higher-level signal suitable for transmission to a site where it can be further processed.
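The following Python sketch works through the level and flow relations given earlier in this section. The fluid properties and pressures are assumed example values (water, SI units), and a real orifice-plate calculation would also apply a discharge coefficient, which is omitted here.

```python
# Worked sketch of the tank-level and orifice-flow relations above.
# Water properties and the pressure values are assumed examples;
# a practical orifice calculation would include a discharge coefficient.

import math

RHO_WATER = 1000.0   # kg/m^3
G = 9.81             # m/s^2

def tank_level_m(gauge_pressure_pa, rho=RHO_WATER):
    """h = P / (rho * g): liquid depth above a gauge sensor at the tank bottom."""
    return gauge_pressure_pa / (rho * G)

def orifice_velocity_ms(delta_p_pa, rho=RHO_WATER):
    """V0 = sqrt(2 * (Ps - P0) / rho): ideal velocity from the pressure drop."""
    return math.sqrt(2.0 * delta_p_pa / rho)

print(round(tank_level_m(19_620), 2))        # 19.62 kPa of water -> 2.0 m of level
print(round(orifice_velocity_ms(5_000), 2))  # 5 kPa drop -> ~3.16 m/s
```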
Vacuum sensors are devices for measuring vacuum or sub-atmospheric pressures.
http://www.globalspec.com/learnmore/sensors_transducers_detectors/pressure_sensing/pressure_sensors_instruments
DIVISION OF A POLYNOMIAL BY A MONOMIAL
Division, like multiplication, may be distributive. Consider, for example, the problem (4 + 6 - 2) ÷ 2, which may be solved by adding the numbers within the parentheses and then dividing the total by 2. Thus, (4 + 6 - 2) ÷ 2 = 8 ÷ 2 = 4. Now notice that the problem may also be solved distributively: 4/2 + 6/2 - 2/2 = 2 + 3 - 1 = 4.
CAUTION: Do not confuse problems of the type just described with another type which is similar in appearance but not in final result. For example, in a problem such as 2 ÷ (4 + 6 - 2), the beginner is tempted to divide 2 successively by 4, then 6, and then -2, writing 2 ÷ (4 + 6 - 2) ≠ 2/4 + 2/6 - 2/2. Notice that the "equals" sign has been canceled, because 2 ÷ 8 is obviously not equal to 2/4 + 2/6 - 2/2. The distributive method applies only in those cases in which several different numerators are to be used with the same denominator. When literal numbers are present in an expression, the distributive method must be used, as in a problem such as (6x² + 4x) ÷ 2x = 3x + 2. Quite often this division may be done mentally, and the intermediate steps need not be written out.
DIVISION OF A POLYNOMIAL BY A POLYNOMIAL
Division of one polynomial by another proceeds as follows:
1. Arrange both the dividend and the divisor in either descending or ascending powers of the same letter.
2. Divide the first term of the dividend by the first term of the divisor and write the result as the first term of the quotient.
3. Multiply the complete divisor by the quotient term just obtained, write the terms of the product under the like terms of the dividend, and subtract this expression from the dividend.
4. Consider the remainder as a new dividend and repeat steps 1, 2, and 3.
Consider the example (10x³ - 7x²y - 16xy² + 12y³) ÷ (5x - 6y). We begin by dividing the first term, 10x³, of the dividend by the first term, 5x, of the divisor. The result, 2x², is the first term of the quotient. Next, we multiply the divisor by 2x², subtract this product from the dividend, and use the remainder as a new dividend. We get the second term, xy, of the quotient by dividing the first term, 5x²y, of the new dividend by the first term, 5x, of the divisor. We then multiply the divisor by xy and again subtract from the dividend. Continue the process until the remainder is zero or is of a degree lower than the divisor. In the example being considered, the remainder is zero (indicated by the double line at the bottom of the written-out division). The quotient is 2x² + xy - 2y².
Long division of polynomials does not always come out even; in one example of this kind, the remainder is -4, and the term -3x in the second step of the problem is subtracted from zero, since there is no term containing x in the dividend. When writing down a dividend for long division, leave spaces for missing terms which may enter during the long division process.
In arithmetic, division problems are often arranged so as to emphasize the relationship between the remainder and the divisor, in the form dividend ÷ divisor = quotient + remainder/divisor. This same type of arrangement is used in algebra; for example, the results of the problem just described could be written in that form.
Remember, before dividing polynomials, arrange the terms in the dividend and divisor according to either descending or ascending powers of one of the literal numbers. When only one literal number occurs, the terms are usually arranged in order of descending powers. For example, in the polynomial 2x² + 4x³ + 5 - 7x, the highest power among the literal terms is x³. If the terms are arranged according to descending powers of x, the term in x³ should appear first.
The x³ term should be followed by the x² term, the x term, and finally the constant term. The polynomial arranged according to descending powers of x is 4x³ + 2x² - 7x + 5. Suppose that 4ab + b² + 15a² is to be divided by 3a + 2b. Since 3a divides evenly into 15a², arrange the terms according to descending powers of a. The dividend then takes the form 15a² + 4ab + b². A short code sketch of the long-division procedure follows.
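The following Python sketch implements the four-step long-division procedure above for single-variable polynomials whose terms are already arranged in descending powers. The dividend is the polynomial 4x³ + 2x² - 7x + 5 discussed above; the divisor (2x + 1) is an assumed example, not one taken from the text.

```python
# Sketch: long division of single-variable polynomials following the four
# steps listed above. Coefficients are stored from highest power to lowest
# (already "arranged in descending powers"). The divisor in the demo is an
# assumed example chosen for illustration.

def poly_divide(dividend, divisor):
    """Return (quotient, remainder) as coefficient lists, highest power first."""
    quotient = [0.0] * (len(dividend) - len(divisor) + 1)
    remainder = list(dividend)
    for i in range(len(quotient)):
        # Step 2: divide the leading term of the current dividend by the
        # leading term of the divisor.
        coeff = remainder[i] / divisor[0]
        quotient[i] = coeff
        # Step 3: multiply the whole divisor by that term and subtract.
        for j, d in enumerate(divisor):
            remainder[i + j] -= coeff * d
    # Step 4: what is left, of degree lower than the divisor, is the remainder.
    return quotient, remainder[len(quotient):]

# (4x^3 + 2x^2 - 7x + 5) ÷ (2x + 1)
q, r = poly_divide([4, 2, -7, 5], [2, 1])
print(q, r)   # [2.0, 0.0, -3.5] [8.5]  ->  quotient 2x^2 - 3.5, remainder 8.5
```

Writing the result in the arrangement described above, (4x³ + 2x² - 7x + 5) ÷ (2x + 1) = 2x² - 3.5 + 8.5/(2x + 1).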
http://www.tpub.com/math1/10g.htm
February 13, 2013 Samples are currently making news for NASA’s planetary exploration program. Last August, the rover Curiosity, equipped with a package of laboratory instruments, landed on Mars. On February 9th the rover’s robotic arm drilled its first hole in a rock selected by scientists. In their attempt to gain more information about Mars, scientists will use the rover’s science package to remotely analyze these samples on the martian surface. The results will give them some fairly detailed knowledge of the chemical and mineral make-up of these rocks. But what else can we possibly learn from samples? Geologists in general and planetary scientists in particular often emphasize that “such and such” cannot be known for certain “until we obtain samples” of some planetary surface or outcrop. What is this obsession with samples? Why do (some) scientists value them so highly and exactly what do they tell us? Answers to this question (for there is not a single, simple one) are more involved than you might think. With today’s technology, sample analysis done remotely on a distant planetary surface is limited, providing us with only the most rudimentary information. Some of the things we want to know, such as the formation age of rocks, can only be discovered with high-precision, careful laboratory work. That’s a tall order for remote systems. For example, one of the most common techniques used to “date” a rock’s age requires the separation of individual minerals that make up the rock. Next, the ratio of minute trace elements and their isotopes in each grain must be determined. Assuming that the rock has not been disturbed by heating or a crater shock event, this information can be used to infer an age of formation. If we can convince ourselves that the rock being studied is representative of some larger unit of regional significance, we can use this information to reconstruct the geological history of the region and, eventually, the entire alien world. So sample analysis is an important aspect of geological exploration. As I have written previously, we used images to geologically map the entire Moon, noting its crater, basin and mare deposits, and their relative sequence of formation. When the first landing missions were sent to the Moon, great emphasis was placed on obtaining representative samples of each landing site. It was thought that such samples could be studied in detail in Earth laboratories and then extrapolated to the larger regional units shown on the geologic maps. With few exceptions, this approach worked pretty well. As we moved from the landing sites on the maria (ancient lava flows) into the complex highlands, the “context” of the samples – their relation to observed regional landforms or events – became more obscure. A lunar highland rock is typically a complex mixture of earlier rocks, sometimes showing evidence for several generations of mixture, re-fragmentation, and re-assembly. Loose samples lying on the surface were collected from the highlands; none of them were sampled “in place” (i.e., from bedrock). Although this is also true of the rocks from the maria, we observed bedrock “in place” at most of the mare sites and may have actually collected at least one sample from lava bedrock at the edge of Hadley Rille near the Apollo 15 site. None of the highland samples possess the same degree of contextual certainty as the mare samples. This fact, coupled with their individual complexity, sometimes leads to consternation over exactly what the samples are telling us.
It doesn’t help that the Moon’s early history was itself very complex, with magmas solidifying, lavas erupting, volcanic ash hurled into space and laid down in bedded deposits. On top of all those processes were cratering events that mixed and reassembled everything into a complex geologic puzzle, a virtual stew of processes and compositions that hold clues to billions of years of the Moon’s (and Earth’s) history. Nonetheless, we can still perceive most of the story of the Moon’s history, enough at this point to tell us that without those lunar samples in hand, we would be well and truly ignorant of even its most important events and basic processes. The fixation with sample return stems from the science community’s belief that with just a few more carefully selected samples from some key units, all that is now dark will be made light. There may be severe consequences to the science community’s insistence on the primacy of sample return. The most recent “decadal survey,” the ten-year community study that gives NASA our wish lists for missions and exploration, made a sample return from Mars the centerpiece and sine qua non of future robotic missions. The NRC report was so emphatic in its insistence that it might be paraphrased as saying, in effect, “Give us a Mars sample or give us death!” (with apologies to Patrick Henry). Alas, that formulation may be more apt than anyone desired, as proposed out year budgets for the next five years of NASA funding cuts planetary exploration by almost 30% – a landscape of shifting priorities that raises questions and uncertainty for the future. Robotic sample return missions to large bodies like the Moon or Mars are expensive because they consist of multiple spacecraft – a lander, which softly places the spacecraft on the surface, a device (such as a rover) to collect and store the samples and an ascent vehicle to bring the sample back to Earth. While none of these functions individually are exceedingly difficult to achieve, all of them (done correctly and in proper sequence) add up to a substantially difficult, complex mission profile. In the space business (as with most endeavors), more difficult and complex means that more money is required. Moonrise, a proposed robotic mission to return about a kilogram of sample from the far side of the Moon, was projected to cost around one billion dollars. A Mars sample return mission consisted of three separate missions: one to land, collect and store the samples, another one to retrieve those samples and place them into orbit around Mars, and a final mission to return the samples to Earth. With each step costing up to several billion dollars, such a technically challenging Mars sample return mission would be unaffordable. Although samples have many advantages over remote measurements, those benefits must be weighed against the cost and difficulty of obtaining them. Perhaps the complete extent of what can be accomplished remotely has yet to be fully explored. As mentioned above, absolute ages are key information that we get from samples. Several dating techniques could be adapted to a remote instrument; these methods may not be the most precise imaginable, but they might be of adequate precision to answer the most critical questions. On the Moon, we do not know the absolute age of the youngest lava flows in the maria; age estimates range from as old as ~ 3 billion years to as young as less than 1 billion years. 
In such a case, a measurement with 10-20% precision is adequate to resolve the first-order question: When did lunar volcanism cease? In addition, such a result would enable us to calibrate the cratering curve for this part of lunar history, a function that is widely used to infer absolute ages throughout the Solar System. A solid result obtained from a robotic lander – even such a relatively imprecise one – would have important implications for lunar volcanic processes, thermal history, impact flux, and bulk composition. Complex robotic operations in space are always dicey, especially when attempting something for the first time. Samples are a key part of a planetary scientist’s toolbox but their acquisition is difficult, time-consuming and expensive. Samples from robotic missions are more likely to have ambiguous context, thus rendering less scientific value. Scientifically useful sample collection may remain problematic until people can physically go to exotic places in space and fully use their complex cognitive skills. This trade-off between cost and capability must be carefully considered when weighing future exploration alternatives and desired outcomes. Previous relevant posts: November 17, 2012 Space missions are commonly thought of as the ultimate in “high tech.” After all, rockets blast off into the wild blue yonder, accelerate their payloads to hypersonic and orbital speeds and then operate in zero gravity in the ice-cold, black sky of space. It requires our best technology to pull off this modern miracle and even then, things can go wrong. Why would anyone believe that with high technology, sometimes less can be more – that we’re missing a bet by not utilizing current technology. Like the intellectual tug of war involving man vs. machine, there also is a tug of war between proven technology and high-tech. Creating these barriers and distinctions is nonsensical. We need it all. And we can have it all. Point in question – in situ resource utilization (ISRU), which is the general term given to the concept of learning how to use the materials and energy we find in space. The idea of learning how to “live off the land” in space has been around for a long, long time. Countless papers have been written discussing the theory and practice of this operational approach. Yet to date, the only resource we have actually used in space is the conversion of sunlight into electricity via arrays of photovoltaic cells. Such power generation is clearly “mature” from a technical viewpoint, but it had to be demonstrated in actual spaceflight before it became considered as such (the earliest satellites were powered by batteries). The reason we have not used ISRU is because we’ve spent the last 30 years in low Earth orbit, without access to the material resources of space. Many ideas have been proposed to use the material resources of the Moon. A big advantage of doing so is that much less mass needs to be transported from Earth. The propellant needed to transport a unit of mass from the Earth to the Moon keeps us hobbled to the tyranny of the rocket equation – a constant roadblock to progress. If it takes several thousand dollars to launch one pound into Earth orbit, multiply that amount times ten to get the cost to put a pound of mass on the Moon. In the space business, new technologies tend to be viewed with a jaundiced eye. 
Aerospace engineers in particular are typically very conservative when it comes to integrating new technology into spacecraft and mission designs, largely on the basis that if we are not careful, missions can fail in a spectacularly dreadful fashion. To determine if a technology is ready for prime time, NASA developed the Technology Readiness Level (TRL) scale, a nine-step list of criteria that managers use to evaluate and classify how mature a technical concept is and whether the new technology is mission ready. Resource utilization has a very low TRL – usually 4 or lower. Thus, many engineers don’t think of ISRU as a viable technique to implement on a real mission. It seems too “far out” (more science fiction than science). Believing that a technology is too immature for use can become a self-fulfilling prophecy, a “Catch-22” for spaceflight: a technology is too immature for flight because it’s never flown and it’s never flown because it’s too immature. This prejudice is widespread among many “old hands” in the space business, who wield TRL quite effectively in order to keep new and innovative ideas stuffed in the closet and off flight manifests. In truth, the idea that the processing and use of off-planet resources is “high technology” is exactly backwards – most of the ideas proposed for ISRU are some of the simplest and oldest technologies known to man. One of the first ideas advanced for using resources on the Moon involves building things out of bulk regolith (rocks and soil of the lunar surface). This is certainly not high-tech; the use of building aggregate dates back to ancient times, reaching a high level of sophistication under the Romans, who over 2000 years ago built what is still the largest free-supported concrete dome in the world (the Pantheon). The Coliseum was made of concrete faced with marble. The Romans also built a complex network of roads, some of which remain in use to this day; paving and grading is one of the oldest and most straightforward technologies known. Odd as it may seem, sand and gravel building material is the largest source of wealth from a terrestrial resource – the biggest economic material resource on Earth. Recently, interest has focused on the harvesting and use of water, found as ice deposits, at the poles of the Moon. Digging up ice-laden soil and heating it to extract water is very old, dating back at least to prehistoric times. This water could contain other substances, including possibly toxic amounts of some exotic elements, such as silver and mercury. No problem – we understand fractional distillation, a medieval separation technique based on the differing boiling temperatures of various substances. Again, this concept is not particularly high-tech, as only a heater and a cooling column are needed (basically the configuration of an oil refinery). Some workers have suggested that lunar regolith could be mined for metals, which can then be used to manufacture both large construction pieces and complex equipment. Extracting metal from rocks and minerals is likewise very old, developed by the ancients and simply improved in efficiency over time. Processes like carbothermal reduction have been used for hundreds of years. The reactions and yields are well known, and the machinery needed to create a processing stream is simple and easy to operate. In short, the means needed to extract and use the material wealth of the Moon and other extraterrestrial bodies is technology that is centuries old.
Even advanced chemical processing was largely completely developed by the 19th Century in both Europe and America. The “new” aspects of ISRU technology revolve around the use of computers to control and regulate the processing stream. Such control is already used in many industries on Earth, including the new and potentially revolutionary technique of three-dimensional printing. A key aspect of the old “Faster-Cheaper-Better” idea (one NASA never really embraced) was to push the envelope by relying more on “off-the-wall” ideas, whereby more innovation on more flights would lead to greater capability over time. Nothing that we plan to do on the Moon involves magic, alchemy or extremely high technology. Like most new fields of endeavor, we can start small and build capability over time. The TRL concept was designed as a guideline. It was not intended as a weapon eliminating possibly game-changing techniques from consideration or to carve out funding territories. Attitudes toward TRL must change at all levels, from the lowly subsystem to the complete, end-to-end architectural plan. A critical first step toward true space utilization and for understanding and controlling our destiny there is to recognize and take advantage of the leverage one gets from lunar (and in time planetary) resource utilization. September 8, 2012 Rick Tumlinson of the Space Frontier Foundation published a “free-enterprise” critique of the Republican platform in regard to the American civil space program. Indeed, the text of the space plank is vague (no doubt intentionally, so as to give the candidate maximum flexibility to structure the space program to align with his vision and goals for the country). But what I found most interesting was the underlying premise and assumptions in Tumlinson’s article, a worldview that I find striking. In brief, Tumlinson approves of the current administration’s direction for our civil space program. The U.S. has stepped back from pushing toward the Moon, Mars and beyond and redirected NASA on a quest for “game-changing” technologies (to make spaceflight easier and less costly), while simultaneously transitioning launch to low Earth orbit (LEO) operations to private “commercial space” companies selected by our government to compete for research and development funding and contracts. Many see this as gutting NASA and the U.S. national space program. To be clear, the term “commercial space” in this context does not refer to the long-established commercial aerospace industry (e.g., Lockheed-Martin, Boeing) but to a collection of startup companies dubbed “New Space” (typically, companies founded by internet billionaires who have spoken much and often about lofty space plans, but have actually flown in space very little). Tumlinson criticizes the Republican space plank because it does not explicitly declare that a new administration would continue the current policy. In his view, the very idea of a federal government space program, including a NASA-developed and operated launch and flight system, is a throwback to 1960’s Cold War thinking. Instead, he envisions space as a field for new, flexible and innovative companies, untainted by stodgy engineering traditions or bloated bureaucracy. Many space advocates on the web hold this viewpoint – “If only government would get out of the way and give New Space a chance, there will be a renaissance in space travel!” But travel to where? And why? The idea that LEO flight operations should be transitioned to the commercial sector is not new. 
It was a recommendation of the 2004 Aldridge Commission report on implementing the Vision for Space Exploration (VSE). NASA itself started the Commercial Orbital Transportation Services program (COTS) in 2006, designed to nurture a nascent spaceflight industry by offering subsidies to companies to develop and fly vehicles that could provision and exchange crew aboard the International Space Station. That effort was envisioned as an adjunct to – not a replacement of – federal government spaceflight capability. The termination of the VSE and the announcement of the “new direction” in space received high cover from the 2009 Augustine committee report, which concluded that the current “program of record” (e.g., Constellation) was unaffordable. The Augustine Committee received presentations with options to reconfigure Constellation whereby America could have returned to the Moon (to learn how to use resources found in space) under the existing budgetary cap, but they elected to start from first principles. Hence, we have something called Flexible Path, which doesn’t set a destination or a mission but calls on us “to develop technology” to go anywhere (unspecified) sometime in the future (also unspecified). With target dates of 2025 for a “possible” human mission to a near-Earth asteroid and a trip to Mars “sometime in the 2030’s,” timelines and milestones for the Flexible Path offer no clarity or purpose. Try getting a loan or finding investors using a “flexible” business plan. Tumlinson argues that both political parties should embrace this new direction because New Space will create greater capability for lower cost sooner. He also makes much about the philosophical inclinations of the Republican Party (the “conservative” major party in American politics) – Why don’t the Republicans support free enterprise in space? Why are they putting obstacles in the way of all these new trailblazing entrepreneurs? As to those obstacles, it is unclear exactly what they are. True enough, there are regulatory and liability issues with private launch services, but not of such magnitude that they cannot be handled through the traditional means of indemnification (e.g., launch insurance). The COTS program record of the past decade largely has not been a contract let for services, but a government grant for the technical development of launch vehicles and spacecraft. Close reading reveals the real issue: Tumlinson wants more of NASA’s shrinking budget to finance New Space companies. He is concerned that a new administration might cut off this flow of funding. However, what will cut off the flow of funding is having no market, no direction, and no architectural commitment – regardless of who occupies the White House. The belief of many New Space advocates is that once they are established to supply and crew the ISS, abundant and robust private commercial markets will emerge for their transportation services. Although many possible services are envisioned, space tourism is the activity most often mentioned. Whether such a market emerges is problematic. Although Richard Branson’s Virgin Galactic has a back-listed manifest of dozens of people desiring a suborbital thrill ride (at a cost of a few hundred thousand dollars), those journeys are infinitely more affordable than a possible orbital trek (which will cost several tens of millions of dollars, at least initially). Nevertheless, there will no doubt be takers for a ticket. But what will happen to a commercial space tourism market after the first fatal accident? 
New Space advocates often tout their indifference to danger, but such bravado is neither a common nor wise attitude in today’s lawsuit-happy society (not to mention, the inevitable loss of confidence from a limited customer base). My opinion is that after the first major accident with loss of life, a nascent space tourism industry will become immersed in an avalanche of litigation and will probably fully or partly collapse under the ensuing financial burden. We are no longer the barnstorming America of the 1920’s and spaceflight is much more difficult than aviation. Despite labeling themselves “free marketers,” New Space (in its current configuration) looks no different than any other contractor furiously lobbying for government sponsorship through continuation of its subsidies. True free-market capitalists do not seek government funding to develop a product. Rather, they devise an answer to an unmet need, identify a market, seek investors and invest their own capital, provide a product or service and only remain viable by making a profit through the sale of their goods and services. Tumlinson bemoans the attitude of some politicians, ascribing venal and petty motives as to why they do not fully embrace the administration’s new direction, e.g., the oft-thrown label “space pork” to describe support for NASA’s Space Launch System. In regard to New Space companies, Tumlinson asserts that, “[We] have to both give them a chance and get out of the way.” But in fact, he does not want government to “get out of the way” – at least not while they’re still shoveling millions into New Space company coffers – nor when they need (and they will) a ruling on, or protection of, their property rights in space. Any entity that accepts government money is making a “deal with the devil,” whereby it is understood that such money comes with oversight requirements (as well it should, consisting of taxpayer dollars). Successful commercialization of space has occurred in the past (e.g., COMSAT) and will occur in the future. But the creation of a select, subsidized, quasi-governmental industry is not by any stretch of the imagination what we commonly understand free market capitalism to mean. It is more akin to oligarchical corporatism, a common feature of the post-Soviet, Russian economy. True private sector space will be created and welcomed, but not through this mechanism, whose most worrisome accomplishment to date has been to effectively distract Americans from noticing the dismantling of their civil space program and preeminence in space. August 26, 2012 Because of his flying career and the life that he led, Neil Armstrong’s passing has many recounting his place in the history of spaceflight and remembering a life well lived. He holds a special place in our hearts and a unique place in history – and he always will. I met Neil Armstrong at a conference, an encounter I won’t forget. A quiet, unassuming man of medium height and build, pleasant and genial, surrounded by a horde of admirers and well-wishers, I could tell he was slightly uncomfortable with (but resigned to) the adulation he received. In his mind, the 1969 flight of Apollo 11 was simply another professional assignment he flew as a test pilot – the landing on the Moon was of more significance than his first step on it. He was an aviator, in every sense of that word. The landing was an accomplishment for humanity – a giant step for mankind. My glimpses of Neil come not from personal encounters with him, but from others who knew him. 
During a discussion several years ago with Dave Scott (Apollo astronaut and Commander of the 1971 Apollo 15 mission), I inquired about an obscure incident during the 1966 flight of Gemini 8 (flown by Neil and Dave). That mission conducted the first docking of two spacecraft in space and I wanted to know some details of the emergency experienced by the crew on that flight. The incident had occurred shortly after the docking, when the Gemini-Agena spacecraft began to roll slightly. The rate of rotation became greater with time and it was evident that something was very wrong. Neil, as commander, was responsible for “flying” the spacecraft but couldn’t get the rolling under control. Thinking that the Agena (their unmanned target vehicle) was responsible, the crew made the decision to undock from it (they were out of contact with Mission Control at the time). As soon as they did, the Gemini spacecraft started to roll and tumble at an ever increasing and alarming rate. Dave recalled with a chuckle that Neil looked over at him, pointed at the attitude control stick and said “See if you can do anything with it!” Dave’s recollection of their exchange gave me a glimpse of a very human moment in a life and death situation. This was serious – if they couldn’t regain control, they would black out from the centrifugal forces in the tumbling vehicle. Neil kept his cool, activated the re-entry thrusters and soon stabilized the bucking Gemini spacecraft. The solution saved their lives but ended the mission, sending them home prematurely but safely. The story of the first lunar landing is well known. The automatic systems of the Apollo 11 Lunar Module Eagle were targeting the vehicle into a large crater filled with automobile-sized boulders. Landing there would be disastrous, as the LM would likely topple over on touchdown, eliminating the crew’s ability to liftoff from the Moon and return home. Taking manual control, Neil (with Mission Control advising the crew they had thirty seconds of fuel left) guided the LM over the hazardous debris field to a safe touchdown a few hundred meters beyond the original landing site. Tension during the agonizingly long pause in the air-to-ground communications was palpable. Relief could be heard in Capcom Charlie Duke’s voice as Neil calmly announced that the Eagle had landed. Yet again, a critical situation expertly handled by a test pilot just doing his job – the calm and collected decision making necessary when flying finicky machines near the edges of their performance envelopes. Neil’s scientific work on the Moon during his EVA warrants special mention. Being the first humans to land on another world, it is understandable that the crew had many ceremonial duties to perform. Although they had been carefully instructed to stay close to the LM, without informing Mission Control, Neil walked back a hundred meters or so to Little West crater (overflown earlier) to examine and photograph its interior. Those photos showed the basaltic bedrock of Tranquillity Base – documenting that the Eagle had landed amidst ejecta from that crater thereby establishing the provenance of samples collected during the crew’s limited time on the surface. According to Gene Shoemaker and Gordon Swann, both of the U.S. Geological Survey, Neil was one of the best students of geology among the Apollo astronauts. Through his work on the Moon, he showed an ability beyond mere mastery of the facts of geology – he intuitively grasped its objectives, as well as the philosophy of the science. 
Like every other facet of the mission, Neil understood and took this role seriously. No matter what topic was addressed or which role was taken, he could always be counted on to turn in his best performance. Armstrong understood the historic role of being the first man on the Moon but he never succumbed to the siren call of fame. He could have cashed in on his status but choose a different path. He was the quintessence of quiet dignity, possessing the “Aw shucks, t’weren’t nothin’” Gary Cooper-ish manner of understated heroism. After retirement, he lived happily in his home state of Ohio, taught aeronautics (his first love) at the University of Cincinnati, and advised on various engineering topics and problems for both government and industry. Throughout NASA’s post-Apollo efforts – without fanfare – he often and freely lent his efforts to the space program. He served his country with honor and dignity. As a test pilot, Neil routinely showed his ability to make quick, life saving decisions in dangerous situations. As a senior spokesman for space, he clearly voiced his concern over the dismantling and destruction of our national space program. Neil understood that our civil space program is a critical national asset, both as a technology innovator and a source of inspiration for the public. Who would recognize this more clearly than Neil Armstrong? From long experience, he knew what kinds of government programs worked and what kind didn’t. He knew his fellow man. In appearances before Congress in recent years, he outlined specific objections to our current direction in space. A true patriot, Neil did not hesitate to voice his opinions, whether they aligned with current policy or not. It’s become cliché to say that Neil Armstrong holds a unique place in history. On this occasion, we should pause to consider just how singular his place is. No one – not the first human to Mars nor the first crew to venture beyond the Solar System – will ever achieve the same level of significance as the first human to step onto the surface of another world. The flight of Apollo 11 was truly a once in a lifetime event – and by that, I mean in the lifetime of humanity. That first step was indeed one to “divide history,” as the NASA Public Affairs Office put it at the time. Goodbye, Neil Armstrong – and thank you. We’ve lost one of our most authoritative and articulate spokesmen for human spaceflight. I mourn him and share his valid concerns for our dysfunctional national space program. July 22, 2012 Elon Musk founded Space Exploration Technologies Corporation (SpaceX) in 2002. Its stated business objective was the development of launch services for a fraction of the cost of the then-available commercial launch providers – to the greatest extent practicable, they would create reusable pieces of its launch system, thereby greatly lowering the cost of space access. Toward that end, SpaceX sponsored the development of its own launch vehicle and engines, using a vertically integrated business model in which SpaceX would design, fabricate, prepare and operate a launch system. Alan Boyle’s recent review of commercial efforts to supply the International Space Station naturally included coverage of the successful flight of SpaceX’s Falcon 9 rocket and Dragon’s delivery demonstration. The article focused on the way commercial space is financed, specifically how NASA is sponsoring the development of some of these capabilities. 
This financial arrangement is the basis for a point repeatedly voiced by critics of the heralded vision of “New Space” replacing “government” space – a company like SpaceX is not actually commercial in the traditional free market sense, but simply another government-funded contractor using a different procurement model. Falcon 1 was the first rocket developed by SpaceX. It is a two-stage launch vehicle capable of putting a metric ton (1000 kg) into low Earth orbit. Falcon 1 uses a single Merlin, a SpaceX-developed, LOX-kerosene rocket engine producing ~570,000 newtons of thrust (for comparison, a single Shuttle main engine burns LOX-hydrogen fuel and produces about 2,300,000 newtons of thrust). The Falcon 1 was designed to put relatively small satellites into low earth orbit. With such payload capacity, it is also capable of sending 100-200 kg microsats beyond LEO, into cislunar space. Much of the private start-up capital for SpaceX was used to develop the Falcon 1. They also received some government funding from other than NASA. The Department of Defense (DoD) had need for reliable, quick, and cheap space access for small payloads. To that end, SpaceX received funding from several DoD entities, including several million dollars from the U.S. Air Force under a program to develop launch capability for DARPA (a defense research agency). Space X was given access to and the use of DoD launch facilities at the Reagan Test Site (formerly Kwajalein Missile Range) in the Marshall Islands. The early days of Falcon 1 development were not pretty. The first launch failed after 25 seconds of flight. The second flight successfully launched and staged, but did not reach orbit. After the third attempt at flight failed during staging, a review board looked in detail at SpaceX’s launch processing stream and made recommendations for some significant changes. The next launch was successful in putting a dummy payload into orbit. In July 2009, six years after Falcon 1 development had begun, SpaceX achieved its first (and so far, only) commercial space success with the launch and orbit of the Malaysian RazakSAT imaging satellite on a Falcon 1 launch. Typically when a space company finally achieves a long-sought success, it moves rapidly to exploit the new vehicle’s operational status and begins to aggressively market and sell its new launch service. However, no Falcon 1 launch has occurred since the success of RazakSAT. A visit to the SpaceX web site describes the Falcon 1 vehicle, but at the bottom of the page it states that a Falcon 1 launch is no longer available for purchase. Instead, small, one-ton class payloads will be accommodated in the future through “piggyback” rides on the new, Falcon 9 medium-class launch vehicle. For a company to spend six years and start up money developing a needed launch system, only to abandon it just as success and profit is at hand, is difficult to sort through. One could be forgiven for imagining that the development of the Falcon 1 as a commercial launch system was never intended but rather a pretext to flight qualify the pieces (specifically the Merlin 1 engine) used in the nine-engine cluster that powers the Falcon 9 launcher. 
Interestingly, others have noted that the now-cancelled NASA Constellation Ares I launch vehicle (“The Stick”), purportedly designed to launch the new Orion spacecraft to LEO, likewise appeared to be more of a development effort than a flight project, in that its various pieces (e.g., cryogenic upper stage, five-segment SRB) were all needed to build the large Ares V heavy lift rocket. Meanwhile, customers in need of low-cost options for launching small payloads are out of luck. Falcon 9 has yet to launch an ounce of commercial payload and Falcon 1 is not for sale. Of course, one can launch small satellites using Orbital’s Taurus launch vehicle, but its ~$50-70 M cost and recent record of unreliability (e.g., the Glory satellite launch failure) engender neither comfort nor confidence. More significantly, after investing in the R&D effort of a new, unproven company that was offering a low cost, small launch vehicle, SpaceX’s original DoD customers, banking on the creation of a quick, inexpensive capability to launch small satellites, saw their support of Falcon 1 go by the board. It appears that SpaceX dropped their initial operational vehicle for the promotion and promise of far more ambitious and distant goals. That template seems to work for them – NASA has “invested” more than $500 million in the Falcon 9 over the last five years. Now, SpaceX holds court to advance their founder’s Mars fantasies and plans for a Falcon “heavy” launch vehicle – designed and marketed as sending very large payloads into space, at unbelievably low prices. (As an aside, I thought that a New Space article of faith is that heavy lift is a boondoggle and that fuel depots are the way to go beyond LEO.) When New Space advocates characterize old NASA contractors, legacy launch companies and politicians with NASA centers in their districts as “pigs at the trough of government funding,” they’d be wise to watch out for a “pig” donning falcon feathers. Debate, like competition, is good and helpful, but only useful when advocates honestly pitch their abilities, services, products and intentions. Money is an important consideration; however, our nation’s ability to compete in the arena of space must be the overriding concern. In light of the current situation, that ability is slipping further and further away. We need to honestly assess what we’re buying before nothing remains of our decades-long investment and leadership role in space.
http://blogs.airspacemag.com/moon/category/space-transportation/
When the London Company sent out its first expedition to begin colonizing Virginia on December 20, 1606, it was by no means the first European attempt to exploit North America. In 1564, for example, French Protestants (Huguenots) built a colony near what is now Jacksonville, Florida. This intrusion did not go unnoticed by the Spanish, who had previously claimed the region. The next year, the Spanish established a military post at St. Augustine; Spanish troops soon wiped out the French interlopers residing but 40 miles away. Meanwhile, Basque, English, and French fishing fleets became regular visitors to the coasts from Newfoundland to Cape Cod. Some of these fishing fleets even set up semi-permanent camps on the coasts to dry their catches and to trade with local Indians, exchanging furs for manufactured goods. For the next two decades, Europeans' presence in North America was limited to these semi-permanent incursions. Then in the 1580s, the English tried to plant a permanent colony on Roanoke Island (on the outer banks of present-day North Carolina), but their effort was short-lived. In the early 1600s, in rapid succession, the English began a colony (Jamestown) in Chesapeake Bay in 1607, the French built Quebec in 1608, and the Dutch began their interest in the region that became present-day New York. Within another generation, the Plymouth Company (1620), the Massachusetts Bay Company (1629), the Company of New France (1627), and the Dutch West India Company (1621) began to send thousands of colonists, including families, to North America. Successful colonization was not inevitable. Rather, interest in North America was a halting, yet global, contest among European powers to exploit these lands. There is another very important point to keep in mind: European colonization and settlement of North America (and other areas of the so-called "new world") was an invasion of territory controlled and settled for centuries by Native Americans. To be sure, Indian control and settlement of that land looked different to European, as compared to Indian, eyes. Nonetheless, Indian groups perceived the Europeans' arrival as an encroachment and they pursued any number of avenues to deal with that invasion. That the Indians were unsuccessful in the long run in resisting or in establishing a more favorable accommodation with the Europeans was as much the result of the impact on Indians of European diseases as superior force of arms. Moreover, to view the situation from Indian perspectives ("facing east from Indian country," in historian Daniel K. Richter's wonderful phrase) is essential in understanding the complex interaction of these very different peoples. Finally, it is also important to keep in mind that yet a third group of people--in this case Africans--played an active role in the European invasion (or colonization) of the western hemisphere. From the very beginning, Europeans' attempts to establish colonies in the western hemisphere foundered on the lack of laborers to do the hard work of colony-building. For the most part, Europeans were not especially picky about who did the work, as long as it wasn't them. The Spanish, for example, enslaved the Indians in regions under their control. The English struck upon the idea of indentured servitude to solve the labor problem in Virginia. Virtually all the European powers eventually turned to African slavery to provide labor on their islands in the West Indies. Slavery was eventually transferred to other colonies in both South and North America. 
Because of the interactions of these very diverse peoples, the process of European invasion/colonization of the western hemisphere was a complex one, indeed. Individual members of each group confronted situations that were most often not of their own making or choosing. These individuals responded with the means available to them. For most, these means were not sufficient to prevail. Yet these people were not simply victims; they were active agents trying to shape their own destinies. That many of them failed should not detract from their efforts. The American Revolution Until the end of the Seven Years' War in 1763, few colonists in British North America objected to their place in the British Empire. Colonists in British America reaped many benefits from the British imperial system and bore few costs for those benefits. Indeed, until the early 1760s, the British mostly left their American colonies alone. The Seven Years' War (known in America as the French and Indian War) changed everything. Although Britain eventually achieved victory over France and its allies, victory had come at great cost. A staggering war debt influenced many British policies over the next decade. Attempts to raise money by reforming colonial administration, enforcing tax laws, and placing troops in America led directly to conflict with colonists. By the mid-1770s, relations between Americans and the British administration had become strained and acrimonious. The first shots of what would become the war for American independence were fired in April 1775. For some months before that clash at Lexington and Concord, patriots had been gathering arms and powder and had been training to fight the British if that became necessary. General Thomas Gage, commander of British forces around Boston, had been cautious; he did not wish to provoke the Americans. In April, however, Gage received orders to arrest several patriot leaders, rumored to be around Lexington. Gage sent his troops out on the night of April 18, hoping to catch the colonists by surprise and thus to avoid bloodshed. When the British arrived in Lexington, however, colonial militia awaited them. A fire fight soon ensued. Even so, it was not obvious that this clash would lead to war. American opinion was split. Some wanted to declare independence immediately; others hoped for a quick reconciliation. The majority of Americans remained undecided but watching and waiting. In June 1775, the Continental Congress created, on paper, a Continental Army and appointed George Washington as Commander. Washington's first task, when he arrived in Boston to take charge of the ragtag militia assembled there, was to create an army in fact. It was a daunting task with no end of problems: recruitment, retention, training and discipline, supply, and payment for soldiers' services were among those problems. Nevertheless, Washington realized that keeping an army in the field was his single most important objective. During the first two years of the Revolutionary War, most of the fighting between the patriots and British took place in the north. At first, the British generally had their way because of their far superior sea power. Despite Washington's daring victories at Trenton and Princeton, New Jersey, in late 1776 and early 1777, the British still retained the initiative. Indeed, had British efforts been better coordinated, they probably could have put down the rebellion in 1777. But such was not to be. 
Patriot forces, commanded by General Horatio Gates, achieved a significant victory at Saratoga, New York, in October 1777. Within months, this victory induced France to sign treaties of alliance and commerce with the United States. In retrospect, French involvement was the turning point of the war, although that was not obvious at the time. Between 1778 and 1781, British military operations focused on the south because the British assumed a large percentage of Southerners were loyalists who could help them subdue the patriots. The British were successful in most conventional battles fought in that region, especially in areas close to their points of supply on the Atlantic coast. Even so, American generals Nathanael Greene and Daniel Morgan turned to guerrilla and hit-and-run warfare that eventually stymied the British. By 1781, British General Lord Charles Cornwallis was ordered to march into Virginia to await resupply near Chesapeake Bay. The Americans and their French allies pounced on Cornwallis and forced his surrender. Yorktown was a signal victory for the patriots, but two years of sporadic warfare, continued military preparations, and diplomatic negotiations still lay ahead. The Americans and British signed a preliminary peace treaty on November 30, 1782; they signed the final treaty, known as the Peace of Paris, on September 10, 1783. The treaty was generally quite favorable to the United States in terms of national boundaries and other concessions. Even so, British violations of the agreement would become an almost constant source of irritation between the two nations far into the future. The New Nation At the successful conclusion of the Revolutionary War with Great Britain in 1783, an American could look back and reflect on the truly revolutionary events that had occurred in the preceding three decades. In that period American colonists had first helped the British win a global struggle with France. Soon, however, troubles surfaced as Britain began to assert tighter control of its North American colonies. Eventually, these troubles led to a struggle in which American colonists severed their colonial ties with Great Britain. Meanwhile, Americans began to experiment with new forms of self-government. This movement occurred in both the Continental Congress during the Revolution and at the local and state levels. After winning their independence, Americans continued to experiment with how to govern themselves under the Articles of Confederation. Over time, some influential groups--and these by no means reflected the sentiments of all Americans--found the Confederation government inadequate. Representatives of these groups came together in Philadelphia to explore the creation of yet another, newer form of government. The result was a new constitution. Not all Americans embraced this new Constitution, however, and ratification of the document produced many disagreements. Even so, the Constitution was ratified, and with a new constitution in place, Americans once again turned to George Washington for leadership, this time as President of the new republic. Although Washington proved to be personally popular and respected, conflict over the proper functions and locus of governmental power dominated his two terms as president. These disputes soon led to the formation of factions and then political parties that were deeply divided over the nature and purposes of the federal government, over foreign affairs, and over the very future of the new nation. 
Events during the single term of John Adams, our second president, made these divisions even worse, and they continued into the presidency of Thomas Jefferson (1801-1809). Even so, President Jefferson nearly doubled the size of the new nation by purchasing the Louisiana Territory from France. This purchase also led Jefferson to form the Lewis and Clark expedition to discover just what was contained in the new land. Jefferson's successor as President, James Madison (1809-1817)--one of the authors of the Constitution--led the new nation through another war with Great Britain. This, of course, was the unpopular War of 1812. This war ended in 1815, and if nothing else it convinced Britain that the United States was on the map to stay. Meanwhile, Americans began to develop a culture and way of life that was truly their own and no longer that of mere colonials. During this period, the small republic founded by George Washington's generation became the world's largest democracy. All adult white males received the right to vote. With wider suffrage, politics became hotly contested. The period also saw the emergence--and demise--of a number of significant political parties, including the Democratic, the Whig, the American, the Free Soil, and the Republican Parties. Meanwhile, the young republic expanded geographically from the Atlantic to the Pacific. The Stars and Stripes were raised over Texas, Oregon, California, and the Southwest. Expansion, however, proved to be a mixed blessing for Americans. While many white settlers found new opportunities in the West, their settlement displaced other groups, including Indian tribes and Mexicans. In addition, territorial expansion gave African-American slavery a new lease on life and led to increasing conflict between North and South. Democracy and territorial expansion led most Americans to feel optimistic about the future. These forces, reinforced by widespread religious revivals, also led many Americans to support social reforms. These reforms included promoting temperance, creating public school systems, improving the treatment of prisoners, the insane, and the poor, abolishing slavery, and gaining equal rights for women. Some of these reforms achieved significant successes. The political climate supporting reform declined in the 1850s, as conflict grew between the North and South over the slavery question. Civil War and Reconstruction In 1861, the United States faced its greatest crisis to that time. The northern and southern states had become less and less alike--socially, economically, politically. The North had become increasingly industrial and commercial while the South had remained largely agricultural. More important than these differences, however, was African-American slavery. The "peculiar institution," more than any other single thing, separated the South from the North. Northerners generally wanted to limit the spread of slavery; some wanted to abolish it altogether. Southerners generally wanted to maintain and even expand the institution. Thus, slavery became the focal point of a political crisis. Following the 1860 election to the presidency of Republican Abraham Lincoln, 11 southern states eventually seceded from the Federal Union in 1861. They sought to establish an independent Confederacy of states in which slavery would be protected. Northern Unionists, on the other hand, insisted that secession was not only unconstitutional but unthinkable as well. They were willing to use military force to keep the South in the Union. 
Even Southerners who owned no slaves opposed threatened Federal coercion. The result was a costly and bloody civil war. Almost as many Americans were killed in the Civil War as in all the nation's other wars combined. After four years of fighting, the Union was restored through the force of arms. The problems of reconstructing the Union were just as difficult as fighting the war had been. Because most of the war was fought in the South, the region was devastated physically and economically. Helping freedmen (ex-slaves) and creating state governments loyal to the Union also presented difficult problems that would take years to resolve. Rise of Industrial America In the decades following the Civil War, the United States emerged as an industrial giant. Old industries expanded and many new ones, including petroleum refining, steel manufacturing, and electrical power, emerged. Railroads expanded significantly, bringing even remote parts of the country into a national market economy. Industrial growth transformed American society. It produced a new class of wealthy industrialists and a prosperous middle class. It also produced a vastly expanded blue collar working class. The labor force that made industrialization possible was made up of millions of newly arrived immigrants and even larger numbers of migrants from rural areas. American society became more diverse than ever before. Not everyone shared in the economic prosperity of this period. Many workers were typically unemployed at least part of the year, and their wages were relatively low when they did work. This situation led many workers to support and join labor unions. Meanwhile, farmers also faced hard times as technology and increasing production led to more competition and falling prices for farm products. Hard times on farms led many young people to move to the city in search of better job opportunities. Americans who were born in the 1840s and 1850s would experience enormous changes in their lifetimes. Some of these changes resulted from a sweeping technological revolution. Their major source of light, for example, would change from candles, to kerosene lamps, and then to electric light bulbs. They would see their transportation evolve from walking and horse power to steam-powered locomotives, to electric trolley cars, to gasoline-powered automobiles. Born into a society in which the vast majority of people were involved in agriculture, they experienced an industrial revolution that radically changed the ways millions of people worked and where they lived. They would experience the migration of millions of people from rural America to the nation's rapidly growing cities. Progressive Era to New Era The early 20th century was an era of business expansion and progressive reform in the United States. The progressives, as they called themselves, worked to make American society a better and safer place in which to live. They tried to make big business more responsible through regulations of various kinds. They worked to clean up corrupt city governments, to improve working conditions in factories, and to better living conditions for those who lived in slum areas, a large number of whom were recent immigrants from Southern and Eastern Europe. Many progressives were also concerned with the environment and conservation of resources. This generation of Americans also hoped to make the world a more democratic place. 
At home, this meant expanding the right to vote to women and a number of election reforms such as the recall, referendum, and direct election of Senators. Abroad, it meant trying to make the world safe for democracy. In 1917, the United States joined Great Britain and France--two democratic nations--in their war against autocratic Germany and Austria-Hungary. Soon after the Great War, the majority of Americans turned away from concern about foreign affairs, adopting an attitude of live and let live. The 1920s, also known as the "roaring twenties" and as "the new era," were similar to the Progressive Era in that America continued its economic growth and prosperity. The incomes of working people increased along with those of middle class and wealthier Americans. The major growth industry was automobile manufacturing. Americans fell in love with the automobile, which radically changed their way of life. On the other hand, the 1920s saw the decline of many reform activities that had been so widespread after 1900. Great Depression and World War 2 The widespread prosperity of the 1920s ended abruptly with the stock market crash in October 1929 and the great economic depression that followed. The depression threatened people's jobs, savings, and even their homes and farms. At the depths of the depression, over one-quarter of the American workforce was out of work. For many Americans, these were hard times. The New Deal, as the first two terms of Franklin Delano Roosevelt's presidency were called, became a time of hope and optimism. Although the economic depression continued throughout the New Deal era, the darkest hours of despair seemed to have passed. In part, this was the result of FDR himself. In his first inaugural address, FDR asserted his "firm belief that the only thing we have to fear is fear itself--nameless, unreasoning, unjustified terror." As FDR provided leadership, most Americans placed great confidence in him. The economic troubles of the 1930s were worldwide in scope and effect. Economic instability led to political instability in many parts of the world. Political chaos, in turn, gave rise to dictatorial regimes such as Adolf Hitler's in Germany and the military's in Japan. (Totalitarian regimes in the Soviet Union and Italy predated the depression.) These regimes pushed the world ever-closer to war in the 1930s. When world war finally broke out in both Europe and Asia, the United States tried to avoid being drawn into the conflict. But so powerful and influential a nation as the United States could scarcely avoid involvement for long. When Japan attacked the U.S. Naval base at Pearl Harbor, Hawaii, on December 7, 1941, the United States found itself in the war it had sought to avoid for more than two years. Mobilizing the economy for world war finally cured the depression. Millions of men and women joined the armed forces, and even larger numbers went to work in well-paying defense jobs. World War Two affected the world and the United States profoundly; it continues to influence us even today. The entry of the United States into World War II caused vast changes in virtually every aspect of American life. Millions of men and women entered military service and saw parts of the world they would likely never have seen otherwise. The labor demands of war industries caused millions more Americans to move--largely to the Atlantic, Pacific, and Gulf coasts where most defense plants located. 
When World War II ended, the United States was in better economic condition than any other country in the world. Even the 300,000 combat deaths suffered by Americans paled in comparison to those of any other major belligerent. Building on the economic base left after the war, American society became more affluent in the postwar years than most Americans could have imagined in their wildest dreams before or during the war. Public policies, like the so-called GI Bill of Rights passed in 1944, provided money for veterans to attend college, to purchase homes, and to buy farms. The overall impact of such public policies was almost incalculable, but it certainly aided returning veterans to better themselves and to begin forming families and having children in unprecedented numbers. Not all Americans participated equally in these expanding life opportunities and in the growing economic prosperity. The image and reality of overall economic prosperity--and the upward mobility it provided for many white Americans--was not lost on those who had largely been excluded from the full meaning of the American Dream, both before and after the war. As a consequence, during the postwar era such groups as African Americans, Hispano Americans, and American women became more aggressive in trying to win their full freedoms and civil rights as guaranteed by the Declaration of Independence and the US Constitution. The postwar world also presented Americans with a number of problems and issues. Flushed with their success against Germany and Japan in 1945, most Americans initially viewed their place in the postwar world with optimism and confidence. But within two years of the end of the war, new challenges and perceived threats had arisen to erode that confidence. By 1948, a new form of international tension had emerged--the Cold War--between the United States and its allies and the Soviet Union and its allies. In the next 20 years, the Cold War spawned many tensions between the two superpowers abroad, and fears of Communist subversion gripped domestic politics at home. In the twenty years following 1945, there was a broad political consensus concerning the Cold War and anti-Communism. Usually there was bipartisan support for most US foreign policy initiatives. After the United States intervened militarily in Vietnam in the mid-1960s, however, this political consensus began to break down. By 1968, strident debate among Americans about the Vietnam War signified that the Cold War consensus had shattered, perhaps beyond repair. Late 20th Century The end of the Vietnam War helped to end debates about that war. The Iran Hostage Crisis and the failure of the Presidency of Jimmy Carter helped Americans realize the dangers of the Islamic world and how dependent they had become on foreign oil. Ronald Reagan led the United States to a victory over the Soviet Union in the Cold War. George Bush helped to liberate the nation of Kuwait and to frustrate Iraqi aggression in 1990-1991. President Clinton oversaw a successful economy and a war in Yugoslavia but ultimately was impeached (though not convicted) for perjuring himself in a sexual harassment lawsuit. As the Soviet Union collapsed and the Eastern bloc shattered, the wealth of the United States grew to unprecedented proportions, as did its debt and international entanglements. Social change continued, albeit more slowly than in the '60s, as the baby boomers put the finishing touches on their revolution. 
And as the 21st century was born, the United States came to realize that its Cold War victory was anything but the end of history, as battling Islamic terrorism, at home and abroad, became the country's newest raison d'être. At the start of the 21st century, the USA was the greatest nation (militarily, economically, scientifically, and culturally) that the world had ever seen, leading to the description of the current world as Pax Americana.
http://www.historyofnations.net/northamerica/usa.html
Just as cotton was at the center of the Industrial Revolution, wool was a key commodity in the Medieval era. Wool was the principal raw material used for textiles in Medieval Europe. It was usually woven to produce cloth, but some was used to produce felt. Wool is produced by sheep, and different breeds produce wool of varying quality: some sheep have fine silky fleece, while others have a very coarse fleece. There were many steps to the production of wool, including shearing, wool-sorting and preparation, combing, and carding. High quality cloth required a fine yarn, which in turn required the fleece to be carded, meaning combed with a large iron comb-like tool. The other principal fabric used to produce textiles in Medieval Europe was linen, which was produced from flax. Yarn was required to produce textiles. Yarn was produced from raw wool or flax fibers by hand. (Spinning wheels did not appear until the late Medieval era.) The raw wool or processed flax was placed on a drop spindle, fashioned from wood or bone and weighted on the bottom with stone or metal. The yarn was produced by drawing the fiber out from the spindle and twisting it in the process. This was a tedious, labor-intensive process, but only after the yarn was fashioned could the production of textiles begin. The next step was to dye the yarn. A variety of natural dyes were used, varying somewhat with the kinds of plants available locally. As we see in paintings from the period, colors were very important. Peasants might wear garments done in brown and other natural colors, but if they could afford it, people wanted colors, and often this meant very bright colors. 
As the medieval period progressed, new dyes were developed and people learned more about the process. Some people made the dyes themselves, but natural dyes could be purchased in village markets. Some of the common dyes were: woad or indigo (blue), weld (yellow), madder (orange and red), brazil tree (reds), alkanet (lilac), and many other roots, berries, barks, and lichens. The colors were fixed with urine. One shudders to think how they figured this out. It was the ammonia in urine that fixed the dyes. It was widely believed that male urine should be used. We do not know if male urine was indeed more effective in the fixing process. There are reports that there were vats on London streets that were set up to collect the urine. Presumably the same occurred in other towns and cities. The yarn was then woven into cloth on hand looms. The woven cloth was then ironed by pressing with a whale bone (baleen) plaque, glass, or stone smoother which had been heated for the purpose. Weavers produced a wide range of wool textiles, which were variously referred to as woollens, worsteds, and semi-worsted 'stuffs'. These woollen goods ranged widely in both quality and price. Many peasants spun and wove their own cloth, known as homespun. In the villages and towns cloth could be bought ranging from cheap coarse fabric that common townspeople might purchase to luxurious woollen scarlets purchased by aristocrats and beyond the ability of most commoners to afford. The final process was decoration by braided cords, tapestry, or embroidery. The wool trade was a central factor in the developing Medieval economy. Between the 10th and the 15th century, a rural domestic handicraft evolved into a complex, largely urban industry with a sophisticated division of labour. A variety of technical changes involved new industrial processes which often were designed to cut production costs, but often impaired quality. Producers involved in luxury markets often rejected the technological changes for fear of impairing quality. Craft guilds and governments imposed quality controls. The production of textiles by the 15th century reached a technical level that changed little until the Industrial Revolution. The principal source of wool in Medieval Europe was England. The heavy rainfall in England produced luxurious pastures that were ideal for grazing sheep. As a result, England was able to produce fine quality wool in great quantity. England did not have, however, the skilled craftsmen to make high-quality cloth. England thus exported much of its wool production, a great deal of it to the Netherlands/Flanders, where in the lower, damper climate sheep did not flourish. Thus a close relationship developed between England and the Low Countries. The English Crown imposed the 'Staple', an export tax on wool. As the demand for raw wool increased, English nobles amassed great fortunes. They turned peasants off their small plots and converted their lands to sheep pastures. English kings attempted to control the wool trade. They restricted the wool trade to a few "Staple" ports so it could be more easily controlled. Two of the most important ports were Sandwich and Calais. (Calais was a French port controlled by the English.) Royal officials in the Staple ports carefully counted the bales and levied the export tax. Smuggling developed to avoid the royal tax. Only slowly did an important weaving industry develop in England. One important fabric was the soft fabric flannel, produced in the west of England and Wales. 
Weavers to the north in Scotland produced looser, rougher weaves that we today call tweeds. The cloth towns of Flanders (Bruges, Lille, Bergues, and Arras in Artois) were noted for their skilled craftsmen and the quality of the wool cloth that they produced. I am not sure why these Flemish cloth towns developed the ability to produce such high-quality cloth. They imported raw wool from England. Transporting the English wool to the Flemish cloth towns was expensive. Roads virtually did not exist, so the wool was transported by water. The imported wool not only had to be transported to English ports and then across the Channel, but river barges had to bring the wool up rivers to the Flemish towns. Boats could supply wool to cloth towns as far from the sea as St-Omer, Bethune, and Arras. There were, however, significant costs. Taxes had to be paid to the local lords through whose territory the wool passed. Rapids required that cargoes be unloaded and carted along a short road a little upstream to be reloaded on barges. At Douai there was a lock on the river where boats had to pay a tax to pass. All of this substantially increased the cost of the wool by the time it reached Flanders. Skilled Flemish craftsmen and women took raw wool, spun it into yarn, and then wove it into the finest woolen cloth produced in Europe. Flanders had, however, a serious problem. The lower, damper climate of the Lowlands was not conducive to sheep husbandry. Thus the Flemish had to find a source of wool. One was at hand in England, across the Channel, where sheep thrived. Thus critical economic ties bound England and the Low Countries: England produced and exported wool, the Flemish bought the wool and wove cloth, and finally the English imported the cloth. As the cloth industry developed in Flanders, an essential rural artisanal operation became centered in towns and increasingly specialized. Craftsmen began to specialize in various finishing trades, like dyeing and fulling (cleaning and thickening the cloth). Boys would begin as apprentices at about age 12 to learn the required skills. The various trades were controlled by a guild of master craftsmen. Flemish towns often specialized in specific garments like bonnets or stockings. Other towns specialized in tapestries. The fine woolen cloth and textiles produced in Flanders were marketed by merchants at fairs in Bruges, Cologne, Paris, and other important urban centers. Transporting the woolen goods again significantly added to costs. Flanders was part of the territory controlled by the dukes of Burgundy. The Dukes of Burgundy were some of the most important French nobles and, at times in alliance with the English, rivaled the power of the French monarchy. The woolen trade was an important element in the wealth of the Burgundians. The Dukes of Burgundy protected the Flemish cloth trade, and as a result the "Golden Age" of Flanders occurred during the 14th and 15th centuries. The wool trade brought great wealth to Flanders. The Burgundian dukes were able to maintain spectacular courts in Bruges, Lille, and Brussels. Aristocrats there were able to afford luxurious outfits and began to set fashion trends throughout Europe. The Burgundians not only set clothing fashions, but also fashions in furnishings like tapestries to decorate the barren stone walls of castles. This in turn helped expand the demand for Flemish woolen goods. The wool trade closely tied Flanders to England. As a result, the Burgundians, nominally owing loyalty to the French monarchy, often allied themselves with the English. 
The dukes of Burgundy saw this as a way of establishing an independent monarchy. The Burgundians fought with the English for much of the Hundred Years War. Artois, in the south between France and Flanders, was especially affected. During this period, Flanders was ravaged not only by war, but also by the Black Death, which first struck Flanders in 1348. With the extinction of the Burgundian line, Flanders passed to the Hapsburgs. The Emperor Maximilian of Austria granted Tourcoing the right to hold an annual tax-free "Franche Foire" fair, which helped promote the town's textile industry. Emperor Charles V divided his territories, and Flanders passed to his son Philip II and the Spanish Hapsburgs. King Philip II granted Lille the right to conduct business in the Vieille Bourse. This new exchange was modeled on those in Antwerp and other Dutch towns. At the Vieille Bourse, 24 cloth merchants each had shops/offices arrayed around a common courtyard. Here they bought and sold their merchandise to foreign buyers and traders. Banking developed to arrange the needed finance. The Reformation and Catholic efforts to suppress it played a major role in the decline of Flanders as the economic center of Europe. The Reformation began in Germany, but spread to the Low Countries, referred to at the time as the Austrian and later the Spanish Netherlands. When Philip II launched a campaign to suppress Protestantism in the Netherlands, the people there fought for independence. Philip largely succeeded in the southern Netherlands, but failed in the north, where the Dutch, assisted by the English, were able to secure their independence. After Philip, King Louis XIV of France pressed his claims. By this time, Flanders had lost its former dominance. The military campaigns had caused great damage. Many gifted Protestant craftsmen had fled to the safety of Protestant lands, including England. Thus the Flemish cloth towns no longer had a monopoly of the skills and technologies. Some towns like Arras or Hondschoote were virtually destroyed in the fighting. There was also competition from rural weavers who were not limited by the quality controls of the guilds. Their cloth was often of poor quality, but it was much less expensive and thus found a ready market.
http://www.histclo.com/chron/med/eco/me-tex.html
In economics, economic equilibrium often refers to an equilibrium in a market that "clears": this is the case where a market for a product has attained the price at which the quantity supplied equals the quantity demanded. In most markets, this supply and demand balance is an economic equilibrium. The concept of equilibrium is also applied to describe and understand other sub-systems of the economy that do not follow the logic of supply and demand, for example, population growth. (If economic growth encourages population growth, and vice versa, we might see this two-way relationship attaining balance or equilibrium.) This entry concerns only issues of supply and demand. In most simple microeconomic stories of supply and demand in a market, we see a static equilibrium in a market; however, economic equilibrium can exist in non-market relationships and be dynamic. This example is also a partial equilibrium, while equilibrium may be multi-market or general. As in most usage (say, that of chemistry), in economics equilibrium means "balance," here between supply forces and demand forces: for example, an increase in supply will disrupt the equilibrium, leading to lower prices. Eventually, a new equilibrium will be attained in most markets. Then, there will be no change in price or the amount of output bought and sold until there is an exogenous shift in supply or demand (such as changes in technology or tastes). That is, there are no endogenous forces leading to changes in the price or the quantity. Not all economic equilibria are stable. For an equilibrium to be stable, a small deviation from equilibrium must lead to economic forces that return the economic sub-system toward the original equilibrium. For example, a movement out of supply/demand equilibrium that leads to an excess supply (glut) induces price declines, which return the market to a situation where the quantity demanded equals the quantity supplied. If supply and demand curves intersect more than once, then both stable and unstable equilibria are found. There is nothing inherently good or bad about equilibrium, so it is a mistake to attach normative meaning to this concept. That is, food markets may be in equilibrium at the same time that people are starving (because they cannot afford to pay the high equilibrium price). In most interpretations, classical economists such as Adam Smith maintained that the free market would tend towards economic equilibrium through the price mechanism. That is, any excess supply (market surplus or glut) will lead to price cuts, which decrease the quantity supplied (by reducing the incentive to produce and sell the product) and increase the quantity demanded (by offering consumers bargains). This automatically abolishes the glut. Similarly, in an unfettered market, any excess demand (or shortage) will lead to price increases, which lead to cuts in the quantity demanded (as customers are priced out of the market) and increases in the quantity supplied (as the incentive to produce and sell a product rises). As before, the disequilibrium (here, the shortage) disappears. This automatic abolition of market non-clearing situations distinguishes markets from central planning schemes, which often have a difficult time getting prices right and suffer from persistent shortages of goods and services. This view came under attack from at least two viewpoints. 
Modern mainstream economics points to cases where equilibrium does not correspond to market clearing (but instead to unemployment), as with the efficiency wage hypothesis in labor economics. In some ways parallel is the phenomenon of credit rationing, in which banks hold interest rates low in order to create an excess demand for loans, so that they can pick and choose whom to lend to. Further, economic equilibrium can correspond with monopoly, where the monopolistic firm maintains an artificial shortage in order to prop up prices and to maximize profits. Finally, Keynesian macroeconomics points to underemployment equilibrium, where a surplus of labor (i.e., cyclical unemployment) co-exists for a long time with a shortage of aggregate demand. On the other hand, the Austrian School and Joseph Schumpeter maintained that in the short term equilibrium is never attained, as everyone is always trying to take advantage of the pricing system, so there is always some dynamism in the system. The free market's strength was not in creating a static or a general equilibrium but in organising resources to meet individual desires and in discovering the best methods to carry the economy forward.
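To make the market-clearing and stability ideas above concrete, here is a minimal Python sketch. It assumes, purely for illustration, a linear demand curve Qd = a - b*p and a linear supply curve Qs = c + d*p; the numbers and function names are hypothetical and not taken from the article.

# Minimal sketch of market clearing with assumed linear curves:
# demand Qd = a - b*p, supply Qs = c + d*p (illustrative only).

def equilibrium_price(a, b, c, d):
    """Solve a - b*p = c + d*p for the market-clearing price (needs b + d > 0)."""
    return (a - c) / (b + d)

def excess_demand(p, a, b, c, d):
    """Quantity demanded minus quantity supplied at price p."""
    return (a - b * p) - (c + d * p)

if __name__ == "__main__":
    a, b, c, d = 100.0, 2.0, 10.0, 1.0       # made-up coefficients
    p_star = equilibrium_price(a, b, c, d)    # 30.0 for these numbers
    print("equilibrium price:", p_star)
    print("excess demand at p=20:", excess_demand(20, a, b, c, d))  # shortage, so price rises
    print("excess demand at p=40:", excess_demand(40, a, b, c, d))  # glut, so price falls

    # A crude price-adjustment loop: raise the price when there is a shortage,
    # cut it when there is a glut. For these curves the process converges to
    # p_star, which is what the article means by a stable equilibrium.
    p = 20.0
    for _ in range(50):
        p += 0.1 * excess_demand(p, a, b, c, d)
    print("price after adjustment:", round(p, 2))

With an unstable equilibrium (for example, when curves intersect more than once), the same small deviation would instead push the price away from the crossing point.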
http://www.all-science-fair-projects.com/science_fair_projects_encyclopedia/Economic_equilibrium
After a lengthy history indicating that Alaska Native people had aboriginal claims to ancestral lands and resources, and some debate and litigation over whether or not they did, Congress answered the question in a very distinctive way. On December 18, 1971, Alaska Native aboriginal claims were ‘settled’ and extinguished by the Alaska Native Claims Settlement Act (ANCSA), an Act of Congress signed by President Nixon and the largest land claims settlement in U.S. history. Some called it an ‘experiment,’ others declared it to be an act of ‘assimilation’ or even ‘termination,’ and all would agree that the implementation of ANCSA is very complex. ANCSA has been amended by every Congress since its passage, both to refine the terms of the settlement and also to reduce the likelihood of losing the land so precious to the Alaska Native people. It has seen tremendous successes and some failures, and today, some 40 years later, issues are still being worked out, such as the rights of Alaska Native people born after ANCSA was passed, rights to fish and game resources, and the jurisdiction of tribes that were essentially left landless. Alaska Native people had a claim to ownership of all land in Alaska, surface and sub-surface, based on their aboriginal use and occupancy of it. The settlement of that claim was far from a land give-away. Rather than designating reservations held in trust by the United States government, as the majority of tribes in the Lower 48 have, the Alaska Native Claims Settlement Act created 12 regional profit-making Alaska Native corporations and over 200 village, group, and urban corporations to receive what would end up being around 45.5 million acres of land along with about a billion dollars in cash payment. A 13th regional corporation, headquartered in Seattle, was later established for Alaska Natives who lived outside of Alaska; it participated in the cash settlement but did not receive land. The corporations have specific procedures to follow as provided by ANCSA, but they are also incorporated under State of Alaska law and must follow state corporation law. The lands, assets, and businesses are owned by the shareholders of the Native corporations, and are subject to terms, protections, and restrictions placed on them by both federal Indian law (ANCSA) and State of Alaska corporation law. The most common pattern of land selection was for the village corporations to select core township lands around the villages. The village and regional corporations selected lands around the perimeter, alternating townships to create a ‘checkerboard’ pattern. The village corporations own the surface estate to their lands, while the regional corporation owns the sub-surface of the village corporation lands. ANCSA also gave the village corporations some control over development “within the confines of the Native village” by the regional corporations. The split ownership of the land surface and sub-surface resources, and control over development, generated controversy between regional and village corporations over the details of what it actually meant in practice, and court decisions and amendments to ANCSA have been necessary to clarify this relationship. Villages that had reservations were allowed to choose between being treated like other villages or having their village corporations select all the former reservation lands, both surface and subsurface, rather than participating in their regional corporations. 
The land selections authorized under ANCSA are very complex, and amendments to ANCSA were made to correct some of the inequities made during the selection process. In addition to lands owned by the Native corporations around the villages, corporations also ended up with blocks of lands in various places in Alaska, and relatively smaller allocations went to urban and rural Native groups. The land settlement made through ANCSA is further complicated by land exchanges, easements, land protection issues, and requirements for village corporation reconveyance of land to individuals, businesses, and community development under Section 14(c)(3) of ANCSA. ANCSA terminated all the Indian reservations and reserves in Alaska with the exception of Metlakatla. Tribes that had their reservations terminated had the option of keeping their former reservation land with both surface and subsurface ownership. If they chose that option, they did not receive a cash settlement or participate as shareholders in the regional corporations. ANCSA also affected Native lands by repealing the Alaska Native Allotment Act of 1906, so that no new Native allotments can be made unless Congress permits exceptions. Around 80,000 people of at least ¼ Alaska Native blood, living at the time of the passage of the Alaska Native Claims Settlement Act, became the shareholders in the ANCSA corporations. People could enroll under ANCSA based upon residency in 1971 or past residency, place of birth, or family heritage. Native people could also choose to enroll in a region only, and not in a village. These shareholders, known as “at-large” enrollees, received additional stock benefits that village enrollees did not receive. Alaska Native people born after December 18, 1971 could not receive ANCSA stock except by inheritance or court order in a divorce or child custody dispute. Later amendments to ANCSA allowed the shareholders of the corporations to decide if they want to admit children born after the date ANCSA was passed as new shareholders. A few village and regional corporations have done so, but this continues to be an issue of controversy within many of the Alaska Native corporations. Today, approximately 60% of Alaska Natives are shareholders in ANCSA corporations. Originally the land and ownership of the corporations through stocks was protected from loss only for the first twenty years after ANCSA was passed, because Native corporation stocks could not be sold until December 18, 1991. Selling stock on the open market would most likely have led to the loss of Native control and ownership over the corporations and lands to non-Native individuals and corporations, which is why many called ANCSA an act of termination. Congress did take action and adopted the so-called ‘1991 amendments’ to extend the restrictions on selling stocks in the Native corporations until a majority of all shareholders in a particular corporation vote to eliminate the restriction. None have done so. The Alaska Native corporations employ thousands of people in Alaska and worldwide through a tremendous variety of businesses ranging from natural resource development to telecommunications, engineering, government contracts, construction, drilling, environmental remediation, alternative energy, real estate, investments, and tourism. A provision of ANCSA referred to as ‘7(i)’ requires each land-owning regional corporation to pay the other eleven regional corporations a percentage of the revenue received from subsurface resources and from timber sales. 
This provision was meant to make up for the uneven distribution of Alaska’s natural resources such as timber, gravel, and oil. The individual corporations distribute dividends to their stockholders as profits from businesses and investments permit. While ANCSA extinguished aboriginal claims to land and “any aboriginal hunting and fishing rights that may exist,” ANCSA did not provide protection for the hunting and fishing needs of the Alaska Native people. Congress expected that the State of Alaska and the Secretary of the Interior would somehow protect traditional Native hunting and fishing practices. Hunting for food requires more land than was received under ANCSA, and waters with fishing resources were not part of the settlement. Section 17(d)(1) and (2) of ANCSA provided for withdrawing millions of acres of unreserved public land in Alaska for national and public interests, which resulted in the passage of the Alaska National Interest Lands Conservation Act (ANILCA) in 1980. Lack of action to protect Native hunting and fishing led to the inclusion of Title VIII of ANILCA, which was intended to carry out the unfulfilled settlement of aboriginal hunting and fishing rights, now called ‘subsistence,’ by giving rural residents a preference for subsistence resources. ANILCA also amended ANCSA by allowing ANCSA corporation land to be protected by placing it into a ‘land bank’ through an agreement. The Alaska Native Claims Settlement Act did not grant land to the tribes in Alaska, nor did it terminate their status as tribes, as some thought that it did. There were years of confusion and debate after ANCSA’s passage before the Department of the Interior clarified the matter by issuing the list of federally recognized tribes in 1993 and Congress confirmed it through the List Act in 1994. The tribes roughly correspond to the villages organized under ANCSA. However, without a land base held in trust like Lower 48 reservations, the jurisdiction tribes in Alaska possess became the subject of many court battles. The thirteen regional corporations created under ANCSA are: - Ahtna, Incorporated - The Aleut Corporation - Arctic Slope Regional Corporation - Bering Straits Native Corporation - Bristol Bay Native Corporation - Calista Corporation - Chugach Alaska Corporation - Cook Inlet Region, Inc. - Doyon, Limited - Koniag, Incorporated - NANA Regional Corporation - Sealaska Corporation - The 13th Regional Corporation
http://tm112.community.uaf.edu/unit-3/alaska-native-claims-settlement-act-ancsa-1971/
How We Hear... Hearing depends on a series of events that change sound waves in the air into electrical signals. Our auditory nerve then carries these signals to the brain through a complex series of steps. - Sound waves enter the outer ear and travel through a narrow passageway called the ear canal, which leads to the eardrum. - The eardrum vibrates from the incoming sound waves and sends these vibrations to three tiny bones in the middle ear. These bones are called the malleus, incus, and stapes. - The bones in the middle ear amplify, or increase, the sound vibrations and send them to the inner ear—also called the cochlea—which is shaped like a snail and is filled with fluid. An elastic membrane runs from the beginning to the end of the cochlea, splitting it into an upper and lower part. This membrane is called the “basilar” membrane because it serves as the base, or ground floor, on which key hearing structures sit. - The sound vibrations cause the fluid inside the cochlea to ripple, and a traveling wave forms along the basilar membrane. Hair cells—sensory cells sitting on top of the membrane—“ride the wave.” - As the hair cells move up and down, their bristly structures bump up against an overlying membrane and tilt to one side. This tilting action causes pore-like channels, which are on the surface of the bristles, to open up. When that happens, certain chemicals rush in, creating an electrical signal. - The auditory nerve carries this electrical signal to the brain, which translates it into a “sound” that we recognize and understand. - Hair cells near the base of the cochlea detect higher-pitched sounds, such as a cell phone ringing. Those nearer the apex, or centermost point, detect lower-pitched sounds, such as a large dog barking. We Need Two Ears... Our two ears act like radar antennae to register acoustic signals coming from multiple directions. The complex structures of each ear process the received signals and pass them to the brain, where we interpret our acoustic environment. Take, for example, the sound of the wind or of an approaching truck: the nearest ear receives the sound slightly earlier and a little louder than the other. Using the finely processed acoustic information from each ear, the brain has the capacity to calculate the direction of the wind or the direction of the truck's approach, and we also "know" approximately how close it is. Advantages of Two Properly Functioning Ears: - Excellent Sound Localization Skills - Easier Speech Understanding in Noisy Situations - Rich Sound Quality - Accurate Judgement of Appropriate Volume - Ability to Hear All Tones of Voice, Including Children! There are Three Major Types of Hearing Loss: Sensorineural, Conductive & Mixed Sensorineural hearing loss is the most common type of hearing loss and is caused by damage to the inner ear and/or the auditory nerve. Noise exposure, diseases, certain medications, and aging can destroy parts of the inner ear and cause permanent hearing loss. Sensorineural hearing loss usually affects the high frequencies, which impairs a person’s ability to differentiate consonant sounds and thus the fine distinctions in words such as "fit" versus "sit". Your audiologist can effectively treat this type of loss with hearing instruments. Conductive hearing loss is caused by a problem in the outer or middle ear, or by a defect in the ossicular chain. Conductive hearing loss can often be medically treated. 
Your audiologist will be able to diagnose this type of hearing loss and refer you to the appropriate medical professional for treatment. When a patient has both a conductive hearing loss and a sensorineural hearing loss, it is called a "mixed hearing loss." This type of hearing loss often requires both medical/surgical intervention and the use of hearing aids. Your audiologist will diagnose this type of loss and refer you to the appropriate medical professional for treatment. For more information on hearing loss and other communication disorders, visit NIDCD.com
http://hearstl.com/Hearing-Loss-Information.html
Nine economic principles. One goal. Nine economic principles create the foundation of all programs and lessons developed and taught by the California Council on Economic Education, CCEE. Just mentioning the word economics can sound complex to students, but when broken down into a simple idea, lessons are fun and actionable. CCEE programs and workshops have earned a 97% teacher-approval rating because all lessons are classroom-ready and easy to use. CCEE’s focus is to make economics understandable for teachers and students alike. When we do so, California’s students will be prepared to participate in the global economy as responsible workers, consumers, savers and citizens. 1. People choose Economics is about choosing from alternative ways to use scarce resources to accomplish goals. All economic analysis focuses on how people choose. Children are constantly making choices, which become more important and more complicated as they grow. It is important that they recognize that the choices they make today influence them for the rest of their lives. As Professor Dumbledore said to Harry Potter, “It is our choices, Harry, that show what we truly are, far more than our abilities.” Once students accept the fact that they choose, they can lose the “victim” mentality. Although some things do happen to them, by and large, their lives are a result of their choices. 2. Every choice has a cost While many attribute the statement, “There is no such thing as a free lunch,” to an economist, it is really not the economic way of thinking. This is because things don’t have costs; choices do. There is no denying that a choice involves selecting one alternative from at least two. The economist’s concept of cost is that something is given up when choosing something else. The alternative that was given up is the cost of the choice, called “opportunity cost.” Opportunity cost is not all of the alternatives you could have selected; it is what is lost by the choice you did make. 3. Benefit/cost analysis is useful Every choice we make involves benefit/cost analysis either implicitly or explicitly; it is the primary tool of economic reasoning. In using scarce resources to achieve goals, it is helpful to do so in a systematic manner. Benefit/cost analysis, or decision making, includes five steps (a small worked sketch of these steps follows principle 4 below): - Identify the goal and the resources available to achieve the goal. For most students, the goal will be to maximize their wealth (defined as “the subjective evaluation of their well being,” not money) by using their human capital in a particular situation. - Identify alternative ways to achieve the goal and narrow the options down to two. - Evaluate the advantages and disadvantages of each option. - Choose one of the alternatives. - Identify the best alternative not selected as the opportunity cost of the choice. Buy the Decision-Making Apron, a tool to help students visualize the decision-making process and teach Benefit/Cost Analysis. 4. Incentives matter When developing public policy concerning the use of scarce resources, it is essential to think about the incentives provided by the policy. For example, if legislation to protect endangered species takes property rights away from landowners, then the legislation may inadvertently provide incentives to those landowners to destroy the very animals the legislation sought to protect. Some public policies provide incentives that lead to unintended consequences. Prices are incentives to produce and disincentives to consume.
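As a concrete illustration of the five-step benefit/cost process described in principle 3, here is a minimal Python sketch. The scenario, numbers, and variable names are hypothetical and are not part of the CCEE materials; the point is only to show the steps in order.

# Hypothetical walk-through of the five decision-making steps in principle 3.
# Step 1: the goal is to get the most value out of one free Saturday afternoon.
# Step 2: narrow the alternatives down to two.
options = {
    "part-time shift": {"benefit": 60, "cost": 10},  # wages earned vs. bus fare (made-up numbers)
    "study session":   {"benefit": 80, "cost": 0},   # subjective value of a better grade (made up)
}

# Step 3: evaluate the advantages and disadvantages of each option as a net benefit.
net_benefit = {name: v["benefit"] - v["cost"] for name, v in options.items()}

# Step 4: choose one of the alternatives (here, the one with the higher net benefit).
choice = max(net_benefit, key=net_benefit.get)

# Step 5: the best alternative NOT selected is the opportunity cost of the choice.
opportunity_cost = next(name for name in net_benefit if name != choice)

print("Choice:", choice)
print("Opportunity cost: the forgone", opportunity_cost)

Note that the opportunity cost is the single best alternative given up, not the sum of everything that was not chosen, which matches the definition in principle 2.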
5. Exchange benefits the traders The phrase “One person’s trash is another’s treasure” is true because of the subjective value each individual places on wealth. Wealth is the subjective evaluation of one’s well being, not an amount of money. The Principle of Exchange states that two parties with equal information will voluntarily exchange only if they gain more than they give. This is not a statement that people are selfish. If one party gives a thing of great monetary value to another, the giver is gaining a feeling of satisfaction and self-worth even if he is receiving something of lesser monetary value. This is, in fact, an exchange, and the giver is adding to his well being by giving. 6. Markets work with competition, information, incentives, and property rights An economy accomplishes two tasks: 1) production of goods and services desired by society and 2) facilitation of exchange between parties. In a market economy, these two tasks are done using the least amount of resources for the greatest amount of good if the four conditions (competition, information, incentives, property rights) are present. Competition involves buyers competing against other buyers for scarce goods and services and sellers competing against other sellers for customers. Profit is the major incentive of a market economy and competition is the major regulator. “Market failures” usually occur when one of the four conditions does not exist. When all four do exist, markets are efficient. 7. Skills and knowledge influence income Many students assume the government will decide their income. If this is the case, there is little incentive to develop their human capital. It is important that they recognize how labor markets work. Applying the Principle of Exchange, employers will hire workers if the employers expect to gain more than they give. Salaries of some entertainers, executives, and athletes make perfect sense if those paying the salaries expect the individual to bring in more revenue than they are being paid. Today’s students will enter a highly technological, information-oriented economy. Demand for workers in those areas is high and income is high. There are plenty of jobs in low-skilled sectors, but those jobs pay a low income. Students should recognize that individual wages are determined by the supply of and demand for those workers and reflect the relative scarcity of those workers. Market imperfections exist in labor markets just as they do in product markets. 8. Monetary and fiscal policies affect people’s choices Three goals of macroeconomic policy are economic growth, full employment, and price stability. Students should know how these goals are measured so they can assess the health of the economy and recognize the implications for them as workers, consumers, and savers. They should learn to distinguish real and nominal data in order to evaluate economic indicators (a short sketch of this distinction follows principle 9 below). Monetary policy is constantly in the news, and students should be able to evaluate both monetary and fiscal policy by learning how to read basic indicators that affect the mortgage they may one day become responsible for or the vote they will cast. 9. Government policies have benefits and costs Government policies always involve trade-offs. When governments use resources there is an opportunity cost. The benefits and the opportunity cost of particular policies are not distributed evenly; some reap the benefits while others pay the cost. 
Markets fail to work efficiently if even one condition is missing from competition, information, incentives or property rights. So too, government agents fail to work for the public interest when perverse incentives guide their actions. Students should learn to evaluate whether the benefit of each government program outweighs the opportunity cost and whether the distributive effects of the policy are desirable. They should also evaluate whether an inefficient market outcome is better or worse than an inefficient government outcome for particular policies. Students should investigate whether there are unintended consequences of particular government actions. Check the calendar for an upcoming workshop to learn how to use the Teacher Guide to the California Economics Standards and other ways CCEE applies these nine economic principles.
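Principle 8 asks students to distinguish real from nominal data. The sketch below shows the standard deflation step, real value = nominal value / price index * 100, using made-up wage and price-index numbers purely for illustration.

# Convert nominal values to real values with a price index (base year = 100).
def to_real(nominal, price_index, base=100.0):
    """Standard deflation: real = nominal / price_index * base."""
    return nominal * base / price_index

# Hypothetical data: nominal wages rise every year while prices also rise.
years        = [2020, 2021, 2022]
nominal_wage = [20.00, 21.00, 22.00]   # dollars per hour (illustrative)
price_index  = [100.0, 105.0, 112.0]   # base year 2020 = 100 (illustrative)

for year, wage, index in zip(years, nominal_wage, price_index):
    real = to_real(wage, index)
    print(year, "nominal:", wage, "real (2020 dollars):", round(real, 2))

# For these made-up numbers the real wage ends up slightly lower in 2022 than
# in 2020 even though the nominal wage rises each year, which is exactly why
# indicators of growth, employment, and price stability should be read in
# real terms.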
http://www.ccee.org/about-ccee/nine-economic-principles/
In 1939, when World War II began in Europe, nearly all Great Plains farmers wanted to stay out of the conflict. They feared the loss of life, particularly their sons, if the United States became involved. They also remembered the collapse of the agricultural economy after World War I. Still, many farm men and women considered the war an opportunity for the United States to sell surplus, price-depressing agricultural commodities to Great Britain and France. Wartime demands, they hoped, would increase farm prices and improve their income and the standard of living for farm families across the Great Plains. The editor of the Nebraska Farmer contended that a long war would bring prosperity to farmers because the belligerent nations would turn to the United States for agricultural commodities that they could no longer produce in order to feed their people. Although agricultural prices, particularly for grain and livestock, increased during the autumn of 1939, most farmers anxiously awaited major price increases for farm products. By early spring 1940, however, the Nebraska Farmer reported that the war had not "lived up to the expectations of those who looked for a boom in exports of farm products." Britain and France continued to spend more for armaments than American farm commodities. As a result, by late 1940, only government buying, commodity loans, and export subsidies kept agricultural prices from falling after the loss of foreign markets caused primarily by German and British blockades. By mid-1941, however, increased British demands for food as well as an expanding U.S. military had substantially increased agricultural prices. Farmers now enjoyed 25 percent more purchasing power than during the previous year, and agricultural experts predicted another 25 percent increase the next year. In September 1941, Great Plains farmers became even more optimistic when Secretary of Agriculture Claude R. Wickard called for "the largest production in the history of American agriculture to meet the expanding food needs of this country and nations resisting the Axis." Farm income now out-paced expenses, at least for the moment. As the nation drifted toward war, Great Plains farmers worried about government price fixing for agricultural commodities if the United States became involved in the conflict. On the eve of Pearl Harbor, Congress bowed to farm-state pressure and approved liberal maximum prices for farm commodities while promising farmers that agricultural prices would not be targeted for control if war came and consumer prices escalated. Nearly everyone understood that agricultural production must increase to feed an expanding military. By the autumn of 1941 Secretary Wickard believed the European war and the needs of those nations fighting Germany would require record-breaking agricultural production. Wickard contended that American farmers would need to feed ten million Britons and that seventy cents of each dollar spent for dairy products, butter, eggs, and cotton, among other agricultural commodities, would reach the farmer. Soon people began speaking of "Food for Defense." In Kansas, federal and state officials met with farmers across the state to encourage them to increase production by specific amounts. - ["State Food Goals," Salina Journal October 4, 1941] - ["Double Food Task," Salina Journal, October 8, 1941] Agricultural Adjustment Administration officials, who represented the federal government, visited farms and asked farmers how much they could increase production of various commodities. 
In October 1941, they asked Colorado farmers to increase hog production by 30 percent and cattle ready for slaughter by 18 percent. Across the Great Plains, however, wheat and cotton production still seemed more than sufficient to meet the nation's needs for bread and fiber. Most observers believed the new European war might end soon, and farmers did not want to produce too much and suffer price depressing surpluses and an economic depression like the one that followed World War I. The Japanese attack on Pearl Harbor on December 7, 1941, ended the reluctance of most Great Plains farmers to increase production. Quickly, the army became the major buyer of flour from wheat and beef produced in the Great Plains. Farm prices sky rocketed by 42 percent while farm costs increased only 16 percent from the previous year. Great Plains farmers met the challenge of the United States Department of Agriculture and other government agencies to increase production by seeding more acres, raising more livestock, and working longer days. They also took pride in their achievements and couched their work in patriotic terms as their contribution to the war effort. In July 1942, the Nebraska Farmer touted the increased productivity of farmers in the Cornhusker State noting, "On every Nebraska farm there is a dramatic story of sacrifices, hard work and long hours, often made by women and children who took the place of sons and brothers in the military." In Nebraska, like other Great Plains states, farm men, women, and children exhibited a "can-do" spirit for the sake of the nation's war effort. This patriotic sentiment, pride, and efforts to increase production continued until the war ended. - ["Nebraska's 120 Thousand Fighting Farmers" Nebraska Farmer, July 11, 1942] - ["Farmers Score Food Victory," Omaha World-Herald January 1, 1944] One Oklahoma editor contended, "The war has made the farmer almost the most important person in the county, and farming has become as essential a war-time business as the manufacturer of planes, tanks, guns and ammunition." By early 1942, Great Plains farmers knew the war would dramatically increase their income. In South Dakota farmers and livestock raisers anticipated wartime profits because approximately 75 percent of the state's farm income came from sales to allied forces and civilians through the Lend-Lease program. In 1941, gross farm income increased by $30 million. - ["Last Year's Record Farm Output Is Base for Bigger Production," Rapid City Daily Journal, February 26, 1942] Yet, as agricultural income increased, Great Plains farmers recognized a looming agricultural labor shortage as their sons and hired hands joined the military while the federal government expected them to increase production. By spring 1942, the U.S. Employment Service could not find enough workers for farm labor. Government officials recommended the employment of nonfarm women and men and boys and girls, and it urged businesses to close during peak agricultural seasons, such as harvest time, to enable employees to help local farmers. In Colorado, however, some people opposed the organization of school children for farm labor because it required too much regimentation. Many schools and civic organizations, however, provided volunteers to help farmers. - ["Urges School Boys to Fill Farm Jobs," Omaha World-Herald, February 7, 1942.] 
- ["Farms Will Draw on New Labor Sources for 1942," Bismarck Tribune, March 23, 1942] - ["City Lads Learn Farm Work to Help Meet Rural Labor Shortage," Omaha World-Herald, April 26, 1942] - ["Denver Area Fights U.S. Plan for Your Farmers Battalions," Denver Post, May 2, 1942] - ["Men for Harvest," Salina Journal, May 11, 1942] - ["Western Farmers Are Told to Solve Labor Problems," Denver Post, May 21, 1942] - ["Business Men, School Boys As Farm Workers," Salina Journal, September 14, 1942.] - ["Leave School to Help in Harvest," Salina Journal, September 23, 1942.] - ["60 Kiwanis to Pick Cotton; Banker to be Water Boy, Daily Oklahoman, October 9, 1942] - ["Roswell High School Students to Pick Cotton," Roswell Daily Record, October 16, 1942] - ["Calls on Schools to Provide Labor," Salina Journal, August 24, 1943] In June 1942, O. M. Olsen, Commissioner of Labor for Nebraska, surveyed the labor shortage in the sugar beet region of western Nebraska. He supported the recruitment and hiring of 700 Mexican farm workers to help farmers block, that is, thin sugar beets. In Wyoming, volunteers helped farmers thin beets to ensure a crop. Some farmers also hoped that Japanese evacuees from the west coast who were relocated to Heart Mountain, Wyoming, could help harvest sugar beets. - [Letter: O. M. Olsen to Governor Dwight Griswold, June 28, 1942, Folder 218, Box 12, 1942 Correspondence, Griswold Papers, Nebraska State Historical Society] - ["Volunteers Save Sugar Beets of Sheridan Region," Wyoming Eagle, July 23, 1942] - ["Labor Shortage in Sugar Beet and Bean Fields Grows Grave," Wyoming Eagle, August 14, 1942] - ["State Needs 3,439 More Beet Workers," Denver Post, September 9, 1942] Great Plains farmers knew that agricultural machinery would help them solve the labor shortage, improve efficiency and production, and reduce labor costs. But, they could not purchase much equipment during the war because defense industry needs for iron, steel, and rubber had priority over agricultural machinery manufacturers. A farm implement shortage developed quickly, particularly for tractors, combines, and corn pickers, and forced Great Plains farmers to share equipment when an implement broke or wore out. During the summer of 1942, H. O. Davis, rationing director for Kansas, told farmers, "This is more than a question of 'neighboring' it is a question of patriotic service for the country." By autumn, E. K. Davis, president of the Kansas Farmers Union, urged members to share labor and machinery. - ["Farm Implement Dealers Fear Loss of Business," Omaha World-Herald, December 11, 1942] - ["Rigid Quotas for Machinery for Farmers," Omaha World-Herald, January 11, 1942] - ["Share on Farms," Salina Journal, October 29, 1942] In September 1942, Secretary of Agriculture Wickard issued a rationing order for all farm machinery, effective in November. As a result, Great Plains farmers used only worn-out equipment during the war. Implement dealers often could not keep pace with the demands for repair work. Great Plains farmers could only make do with the implements that they had when the war began, while recognizing the potential problems ahead. - ["Wickard Orders Rationing for Farm Machinery," Wyoming Eagle, September 17, 1942] - ["This Is Going to be Tough! Farm Machinery Will be Hard to Get," Dakota Farmer, November 14, 1942] By 1944, Great Plains farmers experienced a severe implement shortage. With most iron and steel reserved for military purposes, few farm implement manufacturers built needed equipment. 
Great Plains farmers compensated by sharing implements, employing itinerant harvest crews called custom cutters, and hiring nonfarm workers for the corn harvest. Farm women also helped harvest crops. Some farmers, however, who lacked both corn pickers and labor, harvested their crop by letting their hogs graze it for later sale as pork. Throughout the war, insufficient farm machinery and labor hindered the efforts of farmers to increase production. Most farmers, however, confronted these problems and profited from increased productivity and high wartime prices.
- ["Second Hand Farm Implements Once Ignored Are Now Bringing Fabulous Prices at Rural Sales," Rapid City Daily Journal, March 15, 1943]
- ["Pooling of Farm Machinery Relieves Shortage of Labor," Amarillo Globe, March 8, 1942]
- ["Corn Harvest is Big Job; Men, Women, and Livestock are Helping," Nebraska Farmer, November 4, 1944]
While farmers endured the shortage of farm implements, they also contended with a labor shortage throughout the war. In Colorado, Governor John C. Vivian appealed to Secretary of War Henry Stimson to release men from the military provided they worked on farms. He believed the induction of farm men into the military by the Selective Service contradicted government appeals for farmers to increase production. Governor Vivian argued that only farmers knew how to farm, not the city men and women who might be hired as agricultural workers. He feared lost crops and food shortages if farmers continued to operate without their sons. The War Department ignored Governor Vivian's request, and Colorado farmers sought other solutions to their farm worker shortage, but not before Governor Vivian gained considerable attention for his plan in the newspapers.
During the war, the farm labor shortage became serious across the Great Plains. Farmers could not compete with defense industry wages, and the military took away many of their sons and hired hands. The construction of military bases and employment at the bomber and ordnance plants, airbases, ammunition depots, and flying schools further drained the agricultural labor supply in the region because the construction and war industries paid considerably higher wages than farmers. In Kansas, farmers paid approximately $50 per month with room and board for year-round help and $3 per day for seasonal harvest hands. By autumn 1942, however, they paid $5 per day for inexperienced workers, and they could not employ enough of them, in part because the aircraft industry in Wichita paid wages as high as $12 per day. Farmers continued to demand changes in the draft system and the provision of military furloughs to ensure adequate agricultural labor, but the War Department staunchly opposed such a policy. Early in 1943, Paul V. McNutt, director of the War Manpower Commission, and the newly appointed food administrator, Chester Davis, announced that they would seek the mobilization of a volunteer "land army" of 3.5 million people for seasonal work on farms across the nation. Local extension agents would recruit workers not employed in defense industries and urge them to work on farms for "regular farm wages," even if below the pay of their regular jobs, as a contribution to the war effort. In Colorado, Governor Vivian told Secretary Wickard that farmers could not meet their labor needs by employing teenagers from the cities and towns, as the United States Department of Agriculture and President Franklin D. Roosevelt suggested, because they did not have the necessary experience.
Even so, he urged school officials to release these students to help with spring planting. In 1943, the state extension services and the United States Department of Agriculture began a major campaign to encourage farmers to employ boys and girls and men and women from the towns and cities to help meet their labor needs. The Kansas Extension Service reported that, "It may take two boys to make one man, or three businessmen to replace one skilled farmer but the help that is here must be utilized." The Extension Service also observed that, "It will take patience on the part of the farmer to train skilled help. It will also require that sacrifice be made by town people unused to farm work under the summer sun. All of this is incidental to getting the job done." In April, the Kansas State Extension Service appealed to the patriotism of town and country people alike to help solve the farm labor shortage. - ["Farm Labor and Agricultural Production," Kansas Agricultural Extension Service, April 14, 1943] - [War Manpower Commission, Farm Labor Bulletin No. 25, "To All Volunteer Farm Placement Representatives] State and federal agencies also provided information to teenagers in the towns and cities who might seek jobs on farms. The Kansas State Board for Vocational Education, for example, provided the following suggestions to help town boys adjust to agricultural work and daily instruction by farm men and women. The board also provided advice for farmers who employed town boys as well as county extension agents involved in the recruitment process. - [Kansas State Board for Vocational Education, Topeka, "Suggestions To Town Boys Who Will Work on the Farm] - ["Some Suggestions to Farmers Who Employ Town Boys, by Lester B. Pollon, State Supervisor, Vocational Education] - ["Suggestions for Recruiting and Training an Emergency Lane Army in the Food Production Program." By. C. M. Miller, Director, Kansas state Board of Vocational Education, Topeka, no date.] - ["Suggestions for Recruiting, Registering, and Placement of non-Farm Labor, General Statement, no date.] - ["Suggestions for County Action to Meet Labor Needs; Getting The Facts About Farm Labor, no date.] - [Power and the Plow, no date.] By 1943, then, the United States Department of Agriculture sought to keep a force of experienced farmers and agricultural workers on the land and encourage the return of workers who were not employed in essential defense industries and who had agricultural experience on Great Plains farms. USDA officials also wanted to mobilize a "land army" or a "U.S. Crop Corps" of 3.5 million men, women, and children from the towns and cities for full-time, seasonal, and temporary farm work, particularly at harvest time. On April 29, 1943, Congress passed Public Law 45 which established the Emergency Farm Labor Program. This legislation gave the Extension Service in each state the responsibility for recruiting, transporting, and placing agricultural workers. The Extension Service also would work with the U.S. Department of Education to recruit school children for "Victory Farm Volunteers" of the U.S. Crop Corps and enlist a Women's Land Army. - ["U.S. to Mobilize Land Army of Over 3 millions," Denver Post, January 25, 1943] - ["Great Land Army Being Mobilized for Harvest," Denver Post, July 6, 1943] The agricultural labor shortage remained critical across the Great Plains during the war years. 
The Dallas Chamber of Commerce asked business leaders to release their employees for field work, but few businessmen or their employees volunteered to chop cotton, that is, to weed the fields with a hoe. Similarly, farm labor officials urged Cheyenne businessmen and their employees to spend their summer vacations on a farm within a fifty-mile radius of the city. In Nebraska, one county agent reported that interest among school boys and girls in farm work lagged, and a survey of high school students in Oklahoma City clearly indicated that most had no intention of working on farms for patriotic reasons when they could earn $100 or more per month in various city jobs. Few farmers could pay such high wages. In Kansas, for example, the average farm worker earned about $80 per month, or $60 per month with room and board. In Oklahoma and Kansas, officials reported that farm labor needs could be met only by school boys and girls, businessmen, and "rural and town women," but when harvest time came for the wheat crop, wages of $10 per day with room and board attracted few volunteers. Near Dallas, cotton pickers earned at best $5 per day, and few workers took that employment. School leaders informed agricultural officials and labor recruiters that their children would not pick cotton even if they were released from school.
- ["Farm Work Call is Issued to Women and Children," Denver Post, February 29, 1944]
- ["Rains, Lack of Pickers Menace Cotton Crop," Dallas Morning News, October 5, 1944]
- ["Rains, Picker Shortage Upset Cotton Growers," Dallas Morning News, October 6, 1944]
- ["Unlucky Farmers Will Pick Corn All Winter," Rapid City Daily Journal, October 13, 1944]
Given the inability of many Great Plains farmers to meet their labor needs locally, they increasingly sought Mexican and Mexican American workers, particularly to cultivate and harvest sugar beets and to chop and pick cotton in New Mexico and Texas. In 1942, many Great Plains farmers were encouraged when the federal government negotiated an agreement with Mexico to support the temporary migration of workers to aid farmers who had particular labor needs and who met specific wage, housing, and working regulations. This agreement became effective on August 4, 1942. Soon, farmers and agricultural officials referred to it as the Bracero Program. Mexican workers, called braceros, proved good workers in the Great Plains sugar beet fields. Sugar beet growers and nearby refineries quickly stereotyped them as a people who would work long and hard for low wages and not complain, and many came from rural areas and understood farm work. Few local or white migrant workers sought this back-aching work for about $10 per day. During the remainder of the war, Great Plains farmers, particularly sugar beet growers, sought braceros, whom they contracted through the federal government.
- ["Mexico Will Send 20,000 to Labor for Beet Growers," Wyoming State Tribune, June 13, 1944]
- ["50 Mexicans Reach State," Omaha World Herald, May 14, 1943]
- ["2,000 Mexican Workers in North Dakota," Bismarck Tribune, June 15, 1944]
- ["Mexican Workers May Come Here," Bismarck Tribune, July 1, 1944]
- ["3,000 Mexican Harvest Workers Due in North Dakota Soon," Bismarck Tribune, July 21, 1944]
- ["Few Sign Up for Beet Harvest Work, Recruiter May Try to Enlist Mexicans," Omaha World Herald, October 22, 1944]
Braceros also worked for Great Plains farmers in other capacities. They harvested potatoes, shocked corn, threshed grain, and stacked hay.
Great Plains farmers appreciated the willingness of braceros to do the required work, but, because of racial prejudice, they wanted the Mexicans to leave their farms and the area when the job ended. The braceros and the Mexican American migrant workers from the southern Great Plains confronted segregation in businesses and public places across the region. Great Plains farmers who employed braceros, however, praised their work ethic and productivity. Even though they sometimes lacked skills for harvesting corn and wheat or using machinery, they learned quickly and worked hard. South Dakota farmers particularly welcomed bracero workers during the war years.
- ["Your Mexican Hired Hands," Dakota Farmer, July 7, 1945]
- ["Mexicans Aid Harvest," Rapid City Daily Journal, February 5, 1945]
In Nebraska, extension agents praised the ability of the braceros to learn any farm job. The Fillmore County agent observed that they were accustomed to working with their hands, which gave them an advantage over "most unskilled workers." He also urged farmers to help ensure good working relations for them. Great Plains farmers, particularly sugar beet growers, needed Mexican nationals in their fields, and their labor proved essential. Between August 1943 and August 1945, approximately 20,000 braceros worked in the Great Plains, where they served as an important labor force. Braceros helped farmers provide food for the military and the public and earn a profit.
Braceros, however, could not provide all of the labor needed on Great Plains farms. Some agricultural officials in the USDA and the state extension services believed that women in the cities and towns could help ease the farm labor shortage by joining a Women's Land Army. Confronted with a labor problem that had no male solution, some agricultural officials, state politicians, and women's organizations began considering women, both farm and town, as a collective agricultural labor pool. The USDA studied the possibility of mobilizing nonfarm women for agricultural labor and, in February 1943, Secretary of Agriculture Wickard asked the Extension Service to develop a program for the recruitment of women for farm work. In April, Congress authorized and appropriated funding for a Women's Land Army (WLA). Florence Hall, an experienced USDA employee, became head of the WLA. The state extension services had the responsibility to appoint leaders for recruitment and organizational work. The WLA functioned as part of the Emergency Farm Labor Program and the U.S. Crop Corps. The WLA sought women for assignment to farms on a part-time, weekly, or monthly basis. Enlistment was open to women at least eighteen years of age who produced a doctor's certification of good health. The WLA planned to recruit women in areas where a farm labor shortage existed; recruiting locally would help solve transportation and housing problems. Each woman had to be willing to work on a farm continuously for at least one month. WLA volunteers would receive training for "life on a farm" at a state agricultural college or similar institution. Agriculture and home economics teachers would provide the training. Women employed by the WLA would receive the prevailing local wage for farm work. Farmers interested in hiring these women could contact their county agent, who would assign them the workers best suited for the situation from the WLA's local and state labor pool.
The county extension service would monitor this employment to ensure that the town women adjusted to farm life and that adequate living quarters and sanitary facilities were provided. Great Plains farm men and women appreciated patriotism, but they questioned whether nonfarm women could perform physical agricultural work. Few town women volunteered for the WLA. As a result, in October 1943, farm women became eligible to join the organization. This decision enabled WLA officials to count more women as participants and claim some recruitment success. From 1943 to the termination of the WLA in 1945, as many as two million women became part of the organization nationwide. In the Great Plains, farmers traditionally had not hired women for seasonal, that is, harvest work, and recruitment proved difficult. In Nebraska, the extension service reported that farmers willingly accepted their wives and daughters in the fields, but they were reluctant to hire nonfarm women. Few nonfarm women, moreover, sought agricultural employment because they were not interested in this work or did not consider it a contribution to the war effort. It also did not pay as much as defense industry jobs. Still, in 1943, women, primarily from the farms but a few from the towns, played a major role in completing the wheat harvest. One observer noted, "These are women of prosperous wheat farms. They are mostly educated, refined women. . . . many young college girls, out of school for the summer."
- ["Labor Shortage Putting More Women Workers Upon Farms," Wyoming Eagle, April 23, 1942]
- ["Increased Use of Women on Farms Sought," Wyoming Eagle, October 8, 1942]
- ["Men at War, Wheat Harvest Story of Women in the Fields," Daily Oklahoman, June 27, 1943]
As the Selective Service System drafted more men for the military, one agricultural official reported, "If manpower continues to be drained . . . we will have to accept the idea that women will supplant men in the fields." He contended, "They do it in England and there's no reason we can't do it here." WLA recruiters visited schools and women's groups and canvassed neighborhoods house-to-house to enlist women who would attend short courses at colleges across the state, where they would learn to tend poultry, milk cows, and do other farm work. The following documents were prepared by the Kansas State Extension Service to help home demonstration agents and others recruit women for agricultural labor. Extension agents could use the documents to address local groups. These documents stressed the importance of agriculture, noted the farm labor shortage, and urged women to enlist.
- [A Call to Farms: Basic Materials for use in discussing the Women's Land Army as a phase of the Kansas Farm Labor Program-1943]
- [Kansas Women in the Farm Labor Program; Talk by Georgiana H. Smurthwaite, State Home Demonstration Leader, Kansas State College Extension Service, Manhattan, Regional Farm Meetings, Kansas, May 18-28, 1943]
- [Questions and Answers on the Women's Land Army of the U.S. Crop Corps; Basic Materials for use in the Kansas Farm Labor Program-1943]
- [Be a Soldier of the Soil, May 5, 1943]
Although few women joined the WLA, many worked on Great Plains farms. Women detasseled corn and pitched hay in South Dakota, shocked wheat in North Dakota, and harvested potatoes in Wyoming, where the number of women driving tractors also became noticeable. In Nebraska, women also drove tractors to cultivate corn, and they harvested grain and picked corn.
Most of these women, however, were family members, not nonfarm (that is, urban or town) women. Most of these farm women drove trucks and tractors during the harvest, with hauling grain their most common job. Implement companies, state extension services, and agricultural employment committees often sponsored training courses for farm and nonfarm women to help them learn to operate agricultural implements, particularly tractors.
- ["Women Undertake Work on Saline Farms," Salina Journal, March 31, 1944]
- ["Women Will Drive Tractors," Kansas Farmer, March 4, 1944]
Few farm women wanted city women working in their homes, unless they cleaned and cooked. Farm women did not want nonfarm women working in the fields. Moreover, farmers were skeptical about hiring women, particularly nonfarm women. They preferred to entrust their machinery to their wives and daughters or other farm women, because they had some knowledge about the operation of various implements. Consequently, the women working on Great Plains farms generally were: first, the farmer's wife; second, his live-at-home daughter; third, the daughter who had moved away but returned during the harvest; fourth, a relative; fifth, friends; and sixth and last, town women who wanted to work on a farm, if the family accepted them.
- ["Are Farmettes Worth Their Keep," Omaha World Herald, March 1, 1943]
- ["State's Farm Women Earn a Chapter in War History," Omaha World Herald, July 29, 1943]
- ["Women Will Be Enlisted For Harvest," Salina Journal, January 14, 1944]
- ["City Folk to Aid of Harvests," Wichita Eagle, May 14, 1944]
No one can say precisely how many women worked on Great Plains farms as part of the WLA because the records are imprecise. Thousands of women, however, labored on farms across the Great Plains, but they were so widely scattered and worked so unobtrusively that few people were aware of their contribution to the war effort.
- ["Women's Land Army Is Doing Big Job in State," Denver Post, September 28, 1944]
- ["Midwest Women Save America's Farms," Denver Post, July 1, 1945]
The WLA achieved only modest success recruiting, enlisting, and placing nonfarm women in agricultural positions in the Great Plains. But, as an organization that encouraged nonfarm women to leave their homes and jobs for farm work, the WLA served as an important symbol of collective unity and patriotic sacrifice. In the Great Plains, however, women conducted a considerable amount of agricultural labor, but not as part of the WLA. At best, farm men approved of nonfarm women helping their wives with domestic chores, and farm women treated them as "hired girls" who did not know very much. Farm women considered field work their responsibility in time of need. In the end, farm women, not town recruits of the WLA, made the greatest contribution of women to agricultural work in the Great Plains during World War II.
- [Table: Estimated Number of Women in Farm Labor Through the Extension Farm Labor Program (Seasonal and Year-Round). ¹Includes farm women not in the region of the Great Plains. Source: Wayne D. Rasmussen, A History of the Emergency Farm Labor Supply Program, 1943-47, Agriculture Monograph No. 13 (Washington, D.C.: U.S. Department of Agriculture, September 1951), 148-49]
In retrospect, when the war began, farmers optimistically hoped the new conflict would benefit them. Increased federal demands for greater production meant more money.
In 1940, farmers received an index price of 84 (1910-1914 = 100) for wheat, 83 for cotton, and 108 for livestock, while their cost-of-living index reached 121. By the end of the war, the index price for wheat reached 172, for cotton 178, and for livestock 210, while the cost-of-living index reached 182. Put differently, the index of prices received for all farm products was 95 in 1939 and 204 in 1945. At the same time, farmers paid an index price for commodities, interest, taxes, and wages of 123 in 1939 and 192 in 1945. Net income on a typical Great Plains wheat farm in Kansas, Oklahoma, and Texas rose from $558 in 1939 to $6,700 in 1945, a 1,102 percent increase. In Oklahoma and Texas, cotton farmers earned an average of $997 for their crop in 1939 and $2,894 in 1945, a 190 percent increase. Overall, then, Great Plains farmers benefited from World War II. They paid debts and mortgages, bought land, and saved. They hoped that any post-war economic depression would pass quickly. The war years had ended the price-depressing surpluses and low farm income of the Great Depression. Great Plains farmers agreed that war paid.
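The percentage gains cited above follow the usual percent-change formula, (new - old) / old × 100. The short Python sketch below is an illustrative check added here, not part of the original source; it recomputes the increases from the rounded dollar figures quoted in the text, and the small gap between the result (roughly 1,101 percent) and the cited 1,102 percent presumably reflects rounding in the underlying data.

```python
# Illustrative check of the percent-increase figures quoted above,
# computed from the rounded dollar amounts given in the text.

def percent_increase(old: float, new: float) -> float:
    """Percent change from old to new: (new - old) / old * 100."""
    return (new - old) / old * 100

# Typical Great Plains wheat farm net income, 1939 vs. 1945
print(f"Wheat farm income: {percent_increase(558, 6_700):.0f}% increase")     # ~1101%

# Average cotton crop earnings in Oklahoma and Texas, 1939 vs. 1945
print(f"Cotton crop earnings: {percent_increase(997, 2_894):.0f}% increase")  # ~190%
```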
http://plainshumanities.unl.edu/homefront/agriculture.html
Guatemalan, and indeed Central American, independence came more as a result of pressures from without than from a genuine internal uprising demanding freedom from Spanish rule. This is not to say that all was well with Spanish colonial rule, as there were policies and social stratifications in place contributing to unrest among the lower strata of society. Spanish policies kept wealth and power in the hands of Spanish-born elites, or chapetones. Criollos, or those born in the New World of Spanish descent, were the next rung down the ladder, with the lowest standings reserved for mixed-blood mestizos and full-blooded Indians. Napoleon's invasion of Spain in 1808 led to the imposition of a liberal constitution on Spain in 1812. When Mexican general Agustín de Iturbide declared his own country's independence from Spain, Guatemala followed suit. The reigning captain general, Gabino Gaínza, bowed to demands for independence but hoped to maintain the existing power structure with the support of the church and landowning elites. The declaration of independence essentially maintained the old power structure under new management. Mexico quickly dispatched troops to annex Guatemala, and all of Central America, to Iturbide's new empire. Iturbide was dethroned in 1823, and Central America, minus the state of Chiapas, declared its independence from Mexico. This second declaration joined the remaining states in a loose federation and adopted many U.S.-modeled liberal reforms such as the abolition of slavery. A protracted power struggle between liberals advocating a secular, more egalitarian state and conservatives wanting to maintain the church-dominated political and economic structures marked the early years of independence. The Central American Federation was weakened not only by power struggles within individual member states, but also by a struggle to determine regional leadership over neighboring states.
Justo Rufino Barrios and the Liberal Reforms
The liberals would finally succeed in 1871 under the leadership of General Justo Rufino Barrios, who, along with Miguel García Granados, set out from Mexico with a force of just 45 men, gaining numbers as they approached the capital. The capital was taken on June 30, 1871, and Granados was installed as the leader of the new liberal government. Granados made only limited reforms, and by 1872 a frustrated Barrios marched to the capital with his troops and demanded elections, which he won overwhelmingly. Among the reforms quickly instituted by Barrios, who would go down in Guatemalan history as "The Reformer," were educational reform and separation of church and state. Barrios was the first of the caudillos, military strongmen who ruled the country with an iron fist and a sense of omnipotence, mostly uninterrupted, until the revolution of 1944. He masterfully strengthened his power over the entire country through links to local strongmen in rural areas who wielded power on his behalf but were unable to challenge his hold because of the restricted development of secondary market centers and the overwhelming economic dominance of Guatemala City. To further exercise his dominion, Barrios professionalized the military, creating a new military academy, the Escuela Politécnica, still in existence today. The addition of rural militias further strengthened national control over the rural hinterlands. Barrios was decidedly pro-Western and sought to impose a European worldview to suppress what he saw as a vastly inferior Indian culture.
Liberal economic policies ensured minimal protection of village lands, Indian culture, or the welfare of peasant villages. During this time, coffee came to dominate the Guatemalan economy, and Barrios's economic policies ensured the availability of a peasant workforce to supply the labor-intensive coffee harvest with the workers it needed. Furthermore, the increasingly racist attitudes of Guatemala's coffee elites toward the Indians served to justify the coercive means used to secure this labor force. The Indians were seen as lazy, making forced labor and the submission of the indigenous masses both necessary and morally justified. In this regard, the mandamiento, which came to replace the repartimiento, was increasingly enforced in the last two decades of the 19th century, requiring villages to supply a specified number of laborers per year. Increasingly, however, elites found more coercive ways to exact labor from the Indians by way of debt peonage. Rural workers were required to carry a libreta, a record containing an individual's labor and debt figures. Habilitadores, or labor contractors, were charged with advancing money to peasants in exchange for labor contracts. The contractors often used alcohol as an added incentive and took advantage of widespread peasant illiteracy to ensure many of them contracted debts they would never be able to repay. In this way, depressed rural wages from debt peonage and low-cost labor increased the wealth of agricultural elites while making the rural peasantry even poorer.
Manuel Estrada Cabrera
Justo Rufino Barrios died in battle in 1885 while fighting to create a reunified Central America under Guatemalan leadership. He was succeeded by a string of short-lived caudillo presidents. The next to hold power for any significant time was Manuel Estrada Cabrera, whose legacy included undivided support for big business and crackdowns on labor organization. He ruled from 1898 until his overthrow in 1920, after being declared mentally insane. Among Cabrera's many peculiarities was the construction of several temples to honor Minerva, the Roman goddess of wisdom. His legacy also includes gross corruption, a beefed-up military, and a neglected educational system. Export agriculture continued its unprecedented growth under Cabrera, paving the way for the dominance of two foreign groups that would come to control much of Guatemala's economy in later years. The first of these were German coffee planters who settled in the region of Las Verapaces. By 1913 this German enclave owned 170 of the country's coffee plantations, with about half of them in the vicinity of Cobán. The other significant foreign presence in Guatemala during this time was the U.S.-owned United Fruit Company (UFCo), aptly nicknamed "El Pulpo" (The Octopus), with tentacles consisting of International Railways of Central America (IRCA) and the UFCo Steamship Lines. Its vast control of land, rail, and steamship transportation, in addition to Guatemala's sole Caribbean port, Puerto Barrios, made it a political and economic powerhouse. Its political clout would be seen in the mid-20th century when, together with the CIA, it would be directly responsible for ousting Guatemala's president, Jacobo Árbenz Guzmán, from power after land reform policies interfered with the company's vast land holdings. After the overthrow of Estrada Cabrera in 1920, the country entered a period of instability and power struggles culminating in the rise to power of Jorge Ubico.
Continuing in the now well-established pattern of megalomaniacal, heavy-handed leadership that would come to characterize many of Guatemala's presidents, Ubico continued the unconditional support for U.S. agribusiness and the local oligarchy. By 1940, 90 percent of Guatemala's exports were sold to the United States. Ubico caved in to U.S. demands for the expulsion of the German coffee planters from Guatemala during World War II, evidencing the increasing U.S. hold on Guatemalan domestic policy. Within Guatemala, Ubico embarked on various reforms, including ambitious road-building projects as well as improvements in health care and social welfare. Debt peonage was also outlawed, but it was replaced by a vagrancy law enforcing compulsory labor contributions of 150 days on landless peasants, either on rural plantations or in the government road-building programs. Ubico's reforms always had in mind the modernization of the state economy. Far from an attempt to free the indigenous peoples from coercive labor practices, the vagrancy law asserted centralized control over the national labor force while keeping the political power of the oligarchy firmly in check. Ubico was also obsessed with internal security. He saw himself as a reincarnated Napoleon and became increasingly paranoid, creating a network of spies and informers to repress opposition to his increasingly tyrannical rule. Much of this opposition came from the indigenous peasant population, whom Ubico ignored and regarded as retrograde and inferior. This led to numerous revolts in the late 1930s and early 1940s. The discovery of an assassination plot in 1934 led to the execution of 300 suspected conspirators within 48 hours.
© Al Argueta from Moon Guatemala, 3rd Edition. Photos © Al Argueta www.alargueta.com
http://moon.com/destinations/guatemala/background/history/independence
Reflection is a process for looking back and integrating new knowledge. Reflections need to occur throughout the building blocks of constructivism and include teacher-led, student-driven, and teacher reflections. We need to encourage students to stop throughout the learning process and think about their learning. Teachers need to model the reflective process to encourage students to think openly, critically, and creatively.
Techniques for Reflection
Closing Circle – A quick way to circle around a classroom and ask each student to share one thing they now know about a topic, a connection they made that will help them remember, or how this new knowledge can be applied in real life.
Exit Cards – An easy 5-minute activity to check student knowledge before, during, and after a lesson or a complete unit of study. Students respond to three questions posed by the teacher. Teachers can quickly read the responses and plan necessary instruction.
Learning Logs – Short, ungraded, and unedited reflective writing in learning logs is a venue to promote genuine consideration of learning activities.
Reflective Journals – Journals can be used to allow students to reflect on their own learning. They can be open-ended, or the teacher can provide guiding, reflective questions for the students to respond to. Journals provide insight into how the students are synthesizing their learning, and they also help the students make connections and better understand how they learn.
Rubrics – Students take time to self-evaluate and peer-evaluate using the rubric that was given or created at the beginning of the learning process. By doing this, students will understand what areas they were very strong in and what areas to improve for next time.
Write a Letter – The students write a letter to themselves or to the subject they are studying. This makes the students think of connections in a very personal way. Students enjoy sharing these letters and learn from listening to other ideas.
By using a variety of ways to show what they know, such as projects, metaphors, or graphic organizers, students are allowed to come to closure on an idea, to develop it, and to use their imagination to find understanding. Understanding is taking bits of knowledge from different curriculum areas and life experiences and applying this new knowledge. When students apply new knowledge, connections are made and learning is meaningful and relevant. Application is a higher-order thinking skill that is critical for true learning to occur.
Possible Student Exhibits
Analogies – Students compare a topic or unit of study to something familiar, such as an inanimate object, connecting the known to the unknown.
Blogs – Blogs, short for weblogs, are online journals or diaries that have become popular since the mid-1990s. Bloggers post personal opinions, random thoughts, connections, and real-life stories to interact with others via the Web! Weblinks and photos can also be added to the blog. A learner may choose to have their own blog to record their learning on a specific topic. A group of learners could choose to share a blog and read, write, challenge, debate, validate, and build shared knowledge as a group. Check out Blogger.com to set up your own personal or professional blog – develop your digital voice and model for your students.
Collage – Students cut out or draw pictures to represent a specific topic.
To evaluate the level of understanding, students write an explanation or discuss in small groups the significance of the pictures and why they are representative of the topic. This technique encourages students to make connections, to create a visual representation, and then to explain or exhibit their understanding.
Celebration of Learning – A demonstration where students have the opportunity to share their expertise in several subject areas with other students, teachers, and parents.
Graphic Organizers – Graphic organizers, also known as mind maps, are instructional tools used to illustrate prior knowledge.
Portfolios – A portfolio is a representative collection of an individual student's work. A student portfolio is generally composed of best work to date and a few "works in progress" that demonstrate the process. Students show their knowledge, skills, and abilities in a variety of ways that are not dependent upon traditional media such as exams and essays. Multiple Intelligences portfolios are an effective way for students to understand not how smart they are but how they are smart.
Project-Based Learning – Students create projects by investigating and making connections from the topic or unit of study to real-life situations. Multimedia is one effective tool for students to design their projects.
T-Charts – A simple T is drawn, and students jot down information relating to a topic in two different columns.
Venn Diagram – A graphic organizer made with two intersecting circles and used to compare and contrast. Using this tool, students identify what is different about two topics and identify what they have in common in the shared area.
Twelve Tips for Setting Up an Autism Classroom
Standing before your students' expectant faces, you're determined to create a successful classroom. You will! These twelve tips are here to guide you. To be truly effective, never lose sight of the secret ingredient: your students must know you accept them for who they are. They must feel your belief in them. By believing they can do it, you will expect a lot from them and you will get it. In the process, and quite unexpectedly, you will receive a surprise bonus. Your students will adore you and look forward to learning in your class every day.
1. Keep it structured
Children with autism thrive in a structured environment. Establish a routine and keep it as consistent as possible. In a world that's ever changing, routine and structure provide great comfort to a child on the autism spectrum. Define routines clearly. For example, every morning:
- Enter the classroom
- Greet the teacher
- Greet the friend next to you
- Unpack your school bag
- Put notes in the red tray
- Put lunch bags in the blue tray
- Sit at your desk
Activities are successful when they're broken into small steps. If children are creating a craft such as a paper airplane, define when it's time to cut, draw, and paste. Make sure children know what to do if they finish ahead of time. Typically, children with autism do not use free time productively; therefore, strive to have as little downtime between activities as possible.
2. Use visuals
A picture speaks a thousand words! Use pictures whenever you can. Children with autism learn faster and with greater ease when you use visuals. In fact, we all respond better to visuals. Look at any page of advertisements and see which ones catch your eye. When verbal instructions require too much concentration, children will tune you out.
Visual supports maintain a child's focus and interest. So what can you use visuals with? Just about anything. Are you teaching hygiene? Show pictures of children brushing their teeth or combing their hair. Are you teaching greeting skills? Show pictures of children greeting their friends, bus driver, parents, and teachers. Are you explaining an outing like a field trip? Show visuals of what to expect on the trip, such as getting on the bus, arriving at the destination, planned activities, eating a snack, and returning to school. Remember to keep explanations of each picture simple and short, or concentration will wane. Give written instructions instead of verbal ones whenever you can. Highlight or underline any text for emphasis.
3. Use schedules
People with autism like order and detail. They feel in control and secure when they know what to expect. Schedules help students know what's ahead. Picture schedules are even more powerful because they help a student visualize the actions. Schedules can be broad or detailed. You can use them with any sequence of events. These examples will give you an indication of how they can be used.
Classroom on Tuesday is an example of a broad schedule, since it takes a whole day to complete:
- Picture of "Unpacking school bag"
- Picture of "Writing in a journal"
- Picture of "Floor time"
- Picture of "Snack"
- Picture of "Music class"
- Picture of "Math"
- Picture of "Lunch"
- Picture of "Playing at recess"
- Picture of "Science experiment"
- Picture of "Reading a book"
- Picture of "Geography"
- Picture of "Packing school bag"
- Picture of "Saying goodbye"
Make sure you have this schedule in a very visible place in your classroom and direct the students' attention to it frequently, particularly a few minutes before you begin the next activity.
The end of a school day is a more detailed schedule, as it explains a short activity:
- Picture of "A clock depicting the end of day"
- Picture of "Retrieving a school bag from its location"
- Picture of "Placing a homework book in the backpack"
- Picture of "Placing a folder in the backpack"
- Picture of "Putting on a coat"
- Picture of "Saying good-bye to friends"
- Picture of "Saying good-bye to the teacher"
- Picture of "Getting on the school bus"
Make sure this schedule is available and draw attention to it before the activity begins. Another option is to create schedule strips and place them on each student's desk. Written schedules are very effective for good readers. These can also be typed up and placed on a student's desk. The child can "check off" each item as it's completed, which is often very motivating for a student.
4. Reduce distractions
Many people with autism find it difficult to filter out background noise and visual information. Children with autism pay attention to detail. Wall charts and posters can be very distracting. While you or I would stop "seeing the posters" after a while, children on the spectrum will not. Each time they look at one, it will be like the very first time, and it will be impossible for them to ignore. Try to seat children away from windows and doors. Use storage bins and closets for packing away toys and books. Remember the old adage: out of sight, out of mind. Noise and smells can be very disturbing to people with autism. Keep the door closed if possible. If your classroom is in a high-traffic area, it's time to speak to the principal!
5. Use concrete language
Always keep your language simple and concrete. Get your point across in as few words as possible.
Typically, it’s far more effective to say “Pens down, close your journal and line up to go outside” than “It looks so nice outside. Let’s do our science lesson now. As soon as you’ve finished your writing, close your books and line up at the door. We’re going to study plants outdoors today”. If you ask a question or give an instruction and are greeted with a blank stare, reword your sentence. Asking a student what you just said helps clarify that you’ve been understood. Avoid using sarcasm. If a student accidentally knocks all your papers on the floor and you say “Great!” you will be taken literally and this action might be repeated on a regular basis. Avoid using idioms. “Put your thinking caps on”, “Open your ears” and “Zipper your lips” will leave a student completely mystified and wondering how to do that. Give very clear choices and try not to leave choices open ended. You’re bound to get a better result by asking “Do you want to read or draw?” than by asking “What do you want to do now?” 6. It’s not personal Children with autism are not rude. They simply don’t understand social rules or how they’re supposed to behave. It can feel insulting when you excitedly give a gift or eagerly try and share information and you get little to no response. Turn these incidents into learning experiences. As an example, if you enthusiastically greet a child with autism and you get the cold shoulder, create a “Greeting Lesson”. Take two index cards. Draw a stick figure saying “Hi” on the first card. On the second card draw a stick figure smiling and waving. Show each card to the child as you say. “When somebody says Hi, you can either say “Hi” or you can smile and wave. Which one do you want to do?” When the child picks a card, say “Great, let’s practice. “Hi Jordan”. Show the card to prompt the child to respond according to the card he picked. Praise the child highly after a response and have your cards ready for the next morning greeting! Keep it consistent by asking the parents to follow through with this activity at home. If you get frustrated (and we all have our days) always remember the golden rule. NEVER, ever, speak about a child on the autism spectrum as if they weren’t present. While it might look like the student isn’t listening or doesn’t understand, this probably is not the case. People with autism often have acute hearing. They can be absorbed in a book on the other side of the room and despite the noise level in the class, they will easily be able to tune into what you are saying. Despite the lack of reaction they sometimes present, hearing you speak about them in a negative way will crush their self esteem. Children on the autism spectrum feel secure when things are constant. Changing an activity provides a fear of the unknown. This elevates stress which produces anxiety. While a typical child easily moves from sitting in a circle on the floor to their desk, it can be a very big deal to a child on the spectrum. Reduce the stress of transitions by giving ample warning. Some ways you can do this is by verbal instruction example “In 5 minutes, it’s time to return to our desks” and then again “Three minutes until we return to our desks” and then again “One more minute till we return to our desks”. Another option is to use a timer. Explain that when the timer goes off, it’s time to start a new activity. Periodically, let students know approximately how much time is left. 
When you ask a child to transition from a preferred activity, they might be very resistant if they have no idea when they will be allowed to resume. If a student loves reading, you could say “In 5 minutes it’s time to do science. Then it’s math and then you can read again”. This way, the child knows that it’s OK to stop because the activity can be resumed again soon. If a child is particularly struggling with a transition, it often helps to allow them to hold onto a “transitional object” such as a preferred small toy or an object of their choice. This helps a child feel in control and gives them something to look forward to. As an example you can say “In 3 minutes we’re going to pick a toy and then we’re going down the hall to music class”. Using schedules helps with transitions too as students have time to “psyche themselves up” for the changes ahead. 8. Establish independence Teaching students with autism how to be independent is vital to their well being. While it’s tempting to help someone that’s struggling to close a zipper, it’s a much greater service to calmly teach that person how to do it themselves. People can be slow when they are learning a new skill until they become proficient. Time is usually something we don’t have to spare, particularly in western societies. However in order to help a person progress we must make time to show them the ropes. While it’s wonderful that your students take direction from you, it’s equally important they learn to respond to peers. If a student asks for a scissor, tell him to ask his peer. Encourage your students to ask each other for help and information. By doing so, students learn there are many people they can seek out for help and companionship. Making decisions is equally important and this begins by teaching students to make a choice. Offer two choices. Once students can easily decide between two options introduce a third choice. This method will help children think of various options and make decisions. People with autism may take extra time to process verbal instructions. When giving a directive or asking a question, make sure you allow for extra processing time before offering guidance. Self help skills are essential to learn. Some of these include navigating the school halls, putting on outerwear, asking for assistance and accounting for personal belongings. Fade all prompts as soon as you can. Remember that written prompts are usually easier to fade than verbal prompts. Fading prompts can be done in a phased approach. If you are prompting a child to greet someone by showing them an index card with the word “Hello”, try fading it to a blank index card as a reminder before you completely remove the prompt. Never underestimate the power of consistency. Nothing works in a day whether it’s a diet, an exercise plan or learning to behave in class. Often we implement solutions and if there are no results within a few days we throw our hands up in the air and say “This doesn’t work. Let me try something else”. Avoid this temptation and make sure you allow ample time before you abandon an idea. Remember that consistency is a key component of success. If you’re teaching a student to control aggression, the same plan should be implemented in all settings, at school and at home. 9. Rewards before consequences We all love being rewarded and people with autism are no different. Rewards and positive reinforcement are a wonderful way to increase desired behavior. Help students clearly understand which behaviors and actions lead to rewards. 
If possible, let your students pick their own reward so they can anticipate receiving it. There are many reward systems which include negative responses and typically, these do not work as well. An example of this type of reward system is where a student will begin with a blank sheet of paper. For each good behavior the student will receive a smiley face. However if the student performs poorly, he will receive a sad face or have a smiley face taken away. It’s far better to just stop providing rewards than it is to take them away. Focusing on negative aspects can often lead to poor results and a de-motivated student. When used correctly, rewards are very powerful and irresistible. Think of all the actions you do to receive rewards such as your salary, a good body and close relationships. There are many wonderful ideas for reward systems. Ten tokens might equal a big prize. Collecting pennies until you have enough to “buy” the reward of your choice. Choice objects to play with after a student does a great job. Rewards don’t have to be big. They do have to be something a student desires and show students they have done a great job. Every reward should be showered in praise. Even though people on the spectrum might not respond typically when praised, they enjoy it just as much as you! 10. Teach with lists Teaching with lists can be used in two ways. One is by setting expectations and the other is by ordering information. Let’s discuss the first method. Teaching with lists sets clear expectations. It defines a beginning, middle and an end. If I ask you to pay attention because we’re going to do Calculus, you probably wouldn’t jump for joy and might even protest. However, you’re likely to be a more willing participant if I explain that there are only 5 calculus sums. I demonstrate this by writing 1 through 5 on the blackboard. As we complete each sum, I check it off on the board, visually and verbally letting you know how many are left till completion. The second method of teaching with lists is by ordering information. People on the autism spectrum respond well to order and lists are no exception. Almost anything can be taught in a list format. If a student is struggling with reading comprehension, recreate the passage in list format. This presentation is much easier for a student to process. Answering questions about the passage in this format will be easier. Similarly, if you’re teaching categories, define clear columns and list the items in each category. While typical people often think in very abstract format, people on the spectrum have a very organized way of thought. Finding ways to work within these parameters can escalate the learning curve. 11. Creative teaching It helps to be creative when you’re teaching students with autism. People on the spectrum think out of the box and if you do too, you will get great results. Throw all your old tactics out of the window and get a new perspective. Often, people with autism have very specific interests. Use these interests as motivators. If you’re teaching reading comprehension and students are bored with a story about Miss Mavis, make up your own story about dinosaurs, baseball statistics or any other topic your students enjoy. Act things out as often as you can. If you’re teaching good behavior, flick your pencil on the floor as you ask your students “Is it OK to do this?” Raise your hand as if to ask a question while you ask “Is it OK to do this?” Another great strategy to use is called “Teaching with questions”. 
This method keeps students involved, focused and ensures understanding. As an example, you might say:
Teacher: Plants need sun. What do they need?
Class: Sun.
Teacher: That’s right. They also need air and water. What do plants need?
Class: Air and water.
Teacher: That’s right, and what else?
Class: Sun.
Teacher: Correct. Plants have stems and leaves. What do they have?
Class: Stems and leaves.
Teacher: And what do they need?
Class: Air and water.
Teacher: And what else?
Class: Sun.
Teacher: That’s right…
Another great way of teaching is by adding humor to your lessons. We all respond to humor. If you’re at a conference, think about how a lecturer holds your attention when he makes jokes. It’s OK to be silly in class. You will have your students’ attention and they will love learning with you. The saying goes that people on the autism spectrum march to the beat of their own drum. Therefore, they often respond to unconventional methods of teaching. While it might take some imagination and prep time, watching them succeed is definitely well worth the effort.
12. Don’t sweat the small stuff
The final goal is for children to be happy and to function as independently as possible. Always keep this in mind and pick your battles wisely. Don’t demand eye contact if a student has trouble processing visual and auditory information simultaneously. People with autism often have poor attending skills but excellent attendance. Does it really matter if a student does one page of homework instead of two? What about if a student is more comfortable sitting on his knees than flat on the floor? It’s just as important to teach appropriate behavior as it is self-esteem. By correcting every action a person does, you’re sending a message that they’re not good enough the way they are. When making a decision about what to correct, always ask yourself first, “Will correcting this action help this person lead a productive and happy life?”
The Power of Self-Esteem: Build It and They Will Flourish
The term “self-esteem,” long the centerpiece of most discussions concerning the emotional well-being of young adolescents, has taken a beating lately. Some people who question this emphasis on adolescent self-esteem suggest that it takes time and attention away from more important aspects of education. Others contend that many of the most difficult adolescents suffer from too much self-esteem and our insistence on building higher levels is detrimental to the student and to society. But many experts and middle school educators stand firm in their conviction that since self-worth is rigorously tested during the middle school years, attention to it can only help students become successful. Perhaps, they say, self-esteem simply has not been defined properly or the strategies used to build it have done more harm than good. For example, “Praising kids for a lack of effort is useless,” says Jane Bluestein, a former classroom teacher, school administrator, speaker, and the author of several books and articles on adolescence and self-esteem. “Calling a bad job on a paper a ‘great first draft’ doesn’t do anyone any good. I think we’ve learned that. If I’m feeling stupid and worthless and you tell me I’m smart, that makes you stupid in my eyes,” she says. “It doesn’t make me any better.” But Bluestein and others say that simply because the corrective methods are misguided doesn’t mean middle school educators should not pay close attention to their students’ self-esteem.
Jan Burgess, a former principal at Lake Oswego Junior High School in Oregon, explains, “We’ve all seen kids whose parents believe self-esteem is absolutely the highest priority. But heaping praise without warrant is empty praise. Self-esteem is important, and it comes from aiming high and reaching the goal. That is much more meaningful.” On the other hand, James Bierma, a school counselor at Washington Technical Magnet in St. Paul, Minnesota, says he is wary of those who want to reduce praise for students. “I don’t see heaping praise on kids as a big problem. I work in an urban area where we have more than 85% of students in poverty. I wish our students received more praise,” he says. “You can go overboard, but that rarely happens in my dealings with families. Students respond well to praise from parents and school staff.” Robert Reasoner, a former school administrator and the developer of a model for measuring and building self-esteem that has been adopted by schools throughout the United States, says there has been a lot of confusion about the concept of self-esteem. “Some have referred to self-esteem as merely ‘feeling good’ or having positive feelings about oneself,” says Reasoner, who is president of the National Association of Self Esteem. “Others have gone so far as to equate it with egotism, arrogance, conceit, narcissism, a sense of superiority, and traits that lead to violence. Those things actually suggest that self-esteem is lacking.” He notes that self-value is difficult to study and address because it is both a psychological and sociological issue and affects students in many different ways. “Self-esteem is a fluid rather than static condition,” says Sylvia Starkey, a school psychologist and counselor for 16 years in the Lake Oswego School District. She notes that the way adolescents view themselves can depend on how they feel about their competence in a particular activity. It also is influenced by the child’s general temperament and even family birth order, all of which might make it harder to identify the causes of low self-esteem—or raise it. Reasoner says self-esteem can be defined as “the experience of being capable of meeting life’s challenges and being worthy of happiness.” He notes that the worthiness is the psychological aspect of self-esteem, while the competence, or meeting challenges, is the sociological aspect. He notes that when we heap praise on a student, a sense of personal worth may elevate, but competence may not—which can make someone egotistical. Self-esteem, he says, comes from accomplishing meaningful things, overcoming adversity, bouncing back from failure, assuming self-responsibility, and maintaining integrity. Self-Esteem at the Middle Level Middle school students are particularly vulnerable to blows to their self-esteem because they are moving to a more complex, more challenging school environment; they are adjusting to huge physical and emotional changes; and their feelings of self-worth are beginning to come from peers rather than adults, just at a time when peer support can be uncertain, Reasoner says. “Early on, it’s parents who affirm the young person’s worth, then it’s the teacher. In middle school, peer esteem is a powerful source of one’s sense of self,” according to Mary Pat McCartney, a counselor at Bristow Run Elementary School in Bristow, Virginia, and former elementary-level vice president of the American School Counselors Association. 
No matter how much students have been swamped with praise by well-meaning parents, she says, what their friends think of them is most important. Beth Graney, guidance director at Bull Run Middle School in Gainesville, Virginia, says adults gain their self-esteem through accomplishments and by setting themselves apart from others, while adolescents gain it from their group. “Peer relationships are so critical to kids feeling good about themselves,” she says. Opportunities to Succeed The solution, rather than praising without merit, seems to be providing students with an opportunity to succeed. “Self-esteem that comes from aiming high and reaching goals helps build resilience for students as well,” says Burgess. She says teachers can help kids target their learning and fashion goals that are obtainable, while giving them constructive feedback along the way. “Self-esteem rises and students feel in charge—and this can help parents understand how to heap praise when it is earned.” Bluestein says students often want an opportunity to feel valued and successful. As a group, they can perhaps make a simple decision in class (which of two topics they study first, for example) and individuals might gain from helping others, either collaboratively or as a mentor or tutor. She suggests having students work with others in a lower grade level. As a result, the self-esteem of the students being helped also improves. “Peer helpers, lunch buddies, peer mentors often help kids feel that someone is in their corner and can help them fit in with a larger group,” Graney says. She says parents should encourage their children to find an activity that they like where they can have some success and feel accepted. Bluestein recalls a program she began in which her “worst kids” who seemed to have lower levels of self-worth were asked to work with younger students. Their sense of themselves improved, she says, and eventually they were skipping recess or lunch periods to work with the younger students. Mary Elleen Eisensee, a middle school counselor for more than 30 years at Lake Oswego Junior High School, says if kids can be “guided to accept and support one another, the resulting atmosphere will be conducive for building self-confidence and esteem for everyone.” |Special Care for Special Students Michelle Borba, nationally known author and consultant on self-esteem and achievement in children, says there are five things middle school educators can do easily to improve the self-esteem of their students: Adult Affirmation Is Important Adults play a role, too, by helping students find areas where they can have success and making note of it when they do. They can also just notice students. “Legitimate affirmation makes a huge difference. But plain recognition is just as meaningful. Greeting a student by name even pays big dividends,” says Starkey. She says adult volunteer tutors and mentors help students with social and academic skills and encourage them. An assessment of factors that promote self-esteem in her school district showed such adult attention is very valuable. At Bierma’s school, counselors call parents on Fridays when students’ scores on achievement, attendance, academic, and behavior goals are announced. “It has helped students turn negative behaviors into positive ones.” McCartney says simply treating students respectfully and listening carefully affirms a student’s self-worth. 
She says teachers can also bolster self-esteem if they allow the students to accidentally “overhear key adults bragging about one of their accomplishments.” Reasoner points out that despite thinking to the contrary, strong self-esteem is critical in the middle school years. Students without it withdraw or develop unhealthy ways of gaining social acceptance, often by responding to peer pressure to engage in sex, drinking, drug abuse, or other harmful behaviors. “Many of these problems can simply be avoided if a child has healthy self-esteem,” Reasoner says.
Learning Disabilities: Signs, Symptoms and Strategies
A learning disability is a neurological disorder that affects one or more of the basic psychological processes involved in understanding or in using spoken or written language. The disability may manifest itself in an imperfect ability to listen, think, speak, read, write, spell or to do mathematical calculations. Every individual with a learning disability is unique and shows a different combination and degree of difficulties. A common characteristic among people with learning disabilities is uneven areas of ability, “a weakness within a sea of strengths.” For instance, a child with dyslexia who struggles with reading, writing and spelling may be very capable in math and science. Learning disabilities should not be confused with learning problems which are primarily the result of visual, hearing, or motor handicaps; of mental retardation; of emotional disturbance; or of environmental, cultural or economic disadvantages. Generally speaking, people with learning disabilities are of average or above average intelligence. There often appears to be a gap between the individual’s potential and actual achievement. This is why learning disabilities are referred to as “hidden disabilities”: the person looks perfectly “normal” and seems to be a very bright and intelligent person, yet may be unable to demonstrate the skill level expected from someone of a similar age. A learning disability cannot be cured or fixed; it is a lifelong challenge. However, with appropriate support and intervention, people with learning disabilities can achieve success in school, at work, in relationships, and in the community. In federal law, under the Individuals with Disabilities Education Act (IDEA), the term is “specific learning disability,” one of 13 categories of disability under that law. “Learning Disabilities” is an “umbrella” term describing a number of other, more specific learning disabilities, such as dyslexia and dysgraphia. Find the signs and symptoms of each, plus strategies to help:
- Dyslexia – A language and reading disability
- Dyscalculia – Problems with arithmetic and math concepts
- Dysgraphia – A writing disorder resulting in illegibility
- Dyspraxia (Sensory Integration Disorder) – Problems with motor coordination
- Central Auditory Processing Disorder – Difficulty processing and remembering language-related tasks
- Non-Verbal Learning Disorders – Trouble with nonverbal cues, e.g., body language; poor coordination, clumsy
- Visual Perceptual/Visual Motor Deficit – Reverses letters; cannot copy accurately; eyes hurt and itch; loses place; struggles with cutting
- Language Disorders (Aphasia/Dysphasia) – Trouble understanding spoken language; poor reading comprehension
Symptoms of Learning Disabilities
The symptoms of learning disabilities are a diverse set of characteristics which affect development and achievement. Some of these symptoms can be found in all children at some time during their development.
However, a person with learning disabilities has a cluster of these symptoms which do not disappear as s/he grows older.
Most frequently displayed symptoms:
- Short attention span
- Poor memory
- Difficulty following directions
- Inability to discriminate between/among letters, numerals, or sounds
- Poor reading and/or writing ability
- Eye-hand coordination problems; poorly coordinated
- Difficulties with sequencing
- Disorganization and other sensory difficulties
Other characteristics that may be present:
- Performs differently from day to day
- Responds inappropriately in many instances
- Distractible, restless, impulsive
- Says one thing, means another
- Difficult to discipline
- Doesn’t adjust well to change
- Difficulty listening and remembering
- Difficulty telling time and knowing right from left
- Difficulty sounding out words
- Reverses letters
- Places letters in incorrect sequence
- Difficulty understanding words or concepts
- Delayed speech development; immature speech
What are instructional strategies?
Instructional strategies are methods that are used in the lesson to ensure that the sequence or delivery of instruction helps students learn.
What does effective mean?
The term “effective” means that student performance improves when the instructional strategies are used. The strategies were identified in studies conducted using research procedures and guidelines that ensure confidence about the results. In addition, several studies exist for each strategy with an adequate sample size and the use of treatment and control groups to generalize to the target population. This allows teachers to be confident about how to apply the strategies in their classrooms.
Strategies to use in designing effective lessons
These six strategies have been proven to work with diverse groups of learners (Kameenui & Carnine, Effective Teaching Strategies that Accommodate Diverse Learners, 1998). All students, and particularly those with disabilities, benefit when teachers incorporate these strategies into their instruction on a regular basis.
- Focus on essentials.
- Make linkages obvious and explicit.
- Prime background knowledge.
- Provide temporary support for learning.
- Use conspicuous steps and strategies.
- Review for fluency and generalization.
Identify important principles, key concepts, and big ideas from the curriculum that apply across major themes in the subject content.
- Big Ideas: Instruction is organized around the major themes that run through a subject area. This helps students make the connections between concepts and learn to use higher order thinking skills. Kameenui and Carnine (1998) gave these examples of big ideas for social studies:
- success of group efforts is related to motivation, leadership, resources, and capability
- Graphic organizers: Important ideas and details are laid out graphically to help students see connections between ideas. Semantic webs and concept maps are examples of graphic organizers.
- Thematic instruction: Instructional units combine subject areas to make themes and essential ideas more apparent and meaningful. Lessons and assignments can be integrated or coordinated across classes.
- Planning routines: The Center for Research on Learning at the University of Kansas website (go to http://www.ku-crl.org/sim/lscurriculum.html) has developed the Learning Strategies Curriculum, systematic routines that include graphic organizers to help teachers plan a course, unit, or lesson around the essentials or big ideas.
Teachers guide students to use the organizer to monitor their learning.
Actively help students understand how key concepts across the curriculum relate to each other as you are teaching.
- Give clear verbal explanations and use visual displays (such as flow charts, diagrams, or graphic organizers) to portray key concepts and relationships.
- Help students use techniques like outlining or mind mapping to show connections among concepts.
Connect new information or skills to what students have already learned. Provide additional instruction or support to students who lack necessary background knowledge.
- Ask questions to prompt student recall of relevant prior knowledge.
- Make comparisons between the new concept and things students already know.
- Relate the topic to current or past events that are familiar to students.
- Relate the concept to a fictional story or scenario known to the students.
- Use instructional materials that provide easy access to critical background knowledge.
Provide support (scaffolding) while students are learning new knowledge and skills, gradually reducing the level of support as students move toward independence.
- Provide verbal or written prompts to remind students of key information or processes.
- Physically assist and guide a student when learning a new motor skill, such as cutting.
- Provide study or note taking guides to support learning from text or lectures.
- Use commercial materials that have been specifically designed to incorporate supports for learning.
- Use mnemonics to help students remember multiple steps in a procedure.
Teach students to follow a specific set of procedures to solve problems or use a process.
- Model the steps in the strategy, using a think-aloud process.
- Name the strategy and give students prompts for using it, such as posting steps on the board, providing an example of a problem with the strategy steps labeled, or using memory strategies, such as mnemonics, to help students recall the steps.
- Prompt students to use the strategy in practice situations.
- Reduce prompting as students become proficient in applying the strategy.
- Explicitly teach students the organizational structure of text and prompt its use.
Review for fluency and generalization
Give students many opportunities to practice what they have learned and receive feedback on their performance to ensure knowledge is retained over time and can be applied in different situations.
- Use multiple reviews of concepts and skills.
- Give students specific feedback about what they are doing well or need to change.
- Give students enough practice to master skills.
- Distribute reviews over time to ensure proficiency is maintained.
- Provide review in different contexts to enhance generalization of learning.
- Provide cumulative review that addresses content learned throughout the year.
In order for learning to occur, students need to connect to their own prior knowledge. Connections are like building bridges between the old and new. This bridge building can be brief or in-depth as long as it serves the needs of all learners. Pre-assessment determines prior knowledge, whereas connections provide the link between old knowledge and new knowledge. This step is critical to applying constructivist theory in a classroom.
How do I build community?
- create trust between teacher and student and among the students
- build self-confidence so students will take risks, engage in dialogue
- move from competition to collaboration
- form ‘community clusters’
- create learning circles of like-minded teachers to provide support and share ideas
- practice working in collaborative groups and assign specific roles and tasks
- encourage partner or peer tutoring situations
- begin using reflective journals and/or learning logs
- be open with the students that you are trying a different way of teaching and explain why – allow them time to express thoughts & feelings throughout the process
How do I group my students?
- students need to be taught how to work in a collaborative group
- keep groups “fluid” where students move in and out as needed
- use a variety of groupings based on ability or readiness, instructional needs and interests
- heterogeneous – a group of students with varying ability where each student takes a role in an area of strength that adds to the knowledge of the whole group
- homogeneous – ‘cluster’ grouping of students with similar abilities or a shared interest area can be effective for certain areas of study
- a group of 3 or 4 students works well in most settings
- at times the teacher may choose group members, and at other times students may choose
- establish home-based teams and work teams to blend a heterogeneous group with a homogeneous group
- multiage groupings allow students of similar interests to learn from each other and work together
What strategies or instructional approaches can help students make connections?
Blogs – Blogs, short for weblogs, are online journals or diaries that have become popular since the mid-1990s. Bloggers post personal opinions, random thoughts, connections and real life stories in order to interact with others via the Web! Weblinks and photos can also be added to the blog. A learner may choose to have their own blog to record their learning on a specific topic. A group of learners could choose to share a blog and read, write, challenge, debate, validate and build shared knowledge as a group. Check out Blogger.com to set up your own personal or professional blog – develop your digital voice and model for your students.
Graphic Organizers or Mind Maps – instructional tools used to illustrate prior knowledge. See Best Practice Graphic Organizers for more information and examples.
KWL Charts – K-what do the students already know? W-what do the students need and want to know? L-what did the students learn? An effective pre-assessment tool but also an effective tool to evaluate the level of understanding. Many teachers use the L part as an open-ended question on an exam, allowing the students to share the depth of knowledge that was gained in the unit of study.
Questioning Techniques – Questions are a key element in each of the building blocks of constructivism. Categories of questions are guiding, anticipated, clarifying and integrating.
Reflective Journals or Learning Logs – Journals can be used to assess the process of learning and student growth. They can be open-ended or the teacher can provide guiding, reflective questions for the students to respond to. These often provide insight on how the students are synthesizing their learning.
Are you tired of using a pre-test or KWL chart as your pre-assessment tool? If so, read on and get more ideas on how to figure out what your students already know (or think that they know) prior to teaching a unit or lesson.
“Assessment is today’s means of modifying tomorrow’s instruction.” – Carol Ann Tomlinson
Pre-assessment allows the teacher and student to discover what is already known about a specific topic or subject. It is critical to recognize prior knowledge so students can engage in questioning, formulating, thinking and theorizing in order to construct new knowledge appropriate to their level. Ongoing assessment throughout the learning process is also critical as it directs the teacher and student as to where to go next. Several assessment techniques are described in this section.
KWL Charts – K-what do the students already know? W-what do the students need and want to know? L-what did the students learn? An effective pre-assessment tool and summative evaluation tool to measure the level of understanding at the end of a unit. Many teachers use the L part as an open-ended question on an exam, allowing the students to share the depth of knowledge that was gained in the unit of study.
Yes/No Cards – Students make a card with Yes (or Got It) on one side, No (No clue) on the opposite side. Teachers ask an introductory or review question. Students who know the answer hold up the Yes card; if they don’t know the answer, they hold up the No card. This is very effective to use when introducing vocabulary words that students need as a knowledge base for a specific unit of study.
SA/A/D/SD – Students are given the opportunity to formulate their own views and opinions along a continuum rather than dialectically. Given an issue (similar to those outlined above), students are asked to consider the topic and determine whether they strongly agree (SA), agree (A), disagree (D), or strongly disagree (SD) with the statement. They are then asked to move to the appropriate station in the classroom identified with one of the options. A class discussion follows as students are given the opportunity to outline and defend their positions, refute the arguments of others as well as re-evaluate their own ideas.
Squaring Off – Place a card in each corner of the room with the following phrases: Dirt Road, Paved Road, Highway and Yellow Brick Road. Instruct the students to go to the corner of the room that matches where they are in the new unit of study. Students go to the corner of the room and, as a group, discuss what they know about the topic.
Turn & Talk – During a lesson, there may be opportunities to have the students do a turn & talk activity for a few minutes. This allows students to talk about the information presented or shared and to clarify thoughts or questions. This is an effective alternative to asking questions of the whole group and having the same students respond. All students have a chance to talk in a non-threatening situation for a short period of time.
Preassessment: a way to determine what students know about a topic before it is taught. It should be:
- Teacher-prepared pretests
CLASSROOM MANAGEMENT
Teachers, Start Your Engines: Weekly Tip – Creating a Puzzling Classroom
See if you can figure this one out: “George, Helen, and Steve are drinking coffee. Bert, Karen, and Dave are drinking soda. Is Elizabeth drinking soda or coffee?” (It is possible to reason this out using logic.) Most of us love a good puzzle. Some are more drawn to spatial puzzles, others enjoy a good logic puzzle, still others math or situation puzzles.
In fact, you may even be distracted from this article right now because you are trying to figure out that puzzle I posted above. Mazes, jig-saw puzzles, brain-teasers, and short mysteries pose challenges that capture our imagination and our thoughts. Puzzles are also an excellent way to capture the attention of our students and encourage them to think on higher levels. Below are some ideas for incorporating puzzles into your classroom, no matter what you teach: 1. Pose a “puzzle of the week” every Monday. Students have all week to try to answer it. Have students place possible answers in a folder or large envelope to be opened on Friday. With younger students you might let them try throughout the week and then tell them whether they are “cold” or “hot”. Students who are “cold” might go back and rethink their answer. Students who are “hot” know they have it. 2. Add an object to your room that has to do with the topic or skill you are teaching. Challenge students to find the new object. (Original idea by Frederick Briehl) 3. Have a jig-saw puzzle “station” in the back of your classroom that students can work on during free time or when they finish their work early. A jig-saw puzzle is fun, but it also requires students to think logically and use spatial relationships to determine where pieces fit. If you can find a puzzle that relates to your novel, author, setting or location, art, music, math, science, or historical event – all the better. Use the internet to find topic related puzzles for students to solve. When working with pre-school and Kindergarten students, have a permanent puzzle station where students can put together different jig-saw puzzles. 4. Copy and paste logic puzzles and brain-teasers on the inside of a manila folder. Laminate the folder to last. This makes a portable “Thinking Center” that students can take to their seats and work on when finished with a class assignment or test. 5. Have students create their own jig-saw puzzles. Students could draw the setting or character in a story, create a timeline, draw a historical figure or event, or even create a mind-map or semantic web. If you have them do this on cardstock, turn the page over and have students draw different shapes and figures that interlock on the back. Cut along the lines and voila! You have a jig-saw puzzle. Craft stores such as Hobby Lobby and Michaels also sell jig-saw puzzle paper that is already shaped and pierced. Students simply draw on the sheet and then punch out the pieces. 6. Use a situation puzzle for a transition or time filler. Pose the statement or question to students and give them 20 questions to solve it. Two sources of situation puzzles are Jed’s List – http://www.kith.org/logos/things/sitpuz/situations.html and Nathan Levy’s book series – “Stories with Holes” – http://www.storieswithholes.com. You can also use a search engine to find situation puzzles. Just keep in mind that many are mini-mysteries and can include someone dying or being killed. Always check the puzzles for appropriateness before using in school and with certain ages. 7. Bring a wrapped present to class and situate it where everyone can see it. Don’t mention it or talk about it. When students ask, wave it off as nothing for them to worry about. This will drive them crazy. You might then pose a challenging question and tell students that the first to answer will get to open the present. Inside would be an object that relates to the topic of study. 
Ask your students at that point to identify why this particular object was chosen. This makes a great way to introduce a new topic. 8. A variation on the idea above is to put the objects in a box with an opening large enough for only a hand. Students feel the objects and try to guess at each. What do these objects have in common? How are they different? How does each object relate to the topic currently studied? 9. Have a Sudoku challenge on Fridays. The older students are, the more complicated the puzzle should be. 10. Use Crosswords and Word Searches when practicing definitions and vocabulary words. Cryptograms are also great for vocabulary and sentence practice. Students must use their knowledge of how sentences are formed to determine the “key” words that will help them decipher the puzzle. You can offer certain vocabulary words as clues to help determine the “key”. (Cryptograms are puzzles that substitute one letter for another. For example: a is really s, p is really a, and o is really t. The word might be “sat”, but in the puzzle it will show as “apo”. Once a word is deciphered, you use the “key” letters from that word to determine other words. Cryptograms are usually sentences and phrases.) Puzzles are fun, challenging, and require us to think critically in order to solve them. We must use our knowledge of spatial relationships, numbers, number relationships, words, and our experiences in the world to solve different puzzles. This makes them not only enjoyable, but also a great learning tool. The next time you have a boring worksheet or activity, take some time to think about how you can turn it into a puzzle or mystery for students to solve. Look for different ways to incorporate puzzles into your classroom for students to solve as part of your class and outside of class. Before you know it, you too will have a puzzling classroom! Still wondering about that puzzle above? Thought I’d leave you hanging, did you? Well, here’s the answer: Elizabeth is drinking coffee. She has two E’s in her name, just like everyone else in the puzzle drinking coffee (as well as coffee itself).
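For readers who want to see the cryptogram idea from tip 10 in action, here is a minimal sketch of a letter-substitution cipher in Python. The key below is hypothetical and generated at random purely for illustration; it is not a scheme taken from the tip itself.

```python
import string
import random

def make_key(seed=42):
    """Build a hypothetical substitution key: each letter maps to another letter."""
    letters = list(string.ascii_lowercase)
    shuffled = letters[:]
    random.Random(seed).shuffle(shuffled)
    return dict(zip(letters, shuffled))

def encode(text, key):
    """Replace each letter with its substitute; spaces and punctuation pass through."""
    return "".join(key.get(ch, ch) for ch in text.lower())

def decode(text, key):
    """Invert the key to recover the original sentence."""
    reverse = {v: k for k, v in key.items()}
    return "".join(reverse.get(ch, ch) for ch in text.lower())

key = make_key()
puzzle = encode("the cat sat on the mat", key)
print(puzzle)               # the scrambled sentence students would see
print(decode(puzzle, key))  # applying the "key" letters recovers the original
```

Solving by hand works the same way students do it in class: once a few “key” letters are guessed from a short, common word, the rest of the sentence falls into place.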
At its core, the study of history reveals choice. New Orleans provides an excellent exercise in the nature of historical inquiry. Why was any city placed in an environment so inhospitable to settlement and why was it so central to this nation's growth? New Orleans was always more than its locality, a national city from the beginning. The history of our nation is one of perpetual conflict between seemingly incompatible options — freedom and order, opportunity and expectation, inclusion and elitism, openness and privilege. We want our citizens and country to grow unfettered but need to insert safeguards. It can be a perilous balance. Placing a city on banks of a river in low lying areas, such as New Orleans, is a literal illustration of this paradox. The shifting ground underneath New Orleans is both a fact and a metaphor for this city. The French, along with the other European powers, extended their settlement of the Western Hemisphere throughout the seventeenth century (Figure 1). They first arrived at the islands of the Caribbean and then extended their interests to the North American continent. The French settled in Canada and then continued southward. By 1700, it was clear that they needed an outpost at mouth of the all important Mississippi River (Figure 2) but there was not a location close enough to the Gulf of Mexico that was not prone to flooding. By using the narrow inlet of Bayou St. John (Figure 3b), the distance between the two bodies of water was made shorter and easier for overland transportation of cargo and canoes. The natural levees along the river provided sufficient drainage and accessibility. The site of New Orleans was first settled by the Choctaws in 1699, literally an island formed as the portage met the river. The French needed to secure the area for more than just economic reasons. The Spanish wanted to expand their holdings in the Southern portion of the continent, so the French established themselves in New Orleans in 1718 (Figure 2). New Orleans was urban and European from the start. The territory, named Louisiana in honor of the Louis XIV, French king, was a mercantile colony, so the interests of the Crown and landowners merged. The needs of the colony were best served by a large, forced labor system and slavery became integral to its economic success. In the early years of the city, the terrain determined its growth. Each time the river overflowed, it left sediment on its banks, forming a natural levee that extended back one to two miles. Further away from the banks, softer materials settled, resulting in swamp and sludge. Only the poorest inhabited this area. Land closest to the river was the most desirable in terms of trade, transportation and safety. Over the next forty years, the natural levee was enhanced and by 1727 a four foot high bulwark stretched along the waterfront. French companies oversaw this construction. By the 1750s, the city covered a 66 block area. The French built a canal to connect Bayou St. John to the natural levee. The initial purpose of this construction was drainage but eventually boats began to unload cargo at the end of the canal. This basin would become the site of waterfront commerce (Figure 3). By end of Seven Years War in 1763, French power in North America weakened as the British exerted their dominance in the northern areas (Figure 5). King Louis XV gave the Louisiana territory to Spain, ruled by his cousin. By the end of the 18th century, Napoleon rose to power. 
He was determined to reassert French authority in the Western Hemisphere. He planned to use French possessions in the Caribbean (Figure 9) as a base to expand his influence onto the continent west of the Mississippi, still held by Spain. Napoleon negotiated a secret treaty with Spain in 1800 that returned lands in the Louisiana territory to the French. Included under the terms of this treaty was the port city of New Orleans. The importance of New Orleans to the growing American nation was clear. Residents of the new nation were rapidly populating areas west of the original thirteen colonies (Figure 6). All rivers were vital to the transportation and trade that would enable the young country to thrive. Particularly problematic was the law imposed by the French in 1802 that prohibited the long-standing practice of American ships using the port of New Orleans to transfer cargo from the Mississippi to ocean-going vessels. President Jefferson did not wish to engage the powerful French army in a military conflict, but his political base, farmers in the growing western portion of the country, insisted on a resolution. In 1803 Jefferson gave the American minister to France, Robert Livingston, instructions to purchase the city. Facing his own expansionist problems in Europe, Napoleon offered the entire territory to Jefferson. The American president had some questions regarding the constitutionality of this method of territorial acquisition, but yielded to the higher principle of providing room for an expanded arena that would enable his republican vision of the nation to flourish. By the terms of the treaty, the United States would pay France 15 million dollars. France, in turn, would receive certain exclusive commercial privileges at the port of New Orleans, and the residents of the Louisiana territory would be incorporated into the US with the same rights and privileges as other citizens. Despite its new status as an American city, New Orleans remained foreign in design, culture, and attitude. These sensibilities were reinforced as the port remained an important point of entry for the nations of South America and Europe. New Orleans was always more than its locality. Both its glories and its troubles frequently began elsewhere. This reality was evident in the problem of and solution to flooding. Much of the overflowing of the Mississippi was not caused in New Orleans but resulted from the city’s location at the basin of the continent’s largest drainage system. Despite the origin of the problem, the residents of New Orleans suffered the consequences and had to guard against it. During French and then Spanish rule, the crown paid for these safeguards. In the democratically elected United States, the parish government was responsible for flood protection, a task which it passed on to landowners. The result was a haphazard and inconsistent system. The city of New Orleans also taxed ships that used its waterfront to generate the revenue necessary to help maintain the levee system. The reality was that a larger, more coordinated effort was necessary, but state and federal initiatives were slow in coming. The United States of the 19th century was a story of expansion. The Mississippi River was more than geographically central in the North American continent. It was also at the core of the growing nation’s ability to extend itself. The advent of the steamboat made river travel dominant in the first half of the century.
In the decades before the Civil War, New Orleans was the primary trading spot of the seemingly unlimited bounty of the South — rice, sugar, and cotton. The city's population doubled in the first ten years after its acquisition. The Battle of New Orleans, although technically occurring after the end of the War of 1812, had enormous symbolic value as the city was able to withstand the British onslaught and defend American interests. The city would not fare as well a half century later. Its general status as an economic rather than a political entity, as well as its location, made New Orleans important to both the North and South. By capturing the city, Union forces were able to cut off a major supply portal to the South (Figure 13). Even after the conflict ended, its stagnation during the war had a lasting impact. But New Orleans was already a city vulnerable to competition from railroads and the increasing importance of manufacturing to the nation's wealth. The urban centers of industrialization, especially Chicago and St. Louis, overshadowed the city whose port remained its most vital economic organ. As the city grew and expanded, settlement patterns were directed towards the natural levee. River frontage was important but scarce, so the result was fan shaped landholdings, with a small section of riverfront. The best land was upriver from the original city (Figure 10), (Figure 12) and populated by wealthy slaveholders. The poorest whites lived in the swamps inland from the riverfront. Racial settlement patterns were typical of the slave south, where there was a close proximity between the servants and their masters. Emancipation did little to change the racial residential realities of New Orleans. The city grew in concentric circles and all sectors were affected by topography. The two biggest problems were flooding and drainage. The problems of the Mississippi River were literally and figuratively beyond the control of New Orleans, despite efforts to tax both planters and shippers to generate the revenue necessary to erect flood protection walls. By the 1840s, the state of Mississippi created the Office of State Engineer to oversee public works and flood prevention, which meant construction and maintenance of levees. The Federal government assumed authority in 1879 as it established the Mississippi River Commission. River floods were only part of the problem. Lake Pontchartrain (Figure 10) also overflowed with great frequency. The insufficient gradient of the city led to the other big problem — drainage. Again, a coordinated effort was necessary to address this issue and one that the fragmented city government did not implement. The result was a city literally mired in a disease prone setting that restricted urban settlement to the natural levee. New Orleans did not experience the same problems as other urban areas in terms of tenements and urban squalor. Open canals laden with sewage and decades of using the river as means of garbage disposal took their toll. Clean and safe water was rare. As the nineteenth century came to a close, there were sporadic efforts to address concerns of garbage and sewage removal. Civic leaders attempted to improve public space with parks, plazas, and tree lined boulevards (Figure 16). The qualities of efficiency, expertise, and broad structural assaults to address long standing underlying problems were markers of the Progressive era of first two decades of twentieth century. The city of New Orleans was in desperate need of this approach. 
Besides the problems of disease and flooding, the city was crowded and literally sagging beneath its weight in the soft soil of the natural levee. Technological innovation provided the opportunity to drain the swamplands and open the area between the river and Lake Pontchartrain for development. The city built an extensive network of pumping stations and canals, although it was an expensive and lengthy process. Swamps in the lowest-lying areas were drained and resulted in new residential areas well below sea level. The areas closest to the lakefront were restricted to whites. The new technology accelerated racial segregation in New Orleans, as price, municipal ordinances, and later deed covenants specifically excluded blacks from designated sectors. So while drainage opened new neighborhoods, restriction limited access. Drainage was only one aspect of urban improvement. Better sewage and water delivery systems were also installed as the city became determined to follow rational engineering principles, which necessitated even distribution of services. Inadequacy in one portion of town weakened the entire system, but Jim Crow laws prevailed and it took several decades until city services reached all quarters. New technology was applied to more than residential areas. Locks built in 1909 allowed intercoastal shipping to come through the Mississippi Delta. Innovation plus an appreciation for efficiency had an impact on the port as the government took control of these facilities. Because trade remained so crucial to the city’s economic vitality, politicians saw the need for improved administration. In 1923, the city opened the Inner Harbor Navigation Canal that linked the river with the Gulf of Mexico and allowed for deepwater dock space (Figure 16). To build this, engineers cut right through the Ninth Ward (Figure 12), (Figure 16) and (Figure 17) and isolated the area now known as the Lower Ninth. Although this neighborhood would eventually be connected by bridges, it was a precursor to the era of social engineering by wrecking ball, contributing to the ongoing debate over the value of urban renewal at the expense of neighborhood ecosystems. The area around the lake was developed into an airport and residential space. Much of the construction was done as part of the New Deal’s Works Progress Administration. The new lakefront was beautiful, but public lands were quickly converted to private ownership. Half of the lots were sold by the Levee Board to pay off the bonds that had funded the project. In theory, this land was available to everyone, but high prices and discrimination turned it into an exclusive, wealthy, and white enclave. At first the levee walls were kept low for aesthetic reasons, yet with the draining of the wetlands, the moisture-rich peat soils began to dry out and sink. The result was that the northern part of the city was not only below sea level, but below lake level as well (Figure 17). When hurricanes hit, both the river and the lake overflowed. This scenario occurred in 1947. The damage from this storm led to the construction of a fourteen-foot flood wall built with the help of the Army Corps of Engineers (Corps). Another problem was the pace of water accumulation, i.e., water comes in faster than it can be pumped out. Changes to New Orleans accelerated in the decades after World War Two. Economic considerations spurred this transformation. Changing shipping methods affected the city’s orientation both literally and figuratively.
Container shipping became the primary means of commercial water transportation but the Port of New Orleans was not designed to handle this traffic. The choice was either to tear down and rebuild the existing port or build new facilities elsewhere. The latter option was more feasible and the new facilities pulled city development in an eastward direction (Figure 17). Other economic sectors were developed, particularly in the area of petrochemicals. Those steps taken to accommodate these industries had an enormous impact on the environment. After the war, the federal government underwrote refining capacity in the South, especially along the Mississippi and the Gulf of Mexico. A series of oil and gas canals were built to create shipping shortcuts to the Port of New Orleans, such as MRGO — the Mississippi River-Gulf Outlet (Figure 17). These canals not only provided paths for ships, but they also allowed saltwater from the Gulf into the freshwater marshes and forests, destroying wetlands that provided protection for the delta's ecosystem which in turn worked as a natural buffer against hurricane damage. The shifting economic focus led to necessary changes in residential patterns, greatly assisted by two key pieces of construction. The first was a 22 mile causeway built across Lake Pontchartrain in 1959 (Figure 17). Now residents could live on the northern shore of the lake with no danger of flooding. The second, Interstate 10 (Figure 19), was completed in the 1960s, greatly reducing travel time from outlying areas. These areas provided solid land which facilitated residential growth. What resulted was an enormous demographic change within the city of New Orleans. Most of those who moved to the suburbs were white. Although the total population of the city would decline over the next few decades, blacks went from being one third of its population to a majority by 1974. The population decline was completely due to the white flight to suburbs that were often not welcoming to blacks. The city's decreasing tax base led to a decline in services and a general deterioration of such infrastructure elements as roads, transportation, hospitals, and schools. "White flight" left an increasingly black and working class population in the city. The residents of the city remain resilient and neighborhood support is crucial to survival. Always a city where people lived in close physical proximity due to limited waterfront acreage, inhabitants of New Orleans are used to sharing each others joys and sorrow. The destruction wrought by Hurricane Katrina has tested this resilience. Located at the mouth of Mississippi always entailed certain risks for New Orleans but the potential for danger from hurricane and flooding worsened as the 20th century progressed. Drainage allowed the city to expand but not towards high ground as it did in its first two centuries, but towards lower ground surrounded by man made levees. The result was a shallow saucer with its center below sea level. The reality is persistent risk, either from flooding or rainfall. Again, selection of the original location near the outfall of the continent's largest natural drainage system (Figure 10b) has often forced New Orleans to deal with problems that began elsewhere. The city sank due to the accumulated settlement of silt carried to the Mississippi Delta over thousands of years from a drainage basin that extends halfway across the North American continent. Human behavior has only made things worse. 
Canals have been built through the Delta, both for deep-water navigation and for shallow access points for oil and gas rigs (Figure 18d). One third of the nation’s oil and one quarter of its natural gas are either produced in or transported through the Gulf of Mexico, and marsh canals provide quick access. These incursions also lead to wetland erosion. There have been many attempts to address these problems, largely through what are termed structural solutions, i.e., building flood walls, levees, and flood gates to provide relief when too much water comes into the system. After a big flood in 1927, it became the task of the Army Corps of Engineers (Corps) to control floods. Because of the river’s national importance, this task has been supervised by the federal government. In the forty years since the last great disaster for the city, Hurricane Betsy in 1965, the Corps has been attending to the task, although it often found its hands tied by local problems, political infighting and a changing understanding of the nature of the problem. Rather than ask Congress to reconsider the problem and appropriate additional funds, the Corps implemented up to 90% of the original flood prevention plan, yet it was inadequate to the task of protecting the city. The Corps’ own solution, to build pumps and floodgates along Lake Pontchartrain that would have prevented much of Hurricane Katrina’s flooding, was not pursued because of objections from local officials. There are several problems with the structural approach that the city has embraced. Levees constructed to protect the city from hurricane flooding leave it more vulnerable to flooding from intense rainfalls. The presence of levees and floodwalls creates a false sense of security. They are short-term solutions when the real answers lie in changing how the land is used, i.e., moving away from dangerous areas, often a politically charged option. Before and after Hurricane Katrina, there has been talk of urban renewal that might have led to removal of some residents from the most flood-prone areas, but it has been met with a great deal of resistance. Nor will the food or petrochemical industries consider closing the delta canals and allowing the crucial work of rebuilding the wetlands to commence. Absent the willingness to make unpleasant choices, New Orleans literally and figuratively sinks into an untenable situation.
- What are the problems the city has faced from the beginning? Why has finding solutions to these problems been so difficult?
- Why was New Orleans so important to the developing American nation?
- Why is New Orleans important to the United States now?
- How has New Orleans attempted to adapt to its geographic limitations?
- Has technology been good for New Orleans?
A practical superlens, super lens or perfect lens, could significantly advance the field of optics and optical engineering. The principles governing the behavior of the superlens reveals resolution capabilities that go substantially beyond ordinary microscopes. As Ernst Abbe reported in 1873, the lens of a camera or microscope is incapable of capturing some very fine details of any given image. The super lens, on the other hand, is intended to capture these fine details. Consequently, conventional lens limitation has inhibited progress in certain areas of the biological sciences. This is because a virus or DNA molecule is out of visual range with the highest powered microscopes. Also, this limitation inhibits seeing the minute processes of cellular proteins moving alongside microtubules of a living cell in their natural environments. Additionally, computer chips and the interrelated microelectronics are manufactured to smaller and smaller scales. This requires specialized optical equipment, which is also limited because these use the conventional lens. Hence, the principles governing a super lens show that it has potential for imaging a DNA molecule and cellular protein processes, or aiding in the manufacture of even smaller computer chips and microelectronics. Furthermore, conventional lenses capture only the propagating light waves. These are waves that travel from a light source or an object to a lens, or the human eye. This can alternatively be studied as the far field. In contrast, the superlens, or perfect lens, captures propagating light waves and waves that stay on top of the surface of an object, which, alternatively, can be studied as both the far field and the near field. In other words, a superlens, super lens or perfect lens is a lens which uses metamaterials to go beyond the diffraction limit. The diffraction limit is an inherent limitation in conventional optical devices or lenses. In 2000, a type of lens was proposed that consisted of a metamaterial that compensates for wave decay and reconstructs images in the near field. In addition, both propagating and evanescent waves contribute to the resolution of the image. Theory and simulations show that the superlens and hyperlens can work, but engineering obstacles need to be overcome. An image of an object can be defined as a tangible or visible representation of the features of that object. A requirement for image formation is interaction with fields of electromagnetic radiation. Furthermore, the level of feature detail, or image resolution, is limited to a length of a wave of radiation. For example, with optical microscopy, image production and resolution depends on the length of a wave of visible light. However, with a superlens, this limitation may be removed, and a new class of image generated. Electron beam lithography can overcome this resolution limit. Optical microscopy, on the other hand cannot, being limited to some value just above 200 nanometers. However, new technologies combined with optical microscopy are beginning to allow for increased feature resolution (see sections below). One definition of being constrained by the resolution barrier, is a resolution cut off at half the wavelength of light. The visible spectrum has a range that extends from 390 nanometers to 750 nanometers. Green light, half way in between, is around 500 nanometers. Microscopy takes into account parameters such as lens aperture, distance from the object to the lens, and the refractive index of the observed material. 
This combination defines the resolution cutoff, or microscopy's optical limit, which works out to roughly 200 nanometers. Therefore, conventional lenses, which literally construct an image of an object by using "ordinary" light waves, discard the information that produces the very fine, minuscule details of the object contained in evanescent waves. These dimensions are less than 200 nanometers. For this reason, conventional optical systems, such as microscopes, have been unable to accurately image very small, nanometer-sized structures or nanometer-sized organisms in vivo, such as individual viruses or DNA molecules. The limitations of standard optical microscopy (bright field microscopy) lie in three areas:
- The technique can only image dark or strongly refracting objects effectively.
- Diffraction limits the resolution of the object, or cell, to approximately 200 nanometers.
- Out-of-focus light from points outside the focal plane reduces image clarity.
Live biological cells in particular generally lack sufficient contrast to be studied successfully, because the internal structures of the cell are colorless and transparent. The most common way to increase contrast is to stain the different structures with selective dyes, but this involves killing and fixing the sample. Staining may also introduce artifacts, apparent structural details that are caused by the processing of the specimen and are thus not a legitimate feature of the specimen. The conventional glass lens is pervasive throughout our society and in the sciences. It is one of the fundamental tools of optics. However, the wavelength of light is analogous to the width of the pencil used to draw the ordinary images. The limit becomes noticeable, for example, when the laser used in a digital video system can only detect and deliver details from a DVD based on the wavelength of light. The image cannot be rendered any sharper beyond this limitation. When an object emits or reflects light, there are two types of electromagnetic radiation associated with this phenomenon: the near field radiation and the far field radiation. As implied by its description, the far field escapes beyond the object. It is then easily captured and manipulated by a conventional glass lens. However, useful (nanometer-sized) resolution details are not observed, because they are hidden in the near field. They remain localized, staying much closer to the light-emitting object, unable to travel, and unable to be captured by the conventional lens. Controlling the near field radiation, for high resolution, can be accomplished with a new class of materials not found in nature. These are unlike familiar solids, such as crystals, which derive their properties from atomic and molecular units. The new material class, termed metamaterials, obtains its properties from its artificially larger structure. This has resulted in novel properties, and novel responses, which allow for details of images that surpass the limitations imposed by the wavelength of light. This has led to the desire to view live biological cell interactions in a real-time, natural environment, and the need for subwavelength imaging. Subwavelength imaging can be defined as optical microscopy with the ability to see details of an object or organism below the wavelength of visible light (see discussion in the above sections). In other words, it is the capability to observe, in real time, below 200 nanometers. 
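The roughly 200-nanometer cutoff quoted above follows from the Abbe criterion, d ≈ λ/(2·NA), where NA is the numerical aperture of the objective. The short Python sketch below is not from the source text; it simply works through that arithmetic for green light with an assumed numerical aperture of 1.4 (a typical high-end oil-immersion objective), to show where the figure comes from.

```python
# Rough sketch (not from the source text): the Abbe diffraction limit
# d = wavelength / (2 * NA) for a conventional microscope objective.
# The numerical aperture of 1.4 is an assumed illustrative value.

def abbe_limit_nm(wavelength_nm: float, numerical_aperture: float) -> float:
    """Smallest resolvable feature separation, in nanometers."""
    return wavelength_nm / (2.0 * numerical_aperture)

if __name__ == "__main__":
    green_light_nm = 500.0   # mid-visible wavelength cited above
    na = 1.4                 # assumed oil-immersion objective
    print(f"Abbe limit: {abbe_limit_nm(green_light_nm, na):.0f} nm")
    # prints roughly 180 nm, consistent with the ~200 nm cutoff quoted above
```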
Optical microscopy is a non-invasive technique and technology because everyday light is the transmission medium. Imaging below the optical limit in optical microscopy (subwavelength imaging) can be engineered for the cellular level, and in principle for the nanometer level. For example, in 2007 a technique was demonstrated where a metamaterials-based lens coupled with a conventional optical lens could manipulate visible light to see (nanoscale) patterns that were too small to be observed with an ordinary optical microscope. This has potential applications not only for observing a whole living cell, but also for observing cellular processes, such as how proteins and fats move in and out of cells. In the technology domain, it could be used to improve the first steps of photolithography and nanolithography, essential for manufacturing ever smaller computer chips. Focusing at subwavelength scales has become a unique imaging technique which allows visualization of features on the viewed object which are smaller than the wavelength of the photons in use. A photon is the minimum unit of light. While previously thought to be physically impossible, subwavelength imaging has been made possible through the development of metamaterials. This is generally accomplished using a layer of metal such as gold or silver a few atoms thick, which acts as a superlens, or by means of 1D and 2D photonic crystals. There is a subtle interplay between propagating waves, evanescent waves, near field imaging and far field imaging, discussed in the sections below.
Early subwavelength imaging
Metamaterial lenses (superlenses) are able to compensate for the exponential evanescent wave decay via a negative refractive index, and in essence reconstruct the image. Prior to metamaterials, proposals were advanced in the 1970s to avoid this evanescent decay. For example, in 1974 proposals for two-dimensional fabrication techniques were presented. These proposals included contact imaging to create a pattern in relief, photolithography, electron lithography, X-ray lithography, or ion bombardment, on an appropriate planar substrate. The metamaterial lens and these varieties of lithography share the goal of optically resolving features having dimensions much smaller than the vacuum wavelength of the exposing light. In 1981, two different techniques of contact imaging of planar (flat) submicroscopic metal patterns with blue light (400 nm) were demonstrated. One demonstration resulted in an image resolution of 100 nm and the other a resolution of 50 to 70 nm. Since at least 1998, near field optical lithography has been used to create nanometer-scale features. Research on this technology continued as the first experimentally demonstrated negative index metamaterial came into existence in 2000–2001. The effectiveness of electron-beam lithography was also being researched at the beginning of the new millennium for nanometer-scale applications. Imprint lithography was shown to have desirable advantages for nanometer-scaled research and technology. Advanced deep UV photolithography can now offer sub-100 nm resolution, yet the minimum feature size and spacing between patterns are determined by the diffraction limit of light. Its derivative technologies, such as evanescent near-field lithography, near-field interference lithography, and phase-shifting mask lithography, were developed to overcome the diffraction limit. 
The first superlens (2004) with a negative refractive index provided resolution three times better than the diffraction limit and was demonstrated at microwave frequencies. In 2005, the first near field superlens was demonstrated by N. Fang et al., but the lens did not rely on negative refraction. Instead, a thin silver film was used to enhance the evanescent modes through surface plasmon coupling. Almost at the same time, Melville and Blaikie succeeded with a near field superlens. Other groups followed. Two developments in superlens research were reported in 2008. In one of these, a metamaterial was formed from silver nanowires which were electrochemically deposited in porous aluminium oxide. The material exhibited negative refraction. The superlens has not yet been demonstrated at visible or near-infrared frequencies (Nielsen, R. B.; 2010). Furthermore, as dispersive materials, these are limited to functioning at a single wavelength. Proposed solutions are metal–dielectric composites (MDCs) and multilayer lens structures. The multi-layer superlens appears to have better subwavelength resolution than the single layer superlens. Losses are less of a concern with the multi-layer system, but so far it appears to be impractical because of impedance mismatch. When the world is observed through conventional lenses, the sharpness of the image is determined by and limited to the wavelength of light. Around the year 2000, a slab of negative index metamaterial was theorized to create a lens with capabilities beyond conventional (positive index) lenses. Sir John Pendry, a British physicist, proposed that a thin slab of negative refractive metamaterial might overcome known problems with common lenses to achieve a "perfect" lens that would focus the entire spectrum, both the propagating as well as the evanescent spectra. A slab of silver was proposed as the metamaterial. As light moves away (propagates) from the source, it acquires an arbitrary phase. Through a conventional lens the phase remains consistent, but the evanescent waves decay exponentially. In the flat metamaterial DNG slab, normally decaying evanescent waves are contrarily amplified. Furthermore, as the evanescent waves are now amplified, the phase is reversed. Therefore, a type of lens was proposed, consisting of a metal film metamaterial. When illuminated near its plasma frequency, the lens could be used for superresolution imaging that compensates for wave decay and reconstructs images in the near field. In addition, both propagating and evanescent waves contribute to the resolution of the image. Pendry suggested that left-handed slabs allow "perfect imaging" if they are completely lossless, impedance matched, and their refractive index is −1 relative to the surrounding medium. Theoretically, this would be a breakthrough in that the optical version could resolve objects as minuscule as nanometers across. Pendry predicted that double negative metamaterials (DNG) with a refractive index of n = −1 can act, at least in principle, as a "perfect lens" allowing imaging resolution which is limited not by the wavelength, but rather by material quality.
Other studies concerning the perfect lens
Further research demonstrated that Pendry's theory behind the perfect lens was not exactly correct. The analysis of the focusing of the evanescent spectrum (equations 13–21 in reference ) was flawed. 
In addition, this applies to only one (theoretical) instance: one particular medium that is lossless and nondispersive, with constituent parameters defined as:
- ε(ω) / ε0 = µ(ω) / µ0 = −1, which in turn results in a negative refractive index of n = −1.
However, the final intuitive result of this theory, that both the propagating and evanescent waves are focused, resulting in a converging focal point within the slab and another convergence (focal point) beyond the slab, turned out to be correct. If the DNG metamaterial medium has a large negative index or becomes lossy or dispersive, Pendry's perfect lens effect cannot be realized. As a result, the perfect lens effect does not exist in general. According to FDTD simulations at the time (2001), the DNG slab acts like a converter from a pulsed cylindrical wave to a pulsed beam. Furthermore, in reality (in practice), a DNG medium must be, and is, dispersive and lossy, which can have either desirable or undesirable effects, depending on the research or application. Consequently, Pendry's perfect lens effect is inaccessible with any metamaterial designed to be a DNG medium. Another analysis of the perfect lens concept, in 2002, showed it to be in error while using the lossless, dispersionless DNG as the subject. This analysis mathematically demonstrated that subtleties of evanescent waves, restriction to a finite slab, and absorption had led to inconsistencies and divergences that contradict the basic mathematical properties of scattered wave fields. For example, this analysis stated that absorption, which is linked to dispersion, is always present in practice, and absorption tends to transform amplified waves into decaying ones inside this medium (DNG). A third analysis of Pendry's perfect lens concept, published in 2003, used the then-recent demonstration of negative refraction at microwave frequencies as confirming the viability of the fundamental concept of the perfect lens. In addition, this demonstration was thought to be experimental evidence that a planar DNG metamaterial would refocus the far field radiation of a point source. However, the perfect lens would require significantly different values for permittivity, permeability, and spatial periodicity than the demonstrated negative refractive sample. This study agrees that any deviation from the conditions where ε = µ = −1 results in the normal, conventional, imperfect image that degrades exponentially, i.e., the diffraction limit. The perfect lens solution in the absence of losses is, again, not practical and can lead to paradoxical interpretations. It was determined that although resonant surface plasmons are undesirable for imaging, these turn out to be essential for recovery of decaying evanescent waves. This analysis discovered that metamaterial periodicity has a significant effect on the recovery of types of evanescent components. In addition, achieving subwavelength resolution is possible with current technologies. Negative refractive indices have been demonstrated in structured metamaterials. Such materials can be engineered to have tunable material parameters, and so achieve the optimal conditions. Losses can be minimized in structures utilizing superconducting elements. Furthermore, consideration of alternate structures may lead to configurations of left-handed materials that can achieve subwavelength focusing. Such structures were being studied at the time. 
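To make the decay-versus-amplification argument above concrete, the following Python sketch is an idealized illustration (my own, not taken from the cited analyses; the feature size, slab thickness, and loss shortfall are assumed values). Outside any lens, an evanescent component tied to a sub-wavelength feature of size a falls off roughly as exp(−κz) with κ ≈ 2π/a; an ideal lossless ε = µ = −1 slab of thickness d multiplies it by exp(+κd), exactly cancelling the decay over an equal path, while absorption leaves the compensation incomplete.

```python
import math

# Idealized sketch (illustration only, values assumed): an evanescent
# component associated with a feature of size 'a' decays in free space
# roughly as exp(-kappa * z), with kappa ~ 2*pi/a when a << wavelength.
# A lossless n = -1 slab of thickness d would multiply it by exp(+kappa*d),
# restoring the amplitude; absorption reduces that growth factor.

feature_nm = 60.0                      # assumed sub-wavelength feature size
kappa = 2.0 * math.pi / feature_nm     # 1/nm, crude estimate of decay constant
d_nm = 40.0                            # assumed slab thickness / free-space gap

decay = math.exp(-kappa * d_nm)             # free-space decay over d
ideal_gain = math.exp(+kappa * d_nm)        # ideal lossless slab growth over d
lossy_gain = math.exp(+0.7 * kappa * d_nm)  # assumed 30% shortfall from loss

print(f"free-space decay over {d_nm} nm: {decay:.2e}")
print(f"ideal slab compensation:        {decay * ideal_gain:.2f}")  # ~1.0
print(f"with assumed absorption:        {decay * lossy_gain:.2e}")  # < 1: incomplete
```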
Near-field imaging with magnetic wires
Pendry's theoretical lens was designed to focus both propagating waves and the near-field evanescent waves. From permittivity "ε" and magnetic permeability "µ" an index of refraction "n" is derived. The index of refraction determines how light is bent on traversing from one material to another. In 2003, it was suggested that a metamaterial constructed with alternating, parallel layers of n = −1 materials and n = +1 materials would be a more effective design for a metamaterial lens. It is an effective medium made up of a multi-layer stack, which exhibits birefringence, with nz = ∞ and nx = 0. The effective refractive indices are then perpendicular and parallel, respectively. Like a conventional lens, the z-direction is along the axis of the roll. The resonant frequency (ω0) – close to 21.3 MHz – is determined by the construction of the roll. Damping is achieved by the inherent resistance of the layers and the lossy part of the permittivity. The details of construction are found in ref. Simply put, as the field pattern is transferred from the input to the output face of a slab, so the image information is transported across each layer. This was experimentally demonstrated. To test the two-dimensional imaging performance of the material, an antenna was constructed from a pair of anti-parallel wires in the shape of the letter M. This generated a line of magnetic flux, so providing a characteristic field pattern for imaging. It was placed horizontally, and the material, consisting of 271 Swiss rolls tuned to 21.5 MHz, was positioned on top of it. The material does indeed act as an image transfer device for the magnetic field. The shape of the antenna is faithfully reproduced in the output plane, both in the distribution of the peak intensity, and in the "valleys" that bound the M. A consistent characteristic of the very near (evanescent) field is that the electric and magnetic fields are largely decoupled. This allows for nearly independent manipulation of the electric field with the permittivity and of the magnetic field with the permeability. Furthermore, this is a highly anisotropic system. Therefore, the transverse (perpendicular) components of the EM field which radiate the material, that is the wavevector components kx and ky, are decoupled from the longitudinal component kz. So, the field pattern should be transferred from the input to the output face of a slab of material without degradation of the image information.
Optical super lens with silver metamaterial
In 2003, a group of researchers showed that optical evanescent waves would be enhanced as they passed through a silver metamaterial lens. This was referred to as a diffraction-free lens. Although a coherent, high-resolution image was neither intended nor achieved, regeneration of the evanescent field was experimentally demonstrated. By 2003 it had been known for decades that evanescent waves could be enhanced by producing excited states at the interface surfaces. However, the use of surface plasmons to reconstruct evanescent components was not tried until Pendry's recent proposal (see "Perfect lens" above). By studying films of varying thickness, it was noted that a rapidly growing transmission coefficient occurs under the appropriate conditions. This demonstration provided direct evidence that the foundation of superlensing is solid, and suggested the path that would enable the observation of superlensing at optical wavelengths. 
In 2005, a coherent, high-resolution image was produced (based on the 2003 results). A thinner slab of silver (35 nm) was better for sub-diffraction-limited imaging, which resulted in a resolution of one-sixth of the illumination wavelength. This type of lens was used to compensate for wave decay and reconstruct images in the near field. Prior attempts to create a working superlens had used a slab of silver that was too thick. Objects were imaged as small as 40 nm across. In 2005, the imaging resolution limit for optical microscopes was about one-tenth the diameter of a red blood cell. With the silver superlens, this improves to a resolution of one-hundredth of the diameter of a red blood cell. Conventional lenses, whether man-made or natural, create images by capturing the propagating light waves all objects emit and then bending them. The angle of the bend is determined by the index of refraction, and had always been positive until the fabrication of artificial negative index materials. Objects also emit evanescent waves that carry details of the object, but these are unobtainable with conventional optics. Such evanescent waves decay exponentially and thus never become part of the image resolution, an optics threshold known as the diffraction limit. Breaking this diffraction limit and capturing evanescent waves are critical to the creation of a 100-percent perfect representation of an object. In addition, conventional optical materials suffer a diffraction limit because only the propagating components are transmitted (by the optical material) from a light source. The non-propagating components, the evanescent waves, are not transmitted. Moreover, lenses that improve image resolution by increasing the index of refraction are limited by the availability of high-index materials, and point-by-point subwavelength imaging by electron microscopy also has limitations when compared to the potential of a working superlens. Scanning electron and atomic force microscopes are now used to capture detail down to a few nanometers. However, such microscopes create images by scanning objects point by point, which means they are typically limited to non-living samples, and image capture times can take up to several minutes. With current optical microscopes, scientists can only make out relatively large structures within a cell, such as its nucleus and mitochondria. With a superlens, optical microscopes could one day reveal the movements of individual proteins traveling along the microtubules that make up a cell's skeleton, the researchers said. Optical microscopes can capture an entire frame with a single snapshot in a fraction of a second. With superlenses, this opens up nanoscale imaging to living materials, which can help biologists better understand cell structure and function in real time. Advances in magnetic coupling in the THz and infrared regime supported the realization of a possible metamaterial superlens. However, in the near field, the electric and magnetic responses of materials are decoupled. Therefore, for transverse magnetic (TM) waves, only the permittivity needed to be considered. Noble metals then become natural selections for superlensing because negative permittivity is easily achieved. By designing the thin metal slab so that the surface current oscillations (the surface plasmons) match the evanescent waves from the object, the superlens is able to substantially enhance the amplitude of the field. Superlensing results from the enhancement of evanescent waves by surface plasmons. 
The key to the superlens is its ability to significantly enhance and recover the evanescent waves that carry information at very small scales. This enables imaging well below the diffraction limit. No lens is yet able to completely reconstitute all the evanescent waves emitted by an object, so the goal of a 100-percent perfect image will persist. However, many scientists believe that a true perfect lens is not possible because there will always be some energy absorption loss as the waves pass through any known material. In comparison, the superlens image is substantially better than the one created without the silver superlens.
50-nm flat silver layer
In February 2004, an electromagnetic radiation focusing system, based on a negative index metamaterial plate, accomplished subwavelength imaging in the microwave domain. This showed that obtaining separated images at much less than the wavelength of light is possible. Also, in 2004, a silver layer was used for sub-micrometre near-field imaging. Super resolution was not achieved, but this was as intended: the silver layer was too thick to allow significant enhancement of the evanescent field components. In early 2005, feature resolution was achieved with a different silver layer. Though this was not an actual image, that too was as intended; dense feature resolution down to 250 nm was produced in a 50 nm thick photoresist using illumination from a mercury lamp. Using simulations (FDTD), the study noted that resolution improvements could be expected for imaging through silver lenses, rather than another method of near field imaging. Building on this prior research, super resolution was achieved at optical frequencies using a 50 nm flat silver layer. The capability of resolving an image beyond the diffraction limit, for far-field imaging, is defined here as superresolution. The image fidelity is much improved over the earlier results of the previous experimental lens stack. Imaging of sub-micrometre features has been greatly improved by using thinner silver and spacer layers, and by reducing the surface roughness of the lens stack. The ability of the silver lenses to image gratings has been used as the ultimate resolution test, as there is a concrete limit on the ability of a conventional (far field) lens to image a periodic object – in this case the object is a diffraction grating. For normal-incidence illumination, the minimum spatial period that can be resolved with wavelength λ through a medium with refractive index n is λ/n. Zero contrast would therefore be expected in any (conventional) far-field image below this limit, no matter how good the imaging resist might be. For the (super) lens stack used here, the computed diffraction-limited resolution is 243 nm. Gratings with periods from 500 nm down to 170 nm were imaged, with the depth of the modulation in the resist reducing as the grating period reduces. All of the gratings with periods above the diffraction limit (243 nm) are well resolved. The key results of this experiment are the super-imaging of sub-diffraction-limit gratings with 200 nm and 170 nm periods. In both cases the gratings are resolved, even though the contrast is diminished, giving experimental confirmation of Pendry's superlensing proposal. 
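As a quick plausibility check on the 243 nm figure above, the following back-of-the-envelope Python sketch uses an assumed mercury i-line wavelength of 365 nm and an assumed spacer/resist refractive index of about 1.5 (neither value is stated in the text); with those assumptions, λ/n reproduces the quoted diffraction-limited period and confirms that the 200 nm and 170 nm gratings lie below it.

```python
# Back-of-the-envelope check (assumed values, not given in the text):
# the minimum resolvable grating period at normal incidence is lambda / n.
wavelength_nm = 365.0   # assumed mercury i-line illumination
n_medium = 1.5          # assumed refractive index of the spacer/resist

diffraction_limited_period = wavelength_nm / n_medium
print(f"lambda/n = {diffraction_limited_period:.0f} nm")   # ~243 nm

for period in (500, 350, 250, 200, 170):
    tag = "sub-diffraction" if period < diffraction_limited_period else "conventional"
    print(f"{period} nm grating: {tag}")
```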
Negative index GRIN lenses
Gradient Index (GRIN) – The larger range of material response available in metamaterials should lead to improved GRIN lens design. In particular, since the permittivity and permeability of a metamaterial can be adjusted independently, metamaterial GRIN lenses can presumably be better matched to free space. The GRIN lens is constructed by using a slab of NIM with a variable index of refraction in the y direction, perpendicular to the direction of propagation z.
Transmission properties of an optical far-field superlens
Also in 2005, a group proposed a theoretical way to overcome the near-field limitation using a new device termed a far-field superlens (FSL), which is a properly designed, periodically corrugated, metallic slab-based superlens.
Metamaterial crystal lens
An idea for far-field scanless optical microscopy, with a resolution below the diffraction limit, was investigated by exploiting the special dispersion characteristics of an anisotropic metamaterial crystal.
Metamaterial lens goes from near field to far field
Imaging was experimentally demonstrated in the far field, taking the next step after near-field experiments. The key element is termed a far-field superlens (FSL), which consists of a conventional superlens and a nanoscale coupler.
Focusing beyond the diffraction limit with far-field time reversal
An approach was presented for subwavelength focusing of microwaves using both a time-reversal mirror placed in the far field and a random distribution of scatterers placed in the near field of the focusing point. Once the capability for near-field imaging was demonstrated, the next step was to project a near-field image into the far field. This concept, including technique and materials, is dubbed "hyperlens". The capability of a metamaterial hyperlens for sub-diffraction-limited imaging is shown below.
Sub-diffraction imaging in the far field
With conventional optical lenses, the far field is a limit that is too distant for evanescent waves to arrive intact. When imaging an object, this limits the optical resolution of lenses to the order of the wavelength of light. The non-propagating evanescent waves carry detailed information in the form of high spatial frequencies and could overcome this limitation. Therefore, projecting image details normally limited by diffraction into the far field requires recovery of the evanescent waves. In essence, the step leading up to this investigation and demonstration was the employment of an anisotropic metamaterial with a hyperbolic dispersion. The effect was such that ordinary evanescent waves propagate along the radial direction of the layered metamaterial. On a microscopic level, the large spatial frequency waves propagate through coupled surface plasmon excitations between the metallic layers. In 2007, just such an anisotropic metamaterial was employed as a magnifying optical hyperlens. The hyperlens consisted of a curved periodic stack of thin silver and alumina layers (35 nanometers thick) deposited on a half-cylindrical cavity fabricated on a quartz substrate. The radial and tangential permittivities have different signs. Upon illumination, the scattered evanescent field from the object enters the anisotropic medium and propagates along the radial direction. Combined with another effect of the metamaterial, a magnified image forms at the outer diffraction-limit boundary of the hyperlens. Once the magnified feature is larger than (beyond) the diffraction limit, it can then be imaged with a conventional optical microscope, thus demonstrating magnification and projection of a sub-diffraction-limited image into the far field. 
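The role of the opposite-sign radial and tangential permittivities can be illustrated with the textbook hyperbolic dispersion relation for TM waves in such an anisotropic medium, kr²/εt + kt²/εr = (ω/c)². The Python sketch below uses assumed permittivity values, not the experimental ones: in an ordinary (elliptic) medium the radial wave number turns imaginary once the tangential wave number exceeds the light line, whereas in the hyperbolic medium it stays real, which is why high-spatial-frequency (evanescent) detail can propagate outward along the radius.

```python
import cmath

# Illustration of hyperbolic vs. ordinary dispersion (assumed permittivities,
# not the experimental values): TM dispersion  kr^2/eps_t + kt^2/eps_r = k0^2.

def radial_k(kt, k0, eps_r, eps_t):
    """Radial wave number for a given tangential wave number kt."""
    return cmath.sqrt((k0**2 - kt**2 / eps_r) * eps_t)

k0 = 1.0                      # free-space wave number (normalized)
for kt in (0.5, 2.0, 5.0):    # tangential wave numbers in units of k0
    kr_ordinary = radial_k(kt, k0, eps_r=+2.0, eps_t=+2.0)    # ordinary medium
    kr_hyperbolic = radial_k(kt, k0, eps_r=-2.0, eps_t=+2.0)  # hyperbolic medium
    print(f"kt={kt}: ordinary kr={kr_ordinary:.2f}, hyperbolic kr={kr_hyperbolic:.2f}")

# In the ordinary case kr becomes imaginary (evanescent) for large kt;
# in the hyperbolic case kr stays real, so fine detail propagates radially.
```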
The hyperlens magnifies the object by transforming the scattered evanescent waves into propagating waves in the anisotropic medium, projecting a high-spatial-resolution image into the far field. This type of metamaterials-based lens, paired with a conventional optical lens, is therefore able to reveal patterns too small to be discerned with an ordinary optical microscope. In one experiment, the lens was able to distinguish two 35-nanometer lines etched 150 nanometers apart. Without the metamaterials, the microscope showed only one thick line. In a control experiment, the line pair object was imaged without the hyperlens. The line pair could not be resolved because the diffraction limit of the (optical) aperture was 260 nm. Because the hyperlens supports the propagation of a very broad spectrum of wave vectors, it can magnify arbitrary objects with sub-diffraction-limited resolution. The recorded image of the letters "ON" shows the fine features of the object. Although this work appears to be limited by being only a cylindrical hyperlens, the next step is to design a spherical lens. That lens will exhibit three-dimensional capability. Near-field optical microscopy uses a tip to scan an object. In contrast, this optical hyperlens magnifies an image that is sub-diffraction-limited. The magnified sub-diffraction image is then projected into the far field. The optical hyperlens shows notable potential for applications such as real-time biomolecular imaging and nanolithography. Such a lens could be used to watch cellular processes that have been impossible to see. Conversely, it could be used to project an image with extremely fine features onto a photoresist as a first step in photolithography, a process used to make computer chips. The hyperlens also has applications for DVD technology. In 2010, a spherical hyperlens for two-dimensional imaging at visible frequencies was demonstrated experimentally. The spherical hyperlens, based on alternating silver and titanium oxide layers, has strong anisotropic hyperbolic dispersion, allowing super-resolution in the visible spectrum. The resolution is 160 nm in the visible spectrum. It should enable biological imaging, for example of cells and DNA, with the strong benefit of magnifying sub-diffraction resolution into the far field. See also plasmon-assisted microscopy (near-field scanning optical microscopy).
Super-imaging in the visible frequency range
Continual improvements in optical microscopy are needed to keep up with the progress in nanotechnology and microbiology. Advancement in spatial resolution is key. Conventional optical microscopy is limited by a diffraction limit which is on the order of 200 nanometers (wavelength). This means that viruses, proteins, DNA molecules and many other samples are hard to observe with a regular (optical) microscope. The lens previously demonstrated with negative refractive index material, a thin planar superlens, does not provide magnification beyond the diffraction limit of conventional microscopes. Therefore, images smaller than the conventional diffraction limit will still be unavailable. Another approach to achieving super-resolution at visible wavelengths is the recently developed spherical hyperlens based on alternating silver and titanium oxide layers. It has strong anisotropic hyperbolic dispersion allowing super-resolution by converting evanescent waves into propagating waves. 
This method is non-fluorescence-based super-resolution imaging, resulting in real-time imaging without any reconstruction of images or information.
Super resolution far-field microscopy techniques
By 2008, the diffraction limit had been surpassed and lateral imaging resolutions of 20 to 50 nm had been achieved by several "super-resolution" far-field microscopy techniques, including stimulated emission depletion (STED) and its related RESOLFT (reversible saturable optically linear fluorescent transitions) microscopy; saturated structured illumination microscopy (SSIM); stochastic optical reconstruction microscopy (STORM); photoactivated localization microscopy (PALM); and other methods using similar principles.
Cylindrical superlens via coordinate transformation
This began with a proposal by Sir John Pendry in 2003. Magnifying the image required a new design concept in which the surface of the negatively refracting lens is curved. One cylinder touches another cylinder, resulting in a curved cylindrical lens which reproduces the contents of the smaller cylinder in magnified but undistorted form outside the larger cylinder. Coordinate transformations are required to curve the original perfect lens into the cylindrical lens structure. In 2007, a superlens utilizing coordinate transformation was again the subject. However, in addition to image transfer, other useful operations were discussed: translation, rotation, mirroring, and inversion, as well as the superlens effect. Furthermore, elements that perform magnification are described which are free from geometric aberrations on both the input and output sides, while utilizing free space sourcing (rather than a waveguide). These magnifying elements also operate in the near and far field, transferring the image from near field to far field.
Nano-optics with metamaterials
Nanohole array subwavelength imaging
Nanohole array as a lens
Prior work (2007) demonstrated that a quasi-periodic array of nanoholes in a metal screen was able to focus the optical energy of a plane wave to form subwavelength spots (hot spots). The distance to the spots was a few tens of wavelengths on the other side of the array, in other words, on the side opposite the incident plane wave. The quasi-periodic array of nanoholes functioned as a light concentrator. In June 2008, this was followed by the demonstrated capability of an array of quasi-crystal nanoholes in a metal screen. Rather than merely concentrating hot spots, an image of the point source is displayed a few tens of wavelengths from the array, on the other side of the array (the image plane). Also, this type of array exhibited a 1-to-1 linear displacement, from the location of the point source to its respective, parallel location on the image plane; in other words, from x to x + δx. For example, other point sources were similarly displaced from x' to x' + δx', from x^ to x^ + δx^, and from x^^ to x^^ + δx^^, and so on. Instead of functioning as a light concentrator, this performs the function of conventional lens imaging with a 1-to-1 correspondence, albeit with a point source. However, resolution of more complicated structures can be achieved as constructions of multiple point sources. The fine details, and the brighter image, that are normally associated with the high numerical apertures of conventional lenses can be reliably produced. Notable applications for this technology arise when conventional optics is not suitable for the task at hand. 
For example, this technology is better suited for X-ray imaging, nano-optical circuits, and so forth. The metamaterial nanolens was constructed of millions of nanowires 20 nanometers in diameter. These were precisely aligned and assembled into a packaged configuration. The lens is able to depict a clear, high-resolution image of nano-sized objects because it uses both normal propagating EM radiation and evanescent waves to construct the image. Super-resolution imaging was demonstrated over a distance of 6 times the wavelength (λ), in the far field, with a resolution of at least λ/4. This is a significant improvement over previous research and demonstrations of other near field and far field imaging, including nanohole arrays discussed below.
Light transmission properties of holey metal films
In 2009, the light transmission properties of holey metal films in the metamaterial limit, where the unit length of the periodic structures is much smaller than the operating wavelength, were analyzed theoretically.
Transporting an image through a subwavelength hole
Theoretically, it appears possible to transport a complex electromagnetic image through a tiny subwavelength hole with a diameter considerably smaller than the diameter of the image, without losing the subwavelength details.
Nanoparticle imaging – quantum dots
When observing the complex processes in a living cell, significant processes (changes) or details are easy to overlook. This can more easily occur when watching changes that take a long time to unfold and require high-spatial-resolution imaging. However, recent research offers a solution to scrutinize activities that occur over hours or even days inside cells, potentially solving many of the mysteries associated with molecular-scale events occurring in these tiny organisms. A joint research team, working at the National Institute of Standards and Technology (NIST) and the National Institute of Allergy and Infectious Diseases (NIAID), has discovered a method of using nanoparticles to illuminate the cellular interior and reveal these slow processes. Nanoparticles, thousands of times smaller than a cell, have a variety of applications. One type of nanoparticle, called a quantum dot, glows when exposed to light. These semiconductor particles can be coated with organic materials, which are tailored to be attracted to specific proteins within the part of a cell a scientist wishes to examine. Notably, quantum dots last longer than many organic dyes and fluorescent proteins that were previously used to illuminate the interiors of cells. They also have the advantage of allowing changes in cellular processes to be monitored, while most high-resolution techniques, such as electron microscopy, only provide images of cellular processes frozen at one moment. Using quantum dots, cellular processes involving the dynamic motions of proteins are observable. The research focused primarily on characterizing quantum dot properties, contrasting them with other imaging techniques. In one example, quantum dots were designed to target a specific type of human red blood cell protein that forms part of a network structure in the cell's inner membrane. When these proteins cluster together in a healthy cell, the network provides mechanical flexibility to the cell so it can squeeze through narrow capillaries and other tight spaces. But when the cell gets infected with the malaria parasite, the structure of the network protein changes. 
Because the clustering mechanism is not well understood, it was decided to examine it with the quantum dots. If a technique could be developed to visualize the clustering, then the progress of a malaria infection, which has several distinct developmental stages, could be understood. Research efforts revealed that as the membrane proteins bunch up, the quantum dots attached to them are induced to cluster themselves and glow more brightly, permitting real-time observation as the clustering of proteins progresses. More broadly, the research discovered that when quantum dots attach themselves to other nanomaterials, the dots' optical properties change in unique ways in each case. Furthermore, evidence was discovered that quantum dot optical properties are altered as the nanoscale environment changes, offering greater possibility of using quantum dots to sense the local biochemical environment inside cells. Some concerns remain over toxicity and other properties. However, the overall findings indicate that quantum dots could be a valuable tool to investigate dynamic cellular processes. The abstract from the related published research paper states (in part): Results are presented regarding the dynamic fluorescence properties of bioconjugated nanocrystals or quantum dots (QDs) in different chemical and physical environments. A variety of QD samples was prepared and compared: isolated individual QDs, QD aggregates, and QDs conjugated to other nanoscale materials...
A technical view of the original problem
The original deficiency related to the perfect lens is elucidated as follows. The general expansion of an EM field emanating from a source consists of both propagating waves and near-field or evanescent waves. An example of a 2-D line source with an electric field which has S-polarization will have plane waves consisting of propagating and evanescent components, which advance parallel to the interface. As both the propagating and the smaller evanescent waves advance in a direction parallel to the medium interface, evanescent waves decay in the direction of propagation. Ordinary (positive index) optical elements can refocus the propagating components, but the exponentially decaying inhomogeneous components are always lost, leading to the diffraction limit for focusing to an image. A superlens is a lens which is capable of subwavelength imaging, allowing for magnification of near field rays. Conventional lenses have a resolution on the order of one wavelength due to the so-called diffraction limit. This limit hinders imaging of very small objects, such as individual atoms, which are much smaller than the wavelength of visible light. A superlens is able to beat the diffraction limit. A very well known superlens is the perfect lens described by John Pendry, which uses a slab of material with a negative index of refraction as a flat lens. In theory, Pendry's perfect lens is capable of perfect focusing, meaning that it can perfectly reproduce the electromagnetic field of the source plane at the image plane.
The diffraction limit
The performance limitation of conventional lenses is due to the diffraction limit. Following Pendry (Pendry, 2000), the diffraction limit can be understood as follows. Consider an object and a lens placed along the z-axis so the rays from the object are traveling in the +z direction. 
The field emanating from the object can be written, in terms of the angular spectrum method, as a superposition of plane waves:
E(x, y, z) = ∫∫ A(kx, ky) exp[i(kx·x + ky·y + kz·z)] dkx dky,
where kz is a function of kx and ky as:
kz = √(ω²/c² − kx² − ky²).
Only the positive square root is taken, as the energy is going in the +z direction. All of the components of the angular spectrum of the image for which kz is real are transmitted and re-focused by an ordinary lens. However, if
kx² + ky² > ω²/c²,
then kz becomes imaginary, and the wave is an evanescent wave whose amplitude decays as the wave propagates along the z-axis. This results in the loss of the high spatial-frequency components of the wave, which contain information about the small-scale features of the object being imaged. The highest resolution that can be obtained can be expressed in terms of the wavelength:
Δ ≈ 2πc/ω = λ.
A superlens overcomes the limit. A Pendry-type superlens has an index of n = −1 (ε = −1, µ = −1), and in such a material, transport of energy in the +z direction requires the z-component of the wave vector to have the opposite sign:
k′z = −√(ω²/c² − kx² − ky²).
For large transverse wave numbers (high spatial frequencies), the evanescent wave now grows, so with the proper lens thickness, all components of the angular spectrum can be transmitted through the lens undistorted. There are no problems with conservation of energy, as evanescent waves carry none in the direction of growth: the Poynting vector is oriented perpendicularly to the direction of growth. For traveling waves inside a perfect lens, the Poynting vector points in the direction opposite to the phase velocity.
Negative index of refraction and Pendry's perfect lens
Normally, when a wave passes through the interface of two materials, the wave appears on the opposite side of the normal. However, if the interface is between a material with a positive index of refraction and another material with a negative index of refraction, the wave will appear on the same side of the normal. John Pendry's perfect lens is a flat material where n = −1. Such a lens allows near field rays, which normally decay due to the diffraction limit, to focus once within the lens and once outside the lens, allowing for subwavelength imaging. A superlens was believed impossible until John Pendry showed in 2000 that a simple slab of left-handed material would do the job. The experimental realization of such a lens took, however, some more time, because it is not that easy to fabricate metamaterials with both negative permittivity and permeability. Indeed, no such material exists naturally, and construction of the required metamaterials is non-trivial. Furthermore, it was shown that the parameters of the material are extremely sensitive (the index must equal −1); small deviations make the subwavelength resolution unobservable. Due to the resonant nature of metamaterials, on which many (proposed) implementations of superlenses depend, metamaterials are highly dispersive. The sensitive nature of the superlens to the material parameters causes superlenses based on metamaterials to have a limited usable frequency range. However, Pendry also suggested that a lens having only one negative parameter would form an approximate superlens, provided that the distances involved are also very small and provided that the source polarization is appropriate. For visible light this is a useful substitute, since engineering metamaterials with a negative permeability at the frequency of visible light is difficult. Metals are then a good alternative, as they have negative permittivity (but not negative permeability). 
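Before turning to the silver-based implementations described next, a minimal numerical sketch of the angular-spectrum argument above (my own illustration, with an assumed 500 nm illumination wavelength) classifies transverse wave numbers as propagating or evanescent and shows why only features on the order of the wavelength or larger contribute propagating components.

```python
import math

# Sketch of the angular-spectrum argument above (illustrative values):
# a transverse wave number kt propagates if kt <= omega/c = 2*pi/lambda,
# and is evanescent (decaying as exp(-|kz| z)) otherwise.

wavelength_nm = 500.0                 # assumed illumination wavelength
k0 = 2.0 * math.pi / wavelength_nm    # omega/c in 1/nm

for feature_nm in (1000.0, 500.0, 250.0, 100.0):
    kt = 2.0 * math.pi / feature_nm   # transverse wave number of that feature
    if kt <= k0:
        kz = math.sqrt(k0**2 - kt**2)
        print(f"{feature_nm:6.0f} nm feature: propagating (kz = {kz:.4f} 1/nm)")
    else:
        kappa = math.sqrt(kt**2 - k0**2)
        print(f"{feature_nm:6.0f} nm feature: evanescent (decay const {kappa:.4f} 1/nm)")

# Only features no smaller than roughly the wavelength produce propagating
# components, which is the conventional resolution bound Delta ~ lambda.
```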
Pendry suggested using silver due to its relatively low loss at the predicted wavelength of operation (356 nm). In 2005, Pendry's suggestion was finally experimentally verified by two independent groups, both using thin layers of silver illuminated with UV light to produce "photographs" of objects smaller than the wavelength. Negative refraction of visible light has been experimentally verified in an yttrium orthovanadate (YVO4) bicrystal in 2003. - Zhang, Xiang; and Liu,Zhaowei (2008). "Superlenses to overcome the diffraction limit" (Free PDF download). Nature Materials 7: 435 – 441. doi:10.1038/nmat2141. Retrieved 2013-06-03. - Aguirre, Edwin L. (09/18/2012). "Creating a ‘Perfect’ Lens for Super-Resolution Imaging". U-Mass Lowell News. doi:10.1117/1.3484153. Retrieved 2013-06-02. - Kawata, S.; Inouye, Y.; Verma, P. (2009). "Plasmonics for near-field nano-imaging and superlensing". Nature Photonics 3 (7): 388–394. Bibcode:2009NaPho...3..388K. doi:10.1038/nphoton.2009.111. - Vinson, V; Chin, G. (2007). "Introduction to special issue – Lights, Camera, Action". Science 316 (5828): 1143. doi:10.1126/science.316.5828.1143. - Pendry, John. Manipulating the Near Field. Volume 15. & Photonics News September 2004. - Anantha, S. Ramakrishna; J.B. Pendry, M.C.K. Wiltshire and W.J. Stewart (2003). "Imaging the Near Field". Journal of Modern Optics (Taylor & Francis) 50 (09): 1419–1430. doi:10.1080/0950034021000020824. - Pendry, J. B. (2000). "Negative Refraction Makes a Perfect Lens". Physical Review Letters 85 (18): 3966–9. Bibcode:2000PhRvL..85.3966P. doi:10.1103/PhysRevLett.85.3966. PMID 11041972. - Fang, N. et al. (2005). "Sub–Diffraction-Limited Optical Imaging with a Silver Superlens". Science 308 (5721): 534–7. Bibcode:2005Sci...308..534F. doi:10.1126/science.1108759. PMID 15845849. - Brumfiel, G (2009). "Metamaterials: Ideal focus" (online web page). Nature News 459 (7246): 504–5. doi:10.1038/459504a. PMID 19478762. - Lauterbur, P. (1973). "Image Formation by Induced Local Interactions: Examples Employing Nuclear Magnetic Resonance". Nature 242 (5394): 190. Bibcode:1973Natur.242..190L. doi:10.1038/242190a0. - "Prof. Sir John Pendry, Imperial College, London". Colloquia Series. Research Laboratory of Electronics. 13 March 2007. Retrieved 2010-04-07. - Yeager, A. (28 March 2009). "Cornering The Terahertz Gap". Science News. Retrieved 2010-03-02. - Savo, S.; Andreone, A.; Di Gennaro, E. (2009). "Superlensing properties of one-dimensional dielectric photonic crystals". Optics Express 17 (22): 19848–56. arXiv:0907.3821. Bibcode:2009OExpr..1719848S. doi:10.1364/OE.17.019848. PMID 19997206. - Parimi, P. et al. (2003). "Imaging by Flat Lens using Negative Refraction". Nature 426 (6965): 404. Bibcode:2003Natur.426..404P. doi:10.1038/426404a. PMID 14647372. - Bullis, Kevin (2007-03-27). "Superlenses and Smaller Computer Chips". Technology Review magazine of Massachusetts Institute of Technology. pp. 2 pages. Retrieved 2010-01-13 - Smith, H.I. (1974). "Fabrication techniques for surface-acoustic-wave and thin-film optical devices". Proceedings of the IEEE 62 (10): 1361–1387. doi:10.1109/PROC.1974.9627. - Srituravanich, W. et al. (2004). "Plasmonic Nanolithography". Nano Letters 4 (6): 1085. Bibcode:2004NanoL...4.1085S. doi:10.1021/nl049573q. - Fischer, U. Ch.; Zingsheim, H. P. (1981). "Submicroscopic pattern replication with visible light". Journal of Vacuum Science and Technology 19 (4): 881. Bibcode:1981JVST...19..881F. doi:10.1116/1.571227. - Schmid, H. et al. (1998). 
"Light-coupling masks for lensless, sub-wavelength optical lithography". Applied Physics Letters 73 (19): 237. Bibcode:1998ApPhL..72.2379S. doi:10.1063/1.121362. - Grbic, A.; Eleftheriades, G. V. (2004). "Overcoming the Diffraction Limit with a Planar Left-handed Transmission-line Lens" (Free HTML copy of this article). Physical Review Letters 92 (11): 117403. Bibcode:2004PhRvL..92k7403G. doi:10.1103/PhysRevLett.92.117403. PMID 15089166. - Nielsen, R. B.; Thoreson, M. D.; Chen, W.; Kristensen, A.; Hvam, J. M.; Shalaev, V. M.; Boltasseva, A. (2010). "Toward superlensing with metal–dielectric composites and multilayers" (Free PDF download). Applied Physics B 100: 93. Bibcode:2010ApPhB.100...93N. doi:10.1007/s00340-010-4065-z. - Fang, N.; Lee, H; Sun, C; Zhang, X (2005). "Sub-Diffraction-Limited Optical Imaging with a Silver Superlens". Science 308 (5721): 534–7. Bibcode:2005Sci...308..534F. doi:10.1126/science.1108759. PMID 15845849. - D.O.S. Melville, R.J. Blaikie, Optics Express 13, 2127 (2005) - C. Jeppesen, R.B. Nielsen, A. Boltasseva, S. Xiao, N.A. Mortensen, A. Kristensen, Optics Express 17, 22543 (2009) - Valentine, J. et al. (2008). "Three-dimensional optical metamaterial with a negative refractive index". Nature 455 (7211): 376–9. Bibcode:2008Natur.455..376V. doi:10.1038/nature07247. PMID 18690249. - Yao, J. et al. (2008). "Optical Negative Refraction in Bulk Metamaterials of Nanowires". Science 321 (5891): 930. Bibcode:2008Sci...321..930Y. doi:10.1126/science.1157566. PMID 18703734. - W. Cai, D.A. Genov, V.M. Shalaev, Phys. Rev. B 72, 193101 (2005) - A.V. Kildishev,W. Cai, U.K. Chettiar, H.-K. Yuan, A.K. Sarychev, V.P. Drachev, V.M. Shalaev, J. Opt. Soc. Am. B 23, 423 (2006) - L. Shi, L. Gao, S. He, B. Li, Phys. Rev. B 76, 045116 (2007) - L. Shi, L. Gao, S. He, Proc. Int. Symp. Biophot. Nanophot. Metamat. (2006), pp. 463–466 - Z. Jacob, L.V. Alekseyev, E. Narimanov, Opt. Express 14, 8247 (2006) - P.A. Belov, Y. Hao, Phys. Rev. B 73, 113110 (2006) - P. Chaturvedi, N.X. Fang, Mater. Res. Soc. Symp. Proc. 919, 0919-J04-07 (2006) - B. Wood, J.B. Pendry, D.P. Tsai, Phys. Rev. B 74, 115116 (2006) - E. Shamonina, V.A. Kalinin, K.H. Ringhofer, L. Solymar, Electron. Lett. 37, 1243 (2001) - Ziolkowski, R. W.; Heyman, E. (2001). "Wave propagation in media having negative permittivity and permeability". Physical Review E 64 (5): 056625. Bibcode:2001PhRvE..64e6625Z. doi:10.1103/PhysRevE.64.056625. - Smolyaninov, Igor I.; Hung, YJ; Davis, CC (2007-03-27). "Magnifying Superlens in the Visible Frequency Range". Science 315 (5819): 1699–1701. arXiv:physics/0610230. Bibcode:2007Sci...315.1699S. doi:10.1126/science.1138746. PMID 17379804. - Dumé, B. (21 April 2005). "Superlens breakthrough". Physics World. - Pendry, J. B. (18 February 2005). "Collection of photonics references". - Garcia1, N.; Nieto-Vesperinas, M. (2002). "Left-Handed Materials Do Not Make a Perfect Lens". Physical Review Letters 88 (20): 207403. Bibcode:2002PhRvL..88t7403G. doi:10.1103/PhysRevLett.88.207403. PMID 12005605. - Smith, D.R. et al. (2003). "Limitations on subdiffraction imaging with a negative refractive index slab". Applied Physics Letters 82 (10): 1506. arXiv:cond-mat/0206568. Bibcode:2003ApPhL..82.1506S. doi:10.1063/1.1554779. - Shelby, R. A.; Smith, D. R.; Schultz, S. (2001). "Experimental Verification of a Negative Index of Refraction". Science 292 (5514): 77–9. Bibcode:2001Sci...292...77S. doi:10.1126/science.1058847. PMID 11292865. - Wiltshire, M. C. K. et al. (2003). 
"Metamaterial endoscope for magnetic field transfer: near field imaging with magnetic wires". Optics Express 11 (7): 709–15. Bibcode:2003OExpr..11..709W. doi:10.1364/OE.11.000709. PMID 19461782. - Dumé, B. (4 April 2005). "Superlens breakthrough". Physics World. Retrieved 2009-11-10. - Liu, Z. et al. (2003). "Rapid growth of evanescent wave by a silver superlens". Applied Physics Letters 83 (25): 5184. Bibcode:2003ApPhL..83.5184L. doi:10.1063/1.1636250. - Lagarkov, A. N.; & V. N. Kissel (2004-02-18). "Near-Perfect Imaging in a Focusing System Based on a Left-Handed-Material Plate". Phys. Rev. Lett. 92 (7): 077401 (2004) [4 pages]. Bibcode:2004PhRvL..92g7401L. doi:10.1103/PhysRevLett.92.077401. - Melville, David; and Richard Blaikie (2005-03-21). "Super-resolution imaging through a planar silver layer". Optics Express 13 (6): 2127–2134. Bibcode:2005OExpr..13.2127M. doi:10.1364/OPEX.13.002127. PMID 19495100. Retrieved 2009-10-23. - Blaikie, Richard J; David O. S. Melville (2005-01-20). "Imaging through planar silver lenses in the optical near field". J. Opt. A: Pure Appl. Opt. 7 (2): S176–S183. Bibcode:2005JOptA...7S.176B. doi:10.1088/1464-4258/7/2/023. - Greegor, R. B. et al. (2005-08-25). "Simulation and testing of a graded negative index of refraction lens". Applied Physics Letters 87 (9): 091114. Bibcode:2005ApPhL..87i1114G. doi:10.1063/1.2037202. Retrieved 2009-11-01. - Durant, Stéphane et al. (2005-12-02). "Theory of the transmission properties of an optical far-field superlens for imaging beyond the diffraction limit". J. Opt. Soc. Am. B/Vol. 23, No. 11/November 2006 23 (11): 2383–2392. Bibcode:2006JOSAB..23.2383D. doi:10.1364/JOSAB.23.002383. Retrieved 2009-10-26. - Salandrino, Alessandro; Nader Engheta (2006-08-16). "Far-field subdiffraction optical microscopy using metamaterial crystals: Theory and simulations". Phys. Rev. B 74 (7): 075103. Bibcode:2006PhRvB..74g5103S. doi:10.1103/PhysRevB.74.075103. - Liu, Zhaowei et al. (2007-05-22). "Experimental studies of far-field superlens for sub-diffractional optical imaging". Optics Express 15 (11): 6947–6954. Bibcode:2007OExpr..15.6947L. doi:10.1364/OE.15.006947. PMID 19547010. Retrieved 2009-10-26. - Geoffroy, Lerosey et al. (2007-02-27). "Focusing Beyond the Diffraction Limit with Far-Field Time Reversal". AAAS Science 315 (5815): 1120–1122. Bibcode:2007Sci...315.1120L. doi:10.1126/science.1134824. PMID 17322059. - Liu, Zhaowei et al. (2007-03-27). "Far-Field Optical Hyperlens Magnifying Sub-Diffraction-Limited Objects". AAAS Science 315 (5819): 1686. Bibcode:2007Sci...315.1686L. doi:10.1126/science.1137368. PMID 17379801. - Rho, Junsuk; Ye, Ziliang; Xiong, Yi; Yin, Xiaobo; Liu, Zhaowei; Choi, Hyeunseok; Bartal, Guy; Zhang, Xiang (1 December 2010). "Spherical hyperlens for two-dimensional sub-diffractional imaging at visible frequencies". Nature Communications 1 (9): 143. Bibcode:2010NatCo...1E.143R. doi:10.1038/ncomms1148. - Huang, Bo; Wang, W.; Bates, M.; Zhuang, X. (2008-02-08). "Three-Dimensional Super-Resolution Imaging by Stochastic Optical Reconstruction Microscopy". Science 319 (5864): 810–813. Bibcode:2008Sci...319..810H. doi:10.1126/science.1153529. PMC 2633023. PMID 18174397. - Pendry, John (2003-04-07). "Perfect cylindrical lenses". Optics express 11 (7): 755. Bibcode:2003OExpr..11..755P. doi:10.1364/OE.11.000755. Retrieved 2009-11-04. - Milton, Graeme W.; Nicorovici, Nicolae-Alexandru P.; McPhedran, Ross C.; Podolskiy, Viktor A. (2005-12-08). 
"A proof of superlensing in the quasistatic regime, and limitations of superlenses in this regime due to anomalous localized resonance". Proceedings of the Royal Society A 461 (2064): 3999 (36 pages). Bibcode:2005RSPSA.461.3999M. doi:10.1098/rspa.2005.1570. - Schurig, D.; J. B. Pendry, and D. R. Smith (2007-10-24). "Transformation-designed optical elements". Optics express 15 (22): 14772 (10 pages). Bibcode:2007OExpr..1514772S. doi:10.1364/OE.15.014772. - Tsang, Mankei; Psaltis, Demetri (2008). "Magnifying perfect lens and superlens design by coordinate transformation". Physical Review B 77 (3): 035122. arXiv:0708.0262. Bibcode:2008PhRvB..77c5122T. doi:10.1103/PhysRevB.77.035122. - Huang, Fu Min et al. (2008-06-24). "Nanohole Array as a Lens". Nano Lett. (American Chemical Society) 8 (8): 2469–2472. Bibcode:2008NanoL...8.2469H. doi:10.1021/nl801476v. PMID 18572971. Retrieved 2009-12-21. - "Northeastern physicists develop 3D metamaterial nanolens that achieves super-resolution imaging". prototype super-resolution metamaterial nanonlens. Nanotechwire.com. 2010-01-18. Retrieved 2010-01-20. - Casse, B. D. F.; Lu, W. T.; Huang, Y. J.; Gultepe, E.; Menon, L.; Sridhar, S. (2010). "Super-resolution imaging using a three-dimensional metamaterials nanolens". Applied Physics Letters 96 (2): 023114. Bibcode:2010ApPhL..96b3114C. doi:10.1063/1.3291677. - Jung, J. and; L. Martín-Moreno and F J García-Vidal (2009-12-09). "Light transmission properties of holey metal films in the metamaterial limit: effective medium theory and subwavelength imaging". New Journal of Physics 11 (12): 123013. Bibcode:2009NJPh...11l3013J. doi:10.1088/1367-2630/11/12/123013. - Silveirinha, Mario G.; Engheta, Nader; Nader Engheta (2009-03-13). "Transporting an Image through a Subwavelength Hole". Physical Review Letters 102 (10): 103902. Bibcode:2009PhRvL.102j3902S. doi:10.1103/PhysRevLett.102.103902. PMID 19392114. - Kang, Hyeong-Gon; Tokumasu, Fuyuki; Clarke, Matthew; Zhou, Zhenping; Tang, Jianyong; Nguyen, Tinh; Hwang, Jeeseong (2010). "Probing dynamic fluorescence properties of single and clustered quantum dots toward quantitative biomedical imaging of cells". Wiley Interdisciplinary Reviews: Nanomedicine and Nanobiotechnology 2: 48–58. doi:10.1002/wnan.62. - "David R Smith (May 10, 2004). "Breaking the diffraction limit". Institute of Physics. Retrieved May 31, 2009. - Pendry, J. B. (2000). "Negative refraction makes a perfect lens". Phys. Rev. Lett. 85 (18): 3966–9. Bibcode:2000PhRvL..85.3966P. doi:10.1103/PhysRevLett.85.3966. PMID 11041972. - Podolskiy, V.A.; Narimanov, EE (2005). "Near-sighted superlens". Opt. Lett. 30 (1): 75–7. arXiv:physics/0403139. Bibcode:2005OptL...30...75P. doi:10.1364/OL.30.000075. PMID 15648643. - Tassin, P.; Veretennicoff, I; Vandersande, G (2006). "Veselago's lens consisting of left-handed materials with arbitrary index of refraction". Opt. Commun. 264: 130. Bibcode:2006OptCo.264..130T. doi:10.1016/j.optcom.2006.02.013. - Melville, DOS; Blaikie, R (2005). "Super-resolution imaging through a planar silver layer". Optics Express 13 (6): 2127–34. Bibcode:2005OExpr..13.2127M. doi:10.1364/OPEX.13.002127. PMID 19495100. - Fang, Nicholas; Lee, H; Sun, C; Zhang, X (2005). "Sub–Diffraction-Limited Optical Imaging with a Silver Superlens". Science 308 (5721): 534–7. Bibcode:2005Sci...308..534F. doi:10.1126/science.1108759. PMID 15845849. - Zhang, Yong; Fluegel, B.; Mascarenhas, A. (2003). "Total Negative Refraction in Real Crystals for Ballistic Electrons and Light". 
Physical Review Letters 91 (15): 157404. Bibcode:2003PhRvL..91o7404Z. doi:10.1103/PhysRevLett.91.157404. PMID 14611495. - The Quest for the Superlens By John B. Pendry and David R. Smith. Scientific American. July 2006. Free PDF download from Imperial College. - Subwavelength imaging - Professor Sir John Pendry at MIT – "The Perfect Lens: Resolution Beyond the Limits of Wavelength" - Surface plasmon subwavelength optics 2009-12-05 - Superlenses to overcome the diffraction limit - Breaking the diffracion limit Overview of superlens theory - Flat Superlens Simulation EM Talk - Superlens microscope gets up close - Superlens breakthrough - Superlens breaks optical barrier - Materials with negative index of refraction by V.A. Podolskiy - Optimizing the superlens: Manipulating geometry to enhance the resolution by V.A. Podolskiy and Nicholas A. Kuhta - Now you see it, now you don't: cloaking device is not just sci-fi - Initial page describes first demonstration of negative refraction in a natural material - Negative-index materials made easy - Simple 'superlens' sharpens focusing power – A lens able to focus 10 times more intensely than any conventional design could significantly enhance wireless power transmission and photolithography (New Scientist, 24 April 2008) - Far-Field Optical Nanoscopy by Stefan W.Hell. VOL 316. SCIENCE. 25 MAY 2007
Lesson 3: Interest and Bank Accounts In this "Plan, Save, Succeed!" lesson, students will learn about the importance of saving and the differences between certificates of deposit, checking, and savings accounts. In addition to using fractions, decimals, and percentages to calculate interest, students will also learn the difference between simple and compound interest. - Students will understand the importance of saving and the differences (advantages and disadvantages) between certificates of deposit (CDs), checking, and savings accounts. (financial literacy) - Students will understand the difference between simple and compound interest. (financial literacy and math) - Students will use fractions, decimals, and percentages to calculate interest. (financial literacy and math) - Worksheet 3 Printable (PDF): "Making Money While You Sleep" - Worksheet Answer Key (PDF) - Mini-Poster (PDF) 1. Ask students if there are ways for kids to make money other than work, allowance, or gifts. Explain that banks and other financial institutions pay interest on certain accounts as an incentive to get people to deposit their money with them. They then use this money to make loans to companies and individuals. Banks make money from the interest they charge on the loans. 2. Indicate that financial institutions offer a number of different types of accounts: Checking accounts give individuals the easiest access to their funds via checks, but pay little or no interest. Indicate that most banks require students of middle school age to have a parent as a cosigner when opening a checking account. Individuals also have access to their funds via their debit cards. Savings accounts provide some interest but require the individual to visit a bank branch, use a debit card at an ATM, or go online to have access to his or her funds. Certificates of deposit (CDs) have the advantage of offering higher interest rates than savings accounts, but have tighter restrictions on access to funds. Individuals purchasing a CD must commit to holding it for a period of time, generally ranging from six months to five years or more. Selecting the term of a CD involves some risk. When a long-term CD is purchased and interest rates later go up, the purchaser only receives the interest rate stated at the time of purchase. If it turns out that interest rates go down, the purchaser will be happy to have locked in a favorable rate. People make decisions about the kind of account to open partly based on the amount of interest they'll earn, but also on how easily they can have access to their funds. 3. Indicate that interest is sometimes calculated as simple interest but is more frequently calculated as compound interest. Provide an example of each, explaining each formula: - The simple interest formula is I = PRT, where I is the amount of interest earned, P is the amount deposited (principal), R is the rate of interest, and T is the number of years. So if you deposited $100 for two years with an interest rate of 3½%, you would earn $100 x .035 x 2 or $7. (If necessary, explain how 3½% is converted to .035 in the calculation.) - The compound interest rate formula is A = P(1 + r/n)^(nt), where A is the ending amount, P is the amount deposited (principal), r is the interest rate, n is the number of compounding periods per year, and t is the number of years. The amount of interest earned is the ending amount minus the principal invested (A - P). Give an example of annual compounding.
If you deposit $100 for two years at 3½% interest compounded annually, you would end up with 100 x (1 + .035)^2 or $107.12, representing the initial deposit plus interest. If appropriate for your class, use the same example but have the interest compounded quarterly: 100 x (1 + .035/4)^(4x2), or $107.22 (rounded to the nearest cent), representing the initial deposit plus interest. Note how quarterly compounding results in higher interest income. (A short calculation sketch that checks these figures appears after step 4.) 4. Distribute Worksheet 3 to students, then review answers with the class.
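As a quick way to check these figures (or as an optional extension for a computer lab), here is a minimal Python sketch of the two formulas from step 3. The dollar amount, rate, and compounding periods are the ones used in the example above; the function name is purely illustrative and is not part of the lesson materials.

```python
# Simple vs. compound interest, using the numbers from step 3:
# $100 deposited at 3 1/2% for 2 years.
principal = 100.0   # P: amount deposited
rate = 0.035        # R (or r): 3 1/2% written as a decimal
years = 2           # T (or t): number of years

# Simple interest: I = P x R x T
simple_interest = principal * rate * years
print(f"Simple interest earned: ${simple_interest:.2f}")        # $7.00

# Compound interest: A = P(1 + r/n)^(nt), interest earned = A - P
def compound_amount(p, r, n, t):
    """Ending amount with n compounding periods per year for t years."""
    return p * (1 + r / n) ** (n * t)

annual = compound_amount(principal, rate, 1, years)     # compounded once a year
quarterly = compound_amount(principal, rate, 4, years)  # compounded four times a year
print(f"Annual compounding:    ${annual:.2f}")    # about $107.12
print(f"Quarterly compounding: ${quarterly:.2f}") # about $107.22
```

Running the sketch reproduces the $7, $107.12, and $107.22 figures, which makes it easy to show the class why more frequent compounding earns slightly more interest.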
http://www.scholastic.com/browse/lessonplan.jsp?id=1563
In the past, most extinctions were due to natural causes. In fact, extinction is a naturally occurring phenomenon that proceeds at a rate of roughly one to five species each year; however, scientists currently believe that habitats across the globe are now losing dozens of species each day. Generally, species under threat but at a lower risk of extinction are said to be "threatened," while those in more immediate jeopardy throughout a significant portion of their range are termed "endangered." The leading causes of extinction are now thought to stem from human activity, which places nearly all threatened species at risk. The biggest threats include habitat loss and degradation, the introduction of non-native species, over-exploitation, and pollution and disease. Climate change is also increasingly being considered a threat because changes in temperature and rainfall patterns have been observed to alter the native range, food sources, reproduction rates, and predator-prey relationships among flora and fauna. According to the World Conservation Union (IUCN), most species at risk for extinction occur in tropical areas, especially on mountains and islands, in countries that include Australia, Brazil, China, Indonesia and Mexico. The IUCN maintains a "Red List" of species around the world in which species are categorized by risk of extinction, ranging from "Least Concern" to "Extinct." As of 2007, more than 40,000 species appeared on the list, with 16,306 at risk for extinction. Within the United States, information about at-risk species is kept by the U.S. Fish and Wildlife Service (FWS) and, as of May 2007, the number of threatened and endangered invertebrates, plants, and animals in the U.S. stood at 1,351. Neither list is intended to be a definitive accounting of all at-risk species, since the lists often reflect those species of greatest interest to humans. As such, invertebrates tend to be vastly under-represented, and neither list currently accounts for microorganisms. The potential loss of majestic species, such as the Sumatran tiger or the Asian panda, is often highlighted in order to raise awareness of human threats to endangered species; but ecologists argue that the loss of less-heralded plants and organisms could be more concerning, since the ecosystem services that they provide are, in many cases, not well understood. As eminent biologist Sir Robert May has said, although there are many organizations for the protection of bird or animal species, there is "no corresponding society to express sympathy for nematodes." There are a number of measures in place at local, regional, national, and international levels which aim to reduce the risk of species extinction. The measures most commonly address habitat loss due to human encroachment and over-exploitation through hunting, fishing, and trade. Internationally, the U.S. is a party to the Convention on International Trade in Endangered Species (CITES), a treaty restricting international trade in species known to be threatened with extinction. The primary goal of the treaty is to promote cooperation among governments in order to reduce international wildlife trade, which is estimated to be worth billions of dollars and to include millions of plant and animal specimens, ranging from live animals and plants to a wide array of products derived from them. The U.S. Congress passed the Endangered Species Act (ESA) in 1973, authorizing federal agencies to undertake conservation programs to protect species, purchase land to protect habitats, and establish recovery plans in order to ensure species' survival. The Act was later amended in 1982, permitting private landowners and government agencies to negotiate Habitat Conservation Plans (HCPs) to permit development while protecting species. Although the actual number of HCPs is relatively low, questions over their effectiveness linger. In 2007, a new market-based partnership agreement was signed to promote "habitat credits." The approach would offer incentives to landowners who preserve and enhance the habitat of endangered or at-risk species and allow landowners to sell credits to those needing to compensate for their own environmental impacts.
The Red List: The IUCN Species Survival Commission hosts a separate website for its Red List of Endangered Species, providing taxonomic, conservation status, and distribution information on endangered species around the world.
The Endangered Species Act: Success or Failure? Environmental Defense's Center for Conservation Incentives published this white paper in May 2005, arguing that while some conservation gains have been achieved, more could be accomplished by the ESA through the creation of conservation incentives.
American Field Guide: Teacher Resources: Produced by Oregon Public Broadcasting, American Field Guide recasts outdoors programming content from nearly 30 public television stations across the country in a convenient, on-demand format. Resources for teachers include lessons on endangered species, invasive plants, species restoration, and ecotourism in the national parks.
U.S. FWS Kid's Corner: This site for kids from the U.S. Fish and Wildlife Service's Endangered Species Program demonstrates how loss of habitat and ecosystems can lead to a decline in biodiversity.
WhaleNet: Sponsored by Wheelock College in Boston, WhaleNet is an interdisciplinary, interactive educational program focused on whales and the marine habitat. Students can plot the path of one of WhaleNet's tagged animals or use the photographs for a classification exercise.
http://enviroliteracy.org/article.php/33.html
The History of the Death Penalty
Britain influenced America's use of the death penalty more than any other country. When European settlers came to the new world, they brought the practice of capital punishment. The first recorded execution in the new colonies was that of Captain George Kendall in the Jamestown colony of Virginia in 1608. Kendall was executed for being a spy for Spain. In 1612, Virginia Governor Sir Thomas Dale enacted the Divine, Moral and Martial Laws, which provided the death penalty for even minor offenses such as stealing grapes, killing chickens, and trading with Indians. Laws regarding the death penalty varied from colony to colony. The Massachusetts Bay Colony held its first execution in 1630, even though the Capital Laws of New England did not go into effect until years later. The New York Colony instituted the Duke's Laws of 1665. Under these laws, offenses such as striking one's mother or father, or denying the "true God," were punishable by death. The abolitionist movement finds its roots in the writings of European theorists Montesquieu, Voltaire and Bentham, and English Quakers John Bellers and John Howard. However, it was Cesare Beccaria's 1767 essay, On Crimes and Punishment, that had an especially strong impact throughout the world. In the essay, Beccaria theorized that there was no justification for the state's taking of a life. The essay gave abolitionists an authoritative voice and renewed energy, one result of which was the abolition of the death penalty in Austria and Tuscany. American intellectuals as well were influenced by Beccaria. The first attempted reforms of the death penalty in the U.S. occurred when Thomas Jefferson introduced a bill to revise Virginia's death penalty laws. The bill proposed that capital punishment be used only for the crimes of murder and treason. It was defeated by only one vote. Also influenced was Dr. Benjamin Rush, a signer of the Declaration of Independence and founder of the Pennsylvania Prison Society. Rush challenged the belief that the death penalty serves as a deterrent. In fact, Rush was an early believer in the "brutalization effect." He held that having a death penalty actually increased criminal conduct. Rush gained the support of Benjamin Franklin and Philadelphia Attorney General William Bradford. Bradford, who would later become the U.S. Attorney General, led Pennsylvania to become the first state to consider degrees of murder based on culpability. In 1794, Pennsylvania repealed the death penalty for all offenses except first degree murder. In the early to mid-Nineteenth Century, the abolitionist movement gained momentum in the northeast. In the early part of the century, many states reduced the number of their capital crimes and built state penitentiaries. In 1834, Pennsylvania became the first state to move executions away from the public eye, carrying them out in correctional facilities. In 1846, Michigan became the first state to abolish the death penalty for all crimes except treason. Later, Rhode Island and Wisconsin abolished the death penalty for all crimes. By the end of the century, countries such as Costa Rica had abolished the death penalty as well. Although some American states began abolishing the death penalty, most states held onto capital punishment. Some states made more crimes capital offenses, especially for offenses committed by slaves. In 1838, in an effort to make the death penalty more palatable to the public, some states began passing laws against mandatory death sentencing, instead enacting discretionary death penalty statutes. The 1838 enactment of discretionary death penalty statutes in Tennessee, and later in other states, was seen as a great reform.
This introduction of sentencing discretion in the capital process was perceived as a victory for abolitionists because prior to the enactment of these statutes, all states mandated the death penalty for anyone convicted of a capital crime, regardless of circumstances. With the exception of a small number of rarely committed crimes in a few jurisdictions, all mandatory capital punishment laws had been abolished by 1963. During the Civil War, opposition to the death penalty waned, as more attention was given to the anti-slavery movement. After the war, new developments in the means of executions emerged. The electric chair was introduced at the end of the century. New York built the first electric chair in 1888, and in 1890 executed William Kemmler. Soon, other states adopted this execution method.
Early and Mid-Twentieth Century
Although some states abolished the death penalty in the mid-Nineteenth Century, it was actually the first half of the Twentieth Century that marked the beginning of the "Progressive Period" of reform in the United States. From 1907 to 1917, six states completely outlawed the death penalty and three limited it to the rarely committed crimes of treason and first degree murder of a law enforcement official. However, this reform was short-lived. There was a frenzied atmosphere in the United States, as citizens began to panic about the threat of revolution in the wake of the Russian Revolution. In addition, the United States had just entered World War I and there were intense class conflicts as socialists mounted the first serious challenge to capitalism. As a result, five of the six abolitionist states reinstated their death penalty by 1920. In 1924, the use of cyanide gas was introduced, as Nevada sought a more humane way of executing its inmates. Gee Jon was the first person executed by lethal gas. The state tried to pump cyanide gas into Jon's cell while he slept, but this proved impossible, and the gas chamber was constructed. From the 1920s to the 1940s, there was a resurgence in the use of the death penalty. This was due, in part, to the writings of criminologists, who argued that the death penalty was a necessary social measure. During this period, Americans were suffering through Prohibition and the Great Depression. There were more executions in the 1930s than in any other decade in American history, an average of 167 per year. In the 1950s, public sentiment began to turn away from capital punishment. Many allied nations either abolished or limited the death penalty, and in the United States, the number of executions dropped dramatically. Whereas there were 1,289 executions in the 1940s, there were 715 in the 1950s, and the number fell even further, to only 191, from 1960 to 1976. In 1966, support for capital punishment reached an all-time low. A poll showed support for the death penalty at only 42%.
Challenging the Death Penalty
The 1960s brought challenges to the fundamental legality of the death penalty. Before then, the Fifth, Eighth, and Fourteenth Amendments were interpreted as permitting the death penalty. However, in the early 1960s, it was suggested that the death penalty was a "cruel and unusual" punishment, and therefore unconstitutional under the Eighth Amendment. In 1958, the Supreme Court had decided in Trop v. Dulles (356 U.S. 86) that the Eighth Amendment contained an "evolving standard of decency that marked the progress of a maturing society."
Although Trop was not a death penalty case, abolitionists applied the Court's logic to executions and maintained that the United States had, in fact, progressed to a point that its "standard of decency" should no longer tolerate the death penalty. In the late 1960s, the Supreme Court began "fine tuning" the way the death penalty was administered. To this effect, the Court heard two cases in 1968 dealing with the discretion given to the prosecutor and the jury in capital cases. The first case was U.S. v. Jackson (390 U.S. 570), where the Supreme Court heard arguments regarding a provision of the federal kidnapping statute requiring that the death penalty be imposed only upon recommendation of a jury. The Court held that this practice was unconstitutional because it encouraged defendants to waive their right to a jury trial to ensure they would not receive a death sentence. The second 1968 case was Witherspoon v. Illinois (391 U.S. 510). In this case, the Supreme Court held that a potential juror's mere reservations about the death penalty were insufficient grounds to prevent that person from serving on the jury in a death penalty case. Jurors could be disqualified only if prosecutors could show that the juror's attitude toward capital punishment would prevent him or her from making an impartial decision about the punishment. In 1971, the Supreme Court again addressed the problems associated with the role of jurors and their discretion in capital cases. The Court decided Crampton v. Ohio and McGautha v. California (consolidated under 402 U.S. 183). The defendants argued it was a violation of their Fourteenth Amendment right to due process for jurors to have unrestricted discretion in deciding whether the defendants should live or die, and that such discretion resulted in arbitrary and capricious sentencing. Crampton also argued that it was unconstitutional to have his guilt and sentence determined in one set of deliberations, as the jurors in his case were instructed that a first-degree murder conviction would result in a death sentence. The Court, however, rejected these claims, thereby approving of unfettered jury discretion and a single proceeding to determine guilt and sentence. The Court stated that guiding capital sentencing discretion was "beyond present human ability."
Suspending the Death Penalty
The issue of the arbitrariness of the death penalty was again brought before the Supreme Court in 1972 in Furman v. Georgia, Jackson v. Georgia, and Branch v. Texas (known collectively as the landmark case Furman v. Georgia (408 U.S. 238)). Furman, like McGautha, argued that capital cases resulted in arbitrary and capricious sentencing. Furman, however, was a challenge brought under the Eighth Amendment, unlike McGautha, which was a Fourteenth Amendment due process claim. With the Furman decision the Supreme Court set the standard that a punishment would be "cruel and unusual" if it was too severe for the crime, if it was arbitrary, if it offended society's sense of justice, or if it was not more effective than a less severe penalty. In nine separate opinions, and by a vote of 5 to 4, the Court held that Georgia's death penalty statute, which gave the jury complete sentencing discretion, could result in arbitrary sentencing. The Court held that the scheme of punishment under the statute was therefore "cruel and unusual" and violated the Eighth Amendment.
Thus, on June 29, 1972, the Supreme Court effectively voided 40 death penalty statutes, thereby commuting the sentences of 629 death row inmates around the country and suspending the death penalty because existing statutes were no longer valid.
Reinstating the Death Penalty
Although the separate opinions by Justices Brennan and Marshall stated that the death penalty itself was unconstitutional, the overall holding in Furman was that the specific death penalty statutes were unconstitutional. With that holding, the Court essentially opened the door for states to rewrite their death penalty statutes to eliminate the problems cited in Furman. Advocates of capital punishment began proposing new statutes that they believed would end arbitrariness in capital sentencing. The states were led by Florida, which rewrote its death penalty statute only five months after Furman. Shortly after, 34 other states proceeded to enact new death penalty statutes. To address the unconstitutionality of unguided jury discretion, some states removed all of that discretion by mandating capital punishment for those convicted of capital crimes. However, this practice was held unconstitutional by the Supreme Court in Woodson v. North Carolina (428 U.S. 280 (1976)). Other states sought to limit that discretion by providing sentencing guidelines for the judge and jury when deciding whether to impose death. The guidelines allowed for the introduction of aggravating and mitigating factors in determining sentencing. These guided discretion statutes were approved in 1976 by the Supreme Court in Gregg v. Georgia (428 U.S. 153), Jurek v. Texas (428 U.S. 262), and Proffitt v. Florida (428 U.S. 242), collectively referred to as the Gregg decision. This landmark decision held that the new death penalty statutes in Georgia, Texas, and Florida were constitutional, thus reinstating the death penalty in those states. The Court also held that the death penalty itself was constitutional under the Eighth Amendment. In addition to sentencing guidelines, three other procedural reforms were approved by the Court in Gregg. The first was bifurcated trials, in which there are separate deliberations for the guilt and penalty phases of the trial. Only after the jury has determined that the defendant is guilty of capital murder does it decide in a second trial whether the defendant should be sentenced to death or given a lesser sentence of prison time. Another reform was the practice of automatic appellate review of convictions and sentences. The final procedural reform from Gregg was proportionality review, a practice that helps the state to identify and eliminate sentencing disparities. Through this process, the state appellate court can compare the sentence in the case being reviewed with other cases within the state, to see if it is disproportionate. Because these reforms were accepted by the Supreme Court, some states wishing to reinstate the death penalty included them in their new death penalty statutes. The Court, however, did not require that each of the reforms be present in the new statutes. Therefore, some of the resulting new statutes include variations on the procedural reforms found in Gregg. The ten-year moratorium on executions that had begun with the Jackson and Witherspoon decisions ended on January 17, 1977, with the execution of Gary Gilmore by firing squad in Utah. Gilmore did not challenge his death sentence.
That same year, Oklahoma became the first state to adopt lethal injection as a means of execution, though it would be five more years until Charles Brooks became the first person executed by lethal injection, in Texas on December 7, 1982.
Limitations within the United States
Despite growing European abolition, the United States retained the death penalty, but established limitations on capital punishment. In 1977, the United States Supreme Court held in Coker v. Georgia (433 U.S. 584) that the death penalty is an unconstitutional punishment for the rape of an adult woman when the victim was not killed. Other limits to the death penalty followed in the next decade.
Mental Illness and Mental Retardation
In 1986, the Supreme Court banned the execution of insane persons and required an adversarial process for determining mental competency in Ford v. Wainwright (477 U.S. 399). In Penry v. Lynaugh (492 U.S. 302 (1989)), the Court held that executing persons with mental retardation was not a violation of the Eighth Amendment. However, in 2002, in Atkins v. Virginia (536 U.S. 304), the Court held that a national consensus had evolved against the execution of the mentally retarded and concluded that such a punishment violates the Eighth Amendment's ban on cruel and unusual punishment. Race became the focus of the criminal justice debate when the Supreme Court held in Batson v. Kentucky (476 U.S. 79 (1986)) that a prosecutor who strikes a disproportionate number of citizens of the same race in selecting a jury is required to rebut the inference of discrimination by showing neutral reasons for the strikes. Race was again in the forefront when the Supreme Court decided the 1987 case McCleskey v. Kemp (481 U.S. 279). McCleskey argued that there was racial discrimination in the application of Georgia's death penalty, by presenting a statistical analysis showing a pattern of racial disparities in death sentences based on the race of the victim. The Supreme Court held, however, that racial disparities would not be recognized as a constitutional violation of "equal protection of the law" unless intentional racial discrimination against the defendant could be shown. In the late 1980s, the Supreme Court decided three cases regarding the constitutionality of executing juvenile offenders. In 1988, in Thompson v. Oklahoma (487 U.S. 815), four Justices held that the execution of offenders aged fifteen and younger at the time of their crimes was unconstitutional. The fifth vote was Justice O'Connor's concurrence, which restricted Thompson only to states without a specific minimum age limit in their death penalty statute. The combined effect of the opinions by the four Justices and Justice O'Connor in Thompson is that no state without a minimum age in its death penalty statute can execute someone who was under sixteen at the time of the crime. The following year, the Supreme Court held that the Eighth Amendment does not prohibit the death penalty for crimes committed at age sixteen or seventeen (Stanford v. Kentucky and Wilkins v. Missouri, collectively 492 U.S. 361). At present, 19 states with the death penalty bar the execution of anyone under 18 at the time of his or her crime. In 1992, the United States ratified the International Covenant on Civil and Political Rights. Article 6(5) of this international human rights doctrine requires that the death penalty not be used on those who committed their crimes when they were below the age of 18. In doing so, however, the United States reserved the right to execute juvenile offenders. The United States is the only country with an outstanding reservation to this Article.
International reaction has been highly critical of this reservation, and ten countries have filed formal objections to it. In the fall of 2004, in Roper v. Simmons, the United States Supreme Court will revisit its decision in Stanford and may declare the practice of executing defendants whose crimes were committed as juveniles unconstitutional.
http://off2dr.com/modules/cjaycontent/index.php?id=15
1. Chanukah living museum: Break down the different parts of the Chanukah story (e.g. life for the Jews in Judea, ascendancy of Antiochus and his new laws, dilemmas for Jews, Judah organizes, Maccabees fight, cleaning of the Temple, Temple rededication). Assign each group one part. Each group should use costumes and props to act out their part of the story. Invite parents to visit the classrooms in order and see a re-enactment of the Chanukah story. This activity can be done in one classroom, class-wide or school-wide. 2. Study and debate: Study Talmud, Shabbat 21b. Divide your high school class into two groups (Beit Shamai and Beit Hillel) and debate how to light Chanukah candles. 3. Olive oil: Learn all about olive oil. Research how and where olive oil is made. 4. School-wide chanukiah competition: Divide students into groups to create original chanukiot. Let their imaginations run wild. Award points for creativity, originality of theme, workmanship, etc. 5. Chanukah today: Discuss the "assimilation versus tradition" message of Chanukah. Are Jews culturally assimilated today? What makes Jews distinct from the larger population? To what are your students loyal? What lifestyle would they defend? 6. Chanukah serenade: Visit different classes and sing Chanukah classics for schoolmates. Alternatively, the students can rewrite the lyrics of popular songs to retell the story of Chanukah. 7. Chanukah around the world: Research the differences in Chanukah celebrations in Ashkenazic and Sephardic households. How is Chanukah perceived in Israel? How is it perceived in the rest of the world? 8. Poetry: Ask the students to write a poem with Chanukah themes of light vs. dark, freedom vs. oppression, etc. Highlight the poems at a Chanukah party. 9. ABC Chanukah game: Go around the room, with each student saying a word that is connected with Chanukah in alphabetical order (i.e. applesauce, blessings, chanukiah, dreidel, etc.) . 10. Chanukah hero project: Students choose a Chanukah hero (e.g. Judah the Maccabee, Chana and her seven sons, Yehudit, etc.) or a contemporary hero who overcame overwhelming odds (e.g. Natan Sharansky) and ask them to research the hero and prepare a powerpoint presentation to share with the class. 11. Chanukah concert: Offer a Chanukah concert to a local nursing home. Don’t forget to light candles with the residents. 12. Green Chanukah: Organize a green Chanukah fair where participants can make beeswax candles, create chanukiot, dreidels and decorations from recycled materials, learn about Israeli geography including Modi’in where the Chanukah story began, visit an oil press demonstration and exhibits about green initiatives in Israel. 13. Stage a debate: Divide your high school class into two or three groups (Hellenist Jews, Traditional Jews, optional: Syrian-Greeks). Debate the wisdom of each ideology. 14. Winter holidays: Discuss the differences and similarities between Chanukah and Christmas. What is the significance of each holiday? What are their themes? What are the rituals surrounding each holiday? Are they major holidays of their respective religion? What other holidays does each religion have? 15. School-wide dreidel contest: At a school-wide assembly, staff and students should compete in a dreidel contest using either the traditional rules or the rules of the Major League Dreidel. 16. Chanukah food fry: Ask your class to think of possible fried foods. Collect recipes, choose a few to make in the school kitchen. Make sure that sufganiot and latkes feature prominently! 
17. Dreidel/chanukiah collector: Invite a dreidel or chanukiah collector from your area to tell you about their collection. 18. Chanukah Geography: Learn about the battles of the Maccabees. Where were they? What kind of terrain did they fight on? How did this help in the battles with the Syrians? See battle maps here. 19. Literary adventure: Read the Book of the Maccabees. You may wish to use Philip Goodman's Hanukkah Anthology for selections. 20. Chanukah charades/mime contest: Make a list of Chanukah scenes or stories (e.g. finding the oil in the Temple, Chana and her seven sons, etc.), write each one on a separate card, then divide your class into groups and play charades or run a miming contest. 21. Chanukah plays: Dramatize the story of Chanukah. Some freely available plays are listed here. Other plays are Chanukah-Behind the Scenes by Stan J. Beiner, Miracles Aren’t Just Magic by Debbie Friedman, and several Chanukah musicals by Meredith Shaw Patera. 22. Food fair: Have students bring in a traditional Chanukah food that is made with oil. 23. Math/Physics Chanukah: Learn counting and dividing with young students. Study probability in high school Mathematics. Ask the students how many candles are needed for the entire 8 days, if every day one candle is added on. Invite the physics teacher to elaborate on dreidel spinning. (A short calculation and simulation sketch appears after this list.) 24. Chanukah PR Project: Sometimes Chanukah gets a bad rap compared to Christmas. Have the students create brochures/video advertisements for Chanukah explaining what it is, why it is unique, and how the holiday is celebrated. 25. Emotion-walk for Chanukah: Students brainstorm about how Jews felt at various times during the Chanukah story (e.g. joy, sadness, disgust, fear, anger, anticipation, optimism, submission, relief, rage, etc.) and “walk” the part by adjusting the timing, weight, direction, tension and focus of their walk. Students have to guess what emotion their classmates are portraying. 26. Chanukah puppet theatre: Ask young students to act out stories about Chanukah, using either stuffed animal puppets, sock puppets or inanimate objects (e.g. a chanukiah with a “face” glued on). 27. Chanukah teamwork: Divide students into groups. Make a human chanukiah. Dress up one team member as a dreidel. Play dreidel relay races. 28. Stitching Chanukah: Young students cut two pieces of felt/fabric in the shape of a latke, stuff it and stitch! See the tutorial here. Submitted by Meira Josephy. 29. Festive fires: Invite firefighters from your local station to teach about fire safety in general and about menorah safety in particular. Submitted by Carla Porath. 30. Menorah history: Research the history of the menorah using the Internet. What does it symbolize? When has the menorah been used in history (hint: the Bar Kokhba revolt, the State of Israel)? Collect menorah images and use them to make a huge collage in the shape of a menorah to hang in the school hallway. Submitted by D. Weinberg. 31. Chanukah Learn-a-thon: The Maccabees fought to protect our way of life. Organize a Chanukah lighting ceremony in your school and learn Torah in small groups (parents-children) while the candles burn! Teachers move around the room to help groups. Submitted by Ezra. 32. Chanukah Dance: For those kids who need to get up and move! Choreograph a Chanukah dance of the Maccabee battles, complete with costumes and props. Perform at the annual Chanukah party! Submitted by Hannah Serkin. 33. Chanukah Lights: Sing songs about lights (e.g. This Little Light of Mine, Ner Li, etc.)
and brainstorm how we bring light to the world by doing caring acts. Submitted by Yavneh Academy. 34. Chanukah Cookies: With three and four year olds, make cookies in the shape of menorahs, pots of oil, stars and Judah the Maccabee and retell the story of the holiday. Builds motor skills and makes use of many of the five senses (smell, taste and feel). From BJE of Metropolitan Chicago's Early Childhood Centers. 35. Chanukah Wordle: Using Wordle, create word clouds in Hebrew or in English with Chanukah vocabulary. These make wonderful Chanukah decorations! Inspired by Educational Technology in ELT blog. 36. Chanukah Giving: Raise money in school to buy Chanukah presents for impoverished Jewish children. Use this as an opportunity to teach that Jewish poverty does exist. Submitted by Chavi Berger. 37. Burning Questions: The Partnership for Jewish Life and Learning is launching a web-based project called Burning Questions - daily Hanukkah emails with Jewish wisdom about the holiday and its many meanings, and "burning questions" for reflection and discussion. To see a list of the questions go here, and to sign up to get them in your email go here. Submitted by Miriam Stein, PJLL. 38. Chanukah Media: Watch the G-dcast video of the holiday with your class, using the teacher's guide posted here. Submitted by Sarah Lefton, G-dcast. 39. Chanukah Treasure Hunt: Young students listen to the story of Chanukah and then hunt for chocolate Chanukah gelt, "hidden" in Chanukah-related places (for example, next to the menorah). Submitted by HM. 40. Chanukah Book-off Competition: Students read Chanukah-related books over the holiday (parents initial next to book name). Student who reads the most in the class wins a gift certificate to bookstore. School librarians can provide book lists. 41. Chanukah Archeology Hunt: In a sandbox, bury Chanukah artifacts (e.g. oil flask, candles, etc). Children dig to find the artifacts, but have to explain their meaning before redeeming them for chanukah gelt! Submitted by Jennie Fisher. 42. Chanukah Freedom Discussion: The Maccabees revolted to fight for religious freedom. When have there been similar rebellions/revolutions? What happened? Is there religious freedom in the world now? Where? Where is there little/no religious freedom? 43. Chanukah Alef-Bet: Write a list of Chanukah words in Hebrew. Students have to put the words in the order of the Alef-Bet. 44. Olive Oil Scavenger Hunt: This activity is good for students of all ages/levels. Take 10 cue cards. On each card, write a question about Chanukah and hide the card around the classroom/school. Divide the students into groups. Present the groups with the first card. If they get the answer right, the teacher rewards them with a clue so they can find the next cue card. After the final card, they will receive a final clue, which will point them in the direction of a hidden flask of olive oil. Once the olive oil is found, light the menorah with the students. Submitted by Miriam Levy. 45. The Light of Chanukah: Chanukah is a holiday celebrating the end of religious oppression. To bring this lesson home to the students, research a case of current religious oppression (sadly, many instances exist) and organize a campaign to educate about it and fight it (e.g. letters to congressmen, editorials). Submitted by Marcus P. 46. Chanukah Heroes: Class discussion on the Jewish heroes, Macabbees. Examine the concept of Jewish heroism, focusing on modern day examples. What are a Jewish heroes' characteristics? 
What Jewish values do they embody? 47. Hanukka Dramatic Cleanup: I did this to "kick off" our discussion about Hanukka with my 2nd graders, but I think 3rd and 4th graders could appreciate it. Re-create the defiled Temple. Students are not told what to expect. As they wait outside the classroom, hand out things the Maccabees might have carried or worn (head scarves, shields, etc.). Let students into the room, which has been purposely (but carefully) "messed up": turn desks over and scatter (non-essential) school supplies around. Place something to represent an "oil lamp" in the room (I used a metal gravy boat which I hung in front of my blackboard) and put an empty bottle of (olive cooking) oil in it. Pre-set a duplicate hidden bottle of oil in the room. Children pretend they are the Jews / Maccabees re-entering their Temple and clean it up. Then they may search for the (hidden) oil to fill the "lamp". Submitted by Sue Keitelman. 48. Chanukah in the Media: Ask your middle school students to write newspaper headlines/breaking news alerts announcing the Chanukah events (e.g. Greeks Take the Temple!). Students might have more fun writing about how they would instant message/tweet the narrative. Submitted by Daniel Swirski. 49. Chanukah Media Skills: The "Big6" approach is used to teach fundamental information and technology skills to students. Complete this Big Six Chanukah Activity and decorate the classroom with the Chanukah posters. Submitted by Reuven Werber. 50. Dreidel Math: Bring in 100 dreidels to an early childhood class. Activities: sort by color, count dreidels (which color has more, which has less), create patterns (simple: red, green, blue, red, green, blue; more difficult: tip up, tip down). From CELC Room Daled blog. 51. Floating/Sinking Chanukah: For early childhood/elementary. Bring in a large tub and fill it with water. Bring in different objects: cork, paper, fabric, clay. Have the children predict what will sink or float and let them experiment to test their hypotheses. Encourage the students to explain why the object did what it did. Now bring in sample liquids: oil, regular soda, diet soda, and experiment with those. Discuss with children. 52. Chanukah Story Retold: For middle school and up. Using their choice of media, a class is tasked with retelling the story of Chanukah. Students are in charge of all aspects of production, including selecting a medium, brainstorming, writing, filming, programming, editing, etc. Do they want to create a documentary film? What about a history focusing on symbols? How about a cartoon version? The teacher's role is to guide them, providing direction, resources, and advice. To add to the list, click here. For other Chanukah ideas, go here. (c) The Lookstein Center.
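For activity 23 above, a short sketch like the following (in Python, offered here only as one possible classroom tool) can anchor both the candle arithmetic and the probability discussion. It assumes the standard fair four-sided dreidel and counts the shamash as one extra candle per night.

```python
import random

# Candle arithmetic: one candle on the first night, two on the second, ... eight on the last.
candles = sum(range(1, 9))            # 1 + 2 + ... + 8 = 36
candles_with_shamash = candles + 8    # one shamash (helper candle) each night = 44
print("Without the shamash:", candles)
print("With the shamash:   ", candles_with_shamash)

# Probability warm-up: a fair dreidel lands on each of its four faces about a quarter of the time.
faces = ["nun", "gimel", "hey", "shin"]
spins = [random.choice(faces) for _ in range(10000)]
for face in faces:
    print(face, round(spins.count(face) / len(spins), 3))   # each close to 0.25
```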
http://www.lookstein.org/resources/chanukah_activities.htm
Anemia (uh-NEE-me-eh) is a condition in which your blood has a lower than normal number of red blood cells. This condition also can occur if your red blood cells don’t contain enough hemoglobin (HEE-muh-glow-bin). Hemoglobin is an iron-rich protein that gives blood its red color. This protein helps red blood cells carry oxygen from the lungs to the rest of the body. If you have anemia, your body doesn’t get enough oxygen-rich blood. As a result, you may feel tired and have other symptoms. With severe or long-lasting anemia, the lack of oxygen in the blood can damage the heart, brain, and other organs of the body. Very severe anemia may even cause death. Red blood cells are disc-shaped and look like doughnuts without holes in the center. They carry oxygen and remove carbon dioxide (a waste product) from your body. These cells are made in the bone marrow—a sponge-like tissue inside the bones. Red blood cells live for about 120 days in the bloodstream and then die. White blood cells and platelets (PLATE-lets) also are made in the bone marrow. White blood cells help fight infection. Platelets stick together to seal small cuts or breaks on the blood vessel walls and stop bleeding. With some types of anemia, you may have low numbers of all three types of blood cells. Anemia has three main causes: blood loss, lack of red blood cell production, or high rates of red blood cell destruction. These causes may be due to a number of diseases, conditions, or other factors. Many types of anemia can be mild, short term, and easily treated. Some types can even be prevented with a healthy diet. Other types can be treated with dietary supplements. However, certain types of anemia may be severe, long lasting, and life threatening if not diagnosed and treated. If you have signs and symptoms of anemia, you should see your doctor to find out whether you have the condition. Treatment will depend on what has caused the anemia and how severe it is. What Causes Anemia? The three main causes of anemia are: - Blood loss - Lack of red blood cell production - High rates of red blood cell destruction Some people have anemia due to more than one of these factors. Blood loss is the most common cause of anemia, especially iron-deficiency anemia. Blood loss can be short term or persist over time. Heavy menstrual periods or bleeding in the digestive or urinary tract can cause blood loss. Surgery, trauma, or cancer also can cause blood loss. If a lot of blood is lost, the body may lose enough red blood cells to cause anemia. Lack of Red Blood Cell Production Both acquired and inherited conditions and factors can prevent your body from making enough red blood cells. “Acquired” means you aren’t born with the condition, but you develop it. “Inherited” means your parents passed the gene for the condition on to you. Examples of acquired conditions and factors that can prevent your body from making enough red blood cells include diet, hormones, some chronic (ongoing) diseases, and pregnancy. Aplastic anemia also can prevent your body from making enough red blood cells. This condition can be acquired or inherited. A diet that lacks iron, folic acid (folate), or vitamin B12 can prevent your body from making enough red blood cells. Your body also needs small amounts of vitamin C, riboflavin, and copper to make red blood cells. Conditions that make it hard for your body to absorb nutrients also can cause your body to make too few red blood cells. Your body needs the hormone erythropoietin (eh-rith-ro-POY-eh-tin) to make red blood cells. 
This hormone stimulates the bone marrow to make these cells. A low level of this hormone can lead to anemia. Diseases and Disease Treatments Chronic (long-term) diseases, like kidney disease and cancer, can make it hard for the body to make enough red blood cells. Some cancer treatments may damage the bone marrow or damage the red blood cells’ ability to carry oxygen. If the bone marrow is damaged, it can’t make red blood cells fast enough to replace the ones that died or were destroyed. People who have HIV/AIDS may develop anemia due to infections or medicines used to treat their diseases. Anemia can occur during pregnancy due to low levels of iron and folic acid (folate) and changes in the blood. During the first 6 months of pregnancy, the fluid portion of a woman’s blood (the plasma) increases faster than the number of red blood cells. This dilutes the blood and can lead to anemia. Some infants are born without the ability to make enough red blood cells. This condition is called aplastic anemia. Infants and children who have aplastic anemia often need blood transfusions to increase the number of red blood cells in their blood. Acquired conditions or factors, such as certain medicines, toxins, and infectious diseases, also can cause aplastic anemia. High Rates of Red Blood Cell Destruction Both acquired and inherited conditions and factors can cause your body to destroy too many red blood cells. One example of an acquired condition that can cause your body to destroy too many red blood cells is an enlarged or diseased spleen. The spleen is an organ that removes worn-out red blood cells from the body. If the spleen is enlarged or diseased, it may remove more red blood cells than normal, causing anemia. Examples of inherited conditions that can cause your body to destroy too many red blood cells include sickle cell anemia, thalassemias, and lack of certain enzymes. These conditions create defects in the red blood cells that cause them to die faster than healthy red blood cells. Hemolytic anemia is another example of a condition in which your body destroys too many red blood cells. Inherited conditions can cause this type of anemia. Acquired conditions or factors, such as immune disorders, infections, certain medicines, or reactions to blood transfusions, also can cause hemolytic anemia. Who Is At Risk for Anemia? Anemia is a common condition. It occurs in all age groups and all racial and ethnic groups. Both men and women can have anemia, but women of childbearing age are at higher risk for the condition. This is because women in this age range lose blood from menstruation. Anemia can develop during pregnancy due to low levels of iron and folic acid (folate) and changes in the blood. During the first 6 months of pregnancy, the fluid portion of a woman’s blood (the plasma) increases faster than the number of red blood cells. This dilutes the blood and can lead to anemia. Infants younger than 2 years old also are at risk for anemia. This is because they may not get enough iron in their diets, especially if they drink a lot of cow's milk. Cow's milk is low in the iron needed for growth. Drinking too much cow’s milk may keep an infant or toddler from eating enough iron-rich foods. It also may keep his or her body from absorbing iron from iron-rich food. Researchers continue to study how anemia affects older adults. More than 10 percent of older adults have mild forms of anemia. Many of these people have other medical conditions as well. 
Major Risk Factors Factors that raise your risk for anemia include: - A diet that is low in iron, vitamins, or minerals - Blood loss from surgery or an injury - Long-term or serious illnesses, such as kidney disease, cancer, diabetes, rheumatoid arthritis, HIV/AIDS, inflammatory bowel disease (including Crohn’s disease), liver disease, heart failure, and thyroid disease - Long-term infections - A family history of inherited anemia, such as sickle cell anemia or thalassemias What Are the Signs and Symptoms of Anemia? The most common symptom of anemia is fatigue (feeling tired or weak). If you have anemia, it may seem hard to find the energy to do normal activities. Other signs and symptoms of anemia include: - Shortness of breath - Coldness in the hands and feet - Pale skin - Chest pain These signs and symptoms can occur because your heart has to work harder to pump more oxygen-rich blood through your body. Mild to moderate anemia may cause very mild symptoms or none at all. Complications of Anemia Some people who have anemia may have arrhythmias (ah-RITH-me-ahs). An arrhythmia is a problem with the rate or rhythm of the heartbeat. Over time, arrhythmias can damage your heart and possibly lead to heart failure. Anemia also can damage other organs in your body because your blood can’t get enough oxygen to them. Anemia can weaken people who have cancer or HIV/AIDS. This can make their treatments not work as well. Anemia also can cause many other medical problems. People who have kidney disease and anemia are more likely to have heart problems. In some types of anemia, too little fluid intake or too much loss of fluid in the blood and body can occur. Severe loss of fluid can be life threatening. How Is Anemia Diagnosed? Your doctor will diagnose anemia based on your medical and family histories, a physical exam, and results from tests and procedures. Because anemia doesn’t always cause symptoms, your doctor may find out you have it while checking for another condition. Medical and Family Histories Your doctor may ask whether you have any of the common signs or symptoms of anemia. He or she may ask whether you’ve had an illness or condition that could cause anemia. Your doctor also may ask about the medicines you take, your diet, and whether you have family members who have anemia or a history of it. Your doctor will do a physical exam to find out how severe your anemia is and to check for possible causes. He or she may: - Listen to your heart for a rapid or irregular heartbeat - Listen to your lungs for rapid or uneven breathing - Feel your abdomen to check the size of your liver and spleen Your doctor also may do a pelvic or rectal exam to check for common sources of blood loss. Diagnostic Tests and Procedures Your doctor may order various tests or procedures to find out what type of anemia you have and how severe it is. Complete Blood Count Often, the first test used to diagnose anemia is a complete blood count (CBC). The CBC measures many different parts of your blood. This test checks your hemoglobin and hematocrit (hee-MAT-oh-crit) levels. Hemoglobin is the iron-rich protein in red blood cells that carries oxygen to the body. Hematocrit is a measure of how much space red blood cells take up in your blood. A low level of hemoglobin or hematocrit is a sign of anemia. The normal range of these levels may be lower in certain racial and ethnic populations. Your doctor can explain your test results to you. 
The CBC also checks the number of red blood cells, white blood cells, and platelets in your blood. Abnormal results may be a sign of anemia, a blood disorder, an infection, or another condition. Finally, the CBC looks at mean corpuscular (kor-PUS-kyu-lar) volume (MCV). MCV is a measure of the average size of your red blood cells and a clue as to the cause of your anemia. In iron-deficiency anemia, for example, red blood cells usually are smaller than normal. Other Tests and Procedures If the CBC results show that you have anemia, you may need other tests such as: - Hemoglobin electrophoresis (e-lek-tro-FOR-e-sis). This test looks at the different types of hemoglobin in your blood. It can help diagnose the type of anemia you have. - A reticulocyte (re-TIK-u-lo-site) count. This test measures the number of young red blood cells in your blood. The test shows whether your bone marrow is making red blood cells at the correct rate. - Tests for the level of iron in your blood and body. These include serum iron and serum ferritin tests. Transferrin level and total iron-binding capacity also test iron levels. Because anemia has many causes, you also may be tested for conditions such as kidney failure, lead poisoning (in children), and vitamin deficiencies (lack of vitamins, such as B12 and folic acid). If your doctor thinks that you have anemia due to internal bleeding, he or she may suggest several tests to look for the source of the bleeding. A test to check the stool for blood may be done in your doctor’s office or at home. Your doctor can give you a kit to help you get a sample at home. He or she will tell you to bring the sample back to the office or send it to a lab. If blood is found in the stool, other tests may be used to find the source of the bleeding. One such test is endoscopy (en-DOS-ko-pe). For this test, a tube with a tiny camera is used to view the lining of the digestive tract. Your doctor also may want to do bone marrow tests. These tests show whether your bone marrow is healthy and making enough blood cells. How Is Anemia Treated? Treatment for anemia depends on the type, cause, and severity of the condition. Treatments may include dietary changes or supplements, medicines, or procedures. Goals of Treatment The goal of treatment is to increase the amount of oxygen that your blood can carry. This is done by raising the red blood cell count and/or hemoglobin level. Another goal is to treat the underlying condition or cause of the anemia. Dietary Changes and Supplements Low levels of vitamins or iron in the body can cause some types of anemia. These low levels may be due to poor diet or certain diseases or conditions. To raise your vitamin or iron levels, your doctor may ask you to change your diet or take vitamin or iron supplements. Common vitamin supplements are vitamin B12 and folic acid (folate). Vitamin C is sometimes given to help the body absorb iron. Your body needs iron to make hemoglobin. Your body can more easily absorb iron from meats than from vegetables or other foods. To treat your anemia, your doctor may suggest eating more meat—especially red meat, such as beef or liver—as well as chicken, turkey, pork, fish, and shellfish. Nonmeat foods that are good sources of iron include: - Spinach and other dark green leafy vegetables - Peanuts, peanut butter, and almonds - Peas; lentils; and white, red, and baked beans - Dried fruits, such as raisins, apricots, and peaches - Prune juice Iron is added to some foods, such as cereal, bread, and pasta. 
You can look at the Nutrition Facts label on a food to find out how much iron it contains. The amount is given as a percentage of the total amount of iron you need every day. Iron can be given as a mineral supplement. It’s usually combined with multivitamins and other minerals that help your body absorb iron. Low levels of vitamin B12 can lead to pernicious anemia. This type of anemia is often treated with vitamin B12 supplements. Good food sources of vitamin B12 include: - Breakfast cereals with added vitamin B12 - Meats such as beef, liver, poultry, fish, and shellfish - Egg and dairy products (such as milk, yogurt, and cheese) Folic acid (folate) is a form of vitamin B that’s found in foods. Your body needs folic acid to make and maintain new cells. Folic acid also is very important for pregnant women. It helps them avoid anemia and promotes healthy growth of the fetus. Good sources of folic acid include: - Bread, pasta, and rice with added folic acid - Spinach and other dark green leafy vegetables - Black-eyed peas and dried beans - Beef liver - Bananas, oranges, orange juice, and some other fruits and juices Vitamin C helps the body absorb iron. Good sources of vitamin C are vegetables and fruits, especially citrus fruits. Citrus fruits include oranges, grapefruits, tangerines, and similar fruits. Fresh and frozen fruits, vegetables, and juices usually have more vitamin C than canned ones. If you’re taking medicines, ask your doctor or pharmacist whether you can eat grapefruit or drink grapefruit juice. This fruit can affect the strength of a few medicines and how well they work. Other fruits rich in vitamin C include kiwi fruit, mangos, apricots, strawberries, cantaloupes, and watermelons. Vegetables rich in vitamin C include broccoli, peppers, tomatoes, cabbage, potatoes, and leafy green vegetables like romaine lettuce, turnip greens, and spinach. Your doctor may prescribe medicines to increase the number of red blood cells your body makes or to treat an underlying cause of anemia. Some of these medicines include: - Antibiotics to treat infections. - Hormones to treat adult and teenaged women who have heavy menstrual bleeding. - A man-made version of erythropoietin to stimulate your body to make more red blood cells. This hormone has some risks. You and your doctor will decide whether the benefits of this treatment outweigh the risks. - Medicines to prevent the body’s immune system from destroying its own red blood cells. - Chelation (ke-LAY-shun) therapy for lead poisoning. Chelation therapy is used mainly in children. This is because children who have iron-deficiency anemia are at increased risk for lead poisoning. If your anemia is severe, you may need a medical procedure to treat it. Procedures include blood transfusions and blood and marrow stem cells transplants. A blood transfusion is a safe, common procedure in which blood is given to you through an intravenous (IV) line in one of your blood vessels. Transfusions require careful matching of donated blood with the recipient’s blood. For more information, see the Diseases and Conditions Index Blood Transfusion article. Blood and Marrow Stem Cell Transplant A blood and marrow stem cell transplant replaces your abnormal or faulty stem cells with healthy ones from another person (a donor). Stem cells are found in the bone marrow. They develop into red and white blood cells and platelets. During the transplant, which is like a blood transfusion, you get donated stem cells through a tube placed in a vein in your chest. 
Once the stem cells are in your body, they travel to your bone marrow and begin making new blood cells. For more information, see the Diseases and Conditions Index Blood and Marrow Stem Cell Transplant article. If you have serious or life-threatening bleeding that’s causing anemia, you may need surgery. For example, you may need surgery to control ongoing bleeding due to a stomach ulcer or colon cancer. If your body is destroying red blood cells at a high rate, you may need to have your spleen removed. The spleen is an organ that removes worn-out red blood cells from the body. An enlarged or diseased spleen may remove more red blood cells than normal, causing anemia. How Can Anemia Be Prevented? You may be able to prevent repeat episodes of some types of anemia, especially those caused by lack of iron or vitamins. Dietary changes or supplements can prevent these types of anemia from occurring again. Treating the condition’s underlying cause may prevent anemia (or prevent repeat episodes). For example, if your doctor finds out that a medicine is causing your anemia, talk to him or her about other medicine options. To prevent your anemia from becoming more severe, tell your doctor about all of your signs and symptoms. Discuss the tests you may need with your doctor and follow your treatment plan. You can’t prevent some types of inherited anemia, such as sickle cell anemia. If you have an inherited anemia, talk to your doctor about treatment and ongoing care. Living With Anemia Often, you can treat and control anemia. If you have signs and symptoms of this condition, seek prompt diagnosis and treatment. Treatment may give you a greater energy and activity level, improve your quality of life, and help you live longer. With proper treatment, many types of anemia are mild and short term. However, anemia can be severe, long lasting, or even fatal when it’s due to an inherited disease, chronic disease, or trauma. Anemia and Children/Teens Infants and young children have a greater need for iron because of their rapid growth. Not enough iron can lead to anemia. Preterm and low-birth-weight babies are often watched closely for anemia. Most of the iron your child needs comes from food. Talk to your child’s doctor about a healthy diet and good sources of iron, vitamins B12 and C, and folic acid (folate). Only give your child iron supplements if the doctor prescribes them. You should carefully follow instructions on how to give your child these supplements. If your child has anemia, his or her doctor may ask whether the child has been exposed to lead. Lead poisoning in children has been linked to iron-deficiency anemia. Teenagers are at risk for anemia, especially iron-deficiency anemia, because of their growth spurts. Routine screenings for anemia are often started in the teen years. Older children and teens who have certain types of severe anemia may be at higher risk for injuries or infections. Talk to your child’s doctor about whether your child needs to avoid high-risk activities, such as contact sports.
http://www.theoncologyinstitute.com/?pgid=459&parid=457&rid=457
Excel - Using macros A macro command is a series of instructions that are always executed one after the other in the same order. Macros are very practical for automating repetitive tasks. The exercise that follows will demonstrate how to create a macro command. The next macro simply changes the background color of the selected cells. It contains only that single command, but after you've finished you will be able to create your own "macros" and insert as many instructions as you need. Write the following numbers in the appropriate cells. You must place the cursor in the starting cell before you can begin your "macro". You'll see why later on. From the Tools menu, select the Macro option and then Record New Macro. A new window will open asking you for some information about the new macro. The first box asks for the name that you want to give to this macro. You can also put a letter or a number in the shortcut box. You'll then be able to execute the macro by pressing the Ctrl and a keys. You can place any letter or number that you want there; the shortcut key is not compulsory. The window also asks if you want to store the macro in this file or in another worksheet file. It's possible to reuse macro commands stored in a personal macro file: the same macro can then be used for several files. But this is only for those who are really serious about using macros. For the exercise: give a name to your macro. It should represent the actions that will be done, such as "Printing_the_budget". The name of the macro cannot have any spaces; an underline can be used to link words. It's also possible to have a shortcut key to activate a macro command. This saves you from having to go into the Tools menu, followed by Macro and Macros, select the macro of your choice and press the Execute button. Enter the data as shown in the picture above. All the actions that you are going to make will be added to the macro command until you stop the recording. As soon as you press the OK button, the window disappears and a small toolbar appears in its place. This small toolbar has only two buttons. The first one is to stop the macro from recording. The second is to activate or deactivate the relative reference option. This can be important according to the type of macro that you want to carry out. Do not activate this option if you want the macro to do its work at always the same location. Activate this option if you want the macro to start from wherever the cursor is located. There will be more details on this option a little later on this page. Press the second button to be sure that the relative reference option is activated (it's very important for this demonstration). Make a block out of the A1 to C1 cells. Press the Fill Color button to change the cells' background color to the color of your choice. Press the first button of the macro toolbar to stop the macro from recording. The new macro command is now complete. It could have had a lot more instructions than this example, but this is only to demonstrate what a macro can do. It's time to see if you can repeat it. Place the cursor in the A3 cell and execute the macro with the shortcut key you chose. Here is the result of the macro: the cells of the third row now have the same background color as the one you chose for the cells of the first row. If it didn't work, there is an explanation: you forgot to activate the relative reference option when asked. In that case Excel repeats the macro at the same location instead of beginning where the cursor is located.
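The tutorial's screenshots are not reproduced here, but the recorded macro, as it appears in the Visual Basic editor, looks roughly like the sketch below. The macro name, the comments and the colour number are only examples; your recording will show whatever name and colour you chose. The ActiveCell.Range("A1:C1") form is what the relative reference option produces; with the option off, the recorder writes a fixed Range("A1:C1") instead and the macro always recolours the first row.

Sub Colour_Cells()
    ' Recorded with relative references: work on a block of one row
    ' and three columns starting at the active cell.
    ActiveCell.Range("A1:C1").Select
    ' Give the selection a solid background colour.
    With Selection.Interior
        .ColorIndex = 36    ' example colour; yours may differ
        .Pattern = xlSolid
    End With
End Sub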
It's for that reason that you were asked to move the cursor to the A3 cell and to activate the relative reference button. Because the relative reference option was activated, you can execute the macro at another place rather than where it was created. You just need to put the cursor where you need it and then activate the macro. If the relative option is not activated, the macro will always repeat itself at the same location. That can also be practical for your needs; it depends on the situation. You decide when you should activate the relative reference option or not. It's practical when you know that you'll want to apply the macro to another place in your file. For the last exercise, it was necessary to be able to apply it to the A3 to C3 cells. A 1004 error message can also appear. Generally, it's because you forgot to stop the macro command from recording. The macro is then caught in an endless loop: the macro is recalled before it's even finished! It will be necessary to change the macro. From the Tools menu, select the Macro option and then Macros, choose the macro and press the Edit button. The Visual Basic editor will appear with the code of the macro command that you want to change. The macro indicates that three cells were selected (A1 to C1). It then indicates the color as well as the pattern of the background of the selected cells. It's possible to print the code while changing the macro command. From the File menu, select the Print option. To return to Excel, from the File menu select the Close and Return to Microsoft Excel option. It's sometimes very interesting to be able to execute macro commands by just pressing a button, even more so if you pass your file on to another person. They probably don't know all the macro commands that you created to work faster. The next exercise consists of attaching a macro command to a button. IMPORTANT: You must already have created the macro that you will need before attaching it to a button. From the View menu, select Toolbars and then the Forms toolbar, and use its Button tool to draw a button on the worksheet. Excel will automatically ask you for the name of the macro that you want to attach to the newly created button. Select the macro of your choice. While the button is still selected, you can change the text on the button to whatever you need it to be. To execute the macro that's now connected to the button, place the cursor in the A5 cell and press the button. The button now executes the macro. You can apply any macro that you make to a button. It's that easy! If you wish to change the options of the button, place the cursor over the button and press the right mouse button. A context menu will appear next to the button. If you want to affect the presentation of this macro button, select the Format Control option. The properties window of the button will allow you to change all the options of your choice. All the possible options will be found under these seven tabs: Font, Alignment, Size, Protection, Properties, Margins and Web. Under the Font tab you will find all the options for the presentation of the text on the button. You can change the font, font style, size, color and effects. The Alignment tab allows you to change the placement of the text inside the button, including its orientation. The Size tab allows you to determine exactly the size of the button on the worksheet. As with the protection of cells, it's also possible to protect buttons under the Protection tab. By default, all the buttons are protected when protection is activated. You should leave a button protected unless you want the user to be able to change its properties for some reason.
The Properties tab allows you to decide if the button should change size and placement when you change the size and placement of the cells of the worksheet below it. You can also decide whether or not the button should move if you insert or delete rows and columns. By default, the button will not be printed unless you activate the Print object option. The Margins tab allows you to control the margin, or the space between the text on the button and its border. You can use the predetermined margins or change them to your choice. You can always save this worksheet as a Web page, so Excel offers you some Web properties under the Web tab. For buttons, it only allows you to enter some alternative text. It's interesting, even practical, to place a macro on a command button. It's easier for the users to use the options that you prepared for them. But these buttons lack originality. They're grey! That's why Excel also offers you the possibility of placing a macro on a drawing. With a bit of work, these drawings can have very interesting forms. Here are some examples. Before being able to attach a macro to a drawing, you need two things: a drawing and a macro. Let's presume that you already have both. The next part consists only of attaching a macro to a drawing. Place the cursor over the drawing, press the right mouse button and select the Assign Macro option from the context menu. If you can't select the Assign Macro option, click on the border of the drawing instead of the text inside it. You should then be able to assign the macro of your choice. You can repeat this operation on as many drawings as you want. It certainly puts a little fun in your file! Assigning a button to a toolbar can be done in two steps. Create a new toolbar: from the View menu, select the Toolbars and Customize options. Press the New button. Enter the name you wish to give to your new toolbar and press the OK button. An empty toolbar will appear on the screen. You can now drag any button that you wish onto that toolbar, including macros. Creating a button for the toolbar: now is the time to add buttons that you will be able to place on any toolbar. You just created a new toolbar that has no buttons in it. This next step will show you how to add buttons to it, or to any toolbar. Select the Commands tab, choose the Macros category and drag the Custom Button onto your new toolbar. The result is that you now have a button on your toolbar. You can drag other buttons onto it. Close the Customize toolbar window. The only thing left to do is to assign a macro to that new button. Place the cursor over the button. From the context menu, select the Assign Macro option. From the list of available macros, select the one you wish to assign to the button. Let's see if it works. Place the cursor in the A7 cell and press the new toolbar button. You can now make macros, apply them where you want them and even create toolbars! But it's not over. A default button picture was placed on the toolbar when you added a macro command to it. Excel offers you a list of pictures you can use to better represent what your macro command will do. Place the pointer over the new toolbar button. From the context menu, select the Change Button Image option and pick a picture from the list. The picture will be replaced. You can change it again if you wish. The number of pictures you can select from is limited and may not answer your needs, so you can change the pictures or create your own. Place the pointer over the new toolbar button and, from the context menu, select the Edit Button Image option. The size of the picture and the number of colors is limited, but not your imagination. You can try any form, put a shade on it, anything you can think of! Once you have created the picture you wish, press the OK button. You can now create your own macros and toolbars.
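For readers who are comfortable in the Visual Basic editor, the same toolbar-and-button setup can also be done in code. The sketch below is one possible way, not the tutorial's own method; the toolbar name ("My Macros"), the macro name (Colour_Cells, the example recorded earlier) and the FaceId number are only illustrative.

Sub Add_Macro_Toolbar()
    Dim bar As CommandBar
    Dim btn As CommandBarButton
    ' Remove any earlier copy of the toolbar, then create a fresh, temporary one.
    On Error Resume Next
    Application.CommandBars("My Macros").Delete
    On Error GoTo 0
    Set bar = Application.CommandBars.Add(Name:="My Macros", Temporary:=True)
    bar.Visible = True
    ' Add one button and connect it to an existing macro.
    Set btn = bar.Controls.Add(Type:=msoControlButton)
    btn.Style = msoButtonIconAndCaption
    btn.Caption = "Colour cells"
    btn.OnAction = "Colour_Cells"   ' name of the macro the button runs
    btn.FaceId = 59                 ' one of Excel's built-in button pictures
End Sub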
This will help you personalize Excel to better answer your needs and make you more efficient. That's what this site is all about!
http://ulearnoffice.com/excel/macro.htm
IDENTIFYING CHEMICAL REACTIONS: Matter can combine or break apart to produce new types of matter with very ________________________________. When this occurs, it is said that matter has undergone a chemical reaction. EVIDENCE FOR CHEMICAL REACTIONS: Studying chemistry can be just like solving a mystery. To determine whether or not a chemical reaction has occurred, one needs to look for observable clues. If any one of these four things occurs (and more than one can occur at the same time), a chemical reaction has occurred. WRITING A CHEMICAL EQUATION: Many chemical reactions are important principally for the ________________________________ they release. The popular fuel, propane, is used for cooking food and heating homes. Propane is one of a group of compounds called hydrocarbons. A hydrocarbon contains carbon and hydrogen atoms (some related fuels also contain oxygen within the molecule), and hydrocarbons combine with oxygen in chemical reactions to form carbon dioxide and water. The reaction that occurs when propane is burned can be represented by the following word equation: propane + oxygen → carbon dioxide + water (+ energy). A chemical equation is a statement that shows what happens in a reaction. All chemical reactions are composed of ________________________________ (chemicals that are present before the reaction) and ________________________________ (chemicals that are produced after the reaction takes place). For example: Na+1 + Cl-1 → NaCl. In this chemical equation, the Na+1 and the Cl-1 are the reactants and the NaCl is the product. The symbol between the Cl-1 and the NaCl, ( → ), is the YIELDS sign. The arrow points towards the products and shows the direction in which the reaction takes place. The symbols commonly used in chemical equations are:
- + : Read plus or and. Used between two formulas to indicate reactants combined or products formed.
- → : Read yields or produces. Used to separate reactants (on the left) from products (on the right). The arrow points in the direction of change (we will always point the arrow toward the RIGHT).
- (s) : Read solid. Written after a symbol or formula to indicate that the physical state of the substance is solid.
- (l) : Read liquid. Written after a symbol or formula to indicate that the physical state of the substance is liquid.
- (g) : Read gas. Written after a symbol or formula to indicate that the physical state of the substance is gaseous.
- (aq) : Read aqueous. Written after a symbol or formula to indicate that the substance is dissolved in water.
- ⇌ : A double arrow indicates that the reaction is reversible.
- NR : Read No Reaction. Indicates that the given reactants do not react with each other.
SYMBOLS THAT INDICATE THE STATE: Equations can also be written to indicate the physical state of the reactants and products. In fact, sometimes an equation cannot be fully understood unless this information is shown. The symbols (g), (l), (s), and (aq) indicate whether the substance is a ________________________________, a liquid, a ________________________________, or one dissolved in water. The list above shows these and the other conventions used in writing chemical equations. While the arrow in an equation shows the direction of change, it implies that reactions occur in only one direction. This is not always the case; under suitable conditions, many chemical reactions can be reversed. 2 H2 (g) + O2 (g) → 2 H2O (l) + Energy. You have already learned that water can be separated into its elements. That reaction can be written as 2 H2O (l) + Energy → 2 H2 (g) + O2 (g). Notice that the second equation is the reverse of the first. A chemical reaction that is reversible can be described by a single equation in which a double arrow shows that the reaction is possible in both directions.
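The single double-arrow equation that the text describes can be written like this (a balanced form of the water reaction above):

\[ 2\,\mathrm{H_2(g)} + \mathrm{O_2(g)} \;\rightleftharpoons\; 2\,\mathrm{H_2O(l)} + \text{energy} \]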
The equation says that hydrogen and oxygen combine to form water, releasing energy in the process. It also says that with the addition of energy to water, under suitable conditions, hydrogen and oxygen will be formed. In a chemical reaction, matter cannot be created ________________________________. For example in the reaction... There are 2 atoms of hydrogen and 2 atoms of chlorine on the reactants side, but only one atom of hydrogen and chlorine is on the products side. To fix this problem and to follow the rule that matter cannot be created nor destroyed, we must balance the equation. By balancing the equation, you must have equal numbers of each of the different atoms on both the reactants and the products sides. To balance equations, there are a few rules that must be followed. First, locate the most complex compound and start balancing each of the different atoms (saving oxygen and hydrogen for last). For example, 1. Start with NaCl. (This is the most complex compound because HCl has hydrogen in it and we save that for last.) 2. There is one atom of sodium on each side so move on to the chlorine. 3. There is one atom of chlorine on each side, so move on to the HCl. 4. There is one atom of hydrogen in HCl and two atoms of hydrogen on the reactants side, so put a 1/2 coefficient in front of H2, so there is only one atom of hydrogen on each side. 5. Since there can NOT be any fractions in the final answer - multiply all the coefficients by 2 as to eliminate the 1/2 coefficient. 6. Check for the lowest common coefficients. Another way of working this problem is to take inventory of each atom on the Reactants side and another inventory of each atom on the Products side of the yields sign. Given the following equation below: First, take "Inventory of each atom" by counting the number of atoms for each element on each side of the yields sign. Second, If you have an Even number of one-type of element on one side you must have an Even number of that element on the other side. Therefore, look for the even - odd combinations. Which in this case is oxygen. For now, the number of potassium atoms, and the number of chlorine atoms are equal - so we will not do anything to those molecules that contain potassium (K) or chlorine (Cl) at this time (in order to change potassium or chloride numbers). However, there are three Oxygen atoms in Potassium Chlorate (KClO3), and Two Oxygen atoms in oxygen gas (O2). In most cases, begin working with the "odd" number first to get your even number of atoms. So, place a two in front of the Potassium Chlorate molecule: And take "Inventory" again. Remember, that the Coefficient in front of the molecule means that there are now that many molecules. Example: ___2___ KClO3 = KClO3 + KClO3 (or two molecules of potassium chlorate). Therefore we must use the coefficient as a multiplier for each element - for example: 2 multiplied by O3 = Six oxygen: 2 x 3 oxygen = 6 oxygen. By placing the two in front of the potassium chlorate molecule - that gives us our Even number of Oxygen atoms (which we want) and that also changes the number of potassium atoms and chlorine atoms. So let us balance the potassium and chlorine atoms on each side of the yields sign - by placing a two in front of the potassium chloride molecule (KCl). And take "Inventory" again. After taking "Inventory" we notice that we need six oxygen atoms on the products side of the yield sign to balance the equation. So we place a three in front of the oxygen gas molecule. And take "Inventory" again. 
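Putting the steps just described together, the balanced equation and the final atom inventory are:

\[ 2\,\mathrm{KClO_3} \;\rightarrow\; 2\,\mathrm{KCl} + 3\,\mathrm{O_2} \]

Inventory on each side: K = 2, Cl = 2, O = 6.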
Now that all our atoms are equal on the reactants side as well as on the products side, we are finished. Our balanced equation now reads as shown above. Reading the equation: two molecules of potassium chlorate yield two molecules of potassium chloride and three molecules of oxygen gas. Remember, not all equations are this easy to balance and may require changing the numbers often, but a few short-cuts help: save oxygen and hydrogen for last to balance, and save any "free-standing" elements for last as well. Free-standing means those elements which appear by themselves, either as single atoms or as diatomic molecules (H2, N2, O2, F2, Cl2). Write and balance the equation for the burning, or combustion, of ethylene, C2H4. - Since ethylene is a hydrocarbon like propane, the equation for the combustion of propane can serve as a model. The skeleton equation for the combustion of ethylene is C2H4 + O2 → CO2 + H2O. - Since both carbon and hydrogen occur only once on each side of the arrow, you can begin with either element. If you start with hydrogen, the equation can be balanced by placing a 2 in front of the water on the right. Now the number of hydrogen atoms is balanced, but the number of carbon and oxygen atoms is not. Balance the carbon next. Notice that if you change the coefficient of C2H4, you also change the number of hydrogens. Now both the carbon and hydrogen are balanced, but there are six oxygen atoms indicated on the right and only two on the left. Placing a three in front of the oxygen in the equation will complete the process (the finished propane and ethylene equations are written out after this section). 1. Write and balance the equation for the reaction of sodium and water to produce sodium hydroxide and hydrogen gas. 2. Write and balance the equation for the formation of magnesium nitride from its elements. Write the balanced equations for each of the following reactions: 1a. Cu + H2O → CuO + H2 1b. Al(NO3)3 + NaOH → Al(OH)3 + NaNO3 1c. KNO3 → KNO2 + O2 1d. Fe + H2SO4 → Fe2(SO4)3 + H2 1e. O2 + CS2 → CO2 + SO2 1f. Mg + N2 → Mg3N2 1g. When copper(II) carbonate is heated, it forms copper(II) oxide and carbon dioxide gas. 1h. Sodium reacts with water to produce sodium hydroxide and hydrogen gas. 1i. Copper combines with sulfur to form copper(I) sulfide. 1j. Silver nitrate reacts with sulfuric acid to produce silver sulfate and nitric acid (HNO3). BALANCING COMBUSTION REACTIONS Combustion is an exothermic reaction in which a substance combines with oxygen, forming products in which all elements are combined with oxygen. It is a process we commonly call burning. Usually energy is released in the form of heat and light. The general form of combustion equations for hydrocarbons is hydrocarbon + O2 → CO2 + H2O. Most combustion reactions are the oxidation of a fuel material with oxygen gas. A complete combustion produces carbon dioxide from all the carbon in the fuel, water from the hydrogen in the fuel, and sulfur dioxide from any sulfur in the fuel. Methane burns in air to make carbon dioxide and water: CH4 + O2 → CO2 + H2O. First, place a two in front of the water to take care of all the hydrogens and a two in front of the oxygen. Anything you have to gather (any atom that comes from two or more sources in the reactants or gets distributed to two or more products) should be considered last. What if the oxygen does not come out right? Let's consider the equation for the burning of butane, C4H10. Insert the coefficients for carbon dioxide and water: C4H10 + O2 → 4 CO2 + 5 H2O. We now have two oxygens on the left and thirteen oxygens on the right. The real problem is that we must write the oxygen as a diatomic gas.
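For reference, the finished combustion equations from the walk-through above, written out (the propane equation is the model the text refers to; the butane example continues below):

\[ \mathrm{C_3H_8} + 5\,\mathrm{O_2} \;\rightarrow\; 3\,\mathrm{CO_2} + 4\,\mathrm{H_2O} \]
\[ \mathrm{C_2H_4} + 3\,\mathrm{O_2} \;\rightarrow\; 2\,\mathrm{CO_2} + 2\,\mathrm{H_2O} \]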
The chemical equation is not any different from an algebraic equation in that you can multiply both sides by the same thing and not change the equation. Multiply both sides by two (leaving the oxygen coefficient open) to get the following: 2 C4H10 + ? O2 → 8 CO2 + 10 H2O. Now the oxygens are easy to balance. There are twenty-six oxygens on the right, so the coefficient for the oxygen gas on the left must be thirteen. Now it is correctly balanced: 2 C4H10 + 13 O2 → 8 CO2 + 10 H2O. What if you finally balanced the same equation with 4 C4H10 + 26 O2 → 16 CO2 + 20 H2O, or with 6 C4H10 + 39 O2 → 24 CO2 + 30 H2O? Either equation is balanced, but not to the LOWEST integers. Algebraically you can divide these equations by two or three, respectively, to get the lowest integer coefficients in front of all of the materials in the equation. Now that we are complete pyromaniacs, let's try burning isopropyl alcohol, C3H7OH. First take care of the carbon and hydrogen: C3H7OH + O2 → 3 CO2 + 4 H2O. But again we come up with an oxygen problem. The same process works here. Multiply the whole equation (except oxygen) by two. Now the coefficient nine fits in front of the oxygen gas: 2 C3H7OH + 9 O2 → 6 CO2 + 8 H2O. (Do you understand why?) The equation is balanced with six carbons, sixteen hydrogens, and twenty oxygens on each side. SYNTHESIS REACTIONS ALSO CALLED COMBINATION, CONSTRUCTION, OR COMPOSITION REACTIONS The title of this section contains four names for the same type of reaction. Your text may use any of these. I prefer the first of the names and will use "synthesis" where your text may use one of the other words. The hallmark of a synthesis reaction is a single product. A synthesis reaction might be symbolized by: A + B → AB. Predicting synthesis reactions: What would you expect for a product if aluminum metal reacts with chlorine gas? Based on the observed regularity, the following reaction seems likely: 2 Al + 3 Cl2 → 2 AlCl3. Keep in mind that this is not a prediction that aluminum metal will react with chlorine. Whether a reaction occurs depends on many factors. At the moment, you have no basis for concluding that a reaction between aluminum and chlorine will occur. You can predict an equation for the reaction between aluminum and chlorine because of the observed regularity that elements react to form compounds. You can write the formula for the product because of regularities concerning chemical formulas. You learned that aluminum always has a +3 charge in compounds, and that chlorine always has a -1 charge when it forms a binary compound with a metal. Two materials, elements or compounds, come together to make a single product. Some examples of synthesis reactions are: hydrogen gas and oxygen gas burn to produce water (2 H2 + O2 → 2 H2O), and sulfur trioxide reacts with water to make sulfuric acid (SO3 + H2O → H2SO4). What would you see in a test tube if you were witness to a synthesis reaction? You would see two different materials combine. A single new material appears. DECOMPOSITION REACTIONS ALSO CALLED DESYNTHESIS, DECOMBINATION, OR DECONSTRUCTION Of the names for this type of reaction, I prefer the name decomposition. Mozart composed until age 35. After that, he decomposed. Yes, a decomposition is a coming apart. A single reactant comes apart into two or more products, symbolized by: AB → A + B. A decomposition reaction is the opposite of a synthesis reaction. In a decomposition reaction, a compound breaks down to form two or more simpler substances. What is the equation for the decomposition of water, H2O? Since water contains only two elements, the decomposition products can be predicted as the individual elements: 2 H2O → 2 H2 + O2. Decomposition versus Dissociation: 1. 2 NaCl(l) → 2 Na(s) + Cl2(g) Electrolysis Reaction - Decomposition Reaction 2.
NaCl(s) → Na+1(aq) + Cl-1(aq) Dissolved in Water - Dissociation Reaction. In Reaction "A", a decomposition reaction, there is a _____________________ CHANGE: when an electric current is passed through it, the sodium chloride produces sodium metal and chlorine gas, substances with properties very different from those of salt (NaCl). Most decomposition reactions form ELEMENTAL SUBSTANCES. The change in Reaction "B" produces _____________________ substances. The sodium ions and chloride ions are already present in sodium chloride. The ions (sodium and chloride) have very different properties from those of the neutral elements shown in equation "A". Reactions similar to "B" are considered a dissociation, not a decomposition. Other dissociations are described by similar equations. Some examples of decomposition reactions are: potassium chlorate, when heated, comes apart into oxygen gas and potassium chloride (2 KClO3 → 2 KCl + 3 O2), and heating sodium bicarbonate releases water and carbon dioxide and leaves sodium carbonate (2 NaHCO3 → Na2CO3 + H2O + CO2). In a "test tube" you would see a single material coming apart into more than one new material. Problem: Write an equation for the decomposition of lithium chloride, LiCl, and an equation for its dissociation. SINGLE REPLACEMENT REACTIONS ALSO CALLED SINGLE DISPLACEMENT, SINGLE SUBSTITUTION, OR ACTIVITY REPLACEMENT Synthesis reactions occur between two or more different elements. But elements may also react with compounds. A reaction in which one element takes the place of another element as part of a compound is called a single replacement reaction. In this type of reaction, a metal always replaces another metal and a nonmetal always replaces another nonmetal. The general equation for a single replacement reaction is A + BC → AB + C. Notice that element "A" replaces "C" in the compound "BC." Is the product, "C", an element or a compound? Consider the following reaction (written out after this section): If chlorine gas is bubbled through a solution of potassium bromide, chlorine replaces bromine in the compound and elemental bromine is produced. Notice that before reacting, chlorine is uncombined. It is a free (uncombined) element. Bromine, however, is combined with potassium in the compound potassium bromide. After reacting, the opposite is true. The chlorine is now combined with potassium in the compound potassium chloride, and bromine exists as a free element. The reaction is usually described with the words, chlorine has replaced bromine, and the reaction is called a single replacement reaction. Replacement reactions are not reversible. In other words, the reverse reaction will NOT take place! Therefore, the reaction will not happen and the equation is written with the "NR" (no reaction) symbol, as shown below. Predicting if a reaction will occur: Activity Series: Halogens: There is an interesting regularity observed in replacement reactions involving the halogens: fluorine, chlorine, bromine, and iodine. Each halogen will react to replace any of the halogens below it in the periodic table, but will not replace those above. For example, chlorine will replace bromine and iodine, but it will not replace fluorine. Activity Series: Metals: Metals also undergo replacement reactions, and regularities similar to those described for the halogens are observed. Metals can be listed in a series in which each metal will replace all the metals below it on the list, but none of the metals above it. Such a list is commonly called an activity series. The activity series for metals is determined by experiments in which pairs of metals are compared for reactivity.
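Written as balanced equations, the halogen replacement described above and the reverse combination that does not occur are:

\[ \mathrm{Cl_2(g)} + 2\,\mathrm{KBr(aq)} \;\rightarrow\; 2\,\mathrm{KCl(aq)} + \mathrm{Br_2} \]
\[ \mathrm{Br_2} + \mathrm{KCl(aq)} \;\rightarrow\; \mathrm{NR\ (no\ reaction)} \]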
The MORE active elements are closer to the TOP of the list, while the LESS active elements are closer to the BOTTOM of the list. Use the Activity Series to predict whether the following reaction can occur under normal conditions. Since Magnesium is above copper on the activity series, magnesium is more active and will replace copper in the compound CuSO4. This reaction will occur. Here is an example of a single replacement reaction: silver nitrate solution has a piece of copper placed into it. The solution begins to turn blue and the copper seems to disappear. Instead, a silvery-white material appears. How could you predict that this reaction would take place without stepping into the lab? Answer: Again, you need to look at the activity series and locate copper and silver. Since copper is higher on the list (more active) than silver (less active), it is safe to assume this reaction will occur. Confirmation, again, can only take place in the lab. DOUBLE REPLACEMENT REACTIONS ALSO CALLED DOUBLE DISPLACEMENT OR METATHESIS Double replacement reactions or metathesis reactions (metathesis is a Greek term meaning "changing partners" and accurately describes what happens.) The general equation for a double replacement reaction is: Predicting Double Replacement Reactions: Deciding whether a double replacement reaction will occur is a matter of predicting whether an insoluble product can form. If sodium nitrate is substituted for lead nitrate, you will see no reaction when the solutions in the two test tubes are mixed. Why Not? If you write out all possible combinations of metal and nonmetal ions and check the table below, you will see that none of them is insoluble. The combinations of a number of different positive and negative ions to form precipitates and soluble compounds. Some texts refer to single and double replacement reactions as solution reactions or ion reactions. That is understandable, considering these are mostly done in solutions in which the major materials we would be considering are in ion form. I think that there is some good reason to call double replacement reactions de-ionizing reactions because a pair of ions are taken from the solution in these reactions. Lets take an example. Above is the way the reaction might be published in a book, but the equation does not tell the whole story. Dissolved silver nitrate becomes a solution of silver ions and nitrate ions. Potassium chloride ionizes the same way. When the two solutions are added together, the silver ions and chloride ions find each other and become a solid precipitate. (They rain or drop out of the solution, this time as a solid.) Since silver chloride (See Chart Above) is insoluble in water, the ions take each other out of the solution. Here is another way to take the ions out of solution. Hydrochloric acid and sodium hydroxide (acid and base) neutralize each other to make water and a salt. Again the solution of hydrochloric acid is a solution of hydrogen (hydronium ions in the acid and base section) and chloride ions. The other solution to add to it, sodium hydroxide, has sodium ions and hydroxide ions. The hydrogen and hydroxide ions take each other out of the solution by making a covalent compound (water). One more way for the ions to be taken out of the water is for some of the ions to escape as a gas. The carbonate and hydrogen ions became water and carbon dioxide. The carbon dioxide is lost as a gas to the ionic solution, so the equation can not go back. 
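Plausible balanced equations for the three de-ionizing cases just described are given below. The silver chloride precipitation and the acid-base neutralisation are the reactions named in the text; sodium carbonate with hydrochloric acid is assumed here as a typical example of the gas-forming case.

\[ \mathrm{AgNO_3(aq)} + \mathrm{KCl(aq)} \;\rightarrow\; \mathrm{AgCl(s)} + \mathrm{KNO_3(aq)} \]
\[ \mathrm{HCl(aq)} + \mathrm{NaOH(aq)} \;\rightarrow\; \mathrm{NaCl(aq)} + \mathrm{H_2O(l)} \]
\[ \mathrm{Na_2CO_3(aq)} + 2\,\mathrm{HCl(aq)} \;\rightarrow\; 2\,\mathrm{NaCl(aq)} + \mathrm{H_2O(l)} + \mathrm{CO_2(g)} \]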
One way to consider double replacement reactions is as follows: two solutions of ionic compounds are really just sets of dissolved ions, each solution with a positive and a negative ion. The two are added together, forming a mixture of four ions. If two of the ions can form (1) an insoluble material, (2) a covalent material such as water, or (3) a gas that can escape, it qualifies as a reaction. Not all of the ions are really involved in the reaction. Those ions that remain in solution after the reaction has completed are called spectator ions, that is, they are not involved in the reaction. There is some question as to whether they can see the action of the other ions, but that is what they are called. OXIDATION and REDUCTION: Many familiar chemical processes belong to a class of reactions called Oxidation-Reduction, or Redox, reactions. Every minute, redox reactions are taking place in your body and all around you. Reactions in batteries, the burning of wood in a campfire, the corrosion of metals, the ripening of fruit, and the combustion of gasoline are a few examples. You have already seen some examples of oxidation and reduction reactions when you classified reactions earlier in this chapter. Recall that in a combustion reaction, such as the combustion of methane or the rusting of iron, oxygen is a reactant. Hopefully, you learned that these reactions can be classified as synthesis or combustion. The term oxidation also seems reasonable for describing these reactions because, in both cases, a reactant combines with oxygen. The term reduction is used to describe the reverse process. An example of reduction is the decomposition of water. However, not all oxidation reactions involve oxygen. There are many similar reactions between metals and nonmetals that are classified as oxidation or reduction reactions. When a piece of copper is placed in a colorless silver nitrate solution, you can tell that a chemical reaction occurs because the solution turns blue over a period of time. You also notice a silvery coating that forms on the piece of copper. You know that copper(II) ions in solution are blue, so copper(II) ions must be forming. The silver nitrate solution contains silver ions; these ions must be coming out of solution to form the solid silver-colored material. The unbalanced equation for the reaction between solid copper and silver ions is this: Cu(s) + Ag+1(aq) → Cu+2(aq) + Ag(s). The equation tells you that solid copper atoms are changing to copper ions at the same time that silver ions are changing to solid silver atoms. Consider what is happening to the two reactants (Cu and Ag). Each copper atom loses two electrons to form a copper(II) ion (the first half-reaction written out below). Each neutral copper atom acquired a +2 charge by losing two electrons. Whenever an atom or ion becomes more positively charged (positive charge increases) in a chemical reaction, the process is called ________________________________. As you know, electrons do not exist alone. In this reaction, electrons are transferred to the silver ions in solution (the second half-reaction below). When a silver ion acquires one electron, it loses its +1 charge and becomes a neutral atom. Whenever an atom or ion becomes less positively charged or more negative (positive charge is decreased) in a chemical reaction, the process is called ________________________________. Each of these equations describes only half of what takes place when solid copper reacts with silver ions. Reactions that show just half a process are called Half-Reactions.
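Written out, the two half-reactions for the copper-silver example are:

\[ \mathrm{Cu(s)} \;\rightarrow\; \mathrm{Cu^{2+}(aq)} + 2\,e^- \qquad \text{(oxidation)} \]
\[ \mathrm{Ag^{+}(aq)} + e^- \;\rightarrow\; \mathrm{Ag(s)} \qquad \text{(reduction)} \]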
You always need two half-reactions - one for oxidation and one for reduction - to describe any redox reaction. It is impossible for oxidation to occur by itself. The electrons given up by oxidation cannot exist alone; they must be used by a reduction reaction. Thus, there will always be an oxidation half-reaction whenever there is a reduction half-reaction, and vice versa. NET EQUATION FOR A REDOX REACTION: You might simply add the equations for the half-reactions, but doing this may not give you a balanced equation. Balancing the net equation for a redox reaction also requires consideration of the ________________________________ in each of the half-reaction equations. If electrons appear in the net equation for the redox reaction, you know you have made a mistake, because electrons cannot exist by themselves. The overall equation is balanced only when the number of electrons lost in one half-reaction equals the number of electrons gained in the other. Example: Balance the equation for the reaction of solid copper with silver ions. The equations for the two half-reactions were written above. Balance the number of electrons in the two half-reactions by multiplying the second (silver) equation by two. The number of electrons in both equations is now equal. Add the equations for the two half-reactions to give the balanced redox equation (the full working is shown after this section). Oxidation and reduction occur together. The solid copper (from the example) is called the ________________________________, because it brings about the reduction of silver ions. If the silver ions were not present, the copper would not oxidize. The substance that contains the silver ion(s) is the ________________________________, because it causes the oxidation of copper atoms. Notice that the substance that is reduced is the oxidizing agent and the substance that is oxidized is the reducing agent. And then there are some redox equations that need a POSITIVE charge added. Since we use the electron as our negative symbol, we want to use a proton as our positive symbol. The proton can be represented as p+1, but a better way is to use H+1. That way we can add water molecules to the problem to balance the "extra" hydrogens. Let's try the question below... Balance the following equation for a redox reaction. 2a. Balance the following equation for a redox reaction. 2b. Write a balanced equation for the reduction of iron(III) ions to iron atoms by the oxidation of nickel atoms to nickel(II) ions in an aqueous solution. 2c. In the reaction Fe+2 + Mg → Fe + Mg+2, which reactant is the oxidizing agent and which is the reducing agent? 2d. Balance, using half-reactions: Br-1 + MnO4-1 → Br2 + Mn+2. To keep track of electron transfers in redox reactions more easily, oxidation numbers have been assigned to all atoms and ions. An oxidation number is the real or ________________________________ that an atom or ion has when all bonds are assumed to be ionic. What is the oxidation number for each element in the compound Na3PO4? Determine the oxidation number for each element in the compound:
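The worked balancing of the copper-silver example described above, followed by an answer to the Na3PO4 question (using the usual rules that sodium is +1 and oxygen is -2 in compounds). Multiply the silver half-reaction by two so that the electrons lost equal the electrons gained, then add the half-reactions; the electrons cancel:

\[ \mathrm{Cu(s)} \;\rightarrow\; \mathrm{Cu^{2+}(aq)} + 2\,e^- \]
\[ 2\,\mathrm{Ag^{+}(aq)} + 2\,e^- \;\rightarrow\; 2\,\mathrm{Ag(s)} \]
\[ \mathrm{Cu(s)} + 2\,\mathrm{Ag^{+}(aq)} \;\rightarrow\; \mathrm{Cu^{2+}(aq)} + 2\,\mathrm{Ag(s)} \]

For Na3PO4, the three sodium atoms contribute +3 and the four oxygen atoms contribute -8, so phosphorus must be +5 for the compound to be neutral:

\[ 3(+1) + x + 4(-2) = 0 \;\Rightarrow\; x = +5 \]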
http://www.avon-chemistry.com/chem_intro_lecture.html
WHERE IS OIL FOUND? Brian J. O'Neill Shell Offshore Inc. P.O. Box 61933 New Orleans, LA 70161 Level: Grades 2 - 6 Estimated Time Required: 30 minutes Anticipated Learning Outcomes Students will learn that crude oil is found in porous rocks (reservoirs) rather than in caves or caverns. A common misunderstanding of the oil business is what a "reservoir" is. The term "pool of oil" conjures up images of underground caves filled with oil. In fact, oil and natural gas are found in the pore spaces between the grains that make up sedimentary rocks. In this demonstration students will see what porosity is by observing the filling of pore space by a liquid. Fill a bottle with water and add several drops of food coloring to tint the water (I use blue food coloring to tint the water dark blue to simulate crude oil). Fill one jar completely with marbles. Keep the bottle of colored water hidden until ready to use. Oil was formed from layers of sediments rich in the remains of tiny (microscopic) plants and animals. As the layers were buried deeper and deeper below younger layers of sediment, the plant and animal remains were heated and squeezed, and altered into crude oil. This "high pressure cooking" expelled the oil from the "source rocks", the layers in which the microscopic plants and animals originally were deposited. Oil floats on water because it is lighter (less dense). The newly formed oil migrates through pores and cracks in surrounding rocks upward toward the surface. The oil will float on the groundwater within porous layers of rock. The crude oil continues to migrate until it reaches the surface at an "oil seep" (a famous example is the La Brea Tar Pits in California). Many times, though, the oil is trapped underground by impermeable layers (such as shales), in which the pores are too fine to allow the crude oil to flow through. It is this trapped oil that explorers seek. Oil wells are drilled into these traps and the oil can then be brought to the surface and transported to an oil refinery for processing. Gasoline and motor oil come from refining crude oil. This activity was inspired by discussions with Linda E. Okland about an ARCO Alaska speakers' kit which included a similar demonstration.
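A rough number for the demonstration, assuming the marbles behave like randomly packed equal spheres (which occupy roughly 60-65 per cent of the jar's volume), so the porosity is about

\[ \phi = \frac{V_{\text{pore}}}{V_{\text{total}}} \approx 1 - 0.64 \approx 0.36 . \]

On that assumption, a 500 mL jar packed with marbles can take roughly 0.36 × 500 ≈ 180 mL of the colored "crude oil" before it overflows; the liquid sits in the spaces between the grains, not in an open cavern.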
http://www.beloit.edu/sepm/Geology_and_the_enviro/Where_is_oil.html
Tillage is the agricultural preparation of the soil by mechanical, draught-animal or human-powered agitation, such as ploughing, digging, overturning, shovelling, hoeing and raking. Small-scale farming tends to use smaller-scale methods based on hand-tools and in some cases draught animals, whereas medium to large-scale farming tends to use larger-scale methods such as tractors. The overall goal of tillage is to increase crop production while conserving resources (soil and water) and protecting the environment (IBSRAM, 1990). Conservation tillage refers to a number of strategies and techniques for establishing crops in a previous crop's residues, which are purposely left on the soil surface (see Figure 1 and Figure 2). Conservation tillage practices typically leave about one-third of crop residue on the soil surface (see Figure 3). This slows water movement, which reduces the amount of soil erosion. Conservation tillage is suitable for a range of crops including grains, vegetables, root crops, sugar cane, cassava, fruit and vines. Conservation tillage is a popular technology in the Americas, with approximately 44 per cent of it practised in Latin America. Studies suggest there is great potential to bring this technology to Africa, Asia and Eastern Europe, although limiting factors have to be taken into account (see Barriers below) (Derpsch, 2001; GTZ, 1998). The most common conservation tillage practices are no-till, ridge-till and mulch-till. No-till is a way of growing crops without disturbing the soil. This practice involves leaving the residue from last year's crop undisturbed and planting directly into the residue on the seedbed. No-till requires specialised seeding equipment designed to plant seeds into undisturbed crop residues and soil. No-till farming changes weed composition drastically. Faster growing weeds may no longer be a problem in the face of increased competition, but shrubs and trees may eventually begin to grow. Cover crops ('green manure') can be used in a no-till system to help control weeds. Cover crops are usually leguminous plants, which are typically high in nitrogen and can often increase soil fertility. In ridge-till practices, the soil is left undisturbed from harvest to planting and crops are planted on raised ridges (Figure 4). Planting usually involves the removal of the top of the ridge. Planting is completed with sweeps, disk openers, coulters, or row cleaners. Residue is left on the surface between ridges. Weed control is accomplished with cover crops, herbicides and/or cultivation. Ridges are rebuilt during row cultivation. Mulch-till techniques involve disturbing the soil between harvesting one crop and planting the next but leaving around a third of the soil covered with residues after seeding. Implements used for mulch-till techniques include chisels, sweeps, and field cultivators. Unpredictable rainfall and an increase in mean temperature may affect soil moisture levels, leading to crop damage and yield failures. Conservation tillage practices reduce the risk from drought by reducing soil erosion, enhancing moisture retention and minimising soil compaction. In combination, these factors improve resilience to the climatic effects of drought and floods (Smith, 2009). Improved soil nutrient recycling may also help combat crop pests and diseases (Holland, 2004). Conservation tillage benefits farming by minimising erosion, increasing soil fertility and improving yield. Ploughing loosens and aerates the soil, which can facilitate deeper penetration of roots.
Tillage is believed to encourage the growth of microorganisms present in the soil and helps to mix the residue from the harvest, organic matter and nutrients evenly into the soil. Conservation tillage systems also benefit farmers by reducing fuel consumption and soil compaction. By reducing the number of times the farmer travels over the field, farmers make significant savings in fuel and labour. Labour inputs for land preparation and weeding are also reduced once the system becomes established. In turn, this can increase the time available for additional farm work or off-farm activities for livelihood diversification. Also, once the system is established, the requirement for herbicides and fertilisers can be reduced. According to Sorrenson et al. (1998), total economic benefits arising from adoption of the no-tillage technique on small farms of generally less than 20 ha in Paraguay have reached around $941 million. Conservation tillage may require the application of herbicides in the case of heavy weed infestation, particularly in the transition phase, until the new balance of weed populations is established (FAO, no date). The practice of conservation tillage may also lead to soil compaction over time; however, this can be prevented with chisel ploughs or subsoilers. An initial investment of time and money, along with purchases of equipment and herbicides, will be necessary for establishing the system. Higher levels of surface residue may result in higher plant disease and pest infestations if not managed properly. There is a strong relationship between this technology and appropriate soil characteristics: it is less suitable for compacted soils and soils with a high clay content. The cost of equipment for conservation tillage will depend on whether the land is tilled with motorised traction, animal draught or manpower. The most important cost for larger producers will be machinery and fuel. However, higher herbicide applications could offset these savings, especially in the initial adoption stages. On smaller-sized farms, savings in labour costs could be substantial. A study in Nigeria has shown conservation tillage practices to reduce labour inputs by around 50 per cent compared to traditional systems (Ehui et al., 1990). Financial incentives and subsidies may be required to assist farmers to adopt this practice. In Brazil, monetary incentives were found to be highly successful in motivating group formation among farmers, leading to an increase in cooperation and technology uptake (World Bank, 2000). Farmers can be supported by national, regional and local farmers' organisations in gaining access to equipment. In Zambia, the Africare Smallholder Agricultural Mechanisation Promotions (SAMeP) programme has assisted small-scale farmers to access technologies for conservation tillage by working with rural entrepreneurs to broaden the equipment supply base and provide spare parts (Sakala, 1999). This style of programme could be broadened to improve access to other inputs such as cover crop seeds, herbicides and fertilisers. Private and public sector equipment suppliers also have a role in responding to demands from different types of farmers for adapted tools and equipment. Farmers need extensive training to implement conservation tillage. This includes knowledge of crop rotation; analysing soil conditions; monitoring soil temperature and moisture; adjusting nutrient and weed management approaches; and selecting appropriate equipment.
Studies in Latin America have shown that the main limitation to the spread of no-tillage technology has been a lack of site-specific knowledge about weed control. Information on common weeds, herbicide products (including details of chemical and toxicological characteristics) and application technologies is therefore a key knowledge requirement for the application of no-tillage technologies (Derpsch, 2001). A lack of locally appropriate knowledge and/or poor research and development for conservation tillage technology presents one of the main barriers to uptake (Derpsch, 2001). Likewise, where there is no local production or availability of equipment and other inputs, such as herbicides, costs will rise significantly and may present a barrier to implementation. Ecological barriers to no-tillage production systems include low precipitation with low biomass production, short growing seasons and soils at risk of waterlogging. Socio-economic constraining factors include strong demand for crop residues as forage for livestock, uncertain land use rights, and poorly developed infrastructure (markets, credit, extension services) (GTZ, 1998). In Latin America, the uptake of this technology was greatly facilitated by the exchange of information through farmers' associations (World Bank, 2000), the provision of publications with adequate, practical information on technology implementation, and studies showing positive economic returns (Derpsch, 2001).
Derpsch, R. (2001) Keynote: Frontiers in conservation tillage and advances in conservation practice. In Stott, D. E., Mohtar, R. H. and Steinhart, G. C. (Eds.) Sustaining the Global Farm. Selected papers from the 10th International Soil Conservation Organisation Meeting held May 24-29, 1999 at Purdue University and the USDA-ARS National Soil Erosion Research Laboratory.
Ehui, S. K., B. T. Kang and D. S. C. Spencer (1990) Economic analysis of soil erosion effects in alley cropping, no-till and bush fallow systems in south western Nigeria. Agricultural Systems, 34, 349-368.
FAO (Food and Agriculture Organisation) (no date) Conservation Agriculture: Matching production with sustainability. FAO.
GTZ (1998) Conserving natural resources and enhancing food security by adopting no-tillage: an assessment of the potential for soil-conserving production systems in various agro-ecological zones of Africa. GTZ Eschborn, Tropical Ecology Support Programme, TÖB Publication.
Holland, J. M. (2004) The environmental consequences of adopting conservation tillage in Europe: reviewing the evidence. Agriculture, Ecosystems and Environment, 103, 1-25.
IBSRAM (International Board for Soil Research and Management) (1990) Organic-matter management and tillage in humid and sub-humid Africa. IBSRAM Proceedings No. 10. Bangkok: IBSRAM.
Sakala, I. (1999) Efforts and initiatives for supply of conservation tillage equipment in Zambia. In Kaumbutho, P. G. and Simalenga, T. E. (Eds.) (1999) Conservation tillage with animal traction: a resource book of the Animal Traction Network for Eastern and Southern Africa (ATNESA). Harare, Zimbabwe. 173 p.
Smith, P. (2009) Agriculture and Climate Change: An agenda for negotiation in Copenhagen. IFPRI Focus 16.
Sorrenson, W. J., C. Duarte and J. López-Portillo (1998) Economics of no-till compared to conventional cultivation systems on small farms in Paraguay: policy and investment implications. Report, Soil Conservation Project MAG-GTZ, August 1998.
World Bank (2000) Brazil: Land Management II - Santa Catarina Project: implementation completion report (Loan 3160-BR). Report #20482. Washington, DC: World Bank.
http://climatetechwiki.org/content/conservation-tillage
Alan Taylor is professor in the History Department of the University of California at Davis. This essay derives from parts of chapters 8 and 14 in his American Colonies (New York: Viking Penguin, 2001). In 1776, a revolution created a new nation from the British colonies of the Atlantic seaboard. The first colonial revolution for independence followed the emergence of those colonies as the most densely settled region of North America. North of the Rio Grande, the British colonial population of 2.5 million eclipsed the number of native peoples (about 800,000 in 1776) and the small enclaves of French (about 75,000) and Spanish (25,000). Numbers and the relative prosperity of the British colonies gave colonial British leaders a confidence that they could and should achieve independence. But the ethnic, racial, and regional diversity of that colonial population also gave the leading revolutionaries pause. Although most of the Continental Congressmen came from families of English origin, they meant to govern a far more diverse population, which included thousands of Germans, Dutch, Scots, Scots-Irish and at least 500,000 enslaved Africans. With good cause, the Founding Fathers of the revolutionary generation wondered if a union of thirteen disparate states could endure without a greater sense of common identity among the constituent people. To understand both the ultimate success and the immediate problems facing the American revolutionaries, we also need to examine the sources and divisions among British Americans. This examination must begin by noting the regional differences within North America, which can be divided into five distinct regions: New England, Middle Colonies, Chesapeake, Lower South, and the West Indies. In addition to differences in geography and climate, these regions assumed cultural differences that corresponded with the variations in the places of origin of the peoples each region attracted: their social background, the timing of their arrival, and their relative success at achieving population growth and prosperity. The Atlantic Seaboard colonies gradually emerged during the seventeenth century as part of an English empire, which became “British” in 1707 with the formal union of Scotland and England. This union opened the colonies to Scottish emigrants; prior to it, the majority of the colonial emigrants came from England (including Wales), settling in the West Indies and the Chesapeake, rather than in New England. Indeed, the New England emigrants represented only 30 percent of all the English who crossed the Atlantic to the various colonies during the 1630s. By colonial standards, New England attracted an unusual set of emigrants: they were the sort of skilled and prosperous people who ordinarily might have stayed at home rather than risk the rigors of a transatlantic crossing and the uncertainties of colonial life. The majority of seventeenth-century English emigrants were poor, young, single men, lacking good prospects in the mother country, gambling their lives as indentured servants in the Chesapeake or West Indies, where the warmer climate permitted plantation crops that demanded—and generated the profits that permitted—the importation of laborers. In sharp contrast, most of the New England colonists could pay their own way and emigrated as family groups. They also enjoyed a more even balance between the sexes. At mid-century, the New England sex ratio was six males for every four females, compared to four men for every woman in the Chesapeake. 
This more even balance encouraged a more stable society and faster population growth. New England’s healthier population sustained a rapid growth through natural increase, while in the Chesapeake and West Indies, population growth depended on human imports. During the seventeenth century, New England received only 21,000 emigrants—a fraction of the 120,000 transported to the Chesapeake or the 190,000 who colonized the West Indies. Yet in 1700 New England’s colonial population of 91,000 exceeded the 85,000 whites in the Chesapeake and the 33,000 white residents in the West Indies. During the eighteenth century, the British navy and its merchant marine dominated the Atlantic, to the benefit of the British colonies along the Atlantic seaboard of North America. A swelling volume of British shipping carried information, goods, and people across the Atlantic. The annual transatlantic crossings tripled from about 500 during the 1670s to 1,500 by the late 1730s. Increased shipping, accompanied by a decline in piracy, reduced insurance costs and freight charges, encouraging the shipment of greater cargos. More frequent voyages and larger ships, some dedicated to the emigrant trade, also cut in half the price of a passage from Europe to the colonies between 1720 and 1770. The ocean, formerly a barrier, now became a bridge between the two shores of the empire. Clustered close to the Atlantic, most colonists were oriented eastward toward the ocean and Europe, rather than westward into the interior. The continental interior, with its dense forests, native tribes, and immense but uncertain dimensions, was far more mysterious and daunting than an ocean passage. The Atlantic was regularly traversed, but few of the colonists ventured into the North American continent. During the first two-thirds of the eighteenth century, both the ocean and the passage of time worked to draw the colonists closer to their mother country. They became significantly better informed about events and ideas in Britain, especially in London, than in earlier years. The swelling volume of shipping also led to more complex trading patterns, feeding an impressive growth in the colonial economy. Consequently, most free colonists enjoyed rising incomes that permitted increased consumption of British manufactures. This consumption reinforced the ties between the mother country and the colonies, especially for the colonial elite. Paradoxically, as the Atlantic became more British in its shipping, information, and goods, it also became a conduit for a greater diversity of emigrants. Despite the proliferation of British shipping, the overall number of emigrants from the mother country declined during the early eighteenth century from its seventeenth-century peak. During the early decades of colonization, when the English economy and state were weak, ruling opinion had regarded the realm as dangerously overpopulated. And to reduce unemployment and social discontent at home, England’s rulers had encouraged emigration to the colonies, where laborers could develop staple commodities for the mother country and dissidents could be exiled from political influence. Late in the seventeenth century, however, ruling opinion shifted, as the home government became more tolerant of religious diversity; English manufacturing expanded, increasing the demand for cheap labor; and the realm frequently had need of additional thousands for an enlarged military. Thereafter, English emigration became an economic and strategic loss to the mother country. 
Hence, in the early eighteenth century, free colonists arrived from elsewhere in Europe, primarily Scotland, Ireland, and Germany. While discouraging English emigration, imperial officials wished to continue colonial development and bolster colonial defenses, and these obligations called for an alternative supply of colonists. They therefore recruited colonists from elsewhere in Europe, with the idea of strengthening the colonies without weakening the mother country. More than any other eighteenth-century empire, the British came to rely on foreign emigrants for human capital. The new recruitment invented America as an asylum from religious persecution and political oppression in Europe—with the important proviso that the immigrants had to be Protestants. Colonial laws and prejudices continued to discourage the emigration of Catholics and Jews to British America, for fear they would subvert Protestantism and betray the empire to France or Spain. Moreover, during the eighteenth century even larger numbers of enslaved Africans poured across the Atlantic as the slave trade escalated, eclipsing the movement of all free emigrants to British America. In 1718 an English visitor remarked, “The labour of negroes is the principal foundation of riches from the plantations.” As a land of freedom and opportunity, British America was a limited venture, in reality. Thus, relatively few eighteenth-century emigrants came from England: only 80,000 between 1700 and 1775, compared to 350,000 during the seventeenth century. The decline is especially striking because, after 1700, the colonies became cheaper and easier to reach by sea and safer to inhabit. But England’s growing economy provided rising real wages for laboring families, enabling more to remain in the mother country, while the growing militarization of the empire absorbed more laboring men into the enlarged army and navy for longer periods. In wartime, many would-be emigrants also balked at the greater dangers of a transatlantic passage. Especially depressed during wartime, colonial emigration partially revived during the intervals of peace, when the Crown quickly demobilized thousands of soldiers and sailors, temporarily saturating the English labor market. Unable to find work, some people entered indentures to emigrate and serve in the colonies. Other demobilized men went unwillingly as convicted and transported criminals. In England, crime surged with every peace as thousands of unemployed and desperate men stole to live. The inefficient but grim justice of eighteenth-century England imposed the death penalty for 160 crimes, including grand larceny, which was loosely defined as stealing anything worth more than a shilling. In 1717, shortly after the military demobilization of 1713–14, Parliament began to subsidize the shipment of convicted felons to the colonies as an alternative to their execution. The Crown generally paid £3 per convict to shippers, who carried the felons to America for sale as indentured servants with especially long terms: usually fourteen years. The shippers’ profits came from combining the sales price (about £12) with the Crown subsidy, less the £5 to £6 cost of transportation. Between 1718 and 1775, the empire transported about 50,000 felons, more than half of all English emigrants to America during that period. The transported were overwhelmingly young, unmarried men lacking marketable skills—the cannon-fodder of war and the jail-bait of peace. 
About 80 percent of the convicts went to Virginia and Maryland, riding in the English ships of the tobacco trade. Convicts provided a profitable sideline for the tobacco shippers, who had plenty of empty cargo space on the outbound voyage from England, and Chesapeake planters were willing to buy convict labor. At about a third of the £35 an African male slave cost, the convict appealed to some planters as a better investment. The majority of the purchasers were small-scale planters with limited budgets. In a pinch, however, large plantation owners including George Washington bought a few convicts to supplement their slaves. In time, despite its profitability, colonial leaders regarded the convict trade as an insult that treated the colonies as inferior to the mother country. The colonists wondered why they should have to accept convicts deemed too dangerous to live in England and dreaded the possibility that white convicts would make common cause in rebellion with the black slaves. In a political satire, Benjamin Franklin advocated sending American rattlesnakes to England in exchange. But ultimately the colonists colluded in the convict trade. In 1725 Maryland’s governor conceded, “While we purchase, they will send them, and we bring the Evil upon ourselves.” Meanwhile, Scots emigration to the colonies soared to 145,000 between 1707 and 1775. Generally poorer than the English, the Scots had greater incentives to emigrate and the British Union of 1707 gave them legal access to all of the colonies. The growth in Scots overseas shipping also provided more opportunities and lower costs for passage. After a few early emigrants prospered, their reports homeward attracted growing numbers in a chain migration. During a tour of northwestern Scotland, James Boswell and Samuel Johnson saw the locals perform a popular and symbolic new dance called “America,” in which a few original dancers gradually drew in the entire audience. The Scottish diaspora flowed in three streams: Lowland Scots, Highland Scots, and Ulster Scots. Assimilated to English ways, the Lowland Scots were primarily skilled tradesmen, farmers, and professionals pulled by greater economic opportunity in America. They usually emigrated as individuals or single families, then dispersed in the colonies and completed their assimilation to Anglo-American ways. More desperate than the Lowland Scots, the Highlanders responded primarily to the push of their deteriorating circumstances. In 1746 the British army brutally suppressed a rebellion in the Highlands, and Parliament outlawed many of their traditions and institutions. At mid-century, the common Highlanders also suffered from a pervasive rural poverty worsened by the rising rents demanded by their callous landlords. The emigrants primarily came from the relatively prosperous peasants, who possessed the means to emigrate and feared remaining in the Highlands, lest they fall into the growing ranks of the impoverished. After 1750 emigration brokers and ambitious colonial land speculators frequented the northwest coast of Scotland to procure Highland emigrants. The brokers and speculators recognized that the poor but tough Highlanders were especially well-prepared for the rigors of a transatlantic passage and colonial settlement. Confined to cheap (and often dangerous) lands, the Highland Scots clustered in frontier valleys, especially along the Cape Fear River in North Carolina, the Mohawk River of New York, and the Altamaha River in Georgia. 
By clustering, they preserved their distinctive Gaelic language and Highland customs, in contrast to the assimilation practiced by the Lowland emigrants. Nearly half of the Scots emigrants came from Ulster, in Northern Ireland, which their parents and grandparents had colonized during the 1690s. Like the Highlanders, the Ulster Scots sought to escape from deteriorating conditions. During the 1710s–20s they clashed with Irish Catholics and endured a depressed market for their linen, several poor harvests, and increasing rents. The Ulster emigration to the colonies began in 1718 and accelerated during the 1720s. The destitute sold themselves into indentured servitude, while the families of middling means liquidated their livestock to procure the cost of passage. Of course, most of the Ulster Scots remained at home, preferring the known hardships of Ireland to the uncertain prospects of distant America. The Ulster Scots emigrated in groups, generally organized by their Presbyterian ministers, who negotiated with shippers to arrange passage. Once in the colonies, the Ulster Scots gravitated to the frontier, where land was cheaper, enabling large groups to settle together. In the colonies, they became known as “the Scotch-Irish.” At first, the Ulster Scots emigrated to Boston, but some violent episodes of New English intolerance persuaded most, after 1720, to head for Philadelphia, a more welcoming seaport in a more tolerant colony. More sparsely settled than New England, Pennsylvania needed more settlers to develop and defend the hinterland. It also became the haven for the other major group of eighteenth-century free emigrants, the Germans. Outnumbering the English emigrants, the 100,000 Germans were second only to the Scots as eighteenth-century immigrants to British America. Most were Protestants, but they divided into multiple denominations: Lutherans, Reformed, Moravians, Baptists, and Pietists of many stripes. Drawn from both the poor and the middling sort, they emigrated primarily in families. Almost all came from the Rhine valley and its major tributaries in southwestern Germany and northern Switzerland. Flowing north and west, the navigable Rhine channeled emigrants downstream to the great Dutch port of Rotterdam, their gateway across the Atlantic to British America. About three-quarters of the Germans landed in Philadelphia, the great magnet for colonial migration. About three ships bearing a total of 600 Germans arrived in Philadelphia annually during the late 1720s. By the early 1750s some 20 ships and 5,600 Germans landed every year. Most emigrants filtered into rural Pennsylvania seeking farms. From there, some families headed south to settle on the frontiers of Maryland and Virginia. A second, much smaller and less sustained migration flowed from Rotterdam to Charles Town, South Carolina, which served as the gateway to the Georgia and Carolina frontiers. The colonial emigration was a modest subset of a much larger movement of Germans out of the Rhineland. Between 1680 and 1780 about 500,000 southwestern Germans emigrated, but only a fifth went to British America. Many more headed east, seeking opportunities in Prussia, Hungary, and Russia. Since they received subsidies from the eastern rulers but nothing from the British, the Rhineland princes actively discouraged colonial emigration. Why, then, did so many Rhinelanders undertake such a daunting journey across an ocean to a strange land? There were many push factors. 
Germany was subdivided into many small principalities, frequently embroiled in the great wars of the continent. To build palaces and conduct war, the authoritarian princes taxed their subjects heavily and conscripted their young men. Most princes also demanded religious conformity from their subjects, inflicting fines and imprisonment on dissidents. In addition, a swelling population pressed against the limits of the rural economy, blighting the prospects for thousands of young peasants and artisans. The push factors, necessary but not sufficient in themselves for emigration, became pressing only once an uneasy people learned of an attractive alternative. They had to begin to perceive a great shortfall between their probable prospects at home compared to their apparent opportunities in a particular elsewhere. Good news from Pennsylvania pulled discontented Rhinelanders across the Atlantic. In 1682, William Penn recruited a few Germans to settle in Pennsylvania, where they prospered. Word of their material success in a tolerant colony intrigued growing numbers in their old homeland. Letters home reported that wages were high and land and food cheap. The average Pennsylvania farm of 125 acres was six times larger than a typical peasant holding in southwestern Germany. In addition, the soil was more fertile, yielding three times as much wheat per acre. Lacking princes and aristocrats or an established church, Pennsylvania neither demanded taxes nor conscripted its inhabitants. But even pulls and pushes could not sustain a major migration. Potential emigrants also needed an infrastructure to facilitate and finance their passage: a network of information, guides, ships, and merchants willing to provide passage on credit. Such an infrastructure began with the couriers carrying the letters from Pennsylvania to Germany. Known as “Newlanders,” the couriers were former emigrants returning home for a visit, often to collect debts or an inheritance. For a fee, they carried letters and conducted business in Germany for their neighbors who remained in Pennsylvania. By recruiting Germans to emigrate, the Newlanders could earn a free return passage to Philadelphia, and sometimes a modest commission too, from a British shipper. By speaking from experience and guiding the new emigrants down the Rhine to Rotterdam and onto waiting ships, the Newlanders eased the decision and passage of thousands who had balked at a journey on their own into the unknown. The opponents of German emigration denounced the Newlanders as dangerous charlatans, and a few unscrupulous men earned that reputation, but most provided accurate information and valuable services. The German emigration was organized by the British merchants at Rotterdam, who saw a profitable opportunity to ship Germans to the colonies. Because of the Navigation Acts, only British (including colonial) ships could transport emigrants to the colonies. The merchants could profit by filling a ship with 100 to 200 emigrants at a charge of ₤5–6 per head. About two-thirds of the emigrants had sufficient means to pay their own way; the poorer third came as indentured servants. Sometimes parents could afford their own passage and that of younger children but had to indenture their adolescents, who had the highest value as laborers. The German emigrant trade developed a relatively attractive form of indentured servitude adapted to the needs of families. Known as “redemptioners,” the Germans contracted to serve for about four to five years. 
Unlike other indentured servants, the redemptioner families had to be kept together by their employers and not divided for sale. Most contracts also gave the emigrant family a grace period of two weeks upon arrival to find a relative or acquaintance who might purchase their labor contract. Often arranged by prior correspondence, these deals afforded the emigrants some confidence in their destination and employer. If the two-week period passed, the redemption became open to general bidding from any colonist who needed laborers. After serving out their indentures, the redemptioners became free to stake out their own farms, usually on the frontier, where land was cheaper. And so as the origins of the free colonists changed, so did the destinations of choice. During the seventeenth century, most of the colonists had been English, and their three primary destinations the West Indies, the Chesapeake, and New England. During the next century, newer colonial districts offered greater opportunities in the form of more fertile and abundant farmland. Consequently, the eighteenth-century emigrants primarily went to the more recently settled Carolinas and Georgia, as well as to the Middle Colonies (New York, New Jersey, Delaware, Pennsylvania). The greatest magnet for emigrants was Pennsylvania, which enjoyed a temperate climate, fertile farmland, peaceful relations with its Native Americans, and a reputation for religious toleration derived from its Quaker founder, William Penn. The new waves of Scotch-Irish and German emigrants helped to swell Pennsylvania’s population from 18,000 in 1700 to 120,000 by 1750. The Quakers became a minority in their own colony, slipping to just a quarter of the population by 1750. The Scotch-Irish accounted for an equal share of Pennsylvania’s inhabitants, while the Germans became even more numerous, about 40 percent of the total. Although the diverse groups often disliked one another and longed for a more homogeneous society, none had the numbers and power to impose their own beliefs or to drive out others. By necessity, almost all gradually accepted the mutual forbearance of a pluralistic society as an economic boon and the best guarantee for their own faith. But freedom was only part of the colonial American story; slavery was another part. Contrary to popular myth, most eighteenth-century emigrants did not come to America of their own free will in search of liberty. Nor were they Europeans. Most were enslaved Africans forced across the Atlantic to work on plantations raising American crops for the European market. During the eighteenth century, the British colonies imported 1.5 million slaves—more than three times the number of free immigrants. The expanding slave trade and plantation agriculture jointly enriched the European empires—especially the British. The slave traders provided the labor essential to the plantations producing the commodities (sugar, tobacco, and rice) that drove the expansion of British overseas trade in North America. During the eighteenth century, the British seized a commanding lead in the transatlantic slave trade, carrying about 2.5 million slaves, compared to the 1.8 million borne by the second-place Portuguese (primarily to Brazil) and the 1.2 million transported by the third-place French. The British slavers sold about half of their imports to their own colonies and the other half to the French and Spanish colonies, often illegally. 
At first the West Indies consumed almost all of the slaves imported into British America: 96 percent of the 275,000 brought in the seventeenth century. During the next century, the West Indian proportion slipped as the growing volume and competition of the slave trade carried more slaves on to the Chesapeake and Carolinas. The West Indies had the greatest demand because sugar plantations were both profitable and deadly places to work. The profits enabled the planters to pay premium prices for slaves to replace the thousands consumed by a brutal work regimen and tropical diseases. In addition to the high death rate, slaves suffered from a low birth rate, as the protein-deficient diet and harsh field work under the tropical sun depressed female fertility and increased infant mortality. On the colonial mainland, slave births exceeded their deaths, enabling that population to grow through natural increase, especially after 1740. In the Chesapeake in particular, the slaves were better nourished and exposed to less disease. During the colonial era, the mainland colonists imported 250,000 slaves, but they sustained a black population of 576,000 by 1780. The British West Indies had only 350,000 slaves in 1780 despite importing 1.2 million during the preceding two centuries. Where did the slaves come from? About three-quarters of the British colonial slave imports originated from Africa’s west coast between the Senegal River in the north and the Congo River to the south. Although they did not directly seize slaves, the European traders indirectly promoted African wars and kidnapping gangs by offering premium prices for captives. The traders provided guns that African clients could employ in raids for captives to pay for the weapons. Some kingdoms, principally Ashanti and Dahomey, became wealthy and powerful by slave-raiding their poorly armed neighbors. As guns became essential for defense, the people had to procure them by raiding on behalf of their suppliers, lest they instead participate in the slave trade as victims. The shippers had two not entirely compatible goals: to cram as many slaves aboard as possible and to get as many as possible across the Atlantic alive and healthy. One school of thought, the “loose packers,” argued that a little more room, better food, and some exercise landed a healthier and more profitable cargo in the colonies. But most slavers were “tight packers,” calculating that the greatest profits came from landing the largest number, accepting the loss of some en route as an essential cost. Seventeenth-century slave voyages probably killed about 20 percent of the passengers. By the late eighteenth century, modest improvements in food, water, and cleanliness gradually cut the mortality rate in half to about 10 percent. By comparison, only about 4 percent of the English convicts died during their passages across the Atlantic. Arriving with distinct languages and identities such as Ashantis, Fulanis, Ibos, Malagasies, Mandingos, and Yorubas, the slaves would find a new commonality as Africans in America. Within British America, that acculturation varied considerably by region, with the greatest assimilation in the northern colonies, where Africans were a minority, and the least in the southern, especially the West Indies, where slaves were the majority. During the mid-eighteenth century, African slaves were small minorities in New England (about 2 percent) and the Middle Colonies (about 8 percent). 
Dispersed or living in cramped settings, the northern blacks often found it difficult to form families and raise children. Many northern masters actively discouraged slave marriages and childbearing, considering children an unwarranted expense. The shortage of women among northern slave imports also frustrated black men. Many found relief by marrying women of the Indian enclave communities, where service in colonial wars had disproportionately killed native men, skewing the Native American population in favor of women. As small minorities dispersed among many households, the northern slaves lived and worked beside and among whites, often sleeping and eating in the master’s house. By necessity, the northern slaves quickly absorbed Euro-American culture, including the English language and the Christian faith. In the process, they lost most of their African culture, including their native languages. The slavery of the West Indies was far harsher. Sugar plantations enforced the most regimented and demanding work conditions of any crop grown in the empire. On the sugar islands, slaves outnumbered whites by about nine to one (with some variation between islands). That preponderance alarmed masters, as did the steady arrival of African newcomers, with alien ways and defiant attitudes. Slave life in the marshy South Carolina and Georgia low country more closely resembled that in the West Indies than in the northern colonies. Directly imported from Africa by the hundreds to work on rice and indigo plantations, the low-country slaves outnumbered whites by more than 2:1. Dwelling in large concentrations on rural plantations, the Carolina and Georgia slaves could preserve (by adaptation) much of their African culture, including traditional African names. The low-country interplay of tradition and innovation led to the development of a new, composite language—Gullah—based on several African languages and distinct in grammar and structure from English. But the low-country slaves paid a very heavy price for their cultural autonomy and measure of control over their work. Labor in the hot swamps and exposure to subtropical diseases killed blacks more quickly than they could reproduce. As in the West Indies, only continued imports from Africa kept the slave population growing. In turn, those regular infusions of “New Negroes” helped maintain the predominantly African culture of the lowland slaves. Both the conditions and numbers of slaves in the eighteenth-century Chesapeake area fell, between the northern colonies at one extreme, and the West Indies or the Carolina-Georgia low country, at the other. In 1750 the Chesapeake hosted the great majority of slaves in mainland British America: 150,000 compared to 60,000 in the Low Country and 33,000 in the northern colonies. Slaves comprised about 40 percent of the population in Maryland and Virginia—large enough to concern, but rarely to terrify, their masters, and large enough to preserve some but not all African ways and words. Chesapeake blacks enjoyed the best demographic conditions allowed slaves within the British empire. Cultivating tobacco was hard work, but less brutal than slogging in a rice field or broiling in a cane field—and less exposed to mosquitoes bearing malaria and yellow fever. Unlike many northern slaves, Chesapeake slaves also lived in sufficient concentrations to find marriage partners and bear children. 
Consequently, natural increase swelled the Chesapeake slave population, which enabled the planters to reduce their African imports after 1750. Thereafter, Creole slaves predominated in the Chesapeake. As the African infusion shrank, the Chesapeake slave culture became more American. Compared to northern blacks, the Chesapeake slaves were less surrounded and watched by whites. Compared to West Indian or low-country slaves, the Chesapeake blacks were more exposed to the culture of their masters. Chesapeake blacks developed no distinct language and rarely preserved African names for their children, but they put their own content into the cultural forms that they selectively borrowed from their masters. In the slave dialect, English words were affixed to an African grammar and syntax. The blacks gradually adapted evangelical Christianity to their own emphasis on healing magic, emotional singing, and raucous funeral rituals that celebrated death as a spiritual liberation and return to Africa. They also added European instruments to their banjos, rattles, and drums to craft a music that expressed the African emphasis on rhythm and percussion. In turn, the new African-American culture influenced the white children raised by black servants on Chesapeake plantations. As the colonial population became less English, it assumed a new ethnic and racial complexity that increased the gap between freedom and slavery, privilege and prejudice, wealth and poverty, and white and black. Eighteenth-century America became simultaneously and inseparably a land of black slavery and white opportunity. Primarily on the basis of race, the colonies offered some emigrants greater liberty and prosperity, while others suffered the intense exploitation and deprivation of plantation slavery. Enslaved Africans dominated the eighteenth-century human flow across the Atlantic to British America, but the colonial white population remained more than twice as large. This paradox reflected the demographic stress of American slavery for Africans and the demographic rewards to the descendants of Europeans. In 1780 the black population in British America was less than half the total number of African emigrants received during the preceding century, while the white population exceeded its emigrant source by 3:1, thanks especially to the healthy conditions in New England and the Middle Colonies. Far from becoming more homogeneous and united during the eighteenth century, Americans became ever more diverse. Historian Jill Lepore calculates that “the percentage of non-native speakers in the United States was actually greater in 1790 than in 1990.” Consequently, the leaders of the American Revolution and the early republic faced a daunting task: to construct a sense of nationalism to unify an ethnically, racially, and linguistically disparate population. Contrary to the patriotic schoolbook version of the revolution, which depicts a homogeneous people united behind a confident elite, America’s leaders often expressed fears that the new nation was too big and its people too different to hang together for long. Thomas Paine noted: “If there is a country in the world where concord … would be least expected, it is America. Made up, as it is, of people from different nations, accustomed to different forms and habits of government, speaking different languages, and more different in their modes of worship, it would appear that the union of such a people was impracticable.” 
Consequently, we must admire the Founders for crafting, under great duress, a collective identity and purpose defined by revolutionary documents and victories, a shared set of achievements that held the new nation together until the Civil War. For Chesapeake emigration and settlement, see Kathleen M. Brown, Good Wives, Nasty Wenches, and Anxious Patriarchs: Gender, Race, and Power in Colonial Virginia (Chapel Hill: University of North Carolina Press, 1996); James Horn, Adapting to a New World: English Society in the Seventeenth-Century Chesapeake (Chapel Hill: University of North Carolina Press, 1994); John J. McCusker and Russell R. Menard, The Economy of British America, 1607–1789 (Chapel Hill: University of North Carolina Press, 1985); Darrett B. Rutman and Anita H. Rutman, A Place in Time: Middlesex County, Virginia, 1650–1750 (New York: W. W. Norton & Co., 1984). For New England emigration and settlement, see Virginia DeJohn Anderson, New England’s Generation: The Great Migration and the Formation of Society and Culture in the Seventeenth Century (New York: Cambridge University Press, 1991); Richard Archer, “New England Mosaic: A Demographic Analysis for the Seventeenth Century,” William and Mary Quarterly, 3d Ser., XLVII (Oct. 1990), pp. 477–502; David Cressy, Coming Over: Migration and Communication between England and New England in the Seventeenth Century (New York: Cambridge University Press, 1987). For the British Atlantic of the eighteenth century, see Marilyn C. Baseler, Asylum for Mankind: America, 1607–1800 (Ithaca, N.Y.: Cornell University Press, 1998); Richard S. Dunn, “Servants and Slaves: The Recruitment and Employment of Labor,” in Jack P. Greene and J. R. Pole, eds., Colonial British America: Essays in the New History of the Early Modern Era (Baltimore: Johns Hopkins University Press, 1984), pp. 157–194; and Ian K. Steele, The English Atlantic, 1675–1740: An Exploration of Communication and Community (New York: Oxford University Press, 1986). For eighteenth-century English emigration, see Bernard Bailyn, Voyagers to the West: A Passage in the Peopling of America on the Eve of the Revolution (New York: Alfred A. Knopf, 1986); A. Roger Ekirch, Bound for America: The Transportation of British Convicts to the Colonies, 1718–1775 (Oxford: Clarendon Press, 1987); James Horn, “British Diaspora: Emigration from Britain, 1680–1815,” in P. J. Marshall, ed., The Oxford History of the British Empire, Volume II: The Eighteenth Century (New York: Oxford University Press, 1998), pp. 28–52. For the transatlantic slave trade, see Robin Blackburn, The Making of New World Slavery: From the Baroque to the Modern, 1492–1800 (New York: Verso, 1997); Philip D. Curtin, The Atlantic Slave Trade: A Census (Madison: University of Wisconsin Press, 1969); Herbert S. Klein, The Atlantic Slave Trade (New York: Cambridge University Press, 1999); John Thornton, Africa and Africans in the Making of the Atlantic World, 1400–1680 (New York: Cambridge University Press, 1992); James Walvin, Black Ivory: A History of British Slavery (London: Harper Collins, 1992). For Scottish emigration to the colonies, see William R. Brock, Scotus Americanus: A Survey of the Sources for Links between Scotland and America in the Eighteenth Century (Edinburgh: Edinburgh University Press, 1982); Ian C. C. Graham, Colonists from Scotland: Emigration to North America, 1707–1783 (Ithaca, N.Y.: Cornell University Press, 1956); Alan L. 
Karras, Sojourners in the Sun: Scottish Migrants to Jamaica and the Chesapeake, 1740–1800 (Ithaca, N.Y.: Cornell University Press, 1992); Ned C. Landsman, Scotland and its First American Colony, 1683–1765 (Princeton, N.J.: Princeton University Press, 1985); Ned C. Landsman, “The Provinces and the Empire: Scotland, the American Colonies and the Development of British Provincial Identity,” in Lawrence Stone, ed., An Imperial State at War: Britain from 1689 to 1815 (New York: Routledge, 1994), pp. 258–87; Ned C. Landsman, “Nation, Migration, and the Province in the First British Empire: Scotland and the Americas, 1600–1800,” American Historical Review CV (April, 1999), pp. 463–75; T. C. Smout, N. C. Landsman, and T. M. Devine, “Scottish Emigration in the Seventeenth and Eighteenth Centuries,” in Nicholas Canny, ed., Europeans on the Move: Studies on European Migration, 1500–1800 (Oxford: Clarendon Press, 1994), pp. 76–112. For Ulster Scots emigration to the colonies, see R. J. Dickson, Ulster Emigration to Colonial America, 1718–1775 (London: Routledge, 1966); Maldwyn A. Jones, “The Scotch-Irish in British America,” in Bernard Bailyn and Philip D. Morgan, eds., Strangers within the Realm: Cultural Margins of the First British Empire (Chapel Hill: University of North Carolina Press, 1991), pp. 284–313; James G. Leyburn, The Scotch-Irish: A Social History (Chapel Hill: University of North Carolina Press, 1962). For the German migration, see Georg Fertig, “Transatlantic Migration from the German-Speaking Parts of Central Europe, 1600–1800: Proportions, Structures, and Explanations,” in Nicholas Canny, ed., Europeans on the Move: Studies on European Migration, 1500–1800 (Oxford: Clarendon Press, 1994), pp. 192–235; Aaron Spencer Fogleman, Hopeful Journeys: German Immigration, Settlement, and Political Culture in Colonial America, 1717–1775 (Philadelphia: University of Pennsylvania Press, 1996); A. G. Roeber, “In German Ways? Problems and Potentials of Eighteenth-Century German Social and Emigration History,” William and Mary Quarterly, 3d Ser., XLIV (Oct., 1987), pp. 750–74; A. G. Roeber, “The Origin of Whatever is Not English among Us: The Dutch-speaking and the German-speaking Peoples of Colonial British America,” in Bernard Bailyn and Philip D. Morgan, eds., Strangers within the Realm: Cultural Margins of the First British Empire (Chapel Hill, N.C.: University of North Carolina Press, 1991), pp. 220–83; Marianne Wokeck, “Harnessing the Lure of the ‘Best Poor Man’s Country’: The Dynamics of German-Speaking Immigration to British North America, 1683–1783,” in Ida Altman and James Horn, To Make America: European Emigration in the Early Modern Period (Berkeley: University of California Press, 1991), pp. 204–43. For colonial slavery, see Ira Berlin, “Time, Space, and the Evolution of Afro-American Society on British Mainland North America,” American Historical Review LXXXV (Jan. 1980), pp. 44–78; Ira Berlin, Many Thousands Gone: The First Two Centuries of Slavery in North America (Cambridge, Mass.: Harvard University Press, 1998); Barbara Bush, Slave Women in Caribbean Society, 1650–1838 (Bloomington: Indiana University Press, 1990); Sylvia R. Frey, Water From the Rock: Black Resistance in a Revolutionary Age (Princeton: Princeton University Press, 1991); Allan Kulikoff, Tobacco and Slaves: The Development of Southern Cultures in the Chesapeake, 1660–1800 (Chapel Hill: University of North Carolina Press, 1986); Philip D. 
Morgan, “British Encounters with Africans and African-Americans, circa 1600–1780,” in Morgan and Bernard Bailyn, eds., Strangers within the Realm: Cultural Margins of the First British Empire (Chapel Hill: University of North Carolina Press, 1991); Philip D. Morgan, Slave Counterpoint: Black Culture in the Eighteenth-Century Chesapeake and Lowcountry (Chapel Hill: University of North Carolina Press, 1998). For the economic development of the West Indies, see Philip D. Curtin, The Rise and Fall of the Plantation Complex: Essays in Atlantic History (New York: Cambridge University Press, 1998); K. G. Davies, The North Atlantic World in the Seventeenth Century (Minneapolis: University of Minnesota Press, 1974); John J. McCusker and Russell R. Menard, The Economy of British America, 1607–1789 (Chapel Hill: University of North Carolina Press, 1985); Richard B. Sheridan, Sugar and Slavery: An Economic History of the British West Indies, 1623–1775 (Baltimore: Johns Hopkins Press, 1973). Jill Lepore, A is for American: Letters and Other Characters in the Newly United States (New York: Alfred A. Knopf, 2002), pp. 27–8. Ibid., p. 27.
The meridian circle is an instrument for timing of the passage of stars across the local meridian, an event known as a transit, while at the same time measuring their angular distance from the nadir. These are special purpose telescopes mounted so as to allow pointing only in the meridian, the great circle through the north point of the horizon, the zenith, the south point of the horizon, and the nadir. Meridian telescopes rely on the rotation of the Earth to bring objects into their field of view and are mounted on a fixed, horizontal, east-west axis.

The similar transit instrument, transit circle or transit telescope is likewise mounted on a horizontal axis, but the axis need not be fixed in the east-west direction. For instance, a surveyor's theodolite can function as a transit instrument if its telescope is capable of a full revolution about the horizontal axis. Meridian circles are often called by these names, although they are less specific.

For many years, transit timings were the most accurate method of measuring the positions of heavenly bodies, and meridian instruments were relied upon to perform this painstaking work. Before spectroscopy, photography, and the perfection of reflecting telescopes, the measuring of positions (and the deriving of orbits and astronomical constants) was the major work of observatories.

Fixing a telescope to move only in the meridian has advantages in the high-precision work for which these instruments are employed:
- The very simple mounting is easier to manufacture and maintain to a high precision.
- At most locations on the Earth, the meridian is the only plane in which celestial coordinates can be indexed directly with such a simple mounting; the equatorial coordinate system aligns naturally with the meridian at all times. Revolving the telescope about its axis moves it directly in declination, and objects move through its field of view in right ascension.
- All objects in the sky are subject to the distortion of atmospheric refraction, which tends to make objects appear slightly higher in the sky than they actually are. At the meridian, this distortion is in declination only, and is easily accounted for; elsewhere in the sky, refraction causes a complex distortion in coordinates which is more difficult to reduce. Such complex analysis is not conducive to high precision.

Basic instrument

The state of the art of meridian instruments of the late 19th and early 20th century is described here, giving some idea of the precise methods of construction, operation and adjustment employed.

The earliest transit telescope was not placed in the middle of the axis, but nearer to one end, to prevent the axis from bending under the weight of the telescope. Later, it was usually placed in the centre of the axis, which consisted of one piece of brass or gun metal with turned cylindrical steel pivots at each end. Several instruments were made entirely of steel, which was much more rigid than brass. The pivots rested on V-shaped bearings, either set into massive stone or brick piers which supported the instrument, or attached to metal frameworks on the tops of the piers. The temperature of the bearings was monitored by thermometers. The piers were usually separate from the foundation of the building, to prevent transmission of vibration from the building to the telescope. 
To relieve the pivots from the weight of the instrument, which would have distorted their shape, each end of the axis was supported by a hook with friction rollers, suspended from a lever supported by the pier, counterbalanced so as to leave only about 10 pounds force (45 N) on each bearing. In some cases, the counterweight pushed up on the bearing from below. The bearings were set nearly in a true east-west line, but fine adjustment was possible by horizontal and vertical screws. A spirit level was used to monitor for any inclination of the axis to the horizon. Eccentricity (an off-center condition) of the telescope's axis was accounted for, in some cases, by providing another telescope through the axis itself. By observing the motion of an artificial star through this axis telescope as the main telescope was rotated, the shape of the pivots, and any wobble of the axis, could be determined.

Near each end of the axis, attached to the axis and turning with it, was a circle or wheel for measuring the angle of the telescope to the horizon. Generally 3 to 3.5 feet in diameter, it was divided to 2 or 5 arcminutes, on a slip of silver set into the face of the circle near the circumference. These graduations were read by microscopes, generally four for each circle, mounted to the piers or a framework surrounding the axis, at 90° intervals around the circles. By averaging the four readings the eccentricity (from inaccurate centering of the circles) and the errors of graduation were greatly reduced. Each microscope was furnished with a micrometer screw, which moved crosshairs, with which the distance of the circle graduations from the centre of the field of view could be measured. The drum of the screw was divided to measure single seconds of arc (0.1" being estimated), while the number of revolutions was counted by a kind of comb in the field of view. The microscopes were placed at such a distance from the circle that one revolution of the screw corresponded to 1 arcminute (1') on the circle. The error was determined occasionally by measuring standard intervals of 2' or 5' on the circle. The periodic errors of the screw were accounted for. On some instruments, one of the circles was graduated and read more coarsely than the other, and was used only in finding the target stars.

The telescope consisted of two tubes screwed to the central cube of the axis. The tubes were usually conical and as stiff as possible to help prevent flexure. The connection to the axis was also as firm as possible, as flexure of the tube would affect declinations deduced from observations. The flexure in the horizontal position of the tube was determined by two collimators: telescopes placed horizontally in the meridian, north and south of the transit circle, with their objective lenses towards it. These were pointed at one another (through holes in the tube of the telescope, or by removing the telescope from its mount) so that the crosshairs in their foci coincided. The collimators were often permanently mounted in these positions, with their objectives and eyepieces fixed to separate piers. The meridian telescope was pointed to one collimator and then the other, moving through exactly 180°, and by reading the circle the amount of flexure (the amount the readings differed from 180°) was found. Absolute flexure, that is, a fixed bend in the tube, was detected by arranging that eyepiece and objective lens could be interchanged, and the average of the two observations of the same star was free from this error. 
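The cancellation achieved by averaging the four microscope readings can be illustrated with a brief sketch. This is not a reconstruction of any particular instrument's reduction; the numbers (a 4-arcsecond eccentricity at an arbitrary phase) are invented purely for illustration.

```python
import math

def mean_circle_reading(readings_deg):
    """Average the readings of microscopes spaced evenly around the circle.

    For four microscopes 90 degrees apart, the first-harmonic error produced
    by an off-centre circle (eccentricity) is equal and opposite at opposite
    microscopes, so it cancels in the mean; independent graduation errors are
    also reduced by the averaging.
    """
    return sum(readings_deg) / len(readings_deg)

# Illustration with invented numbers: a 4-arcsecond eccentricity error.
true_angle = 35.0                       # degrees, the angle being measured
ecc_amplitude = 4.0 / 3600.0            # eccentricity error amplitude, in degrees
phase = math.radians(50.0)              # arbitrary orientation of the error

readings = [
    true_angle
    + ecc_amplitude * math.sin(math.radians(true_angle + k * 90.0) + phase)
    for k in range(4)                   # microscopes at 0°, 90°, 180°, 270°
]
print(mean_circle_reading(readings))    # ≈ 35.0: the eccentricity term has cancelled
```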
Parts of the apparatus were sometimes enclosed in glass cases to protect them from dust. These cases had openings for access. Other parts were closed against dust by removable silk covers. Certain instrumental errors could be averaged out by reversing the telescope on its mounting. A carriage was provided, which ran on rails between the piers, and on which the axis, circles and telescope could be raised by a screw-jack, wheeled out from between the piers, turned 180°, wheeled back, and lowered again. The observing building housing the meridian circle did not have a rotating dome, as is often seen at observatories. Since the telescope observed only in the meridian, a vertical slot in the north and south walls, and across the roof between these, was all that was necessary. The building was unheated and kept as much as possible at the temperature of the outside air, to avoid air currents which would disturb the telescopic view. The building also housed the clocks, recorders, and other equipment for making observations. At the focal plane, the eye end of the telescope had a number of vertical and one or two horizontal wires (crosshairs). In observing stars, the telescope was first directed downward at a basin of mercury forming a perfectly horizontal mirror and reflecting an image of the crosshairs back up the telescope tube. The crosshairs were adjusted until coincident with their reflection, and the line of sight was then perfectly vertical; in this position the circles were read for the nadir point. The telescope was next brought up to the approximate declination of the target star by watching the finder circle. The instrument was provided with a clamping apparatus, by which the observer, after having set the approximate declination, could clamp the axis so the telescope could not be moved in declination, except very slowly by a fine screw. By this slow motion, the telescope was adjusted until the star moved along the horizontal wire (or if there were two, in the middle between them), from the east side of the field of view to the west. Following this, the circles were read by the microscopes for a measurement of the apparent altitude of the star. The difference between this measurement and the nadir point was the nadir distance of the star. A movable horizontal wire or declination-micrometer was also used. Another method of observing the apparent altitude of a star was to take half of the angular distance between the star observed directly and its reflection observed in a basin of mercury. The average of these two readings was the reading when the line of sight was horizontal, the horizontal point of the circle. The small difference in latitude between the telescope and the basin of mercury was accounted for. The vertical wires were used for observing transits of stars, each wire furnishing a separate result. The time of transit over the middle wire was estimated, during subsequent analysis of the data, for each wire by adding or subtracting the known interval between the middle wire and the wire in question. These known intervals were predetermined by timing a star of known declination passing from one wire to the other, the pole star being best on account of its slow motion. Timings were originally made by an "eye and ear" method, estimating the interval between two beats of a clock. Later, timings were registered by pressing a key, the electrical signal making a mark on a strip recorder. 
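Because every wire yields its own estimate of the meridian crossing, the reduction amounts to applying the known wire intervals and averaging. The sketch below uses invented wire offsets and clock times; for simplicity it also ignores the scaling of the equatorial wire intervals by the secant of the star's declination, which a full reduction would include.

```python
# A minimal sketch of reducing a multi-wire transit, using invented numbers.
# Each vertical wire yields an independent estimate of the time of transit
# over the middle wire once its known offset from the middle wire is applied.

wire_offsets_s = [-20.0, -10.0, 0.0, 10.0, 20.0]        # seconds; middle wire = 0
observed_times_s = [101.3, 111.2, 121.4, 131.3, 141.2]  # clock time at each wire

# Subtracting the offset refers every timing to the middle wire.
estimates = [t - off for t, off in zip(observed_times_s, wire_offsets_s)]

mean_transit = sum(estimates) / len(estimates)
print(round(mean_transit, 2))   # adopted time of meridian transit, 121.28 s
```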
Later still, the eye end of the telescope was usually fitted with an impersonal micrometer, a device which allowed matching a vertical crosshair's motion to the star's motion. Set precisely on the moving star, the crosshair would trigger the electrical timing of the meridian crossing, removing the observer's personal equation from the measurement. The field of the wires could be illuminated; the lamps were placed at some distance from the piers in order not to heat the instrument, and the light passed through holes in the piers and through the hollow axis to the center, whence it was directed to the eye-end by a system of prisms. To determine absolute declinations or polar distances, it was necessary to determine the observatory's colatitude, or distance of the celestial pole from the zenith, by observing the upper and lower culmination of a number of circumpolar stars. The difference between the circle reading after observing a star and the reading corresponding to the zenith was the zenith distance of the star, and this plus the colatitude was the north polar distance. To determine the zenith point of the circle, the telescope was directed vertically downwards at a basin of mercury, the surface of which formed an absolutely horizontal mirror. The observer saw the horizontal wire and its reflected image, and moving the telescope to make these coincide, its optical axis was made perpendicular to the plane of the horizon, and the circle reading was 180° + zenith point. In observations of stars refraction was taken into account as well as the errors of graduation and flexure. If the bisection of the star on the horizontal wire was not made in the centre of the field, allowance was made for curvature, or the deviation of the star's path from a great circle, and for the inclination of the horizontal wire to the horizon. The amount of this inclination was found by taking repeated observations of the zenith distance of a star during the one transit, the pole star being the most suitable because of its slow motion. Attempts were made to record the transits of a star photographically. A photographic plate was placed in the focus of a transit instrument and a number of short exposures made, their length and the time being registered automatically by a clock. The exposing shutter was a thin strip of steel, fixed to the armature of an electromagnet. The plate thus recorded a series of dots or short lines, and the vertical wires were photographed on the plate by throwing light through the objective lens for one or two seconds. Meridian circles required precise adjustment to do accurate work. The rotation axis of the main telescope needed to be exactly horizontal. A sensitive spirit level, designed to rest on the pivots of the axis, performed this function. By adjusting one of the V-shaped bearings, the bubble was centered. The line of sight of the telescope needed to be exactly perpendicular to the axis of rotation. This could be done by sighting a distant, stationary object, lifting and reversing the telescope on its bearings, and again sighting the object. If the crosshairs did not intersect the object, the line of sight was halfway between the new position of the crosshairs and the distant object; the crosshairs were adjusted accordingly and the process repeated as necessary. Also, if the rotation axis was known to be perfectly horizontal, the telescope could be directed downward at a basin of mercury, and the crosshairs illuminated. 
The mercury acted as a perfectly horizontal mirror, reflecting an image of the crosshairs back up the telescope tube. The crosshairs could then be adjusted until coincident with their reflection, and the line of sight was then perpendicular to the axis.

The line of sight of the telescope needed to be exactly within the plane of the meridian. This was done approximately by building the piers and the bearings of the axis on an east-west line. The telescope was then brought into the meridian by repeatedly timing the (apparent, incorrect) upper and lower meridian transits of a circumpolar star and adjusting one of the bearings horizontally until the interval between the transits was equal. Another method used calculated meridian crossing times for particular stars as established by other observatories. This was an important adjustment and much effort was spent in perfecting it.

In practice, none of these adjustments were perfect. The small errors introduced by the imperfections were mathematically corrected during the analysis of the data.

Zenith telescopes

Some telescopes designed to measure star transits are zenith telescopes, designed to point straight up at or near the zenith for extremely precise measurement of star positions. They use an altazimuth mount, instead of a meridian circle, fitted with leveling screws. Extremely sensitive levels are attached to the telescope mount to make angle measurements and the telescope has an eyepiece fitted with a micrometer.

The idea of having an instrument (quadrant) fixed in the plane of the meridian occurred even to the ancient astronomers and is mentioned by Ptolemy, but it was not carried into practice until Tycho Brahe constructed a large meridian quadrant.

Meridian circles have been used since the 18th century to accurately measure positions of stars in order to catalog them. This is done by measuring the instant when the star passes through the local meridian. Its altitude above the horizon is noted as well. Knowing one's geographic latitude and longitude, these measurements can be used to derive the star's right ascension and declination. Once good star catalogs were available, a transit telescope could be used anywhere in the world to accurately measure local longitude and time by observing local meridian transit times of catalogue stars. Prior to the invention of the atomic clock this was the most reliable source of accurate time.

In the Almagest Ptolemy describes a meridian circle which consisted of a fixed graduated outer ring and a movable inner ring with tabs that used a shadow to set the Sun's position. It was mounted vertically and aligned with the meridian. The instrument was used to measure the altitude of the Sun at noon in order to determine the path of the ecliptic.

17th century (1600s)

A meridian circle enabled the observer to determine simultaneously right ascension and declination, but it does not appear to have been much used for right ascension during the 17th century, the method of equal altitudes by portable quadrants or measures of the angular distance between stars with an astronomical sextant being preferred. These methods were very inconvenient, and in 1690 Ole Rømer invented the transit instrument. 
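The catalogue reduction mentioned above, obtaining a star's right ascension and declination from the observer's latitude, the measured zenith distance, and the sidereal time of transit, can be illustrated with a brief sketch before turning to the instrument's later development. All numbers are invented, and the sketch assumes a star culminating south of the zenith with refraction already removed.

```python
# A minimal sketch of deriving equatorial coordinates from a meridian
# observation, using invented numbers.
latitude_deg = 51.48          # hypothetical site latitude (north positive)
zenith_distance_deg = 28.30   # measured at transit, corrected for refraction
lst_at_transit_h = 5.9184     # local sidereal time of the transit, in hours

# For a star crossing the meridian south of the zenith:
declination_deg = latitude_deg - zenith_distance_deg
# (a star culminating between the zenith and the pole would instead give
#  latitude + zenith distance)

# On the meridian the hour angle is zero, so the right ascension equals the
# local sidereal time of the transit.
right_ascension_h = lst_at_transit_h

print(f"declination ≈ {declination_deg:.2f} degrees, "
      f"right ascension ≈ {right_ascension_h:.4f} hours")
```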
18th century (1700s) The transit instrument consists of a horizontal axis in the direction east and west resting on firmly fixed supports, and having a telescope fixed at right angles to it, revolving freely in the plane of the meridian: At the same time Rømer invented the altitude and azimuth instrument for measuring vertical and horizontal angles, and in 1704 he combined a vertical circle with his transit instrument, so as to determine both co-ordinates at the same time. This latter idea was, however, not adopted elsewhere although the transit instrument soon came into universal use (the first one at Greenwich was mounted in 1721), and the mural quadrant continued till the end of the century to be employed for determining declinations. The advantage of using a whole circle, as less liable to change its figure, and not requiring reversal in order to observe stars north of the zenith, was then again recognized by Jesse Ramsden, who also improved the method of reading off angles by means of a micrometer microscope as described below. 19th century (1800s) The making of circles was shortly afterwards taken up by Edward Troughton, who in 1806 constructed the first modern transit circle for Groombridge's observatory at Blackheath, the Groombridge Transit Circle (a meridian transit circle). Troughton afterwards abandoned the idea, and designed the mural circle to take the place of the mural quadrant. In the United Kingdom the transit instrument and mural circle continued till the middle of the 19th century to be the principal instrument in observatories, the first transit circle constructed there being that at Greenwich (mounted in 1850) but on the continent the transit circle superseded them from the years 1818-1819, when two circles by Johann Georg Repsold and by Reichenbach were mounted at Göttingen, and one by Reichenbach at Königsberg. The firm of Repsold and Sons was for a number of years eclipsed by that of Pistor and Martins in Berlin, who furnished various observatories with first-class instruments, but following the death of Martins the Repsolds again took the lead, and made many transit circles. The observatories of Harvard College (United States), Cambridge and Edinburgh had large circles by Troughton and Simms, who also made the Greenwich circle from the design of Airy. 20th century and beyond (1900s and 2000s) A modern day example of this type of telescope is the 8 inch (~0.2m) Flagstaff Astrometric Scanning Transit Telescope (FASTT) at the USNO Flagstaff Station Observatory. Modern meridian circles are usually automated. The observer is replaced with a CCD camera. As the sky drifts across the field of view, the image built up in the CCD is clocked across (and out of) the chip at the same rate. This allows some improvements: - The CCD can collect light for as long as the image is crossing it, allowing a dimmer limiting magnitude to be reached. - The data can be collected for as long as the telescope is in operation - an entire night is possible, allowing a strip of sky many degrees in length to be scanned. - Data can be compared directly to any reference object which happens to be within the scan - usually a bright extragalactic object, like a quasar, with an accurately-known position. This eliminates the need for some of the painstaking adjustment of the meridian instrument, although monitoring of declination, azimuth, and level is still performed with CCD scanners and laser interferometers. 
- Atmospheric refraction can be accounted for automatically, by monitoring temperature, pressure, and dew point of the air electronically. - Data can be stored and analyzed at will. - Groombridge Transit Circle (1806) - Carlsberg Meridian Telescope (Carlsberg Automatic Meridian Circle) (1984) - Tokyo Photoelectric Meridian Circle (1985) See also - This article incorporates text from a publication now in the public domain: Chisholm, Hugh, ed. (1911). "Transit Circle". Encyclopædia Britannica (11th ed.). Cambridge University Press. - Chauvenet, William (1868). A Manual of Spherical and Practical Astronomy, II. Trubner & Co., London. pp. 131, 282., at Google books - Newcomb, Simon (1906). A Compendium of Spherical Astronomy. MacMillan Co., New York. p. 317ff, 331ff. , at Google books - Norton, William A. (1867). A Treatise on Astronomy, Spherical and Physical. John Wiley & Son, New York. p. 24ff. , at Google books - Chauvenet (1868), p. 132, art. 119; p. 283, art. 195 - Norton (1867), p. 39ff - Bond, William C.; Bond, George P.; Winlock, Joseph (1876). Annals of the Astronomical Observatory of Harvard College. Press of John Wilson and Son, Cambridge, Mass. p. 25. , at Google books - Bond, Bond and Winlock (1876), p. 25 - Bond, Bond and Winlock (1876), p. 27 - Bond, Bond and Winlock (1876), p. 25 - Bond, Bond and Winlock (1876), p. 26 - Chauvenet (1868), p. 138, art. 121 - Norton (1867), p. 33ff - Ptolemy, Claudius; Toomer, G. J. (1998). Ptolemy's Almagest. Princeton University Press. p. 61. ISBN 0-691-00260-6. - Stone, Ronald C.; Monet, David G. (1990). "The USNO (Flagstaff Station) CCD Transit Telescope and Star Positions Measured From Extragalactic Sources". Proceedings of IAU Symposium No. 141. pp. 369–370., at SAO/NASA ADS - The Carlsberg Meridian Telescope Further reading - The Practical Astronomer, Thomas Dick (1848) - Elements of Astronomy, Robert Stawell Ball (1886) - Meridian circle observations made at the Lick Observatory, University of California, 1901-1906, Richard H. Tucker (1907) - an example of the adjustments and observations of an early 20th century instrument
http://en.wikipedia.org/wiki/Meridian_circle
Access to land and its economic bounty has been a constant issue throughout Zimbabwe's history. Under both British colonial and then white minority government rule, black Africans were denied access to the best agricultural land and forced to eke out their living from small plots on "tribal reserves." The Land Apportionment Act of 1930 restricted blacks' access to land and forced them into wage labor. In 1965, the former British colony known as Rhodesia -- after Britain's Cecil Rhodes, who arrived there in the late 1800s -- declared itself an independent state. Prime Minister Ian Smith, however, refused pressures from his government to address historical racial inequities and any move toward black African majority rule. The push for independence failed. It was in response to this that two major black African political parties formed in the late 1960s, the Zimbabwe African People's Union (ZAPU) and the Zimbabwe African National Union (ZANU), to agitate for land and political rights for black Africans.

Lancaster House Agreement

After years of intense guerrilla fighting, Smith eventually yielded to British-brokered peace talks in 1979. The result was the Lancaster House Agreement, signed by British and black Zimbabwean liberation leaders from the African National Conference, ZANU and ZAPU parties. The agreement marked the end of colonization but included restrictions on land acquisition that protected white farm owners. Under the agreement, the new Zimbabwean government, led by President Robert Mugabe -- who won British-supervised elections in 1980 -- could not seize white-owned land for the first ten years of independence. The government could buy land from white farmers only through a willing-seller, willing-buyer program at full market prices. Britain provided 44 million pounds to the government for land resettlement projects. In the first decade of independence the government acquired 40 percent of the targeted 8 million hectares (19.77 million acres) of land. More than 50,000 families were resettled on more than 3 million hectares (7.41 million acres).

Phase one land reform

In 1990 the willing-seller clause of the Lancaster House Agreement expired and Mugabe's government amended the constitution to allow compulsory acquisition of white-owned land. The 1992 Land Acquisition Act followed and provided the government with additional land resettlement tools, including the removal of full market price restrictions, limits on farm size and the introduction of a land tax, though the tax was never implemented. The second decade (1990-1997) of land redistribution was slow, and Britain accused Mugabe's government of giving land and money to its political cronies instead of the landless poor. Less than 1 million hectares (2.47 million acres) were acquired and fewer than 20,000 families were resettled during that time. Much of the land acquired during what has become known as "phase one" of land reform was of poor quality, according to Human Rights Watch. Only 19 percent of the almost 3.5 million hectares (8.65 million acres) of resettled land was considered prime, or farmable. The initial 44 million pound resettlement grant, which Mugabe's government had spent by 1988, formally expired in 1996.

Phase two land reform

In 1997, with pressure from landless blacks mounting in a declining economy, Mugabe announced that he would seize approximately 1,500 white-owned farms. He said that Britain should pay any compensation to these farmers, as Rhodesian settlers had originally stolen the land from black farmers.
Britain responded by saying it was not responsible to meet the costs of land purchases. In 1999 Mugabe announced that he would attempt to amend the country's 19-year-old constitution in order to strengthen the executive arm of the government and extend its powers to acquire land at will and without compensation. Prominent in the fight against the proposed changes was the newly formed Movement for Democratic Change (MDC), the first true opposition party in post-independence Zimbabwe. Mugabe's ZANU-PF government won the 2000 parliamentary elections but lost the referendum to change the constitution.

To appease the landless masses and maintain political popularity, Mugabe's government officially encouraged veterans to occupy white-owned farms. In some instances, members of the army helped facilitate land grabs and police were told not to respond to landowners' complaints or to remove squatters. As a result of the land grabs, many white farmers and their black workers were killed or subjected to violent attacks.

Fast track land reform

According to the Commercial Farmers' Union, an organization that represents 4,000 white farmers across Zimbabwe, despite the government's efforts to take back land, by 1999 some 4,500 commercial farmers still held 11 million hectares (27.18 million acres) of the most fertile land in Zimbabwe. With the presidential election only two years away and pressure still on from the opposition MDC, Mugabe announced his government's "fast track" resettlement program in July 2000 and defended the land seizures.

"In Zimbabwe, and only because of the color line arising from British colonialism, 70 percent of the best arable land is owned by less than 1 percent of the population who happen to be white, while the black majority are congested on barren land," Mugabe said in a speech to the United Nations Millennium Summit in New York City on Sept. 8, 2000. "We have sought to redress this inequity through a land reform and resettlement program" that will result in "economic and social justice and [adhere to] our constitution and laws," he added.

Mugabe said his new program would be a "one farmer-one farm" program. It would be an attempt to achieve a more equitable distribution of land by taking land away from foreign commercial farmers who owned more than one farm and redistributing it to poor and middle-class landless black Zimbabweans. Under the Zimbabwean constitution, underused or derelict farms would be targeted.

The result of Mugabe's legacy

Between June 2000 and February 2001, the government listed 2,706 farms, covering more than 6 million hectares (14.83 million acres), for compulsory acquisition. The process was complicated, chaotic and in violation of the original Zimbabwean constitution. Often, land seizure was based on whether the farm owner was a supporter of the MDC or, according to a United Nations Development Programme report, was simply error-filled. Some farms listed for acquisition and resettlement included flooded land, industrial land and even land already resettled, the report said.

Mugabe won contested presidential elections marred by violence and corruption in March 2002. By May, a special session of Zimbabwe's parliament passed additional amendments to the Land Acquisition Amendment Act. All acquisition orders, even those signed before May, immediately transferred ownership of land to the state. Farmers were to cease farming after 45 days and leave the land within 90 days. In October 2003, the BBC reported that the Zimbabwean government had seized approximately 8.6 million hectares of land (some 4,300 farms) as part of its program and that 1,323 white farmers remained.
About 127,000 blacks have been resettled. By Annie Schleicher, Online NewsHour
http://www.pbs.org/newshour/bb/africa/land/gp_zimbabwe.html
IN NOVEMBER 1975, after nearly five centuries as a Portuguese colony, Angola became an independent state. By late 1988, however, despite fertile land, large deposits of oil and gas, and great mineral wealth, Angola had achieved neither prosperity nor peace-- the national economy was stagnating and warfare was ravaging the countryside. True independence also remained unrealized as foreign powers continued to determine Angola's future. But unattained potential and instability were hardships well known to the Angolan people. They had suffered the outrage of slavery and the indignity of forced labor and had experienced years of turmoil going back to the early days of the indigenous kingdoms. The ancestors of most present-day Angolans found their way to the region long before the first Portuguese arrived in the late fifteenth century. The development of indigenous states, such as the Kongo Kingdom, was well under way before then. The primary objective of the first Portuguese settlers in Angola, and the motive behind most of their explorations, was the establishment of a slave trade. Although several early Portuguese explorers recognized the economic and strategic advantages of establishing friendly relations with the leaders of the kingdoms in the Angolan interior, by the middle of the sixteenth century the slave trade had engendered an enmity between the Portuguese and the Africans that persisted until independence. Most of the Portuguese who settled in Angola through the nineteenth century were exiled criminals, called degredados, who were actively involved in the slave trade and spread disorder and corruption throughout the colony. Because of the unscrupulous behavior of the degredados, most Angolan Africans soon came to despise and distrust their Portuguese colonizers. Those Portuguese who settled in Angola in the early twentieth century were peasants who had fled the poverty of their homeland and who tended to establish themselves in Angolan towns in search of a means of livelihood other than agriculture. In the process, they squeezed out the mestiços (people of mixed African and white descent) and urban Africans who had hitherto played a part in the urban economy. In general, these later settlers lacked capital, education, and commitment to their new homelands. When in the early 1930s António Salazar established the New State (Estado Novo) in Portugal, Angola was expected to survive on its own. Accordingly, Portugal neither maintained an adequate social and economic infrastructure nor invested directly in longterm development. Ideologically, Portugal maintained that increasing the density of white rural settlement in Angola was a means of "civilizing" the African. Generally, the Portuguese regarded Africans as inferior and gave them few opportunities to develop either in terms of their own cultures or in response to the market. The Portuguese also discriminated politically, socially, and economically against assimilados --those Africans who, by acquiring a certain level of education and a mode of life similar to that of Europeans, were entitled to become citizens of Portugal. Those few Portuguese officials and others who called attention to the mistreatment of Africans were largely ignored or silenced by the colonial governments. By the 1950s, African-led or mestiço-led associations with explicit political goals began to spring up in Angola. The authoritarian Salazar regime forced these movements and their leaders to operate in exile. 
By the early 1960s, however, political groups were sufficiently organized (if also divided by ethnic loyalties and personal animosities) to begin their drives for independence. Moreover, at least some segments of the African population had been so strongly affected by the loss of land, forced labor, and stresses produced by a declining economy that they were ready to rebel on their own. The result was a series of violent events in urban and rural areas that marked the beginning of a long and often ineffective armed struggle for independence. To continue its political and economic control over the colony, Portugal was prepared to use whatever military means were necessary.

In 1974 the Portuguese army, tired of warfare not only in Angola but in Portugal's other African colonies, overthrew the Lisbon regime. The new regime left Angola to its own devices--in effect, abandoning it to the three major anticolonial movements. Ideological differences and rivalry among their leaderships divided these movements. Immediately following independence in 1975, civil war erupted between the Popular Movement for the Liberation of Angola (Movimento Popular de Libertação de Angola -- MPLA) on the one hand and the National Front for the Liberation of Angola (Frente Nacional de Libertação de Angola -- FNLA) and the National Union for the Total Independence of Angola (União Nacional para a Independência Total de Angola -- UNITA) on the other hand. The MPLA received support from the Soviet Union and Cuba, while the FNLA turned to the United States. UNITA, unable to gain more than nominal support from China, turned to South Africa. Viewing the prospect of a Soviet-sponsored MPLA government with alarm, South Africa invaded Angola. The Soviet and Cuban reaction was swift: the former provided logistical support, and the latter provided troops. By the end of 1976, the MPLA, under the leadership of Agostinho Neto, was in firm control of the government. Members of UNITA retreated to the bush to wage a guerrilla war against the MPLA government, while the FNLA became increasingly ineffective in the north in the late 1970s.

The MPLA, which in 1977 had declared itself a Marxist-Leninist vanguard party, faced the task of restoring the agricultural and production sectors that nearly had been destroyed with the departure of the Portuguese. Recognizing that traditional Marxist-Leninist policies of large-scale expropriation and state ownership would undermine redevelopment efforts, Neto permitted private involvement in commercial and small-scale industry and developed substantial economic relations with Western states, especially in connection with Angola's oil industry. After Neto's death in 1979, José Eduardo dos Santos inherited considerable economic difficulties, including the enormous military costs required to fight UNITA and South African forces. By the end of 1985, the security of the Luanda regime depended almost entirely on Soviet-supplied weaponry and Cuban troop support. Consequently, in the late 1980s Luanda's two main priorities were to end the UNITA insurgency and to make progress toward economic development. By late 1988, a United States-sponsored peace agreement held out some hope that, given time, both priorities could be achieved.
http://country-studies.com/angola/history.html
Satisfying the world's growing appetite for energy could set the stage for global climate change. Much of our electrical power and heat comes from the combustion of fossil fuels, which release carbon dioxide (CO2) into the atmosphere. CO2 is one of the primary gases that contribute to the "greenhouse effect"—the phenomenon in which certain trace gases in the atmosphere trap the earth's radiated energy, causing a gradual warming of its surface. Significant global warming might lead to regional shifts in agricultural and forest productivity and cause the spread of disease and the relocation of coastal populations. To delay the onset of significant global warming, the developed nations may change their portfolio of energy sources. Some are considering replacing coal with natural gas because combustion of gas emits almost half as much CO2 as the combustion of coal. Other options are to make greater use of renewable energy sources (including hydropower facilities) and nuclear power because they do not produce CO2. A second approach to cutting CO2 emissions is to develop technologies, such as "smart" cars, buildings, and appliances, that use energy more efficiently (see the article, "Driving the Transportation Revolution."). ORNL also has contributed in this area by developing more efficient refrigerators and heat pumps. A third approach is to focus on both understanding the effects of rising levels of atmospheric CO2 and pre- venting it from building up to undesirable concentrations. Computer modeling experts are predicting the impacts of increasing emissions of CO2 on climate, and ecologists are studying the effects of elevated atmospheric CO2 concentrations on forest productivity. Other scientists are exploring the emerging science and technology of carbon sequestration—the capture and secure storage of CO2 emitted from the combustion of fossil fuels. The U.S. Department of Energy supports all these approaches. In 1998 ORNL researchers had some outstanding achievements in these areas. ORNL Recommends Building Three Hydro Projects Small dams that generate electricity are needed to help meet growing power demands in the Pacific Northwest, but the benefits of proposed hydroelectric projects must outweigh their environmental costs. To balance power needs with potential environmental impacts, as required by the Federal Power Act and National Environmental Policy Act, the Federal Energy Regulatory Commission (FERC) conducts environmental assessments of new and existing projects proposed for licensing. |Small mountain streams in northwestern Washington have been proposed as sites for hydroelectric projects.| In April 1998, ORNL completed for FERC the final environmental impact statement for eight new hydroelectric projects proposed for the Skagit River Basin in Washington. The ORNL group—Bo Saulsbury, Rich McLean, and Bill Staub (Energy Division) and Warren Webb, Glenn Cada, and Mark Bevelhimer (Environmental Sciences Division)—recommended that three of the eight projects be licensed for construction and operation, provided that the applicants implement certain mitigation measures. If constructed, these three projects would generate about 72 gigawatt hours of electricity annually. Complex environmental issues arise around the plan to clear land and dam streams for hydroelectric projects in the Pacific Northwest. Will old-growth forest be cleared or protected? 
Will project construction or an accidental rupture of the project pipeline adversely affect slope stability and result in erosion that could affect water quality? Will water quality degradation or changes in stream flows further threaten Pacific salmon stocks? Can threatened and endangered species, such as the spotted owl and marbled murrelet, still be protected? Will Native American treaty rights and cultural practices be respected and preserved? What will be the socioeconomic effects of construction and operation, such as the impact on housing and schools of workers and their families moving into the community? The ORNL team recommended these mitigation measures for the three projects: (1) prevent erosion and control sediment to protect water quality; (2) increase stream flows and restock resident fish populations in the projects' bypassed reaches; and (3) acquire and preserve between 7 and 10 acres of forest along other streams to replace the forest habitat cleared for the projects. These mitigation measures would be implemented along with other environmental measures proposed by the applicants. |ORNL ecologist Warren Webb stands in an old-growth forest near a proposed hydroelectric project site.| "We did not recommend licensing the five other proposed projects," Saulsbury says, "because they would pose significant environmental impacts even with available mitigation measures." Computational Tool Could Aid Search for Oil and Gas The propagation of sound waves underground may contain relevant information about the presence of oil and gas. Therefore, many petroleum exploration companies use seismic analysis for hydrocarbon exploration. Seismic data are obtained by recording the energy returning to the earth's surface from an underground source of acoustic waves. These waves propagated into the earth are reflected back whenever they encounter a change in acoustic impedance (e.g., passing from a dense shale into a porous sandstone layer that may contain oil). An array of receivers on land or underwater picks up sound waves from each reflected signal. Petroleum industry researchers plug the recorded data into a computer code that provides an image of the subsurface geological structure. |To solve the problem of faulty seismic image focusing, which plagues the oil and gas exploration industry, (from left) Jacob Barhen, Edward Oblow, Vladimir Protopopescu, and David Reister developed TRUST, a computational method for global optimization. For their work on TRUST, they received an R&D 100 Award in 1998.| Unfortunately, the reflected signals carrying useful information are often buried in the noise from the sensor electronics and from disturbances arising from the degradation and misalignment of some seismic signals. Misalignment is caused by unpredictable delays in the recorded travel time of the seismic waves (which pass more quickly through solid rock layers compressed deep underground than through less rigid rock layers near the surface). As a result, the image of subsurface structures is highly distorted. For large-scale seismic surveys, this problem typically had been considered intractable by industry experts, until ORNL came up with a mathematical and computational solution. To address the challenge of faulty seismic image focusing, Jacob Barhen, David Reister, Vladimir Protopopescu, and Edward Oblow, all of ORNL's Computer Science and Mathematics Division, developed Terminal Repeller Unconstrained Subenergy Tunneling (TRUST), a computational method for global optimization. 
This fast, powerful, and robust tool could be used with a petroleum industry computer code to combine and correlate relevant data from all the receivers to get the sharpest possible image. By enabling a multisensor fusion algorithm to identify the meaningful reflections by separating them from the noise, the TRUST algorithm solves the seismic-image focusing problem plaguing the oil and gas industry, potentially reducing exploration costs. "TRUST rapidly and reliably eliminates large, useless regions of the search space before they are actually searched," says Barhen, an ORNL corporate fellow. "Hence, it increases the overall efficiency up to 45 times higher than any competitive approach." The development of TRUST was sponsored by the Engineering Research Program of DOE's Office of Science. Its application for geophysical imaging was funded by DOE's Office of Fossil Energy in conjunction with the DeepLock petroleum industry consortium. In 1998 the TRUST developers received an R&D 100 Award. A More Efficient Gas-Fired Heat Pump ORNL, in a cost-shared program with York International, has developed a triple-effect absorption chiller, an advanced natural-gas-fired heat pump that is 30 to 40% more energy efficient than other heat pumps. The device will be used to provide space cooling for large commercial buildings. In comparison with the double-effect chiller developed more than 40 years ago, the ORNL chiller's emissions of CO2 are 99.9% lower. In addition, its emissions of sulfur dioxide and total particulate solids are reduced by 73% and 99% respectively. Bob DeVault of the Energy Division, a co-inventor of the triple-effect absorption chiller, says the "triple effect" comes from feeding a refrigerant-containing absorbent solution through high-, medium-, and low-temperature generators. "The high-temperature condenser receiving vaporous refrigerant from the high-temperature generator is coupled to both the medium-temperature and low-temperature generators," DeVault says. "As a result, the internal recovery of heat within the system is boosted, increasing its thermal efficiency, improving indoor comfort and indoor air quality, and greatly reducing CO2 and other emissions." |From left in front of the field test model of the triple-effect absorption chiller at the Clark County Government Center in Nevada are Ronald Fiskum, DOE program manager and, from ORNL, Bob DeVault, Patti Garland, Abdi Zaltash, and Tony Schaffhauser. The five were celebrating an agreement signed by various partners October 27, 1998, to proceed with the test deployment of the world's first triple-effect absorption chiller at the center.| In an October 1998 speech, Secretary of Energy Bill Richardson noted that laboratory testing of the triple-effect chiller prototype at York International showed that it uses 40% less energy than other types of heat pumps. He also announced that the first field test of a full-size triple-effect chiller will be conducted in Clark County, Nevada, in 1999 and 2000. Patti Garland of the Energy Division is leading ORNL's participation in the test. ORNL's Role in Energy Savings By 2005, all U.S. federal agencies must use 30% less energy in their buildings than they consumed in 1985. That's the mandate of the Energy Policy Act of 1992 and Executive Order 12902. But energy efficiency improvements cost money, so how can federal government agencies reduce their energy use when their capital expenditures budgets are so tight? 
One solution is an alternative financing arrangement called energy savings performance contracting (ESPC). |This map shows the Super ESPC regions in the United States. ORNL has played a major role in energy savings performance contracts for the Southeast region.| Instead of relying on traditional congressional appropriations of capital funds to finance energy efficiency improvements in federal buildings, federal agencies sign contracts with private energy service companies that agree to pay up-front costs for identifying building energy cost-saving measures and acquiring, designing, installing, operating, and maintaining the energy-efficient equipment. In exchange, the contractor receives fixed payments from the cost savings resulting from these improvements until the contract period expires, up to 25 years later. At that time, the federal government retains all the savings and equipment. ORNL and DOE's Oak Ridge Operations (ORO) are participating in the Super ESPC program for DOE's Federal Energy Management Program. Super ESPCs are regional "all-purpose" or national "technology-specific" contracts that allow agencies to negotiate ESPC delivery orders with an energy service company without having to start the contracting process from scratch. The Oak Ridge team has awarded contracts potentially worth $750 million to six private companies for "all-purpose" ESPC in the Southeast. The team has also awarded contracts potentially worth $500 million to five private companies for geothermal heat pump "technology specific" ESPC nationwide. Key technical participants on the Oak Ridge team are Patrick Hughes and George Courville, both of ORNL's Energy Division, and Angela Carroll and Wayne Lin, both of ORO. Energy services companies are being contracted to help government agencies reduce their energy costs, meet federal energy savings requirements, and eliminate the maintenance and repair costs of aging or obsolete energy-consuming equipment. The contractors also are responsible for operating and maintaining the new energy-saving equipment during the contract term if the federal site so desires. For example, under an ESPC agreement, energy-efficient lighting, variable-speed motor drives, and an energy management control system are being installed at the Statue of Liberty. For ORNL a contract has been signed with Duke Solutions, Inc., to work with Hicks & Ingle Corporation to quickly replace a failed water chiller with a more efficient one in an Environmental Sciences Division building. In other federal complexes, lighting retrofits, additional insulation, cogeneration systems, and geothermal heat pumps (to replace conventional heating and air conditioning units) are being installed. Angela Carroll at DOE-ORO is the contracting officer for the six Southeast region and five geothermal heat pump contracts. The DOE contracting officer's representatives for all 11 contracts are Doug Culbreth and David Waldrop of the DOE Atlanta Regional Support Office. Patrick Hughes and a team of project facilitators from ORNL's Energy and Engineering divisions lead acquisition teams at federal agency sites through the delivery order process and provide technical assistance. "Our team," says Hughes, "will verify that each project's annual cost savings from reduced need for energy and maintenance will exceed the agency's annual payments to the energy service company for providing the energy efficiency improvements and negotiated services." 
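The viability test described in the ESPC passage above is, at bottom, simple arithmetic: the verified annual savings must exceed the annual payment to the energy service company over the contract term. The sketch below is a minimal Python illustration with hypothetical numbers (none of the figures come from the article).

```python
# Minimal ESPC check (illustrative only; all numbers are hypothetical).

def espc_is_viable(annual_energy_savings, annual_maintenance_savings,
                   annual_payment_to_contractor):
    """True if verified annual savings cover the contractor's annual payment."""
    total_savings = annual_energy_savings + annual_maintenance_savings
    return total_savings > annual_payment_to_contractor

def simple_payback_years(upfront_cost, net_annual_savings):
    """Years of net savings needed to repay the contractor's up-front investment."""
    return upfront_cost / net_annual_savings

# Hypothetical project: $1.2M up-front, $180k/yr energy savings,
# $20k/yr avoided maintenance, $150k/yr payment to the contractor.
print(espc_is_viable(180_000, 20_000, 150_000))            # True
print(simple_payback_years(1_200_000, 180_000 + 20_000))   # 6.0 years
```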
The ORNL-ORO team will also support an appropriate integration of advanced technologies sponsored by DOE's Office of Energy Efficiency and Renewable Energy into the Super ESPC program, such as was accomplished for the geothermal heat pump.

A newer way to use energy efficiently is to harness superconducting wire chilled by liquid nitrogen because the high-temperature superconductor offers no resistance to electrical flow. Researchers in ORNL's Fusion Energy Division have been involved in developments that would use this wire to transmit higher amounts of electrical current underground and to change voltage and current levels.

|Jonathan Demko (left) and Winston Lue check out components of the superconducting cable at the test facility at the Southwire Company plant in Georgia.|

ORNL has entered a partnership with Southwire Company to develop a 30-meter, high-temperature superconducting (HTS) cable at the company's Georgia headquarters. The cable will carry enough energy to power a small city. In 1998 ORNL staff researchers J. Winston Lue, Michael J. Gouge, and Jonathan A. Demko designed, completed, and operated a research facility to support the successful development and testing of the first U.S. system prototype of an HTS power transmission cable. The goal is to retrofit existing underground ducts with HTS cables that can carry 3 to 5 times more current.

The Laboratory played a significant role in the fabrication and testing in 1998 of the first U.S. experimental electric transformer made from HTS wire. The team included ORNL, Waukesha Electric Systems, and Intermagnetics General Corporation. The ORNL team members were Bill Schwenterly, Jonathan Demko, Andy Fadnek, Randy James, Ben McConnell, and Isidor Sauers. "We were responsible for the innovative cryogenic system that allowed the 1-million-volt-ampere (MVA)-rated transformer to be cooled to 20 Kelvin without liquid helium," Schwenterly says. "Our work set the standard for cryogenic cooling systems for emerging electrical applications based on HTS technology." Compared with traditional paper-oil-insulated transformers wound with copper wire, HTS power transformers will increase efficiency of power delivery, eliminate the use of oil (a fire hazard and environmental contaminant), and upgrade the capability to handle power overloads. The team is now participating in the development of a 5-MVA transformer to be operated on the utility grid at Waukesha Electric's factory in Wisconsin.

Regional Climate Modeling and Assessment at ORNL

First, think globally. What are the effects on future climate of rising concentrations of CO2 from increased fossil fuel combustion? No one knows for sure, but global climate models now being developed for parallel supercomputers may predict these effects accurately someday. Now, think locally, or at least, regionally. If significant global climate changes are expected, what are the implications for the southeastern United States? The answer depends on the ability of computer specialists to predict changes in regional climate based on results of global scenarios. The challenge is to present these changes on a much finer spatial and temporal scale. If such "downscaling" could be done, it might be possible, for example, to predict accurately whether East Tennessee will have less precipitation and more tornadoes in the next decade. Or whether the Carolinas will endure more hurricanes in the first two decades of the next century than they did during the last two decades in this century.
Or whether the sea will rise and inundate the coast of Florida in the middle of the next century.

John B. Drake of ORNL's Computer Science and Mathematics Division and three investigators in ORNL's Environmental Sciences Division—Mac Post, Tony King, and Mike Sale—recently received funding for a regional climate modeling and assessment project. The source was ORNL's internally funded Laboratory Directed Research and Development Program. "We have developed a statistical conceptual framework for simulating regional climate," Drake says. "The framework will house a variety of models, compare models, and combine results of different models. It uses physically based weather models, results of ecological experiments, and historical climate observations. Eventually, we will be able to predict temperature and precipitation data for any 1-kilometer grid for a particular decade or longer.

"But, what our customers want is predictions of extreme events. Our goal is to be able to predict that a region in the Southeast during a certain decade will experience, for example, 30% more tornadoes, or 20% fewer hurricanes, or 25% more big storms that cause major floods than it did in the 1980s. Of course, we will also provide error bounds because we cannot make such predictions with 100% certainty."

|Regional modeling requires many highly resolved data fields, such as terrain (elevation), land use category, vegetation categories, soil categories, and ground temperature. The color scheme for land use for the Southeast (shown here) is as follows:
- black—urban land;
- purple—treeless grassland;
- green—deciduous forest;
- dark green—coniferous forest;
- red—mixed forest and wetland;
- blue—water; and
- light blue—marsh or wetland.|

The ORNL group has been studying the ability of today's global circulation models to predict the fate of rainfall in the Southeast and keep an accurate freshwater budget. "Our model tells us how much rain goes east of the continental divide to the Atlantic Ocean and how much goes west to the Mississippi River or to the Gulf of Mexico coast," Drake says. "But we want to improve the model's resolution by partitioning rainfall so we can predict how much actually goes into each of the major rivers, such as the Ohio and Tennessee rivers."

Making predictions for the Southeast will require scientific discovery of the relationship between local biogeochemical processes and large-scale weather and climate shifts. "If the climate becomes warmer and drier," says King, "the growth of smaller, shallow-rooted trees in a region's forests may be reduced, decreasing the region's uptake of carbon. The rise in temperature could increase the rates of tree respiration and decomposition of soil and litter, resulting in greater releases of carbon to the atmosphere that could bring increased climatic warming.

"In addition, we need to discover the effects on climate of changes in water availability to regional forests. These changes affect transpiration, the way in which trees transfer rainwater back to the atmosphere. We also must determine the effects on forests of seasonal changes such as early springs—which could result in early leaf production, increased growth, and greater carbon uptake—and early springs punctuated with later freezes that could hamper reproduction, reducing the long-term productivity of the forest.

"We will look at the impacts on forests of summer and winter droughts, which are expected to have different effects on forest growth," King adds.
"Our models will run different climate scenarios to determine how they affect the ability of forests, crops, and other plants to take up carbon and influence future climate." The purpose of the regional climate model is to provide scientifically grounded information for modelers in the assessment community. These researchers seek to predict the impacts of climate change on health, food production, the environment, and the economy. "Suppose that our model predicts a slightly warmer and drier climate for the Southeast in the next few decades," Drake says. "These results could be plugged into models used to determine the effects of temperature and precipitation changes on mosquito proliferation and the spread of malaria." Other modelers will look at climate impacts on agricultural production, growth of forest trees, and reproduction of wildlife species. Some modelers will try to determine if a climatic warming could have immediate economic impacts, such as severe coastal flooding from a rise in sea level and a higher frequency of hurricanes. The ORNL modelers are expecting to examine the impact on regional climate of various CO2 emission levels in the Southeast. They may be running different scenarios in which regional firms burning fossil fuels pay other nations for the right to exceed limits in emitting carbon. They will also look at the effects on climate of enhancing the natural sequestration of carbon by improved management of land, forests, and agriculture. "By predicting future climate for the Southeast," Drake says, "our community of ORNL researchers could become an important link between the modeling community and the policy-oriented impact assessment communities who are devising strategies to deal with increasing atmospheric CO2 and the predicted impacts of global and regional warming." For that reason, as part of the U.S. Scientific Simulation Initiative, ORNL is proposing to serve as the regional climate prediction center for the Southeast. ORNL researchers use global circulation models, but for climate prediction, they are starting to think regionally. High-Performance Storage System and Climate Data Archive A scientist needs data about how different types of clouds reflect, absorb, and transmit the energy of sunlight. The data, based on measurements taken by instruments on the ground and aboard airplanes and satellites, will help the scientist improve the accuracy of a computer model in predicting the influence of human activities on climate. |This ARM millimeter cloud radar instrument in Oklahoma cattle country enables scientists to determine whether a cloud contains mostly ice crystals or liquid water. Such measurements help scientists predict the degree to which the cloud reflects, absorbs, or transmits sunlight. | The scientist accesses a web-based interface and requests 100 files of data from DOE's Atmospheric Radiation Measurement (ARM) data archive, located at ORNL. In this archive are more than three million files containing more than 15 terabytes of data. Three robots retrieve the tapes on which the requested files are stored and load them for copying on the disk drive of the ARM web-site server. Within an hour, the scientist can access the requested files. For the past two years, the ARM data archive has been using the High-Performance Storage System (HPSS), storage-system software that leads the computer industry in capacity and transfer speeds. HPSS was developed by a consortium of DOE national laboratories and IBM. 
The DOE participants are ORNL, Sandia, Lawrence Berkeley, Los Alamos, and Lawrence Livermore national laboratories. HPSS, which received an R&D 100 Award in 1997, is marketed by IBM. Deployed at about 20 sites and used productively for more than two years, HPSS is now the standard for storage systems in the high-performance computing community. The HPSS community has been joined by two new industrial partners, Sun Microsystems and Storage Technology Corporation. HPSS 3.2 has been the version in production use at most sites for more than a year. In December 1998, HPSS 4.1 was released by the collaboration and is expected soon to become the production storage system software at most sites. Version 4.1 provides significant improvements in scalability, performance, end-user access, small-file support, and input-output support for massively parallel supercomputers.

ORNL's primary customer for HPSS is the ARM project; the Laboratory's role is to provide and support the data archive. The ORNL HPSS system manages the hierarchy of devices storing more than 3.5 billion measurements. It can place 2000 new files a day into storage. It will eventually be able to routinely find and retrieve up to 5000 files an hour to meet the growing requests for information related to global change.

Facing a Future of More Carbon Dioxide for Forest Trees

Take an eastern deciduous forest—the type that displays brilliantly colored leaves in the fall. Expose it to air enriched with 50% more CO2 than is present in the atmosphere. Reduce the amount of water normally available to this forest. Is this a recipe for slower or faster forest growth? Because of the expected rise in the combustion of fossil fuels to satisfy the world's growing energy appetite, scientists want to know if additional emissions of CO2 might significantly affect the growth of forest trees. What about feedbacks from the forest to the atmosphere? If a forest is affected by increases in CO2 concentrations that can influence the climate, could the forest itself affect the climate?

To face these tough questions, ORNL has a world-class user facility in a forest that features free-air CO2 enrichment (FACE) technology. The hardware for the selected hardwoods—a 10-year-old sweetgum plantation in the Oak Ridge National Environmental Research Park—elevates the air's CO2 concentration across the plantation's 25-meter-diameter plots. Because the plantation has no walls, the effects of elevated CO2 can be studied under natural field conditions. The facility is open not only to nature but also to researchers from universities and other laboratories across the nation who wish to study the response of forests to atmospheric CO2 enrichment.

|Standing tall in a small sweetgum plantation are a tower and vent pipes that provide the trees with additional amounts of carbon dioxide. This hardware is part of ORNL's novel free-air CO2 enrichment system. During the first year of exposure to the increased CO2 concentrations, the trees grew faster and conserved water.|

"Plant physiologists and ecologists have learned a great deal about how small trees and other plants will respond to increasing CO2 concentrations in the atmosphere, but it is much harder to say how a whole forest will respond," says Richard J. Norby, leader of the collaboration at the FACE facility. "Understanding the response of forests is challenging because they are tall and biologically complex.
Fortunately, next-generation technology in the FACE facility should help us better evaluate the sensitivity of forests to global change and, in turn, understand the dynamic role played by forested ecosystems in the earth's climate system." It is known that forests provide a critical "biotic" feedback between the earth's terrestrial vegetation and our ever-changing climatic system. Each is dependent on the other largely because forests and the atmosphere are sources of water and CO2 to each other. Large-scale studies of these interdependencies are needed for accurate climate predictions and for understanding the structure and function of our future forest resources. These interdependencies were underscored by the first-year results at the FACE facility—forest growth was increased and the limited supply of water was conserved in the CO2-enriched plots. During the first year of the experiment, the ORNL scientists observed that the tree leaf pores (stomata) that allow CO2 to enter and water vapor to escape were not open as wide in plots receiving the extra CO2. As a result, trees in the CO2-enriched atmosphere conserved water, while maintaining much higher rates of photosynthesis—the process by which plants use the energy from sunlight to convert CO2 and water into the sugars needed for growth. The researchers also detected a significant increase in the production of wood in the tree trunks and very fine roots in the soil. Evaluation of changes in the nitrogen content in trees and soil will help scientists determine if these important growth responses will be sustained for many years. The FACE facility complements the Throughfall Displacement Experiment in Walker Branch Watershed, which allows study of the responses of forest trees to not only ambient but also above ambient and below ambient levels of precipitation that may be typical of a changing climate. ORNL scientists are setting the standard for large-scale ecological research that could provide a recipe for success in predicting correctly the impact of future climate on forest productivity. Capturing and Isolating Carbon In April 1999, DOE released a 200-page "working draft" describing research paths that could lead to long-term technologies that might slow or stop the buildup of CO2 in the atmosphere and delay possibly undesirable climatic effects. This "research and development roadmap" identifies key research needed to allow development of a variety of carbon sequestration technologies. These technologies might separate and capture CO2 from energy systems, make products from some of the carbon, and sequester the rest in oceans, geological formations, and terrestrial ecosystems such as forests, vegetation, soils, and crops. Just recently DOE awarded a research contract for a collaborative team of ORNL, Pacific Northwest Laboratory, Argonne National Laboratory, and several universities to form a center to perform research on ways to enhance uptake and long-term sequestration of atmospheric CO2 by terrestrial ecosystems. DOE also awarded a center to Lawrence Livermore National Laboratory and Lawrence Berkeley National Laboratory to perform research on ocean sequestration of carbon. The draft was compiled, edited, and printed at ORNL. Its chapters were coauthored by experts from DOE national laboratories and universities throughout the nation. One of the leaders for this DOE effort was ORNL Associate Director David Reichle, and key chapter authors included ORNL researchers Rod Judkins, Gary Jacobs, Allen Croff, and others. 
The expected need for carbon sequestration technologies is likely to open up new research opportunities for ORNL scientists and engineers in clean energy technologies and climate effects research.
http://www.ornl.gov/info/ornlreview/v32_2_99/clean.htm
The seigneurial system was a form of land settlement modelled on the French feudal system. It began in New France in 1627 with the formation of the Compagnie des Cent-Associés that was initially responsible for handing out land grants and seigneurial rights. The land was divided into 5 by 15 kilometre plots, usually along major rivers like the St. Lawrence. They were then further subdivided into narrow, but long lots for settlement. These lots were usually long enough to be suitable for faming, and they provided everyone who lived on them with equal access to neighbouring farms and the river. Around 1637, to encourage French immigrants to settle in the St. Lawrence Valley, then known as ‘Canada’, the king implemented the seigneurial system, by distributing large tracts of land to settlement agents called ‘seigneurs’. These agents had to subdivide the tracts of land into lots or censives each measuring approximately three arpents of frontage by 30 arpents in depth (180 by 1,800 metres). These lots were granted at no cost to new arrivals. In return for this ‘free’ land, a habitant was required to pay certain annual fees that constituted a form of the income and consumption taxes. These included not only the cens, which ranged from two to six sols per arpent, and the rente, usually 20 sols per arpent of frontage, but also, goods in kind, such as a pig or bag of wheat. In addition, a habitant wanting to graze farm animals on the common had to pay a few sols. To have wheat ground at the mill, a habitant paid the seigneur banalités, every fourteenth bushel of grain to pay off the cost of the building and pay the miller’s wages. Similarly, a habitant was required to give every fourteenth fish to the seigneur in exchange for permission to fish the waters bordering the habitant’s land grant. Beginning in 1670, tenants under the seigneurial system were required to remit a tithe to the Church. The tithe, equal to a twenty-sixth of the wheat crop, was used to maintain the religious buildings and property that the tenants used, such as the chapel, the rectory and the cemetery. Finally, the obligation to provide days of unpaid labour or corvée, dating back to the medieval period, remained in effect. A habitant was required to provide three to five days of unpaid labour each year for the maintenance of bridges and roads and for the construction of various buildings or structures, such as the manor house, the mill, barns, stables and fences. In return, the habitant had access to the seigneury’s services and benefited from the security it provided. The seigneurial system was central to France’s colonisation policy and came to play a major role in traditional Québec society. Despite the attractions of city life and the fur trade, 75-80% of the population lived on seigneurial land until the mid-nineteenth century. The roughly 200 seigneuries granted during the French regime covered virtually all the inhabited areas on both banks of the St Lawrence River between Montréal and Québec and the Chaudière and Richelieu valleys and extended to the Gaspé. Seigneuries were granted to the nobility, to religious institutions in return for education and hospital services, to military officers and to civil administrators. By the end of the seventeenth century, much of the land adjacent to the river was occupied and this led to the building of a road behind the first concessions parallel to the river: this was the ‘rang’. 
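To give a rough sense of the scale of the dues listed above, the following worked example (a sketch added for illustration, not drawn from the source) applies the quoted rates to a standard censive of three arpents of frontage by thirty arpents in depth. It assumes that the cens, like the rente, was charged per arpent of frontage, and uses the conventional 20 sols to the livre; both assumptions are flagged in the comments.

```python
# Illustrative worked example of a habitant's annual dues.
# Rates are those quoted in the text; the per-frontage basis for the cens
# and the 20-sols-per-livre conversion are assumptions, not from the source.

SOLS_PER_LIVRE = 20  # assumption: conventional conversion

def annual_cash_dues(frontage_arpents, cens_sols_per_arpent=4, rente_sols_per_arpent=20):
    """Cens (text: 2-6 sols) plus rente (text: usually 20 sols) per arpent of frontage."""
    total_sols = frontage_arpents * (cens_sols_per_arpent + rente_sols_per_arpent)
    return total_sols, total_sols / SOLS_PER_LIVRE

def banalite_share(bushels_milled):
    """Every fourteenth bushel of grain went to the seigneur's mill."""
    return bushels_milled / 14

def tithe_share(wheat_bushels):
    """From 1670, one twenty-sixth of the wheat crop went to the Church."""
    return wheat_bushels / 26

print(annual_cash_dues(3))   # (72, 3.6) -> 72 sols, i.e. about 3.6 livres a year
print(banalite_share(140))   # 10.0 bushels kept at the mill
print(tithe_share(260))      # 10.0 bushels to the Church
```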
Further land was then granted in long strips from this first ‘rang’ and this continued until the limits of arable land were reached. This structure formed the basic unit of the rural community and established a network of community solidarity. More generally, people were identified with their parish or canton (township), but it was the village or market town where people met to discuss politics, farm prices or offers of employment. Initially, the village was little more than a collection of houses spread at regular intervals along a road. At crossroads, or where a waterfall allowed the building of a mill, settlement was more densely concentrated giving rise to hamlets and markets. This also led to the development of commercial, administrative and industrial activities. The number of large villages was small until after the Conquest of 1760, but, between 1815 and 1850, the number of villages in Lower Canada grew from 50 to 300 containing 86,000 people. This rapid growth can be explained by the parallel growth of the Lower Canadian population that rose from 355,000 in 1815 to a million inhabitants by 1855. The birth rate was exceptionally high: in 1865, women had an average of seven children. Population growth led to increasing demands for goods and services that were provided by villages and this led to an intensification of the domestic trade in Lower Canada. Largely because of the initiative of more entrepreneurial seigneurs, landless labourers were employed to develop rural industries especially in the communities round Montreal. There had always been artisans in villages and towns but by 1800 commodities were produced in smaller centres; for example, in the village of Saint-Charles in the Richelieu Valley, hat-making and pottery became important occupations and Saint-Jean became a centre for earthenware production. The appearance of coopers, tailors and carriage makers in other small centres emphasised the growing diversification of the rural economy. Despite diversification of the rural economy, more than 80% of French Canadians were employed in farming in 1850. Soon prosperous market towns appeared such as Sainte-Rose, Terrebonne, Saint-Jérôme and Joliette on the north bank of the Richelieu, Saint-Charles, Napierville and Saint-Hyacinthe on the south bank and Saint-Romuald to the south of Quebec. This expansion of villages represented an essential transitory stage between the countryside and the cities by giving people their first experience of urban life. Most villages were organised on similar lines. The various buildings of the village were generally made of superimposed beams of squared timber. In the centre there was the church and presbytery generally built or rebuilt in stone. These buildings were extremely expensive to build and the community devoted much time and resources to them as the cost of building and running a church and the maintenance of the priest was met by the village. Parochial structures dated back to the seventeenth century and the territory was already squared into parishes by 1760. However, the increase in the number of villages and the growing power of the Church after 1840 led to the creation of new parishes and the sub-division of some that already existed. Although parishes often included several villages, they tended to take their name from the most important village community. When villages were established as municipalities after Confederation in 1867, their boundaries tended to follow existing parishes. 
This explains why there are so many small municipalities in Quebec today and why they are often named after a saint. Opposite the village church and the school that was generally attached to it was the main street of the village occupied by the professional middle-class: lawyers, notaries, doctors, land-surveyors and principal businessmen and merchants, the postmaster and blacksmiths. Because of the slowness of transport, there was also at least one inn that provided shelter for travellers. Buildings were less concentrated as one moved away from the centre of the village. The commercial activities tended to be located at the edges of villages: quays, warehouses, grain markets and industrial activities, such as sawmills, shipyards, potash factories, distilleries, etc. These enterprises were directly responsible for the growth of villages in the first half of the nineteenth century. They only employed a few workers but were found in significant numbers along the main transport routes. There was considerable similarity in the social structure of villages across Lower Canada. Villages were generally founded by a seigneur though by the early-nineteenth century their economic and social role was declining. The priest was an important figure providing for the spiritual needs of the parish. The major problem in the early nineteenth century was that their numbers were limited: in 1837, there were only 273 priests for Lower Canada. While the number of priests remained small, there was a significant increase in the number of professionals. Between 1791 and 1836, the number of notaries increased from 55 to 373, lawyers from 17 to 208 and doctors from 50 to 260. They occupied the newer houses close to the centre of the village and were regarded as its natural leaders, and many were elected to the Legislative Assembly. Closely linked to the professionals were artisans and merchants. They had many different roles and often travelled in search of work. Farmers and share-croppers played an important role in the community providing much of the work for day labourers. Their wealth depended on their access to the market and they were, as was the case just before the Rebellions, particularly vulnerable to economic depression. They tended to be conservative in their economic and political attitudes. Farmers and their families tended to live outside the village. Their prosperity depended on access to the major road and river communication routes but there were wide variations of wealth across Lower Canada. Seigneurial tenure prevented the concentration of land but, as one writer commented in 1832, had the immense merit of obtaining land cheaply since habitants did not have to pay cash to benefit from it. Most work was carried out by the family, especially the sons who hoped to obtain the paternal land in the future and who were prepared to work for nothing in the present. Habitants were not well regarded by the British or townspeople in general. In the eighteenth century, Frances Brooke found them ignorant, lazy, dirty and stupid to an incredible degree. However, others saw them as honest, hard-working and hospitable but also docile and submissive. In 1836, Romuald Trudeau criticised their lack of education, their idleness and their liking for spending time in inns. Observers were critical of habitant farming methods though this often reflected a failure to understand the peculiarities of colonial agriculture.
Levels of illiteracy among the French-speaking people were about 73% in 1838 but reached 88% in the countryside. This reflected not only inconsistencies in the provision of schooling in Lower Canada and a lack of interest by government but also habitant resistance to education. Habitants frequently earned their living as day labourers. This sort of work was manual labour and labourers needed to be mobile to benefit from it. This appealed especially to unmarried men who could find themselves within a few months working in the city, then on logging sites or working for farmers during the labour-intensive harvest. Often casualties of the crisis in farming in the first decades of the nineteenth century and the resultant surplus of labour, their number is difficult to assess. Constantly on the move, they tramped the province in search of often transitory work. French Canadians moved as lumbermen into new forest frontiers, under employers largely from Britain or the United States, and Quebec City, the centre of the lumber trade, grew in economic importance, offsetting Montreal’s hold on the fur trade. It was not until the expansion of the industrial economy after 1850 that day labourers gravitated towards working in factories and on public projects such as railway construction. Finally, at the edge of the rural world there were squatters who farmed less fertile land. They often lived in intense poverty and great isolation. The missionary priest provided some comfort to these thinly spread people who often lived in the immense and undeveloped forests. Diversification was also present among this marginal farming population that became increasingly dependent on cash income from forest work, a feature of the agro-forest economy of the Ottawa, Saint-Maurice and Saguenay valleys. Family survival depended on men supplementing farm production with winter work in the forests. The nature of rural economy and society is exemplified by the community that developed at Saint-Eustache. In 1683, Michel-Sidrac Du Gué, seigneur de Boisbriand and captain of the regiment de Carignan-Salières, was granted the Seigneurie-des-Mille-Isles. Because he did not attempt to settle it, the Crown withdrew his grant and in 1714 gave it to Jean Petit, treasurer of the Navy, and Charles Gaspard Piot de Langloiserie. In 1733, Charlotte-Louise Petit, daughter of Jean Petit, married Eustache Lambert-Dumont, and Langloiserie’s daughter married Jean-Baptiste Céloron de Blainville. The Seigneurie-des-Mille-Isles was divided into the Seigneurie de Blainville and the Seigneurie Dumont, also called the Seigneurie de la Rivière-du-Chêne. From 1755 to 1762, Dumont granted land to settlers along the Rivière-du-Chêne and during this period he also had the seigneurie’s first flour mill built. The construction of the church from 1780 to 1783 combined with the mill favoured the development of the market town of Saint-Eustache. Between 1784 and 1790, the population increased from 1,958 to 2,385 habitants and the number of grants reached 336 by 1800, by which time the land along the Rivière-du-Chêne was completely settled. The population of the region of Saint-Eustache continued to rise, reaching 4,830 habitants in 1831 though, probably as a result of the Rebellion, this fell back to 3,195 in 1840. Before 1800, wheat farming made up 80% of agricultural production in the Seigneurie de la Rivière-du-Chêne.
Competition from Upper Canada from the mid-1800s led to a dramatic fall in demand for Lower Canadian wheat, producing an agricultural depression in the Seigneurie de la Rivière-du-Chêne that lasted until the 1830s. In the seigneury, the major causes of depression were obsolete farming techniques and severe shortages of land caused by the dramatic increase in the population. Following bad harvests in 1826-1827, the growing of corn was replaced with potatoes and oats. By the early 1830s, Saint-Eustache was the third most important town in the province in terms of population and an important political and cultural centre. Its population was largely made up of farmers and members of the professions. It was one of the first centres to mobilise against the government during the 1837 Rebellion, a consequence of the strength of the local Patriote organisation led by Doctor Jean-Olivier Chénier and W. H. Scott since 1834. The parish of Saint-Eustache was founded in 1825 with a population of 4,343 habitants of whom 393 lived in the village and the remainder in the surrounding area. There were 61 houses in the village and 777 in the countryside. Saint-Eustache was a large agricultural parish with 26,000 arpents of land. From 1 July 1855, legislation gave the village of Saint-Eustache ‘municipal’ status but it was not until 1948 that the village obtained the status of ‘town’ and it was three years later that its population exceeded that of its surrounding countryside. An arpent is a unit of land measurement that equals 192 feet or 58.5 metres. The ‘cens’ was a nominal tax or land tax that replaced the ‘taille’, a direct tax levied on individuals in France. In New France settlers were also called ‘censitaires’, a term derived from this word that also relates to the word ‘census’. The register in which the seigneur wrote the date and the amount paid each year by the censitaires was known as the ‘censier’. The ‘sol’ was the French currency used in the colony. The ‘rente’ was a duty, royalty or tax paid in cash and, in many cases, in goods in kind. Under seigneurial tenure, grist mills (and, in many cases, all mills) were seigneurial monopolies. This prevented a form of peasant rural accumulation common in freehold areas. It did not guarantee the existence of ‘service close to the producer’ since seigneurs commonly waited until there was a large enough pool of habitants before they would build a mill. The roads ‘provided’ by the seigneurs in Quebec were built by habitants on their own land or through communal labour, the ‘corvée’. The road system was actually governed by a colonial official, the ‘grand voyer’, not by the seigneurs. See Desbarats, C., ‘Agriculture within the Seigneurial Régime of Eighteenth-Century Canada’, Canadian Historical Review, Vol. 73, (1992), pp. 1-29; Harris, R. C., The Seigneurial System in Early Canada: A Geographical Study, (McGill-Queen’s University Press), 1984. Courville, Serge, Entre ville et campagne: L’essor du village dans les seigneuries du Bas-Canada, (Éditions PUL), 1990 remains an important study. Laurin, Serge, Les régions du Québec: Les Laurentides, (Éditions de l’IQRC), 2000 provides greater geographical and historical breadth. Courville, S., Robert, J-C and Séguin, N., ‘The Spread of Rural Industry in Lower Canada, 1831-1851’, Journal of the Canadian Historical Association, n.s., Vol. 2, (1991), pp. 43-70.
Hardy, Jean-Pierre, La Vie Quotidienne dans la Vallée du Saint-Laurent 1790-1835, (Septentrion), 2001 provides an excellent synopsis of life in the early-nineteenth century. Frances Brooke, The History of Emily Montague by the Author of Lady Julia Mandeville, (J. Dodsley), 1769, pp. 146-147. Frances Brooke (1724-1789) lived in Canada between 1763 and 1768 when her husband was military chaplain at Quebec; The History of Emily Montague was one of the first novels written in the New World and certainly the first in Canada. See Hammill, Faye, Canadian Literature, (Edinburgh University Press), 2007, pp. 33-38. Lambert, John, Travels Through Lower Canada and the United States of North America in the Years 1806, 1807 and 1808, 3 Vols. (Richard Phillips), 1810, Vol. 1, pp. 133-145 and Laterrière, Pierre de Sales, and Taunton, Henry Labouchere, A Political and Historical Account of Lower Canada, (W. Marsh & A. Miller), 1830, pp. 123-125, gave a negative view of farming. Innis, Harold A., The Fur Trade in Canada: An Introduction to Canadian Economic History, (Yale University Press), 1930, pp. 263-282 and Lower, Arthur R.M., Great Britain’s Woodyard: British America and the Timber Trade 1763-1876, (McGill-Queen’s University Press), 1973. Hardy, R., and Séguin, N., Forêt et société en Mauricie, (Boréal Express), 1984. Grignon, Claude-Henri and Giroux, André, Le vécu à Saint-Eustache de 1683 à 1972: en hommage à nos patriotes, (Éditions Corporation des fêtes de Saint-Eustache), 1987. Giroux, André and Chapdelaine, Claude, Histoire du territoire de la municipalité régionale de comté de Deux-Montagnes, nd, pp. 15-17. Ouellet, Fernand, Le Bas-Canada de 1791 à 1840: Changements structuraux et crise, (Éditions de l’Université d’Ottawa), 1976, p. 243. Giroux, André and Chapdelaine, Claude, Histoire du territoire de la municipalité régionale de comté de Deux-Montagnes, p. 19. Prévost, Robert, Chénier, l’opiniâtre, (Institut de la Nouvelle-France), 1940 is a short biography but see also Globensky, Maximilien, La Rébellion de 1837 à Saint-Eustache, pp. 220-224, passim. Messier, p. 109; Bernard, Jean-Paul, ‘Jean-Olivier Chénier’, DCB, Vol. 7, 1836-1850, pp. 171-174, is more recent. See also Laurin, Clément, ‘Bibliographie de Jean-Olivier Chénier’, Cahiers d’histoire de Deux-Montagnes, Vol. 5, (2), (1982), pp. 58-66. Globensky, Maximilien, La Rébellion de 1837 à Saint-Eustache, 1883, extended edition, 1889, reprinted (Éditions Du Jour), 1974, pp. 224-225, provided a brief, slanted biography. Messier, p. 441; Gouin, Jacques, ‘William Henry Scott’, DCB, Vol. 8, 1851-1860, pp. 791-792, is more balanced. Dubois, Abbé Émile, Le feu de la Rivière-du-Chêne, Étude historique sur le mouvement insurrectionnel de 1837 au nord de Montréal, (E.A. Deschamps), 1937; Paquin, Jacques, ‘La bataille de Saint-Eustache et le triste sort de Saint-Benoît’ in Boileau, Gilles and Paquin, Jacques, Les Patriotes de Saint-Eustache, Cahier d’histoire de Deux-Montagnes, (Saint-Eustache: Société d’histoire de Deux-Montagnes), 1989.
http://richardjohnbr.blogspot.co.uk/2010/10/seigneurial-system-and-settlement.html
The Mexican Repatriation refers to a mass migration that took place between 1929 and 1939, when as many as 500,000 people of Mexican descent were forced or pressured to leave the US. The removals took place during the latter part of the Hoover presidency and continued into Franklin Delano Roosevelt's second term. The event, carried out by American authorities, took place without due process. The Immigration and Naturalization Service targeted Mexicans because of "the proximity of the Mexican border, the physical distinctiveness of mestizos, and easily identifiable barrios." Studies have provided conflicting numbers for how many people were “repatriated” during the Great Depression. The State of California passed an "Apology Act" that estimated 2 million people were forced to relocate to Mexico and an estimated 1.2 million were US citizens. Authors Balderrama and Rodriguez have estimated that the total number of repatriates was about one million, and 60 percent of those were citizens of the United States. These estimates come from newspaper articles and government records, and the authors assert all previous estimates severely undercounted the number of repatriates (Balderrama). An older study conducted by Hoffman argues that about 500,000 people were sent to Mexico. His data comes from the "Departmento de Migracion de Mexico" or “Mexican Migration Service,” which is said to be a reliable source since the Mexican government had many ports along the border at which Mexicans were required to register and could do so free of charge (Aguila and Hoffman). The Repatriation is not widely discussed in American history textbooks; in a 2006 survey of the nine most commonly used American history textbooks in the United States, four did not mention the Repatriation, and only one devoted more than half a page to the topic. Nevertheless, many mainstream textbooks now carry this topic. In total, they devoted four pages to the Repatriation, compared with eighteen pages for the Japanese American internment which, though also a gross violation of the rights of citizens, affected a much smaller number of people, even by the more conservative estimates for the Mexican deportations. These actions were authorized by President Herbert Hoover and continued by FDR, the 32nd President of the United States (1933–1945), and targeted areas with large Hispanic populations, mostly in California, Texas, Colorado, Illinois, and Michigan. Historical background information History of Mexicans in the US and Mexican immigration to the United States “Even immigration scholars have frequently labeled Mexicans as part of a ‘new immigrant’ grouping in comparison to Europeans such as the Irish, Italians and Germans,” a view that has been described as a widely held misconception. Mexicans have been immigrating to the United States for more than a century in response to US labor demands and have been citizens of the United States in the Southwestern states since the Mexican-American War in the mid-1800s. Mexican-American War From 1846 to 1848 the US and Mexico fought a war that resulted in Mexico ceding the present-day states of California, Nevada, Utah, New Mexico, Arizona, and parts of Texas, Colorado, and Wyoming to the United States under the Treaty of Guadalupe Hidalgo. The United States paid $15 million for the land, which reduced Mexican territory to 55 percent of what it was before the war.
The treaty promised US citizenship to the estimated 80,000 Mexican citizens residing in the territories ceded to the US, although Mexican citizens who were considered Native Americans were excluded, and about 2,000 of the total 80,000 decided to move further south into territories still considered Mexico. Moreover, although Mexicans were considered US citizens and counted as white on the US census all the way up until 1930, the public most likely did not see them this way and treated them as foreigners. Economic incentive During the California Gold Rush, Mexicans had immigrated in order to work in the California mines or to help build the railroads. Following the Chinese Exclusion Act of 1882, Mexican immigrants began to increase in numbers in order to fill the labor demand that had previously been met mainly by Chinese immigrants. At the onset of the 20th century, “US employers went so far as to make request directly to the president of Mexico to send more labor into the United States” and hired “aggressive labor recruiters who work outside the parameters of the US” in order to recruit Mexican labor for jobs in industry, railroads, meatpacking, steel mills, and agriculture. “By 1900 approximately 500,000 people of Mexican ancestry lived in the United States. Roughly 100,000 of these residents were born in Mexico; the remainder were second-generation inhabitants . . . and their offspring.” Mexican Revolution The Mexican Revolution caused many Mexicans to flee Mexico during the war years of 1910-1920. An estimated 2,000,000 people died during this time, causing many to immigrate north into the US in order to escape the violence. Also, during this revolutionary period many farmers were unable to cope with the drastic increase in the cost of living, which had risen 70 percent, forcing many to immigrate north in search of employment. Citizenship and immigration law prior to WWII In 1924, the first official border patrol was established on the Mexico-US border. Prior to this date, however, Mexican immigration was not restricted in the way it is today: “a Mexican caught crossing the border illegally was told that if he wished to enter the US, he had to do so at a regular station and pay the fees.” Moreover, immigration from the Western Hemisphere remained unrestricted until the passage of the Immigration and Nationality Act in 1965, even though immigration from countries in Eastern Europe and Africa, for example, was subject to limits that severely restricted it. Due to the laxity of immigration enforcement during these times, many citizens, legal residents, and immigrants did not have the papers proving their citizenship, had lost their papers, or just never applied for citizenship. The feeling of not belonging and of being viewed as a foreigner lingered among the Mexican population in the US, as described by Hoffman: “the privileges of American citizenship offered little of substance to the Mexican national who knew that if he became a citizen he would still be, in the eyes of the Anglos, a Mexican” (Hoffman 20). For these reasons, and because remaining a Mexican citizen offered a feeling of protection and other Mexicans exerted group pressure not to apply for citizenship (Hoffman 19), many Mexicans did not have the paperwork to prove their legal status in the United States, or the citizenship that would secure them the rights provided to American citizens (Aguila).
The Great Depression Following the Stock Market Crash of 1929 in the United States, the US economy began to crumble, and the ensuing devastation quickly reverberated throughout the world, affecting the economies of countries both rich and poor for almost a decade. As a result of the Great Depression, thousands of banks closed, international trade plummeted, and hundreds of thousands of Americans lost everything, including their homes and their jobs; many could not even afford to feed their families. United States unemployment jumped from a low of 4.2 percent in 1928 to a high of 25 percent in 1933, the highest unemployment rate in US history. By 1938, unemployment remained high at 19 percent and did not fall below 10 percent until 1941. The Hoover administration’s inability to halt the disintegrating economy during the initial, and worst, years of the depression led many to despise President Hoover. The perceived lack of assistance from the federal government upset many citizens and organized labor, and, in order to improve “organized labor’s hostile attitude toward his administration,” President Hoover used immigrants in the country as a scapegoat to divert criticism (Balderrama 4). The impact of the Great Depression on the Bridgeport coal miners was devastating. These laborers possessed very few skills other than coal mining. Most were unable to obtain other employment, and many returned to Mexico, some by choice but many by force. There were few economic opportunities for unskilled workers in Mexico in the 1930s. The economic advances achieved by these immigrant Mexican laborers and their US-born children during the early decades of the twentieth century probably came to little or nothing. Immigration to the United States all but ceased during the Depression. Few Mexican repatriates were able to reenter the United States during the 1930s, though it is probable that many of the Bridgeport miners returned to the United States when immigration restrictions were relaxed. Repatriation efforts Justifications given for repatriation According to county officials, returning immigrants to their country of origin would save the city money by reducing the number of needy families drawing on federal welfare funds and would free up jobs for “real” Americans. In a telegram to the US Government Coordinator of Unemployment Relief, C.P. Visel, the spokesman for the Los Angeles Citizens Committee for Coordination of Unemployment Relief, wrote of the “deportable aliens” in LA county. He stated, “local U.S. Department of Immigration personnel not sufficient to handle. You advise please as to method of getting rid. We need their jobs for needy citizens” (Balderrama 67). A member of the Los Angeles County Board of Supervisors, H.M. Blaine, “allegedly remarked that the majority of the Mexicans in the Los Angeles Colonia were either on relief or were public charges,” even though sources at the time documented that less than 10 percent of people on welfare across the country were Mexican or of Mexican descent (Balderrama 99). Many white American citizens who were experiencing the negative effects of the Great Depression followed suit in blaming immigrants for their desperation and thought that removing immigrants from relief rolls and having them deported out of the country would solve their problems (Balderrama 100).
Independent groups such as the American Federation of Labor (AFL) and the National Club of America for Americans thought that deporting Mexicans would free up jobs for citizens, and the latter group urged Americans to pressure the government into deporting Mexicans (Balderrama 68). Balderrama’s book cites a study conducted during the 1930s analyzing deportation costs. It calls into question the prevailing argument of the time that deporting immigrants would reduce city costs overall. He writes, “if 1,200 aliens were deported, they would leave behind 1,478 dependents who would be eligible for public welfare. $90,000 in government costs to deport individuals and $147,000 yearly to provide for their families indefinitely or until they reached legal age. 80% of those deported would be eligible to obtain non-quota preference for reentry due to the fact that they had wives, children, or other relatives who were citizens or legal residents” (Balderrama 77). Federal government’s involvement As the effects of the Great Depression worsened and affected larger numbers of people, feelings of hostility toward immigrants increased rapidly, and the Mexican community as a whole suffered as a result. States began passing laws that required all public employees to be American citizens, and employers were subject to harsh penalties, such as a five-hundred-dollar fine or six months in jail, if they hired immigrants. Although the law was hardly enforced, “employers used it as a convenient excuse for not hiring Mexicans. It also made it difficult for any Mexican, whether American citizens or foreign born, to get hired” (Balderrama 89). The federal government imposed restrictions on immigrant labor as well, requiring firms that supplied the government with goods and services to refrain from hiring immigrants. Most larger corporations followed suit: many employers fired their Mexican employees and few hired new Mexican workers, causing unemployment to increase among the Mexican population (Balderrama 89-91). President Hoover desperately needed a way to improve his popularity among citizens. In order to achieve this goal, he publicly endorsed Secretary of Labor William N. Doak and his campaign to add “245 more agents to assist in the deportation of 500,000 foreigners” (Balderrama 75). Doak’s endeavors to rid the country of Mexican immigrants have been described as unscrupulous. His measures included monitoring labor protests or agrarian strikes and labeling protestors and protest leaders as possible subversives, communists, or radicals. “Strike leaders and picketers would be arrested, charged with being illegal aliens or engaging in illegal activities, and thus be subject to arbitrary deportation” (Balderrama 76). Labeling Mexican activists in this way helped garner public support for actions taken by the immigration agents and federal government, such as mass raids, arbitrary arrests, and deportation campaigns. In response to LA county’s Unemployment Relief Coordinator Visel’s aforementioned telegram, the federal government sent Supervisors of the Bureau of Immigration, Walter E. Carr and W.F. Watkins (at different times), to LA to help conduct deportations in the Los Angeles area. Local involvement—local welfare and private charitable agencies According to Hoffman, “from 1931 on, cities and counties across the country intensified and embarked upon repatriation programs, conducted under the auspices of either local welfare bureaus or private charitable agencies” (83).
Frank L. Shaw, the Los Angeles chairman of the board of supervisors’ charities and public welfare committee, had researched the legality of deportation but was advised by legal counsel that only the federal government was legally allowed to engage in deportation proceedings (Hoffman). As a result, the county decided that its campaign would be called “repatriation,” which Balderrama asserts was a euphemism for deportation. C.P. Visel, the spokesman for the Los Angeles Citizens Committee for Coordination of Unemployment Relief, began his “unemployment relief measure” that would create a “psychological gesture” intended to “scarehead” Mexicans out of the United States. His idea was to have a series of publicity releases announcing the deportation campaign; a few arrests would be made “with all publicity possible and pictures,” and both police and deputy sheriffs would assist (Balderrama 2). Watkins, Supervisor of the Bureau of Immigration, and his agents were responsible for many mass raids and deportations, and the local government was responsible for the media attention that was given to these raids in order to “scarehead” immigrants, specifically Mexicans, although there were repeated press releases from LA city officials that affirmed Mexicans were not being targeted. Actions taken by immigration officials proved otherwise, provoking many vociferous complaints and criticisms from the Mexican Consulate and the Spanish-language newspaper La Opinión (Balderrama). Raids and legal proceedings According to Hoffman, the streets of East Los Angeles, a heavily populated Mexican area, were deserted after only the first few days of raids (Hoffman). Local merchants complained to investigators that the raids were bad for their businesses. According to Balderrama, “raids assumed the logistics of full-scale paramilitary operations. Federal officials, county deputy sheriffs, and city police cooperated in local roundups in order to assure maximum success” (71). Sheriff Traeger and his deputies were infamous for their unscrupulous tactics, including large round-ups of Mexicans who were arbitrarily arrested and taken to jail without any check on whether the people were carrying documentation (Hoffman). Jose David Orozco described on his local radio station the “women crying in the streets when not finding their husbands” after deportation sweeps had occurred (Balderrama 70). Mexican Consulates across the country were receiving complaints of “harassment, beatings, heavy-handed tactics, and verbal abuse” (Balderrama 79). Historians have identified and discussed various raids. Three of them include the San Fernando Raid, La Placita Raid, and El Monte Raid. The San Fernando Raid took place on Ash Wednesday. Immigration agents and deputies blocked off all exits to the colonia and “rode around the neighborhood with their sirens wailing and advising people to surrender themselves to the authorities” (Balderrama 72). La Placita Raid occurred on February 26, 1931. Led by Watkins, immigration officers sealed off a park containing 400 Mexicans. Everyone in the park was made to line up and show evidence of legal entry into the United States before they could leave (Balderrama). In the El Monte Raid, 300 people were stopped and questioned, 13 were jailed, and of the 13 jailed, 12 were Mexican (Hoffman). Most people were unconstitutionally denied their legal rights of Due Process and Equal Protection under the Fourth and Fourteenth Amendments.
Legal safeguards were absent while hundreds of thousands of people were interrogated and detained by the authorities. When it came to federal deportation proceedings, undocumented immigrants, once apprehended, had two options. They could either ask for a hearing or “voluntarily” return to their native country. The benefit of asking for a hearing was the chance to persuade the immigration officer that being returned to their home country would place them in a life-threatening situation (which was the case for those who had fled the war or were escaping religious persecution), allowing them to stay under the immigration law of the time as refugees; but if they lost the hearing, they would be barred from ever legally returning to the United States. Although requesting a hearing was a possibility, immigration officers rarely informed undocumented immigrants of their rights, and the hearings were “official but informal,” in that immigration inspectors “acted as interpreter, accuser, judge, and jury” (Balderrama 67). Moreover, the deportee was seldom represented by a lawyer, a privilege that could only be granted at the discretion of the immigration officer (Balderrama). The second option, which was to voluntarily deport themselves from the US, would allow these individuals to reenter the US legally at a later date because “no arrest warrant was issued and no legal record or judicial transcript of the incident was kept” (Balderrama 79). However, many were misled and enticed to leave the country by county officials who told Mexicans that if they left now they would be able to return later. But many were given a “stamp on their card by the Department of Charities/County Welfare Department which makes it impossible for any of the Mexican born to return, since it shows that they have been county charities. All that the American officials had to do was invoke the “liable to become a public charge” clause of the 1917 Immigration Act and deny readmission” (Hoffman 91). Many were also threatened by county officials who insisted that individuals and their family members would be removed from relief rolls if they did not accept the county’s offer to pay for their return to Mexico (Balderrama). In this way, individuals were simultaneously threatened and enticed by the offer of a free trip to Mexico. The Mexican Consulate during these repatriation campaigns was also promulgating and sponsoring campaigns to repatriate Mexicans - the expenses would be paid and some would even be repatriated to a job in Mexico, although these sorts of programs could not be sponsored throughout the entire repatriation campaign (Balderrama). The federal government has not apologized for the repatriations. In 2006, representatives Hilda Solis and Luis Gutiérrez introduced a bill calling for a commission to study the issue, and called for an apology. The state of California was the first state to apologize when it passed the "Apology Act for the 1930s Mexican Repatriation Program" in 2005, officially recognizing the "unconstitutional removal and coerced emigration of United States citizens and legal residents of Mexican descent" and apologizing to residents of California "for the fundamental violations of their basic civil liberties and constitutional rights committed during the period of illegal deportation and coerced emigration." Similar historical events targeting Mexicans and Mexican Americans Between 1920 and 1921 the US economy was hit hard by a short but deep depression.
Unemployment was estimated to have risen from 1.4% in 1919 to 11.7% in 1921. As soon as the depression hit, “US officials and employers advised the US government that a massive deportation program was the only option for relieving local and national benevolence agencies of the burden of helping braceros and their families” (Aguila 213). Although it was recorded that the federal government deported 1,268 Mexicans during this year, the government told employers “the American government would not help any emigrant who came on their own in search of work and advised employers to send them home” (Aguila 214). Workers and their families were so desperate that the Mexican government set up programs to pay for the repatriation of 150,000 Mexican emigrants. Operation Wetback The federal government responded to the increased levels of immigration that began during the war years with the official 1954 INS program called Operation Wetback, in which an estimated one million persons, the majority of whom were Mexican nationals and undocumented immigrants though some were US citizens, were deported to Mexico. The documentary film “A Forgotten Injustice,” directed by Vicente Serrano and produced by Mechicano Films, “includes interviews with historians, politicians and survivors. Among them, Former California State Senator Joseph Dunn, John Coatsworth, Dean, School of International and Public Affairs at Columbia University, Hilda Solis, US Representative, Raymond Rodriguez, Professor of History, emeritus, Long Beach City College, Francisco Balderrama, co-author of “Decade of Betrayal”, Ernesto Nava Villa, Son of Pancho Villa, and John Eastman, Dean, Kennedy Law School at Chapman University.” See also - Bisbee Deportation (1917) - Deportee (Plane Wreck At Los Gatos) (1948) - Operation Wetback (1954) - Chandler Roundup (1997) - Bracero Program Further reading - Abraham Hoffman, Unwanted Mexican Americans in the Great Depression: Repatriation Pressures, 1929-1939 (Tucson: University of Arizona Press, 1974) - Francisco Balderrama and Raymond Rodríguez, Decade of Betrayal: Mexican Repatriation in the 1930s (Albuquerque: University of New Mexico Press, 1995), ISBN 0-8263-1575-5 - John Chavez, The Lost Land: A Chicano Image of the Southwest, (New Mexico University, 1984) - INS Yearbook of Statistics-Years 1929 to 1939 - Robert R. McKay, "The Federal Deportation Campaign in Texas: Mexican Deportation from the Lower Rio Grande Valley during the Great Depression," Borderlands Journal, Fall 1981 - National Academy of Sciences, 1998, "The Immigration Debate" - Peter Skerry, "Mexican Americans: The Ambivalent Minority" - Christine Valenciana, "Unconstitutional Deportation of Mexican Americans During the 1930s: A Family History and Oral History," Multicultural Education, Spring 2006 - Immigration: Mexican: Depression and the Struggle for Survival, the Library of Congress - Johnson, Kevin (Fall 2005). "The Forgotten "Repatriation" of Persons of Mexican Ancestry and Lessons for the "War on Terror"". Davis, California: Pace Law Review. - Navarro, Sharon Ann and Mejia, Armando Xavier, Latino Americans and Political Participation. Santa Barbara, Calif.: ABC-CLIO, 2004. ISBN 1-85109-523-3. page 277. - Ruiz, Vicki L. (1998). From Out of the Shadows: Mexican Women in Twentieth-Century America. New York: Oxford University Press. ISBN 0-19-513099-5. - (Hunt 2006) - Aguila, Jamie R. (March 2007). "Mexican/U.S. Immigration Policy Prior to the Great Depression".
The Journal of the Society for Historians of American Foreign Relations, Diplomatic History 31: 207–225. - "Treaty of Guadalupe Hidalgo 1848". archives.gov. Retrieved May 24, 2011. - Allan Englekirk. "Mexican Americans". Retrieved May 24, 2011. - PBS. "U.S.-Mexican War, 1846-1848". Retrieved May 24, 2011. - Rodriguez, Clara E. (2000). Changing Race: Latinos, the Census and the History of Ethnicity. New York: New York University Press. - Hoffman, Abraham (1974). Unwanted Mexican Americans in the Great Depression: Repatriation Pressures, 1929-1939. Tucson: University of Arizona Press. - Hayes, Helene (2001). U.S. Immigration Policy and the Undocumented. Connecticut: Praeger Publisher. - Balderrama, Francisco (2006). Decade of Betrayal: Mexican Repatriation in the 1930s. Albuquerque: University of New Mexico. - Garraty, John A. (1986). The Great Depression. San Diego: Harcourt Brace Jovanovich. - "U.S. Bureau of Labor Statistics". US Department of Labor. Retrieved 5 July 2012. - Bogardus, Emory S. 1933. "Mexican Repatriates," Sociology and Social Research, 18 (November/December): 169-76. - Betten, Neil, and Raymond A. Mohl, 1973. "From Discrimination to Repatriation: Mexican Life in Gary, Indiana, during the Great Depression." Pacific Historical Review, 42 (August): 370-88. - Koch, Wendy (2006-04-05). "1930s Deportees Await Apology". USA Today. Retrieved 2010-05-12. - SB 670 Senate Bill - CHAPTERED - wiki. "Operation Wetback". Retrieved May 24, 2011. - Texas State Historical Association. "Operation Wetback". Retrieved May 24, 2011. - "A Forgotten Injustice". Retrieved May 24, 2011. - History From The Margins - The American Apple: From Family Grown to Foreign Migrant Labor - Letter of repatriation (1933) sent to California resident
http://en.wikipedia.org/wiki/Mexican_Repatriation
Global warming is the long-term, cumulative effect that greenhouse gases, primarily carbon dioxide and methane, have on Earth's temperature when they build up in the atmosphere and trap the sun's heat. It's also a hotly debated topic. Some wonder if it's really happening and, if it's real, is it the fault of human actions, natural causes or both? When we talk about global warming, we're not talking about how this summer's temperatures were hotter than last year's. Instead, we're talking about climate change, changes that happen to our environment, atmosphere and weather over time. Think decades, not seasons. The term global warming itself is a bit deceptive because it implies we should expect things to get hotter -- not necessarily stormier, drier and even, in some instances, colder. Climate change impacts the hydrology and biology of the planet -- everything, including winds, rains and temperature, is linked. Scientists have observed that the Earth's climate has a long history of variability, from the cold climes of the Ice Age to temperatures as hot as an Easy-Bake oven. These changes are sometimes noted over a few decades and sometimes stretch over thousands of years. What can we expect from a planet undergoing climate changes? Scientists studying our climate have been able to observe and measure changes happening around us. For example, mountain glaciers are smaller now than they were 150 years ago, and in the last 100 years, the average global temperature has increased by roughly 1.4 degrees F (0.8 degrees C) [source: EPA]. Computer modeling allows scientists to predict what could happen if the climate pattern continues on its current course, projecting, for instance, that temperatures could rise an average of 2 to 11.5 degrees F (1.1 to 6.4 degrees C) by the end of the 21st century [source: EPA]. In this article, we'll look at 10 of the worst effects of climate change, including some immediate effects observed and some hypothesized through climate modeling. 10. Rising Sea Level Earth's hotter temperature doesn't necessarily mean the Miami lifestyle is moving to the Arctic, but it does mean rising sea levels. How are hotter temperatures linked to rising waters? Hotter temperatures mean ice -- glaciers, sea ice and polar ice sheets -- is melting, increasing the amount of water in the world's seas and oceans. Scientists are able to measure that melt water from Greenland's ice cap directly impacts people in the United States: The flow of the Colorado River has increased sixfold [source: Scientific American]. And scientists project that as the ice shelves on Greenland and Antarctica melt, sea levels could be more than 20 feet (6 meters) higher in 2100 than they are today [source: An Inconvenient Truth]. Such levels would submerge many of Indonesia's tropical islands and flood low-lying areas such as Miami, New York City's Lower Manhattan and Bangladesh. 9. Shrinking Glaciers You don't need special equipment to see that glaciers around the world are shrinking. Tundra once covered with thick permafrost is melting with rising surface temperatures and is now coated with plant life. In the span of a century, glaciers in Montana's Glacier National Park have dwindled from 150 to just 35 [source: New York Times]. And the Himalayan glaciers that feed the Ganges River, which supplies drinking and irrigation water to 500 million people, are reportedly shrinking by 40 yards (37 meters) each year [source: The Washington Post].
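Before moving on to the next item, a quick note on the unit conversions used throughout this article: the Fahrenheit and Celsius figures quoted are temperature changes, so they convert with the 5/9 factor alone, without the 32-degree offset used for absolute readings. The short Python check below simply verifies the pairs of numbers given in the text.

# Quick check of the Fahrenheit/Celsius pairs quoted in this article. A
# temperature *change* converts with the factor 5/9 alone; the +32 offset
# applies only to absolute readings, not to differences.

def delta_f_to_c(delta_f):
    """Convert a temperature difference in degrees F to degrees C."""
    return delta_f * 5.0 / 9.0

for df in (1.4, 2.0, 11.5):
    print(f"a change of {df} degrees F is about {delta_f_to_c(df):.1f} degrees C")
# prints ~0.8, ~1.1 and ~6.4 degrees C, matching the figures in the text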
8. Heat Waves The deadly heat wave that swept across Europe in 2003, killing an estimated 35,000 people, could be the harbinger of an intense heat trend that scientists began tracking in the early 1900s [source: MSNBC]. Extreme heat waves are happening two to four times more often now, steadily rising over the last 50 to 100 years, and are projected to be 100 times more likely over the next 40 years [source: Global Development and Environment Institute, Tufts University]. Experts suggest continued heat waves may mean future increases in wildfires, heat-related illness and a general rise in the planet's mean temperature. 7. Storms and Floods Experts use climate models to project the impact rising global temperatures will have on precipitation. However, no modeling is needed to see that severe storms are happening more frequently: In just 30 years the occurrence of the strongest hurricanes -- categories 4 and 5 -- has nearly doubled [source: An Inconvenient Truth]. Warm waters give hurricanes their strength, and scientists are correlating the increase in ocean and atmospheric temperatures to the rate of violent storms. During the last few years, both the United States and Britain have experienced extreme storms and flooding, costing lives and billions of dollars in damages. Between 1905 and 2005 the frequency of hurricanes was on a steady ascent. From 1905 to 1930, there were an average of 3.5 hurricanes per year; 5.1 between 1931 and 1994; and 8.4 between 1995 and 2005 [source: USA Today]. In 2005, a record number of tropical storms developed, and in 2007, the worst flooding in 60 years hit Britain [sources: Reuters, Center for American Progress]. 6. Drought While some parts of the world may find themselves deluged by increasing storms and rising waters, other areas may find themselves suffering from drought. As the climate warms, experts estimate drought conditions may increase by at least 66 percent [source: Scientific American]. An increase in drought conditions leads quickly to a shrinking water supply and a decrease in quality agricultural conditions. This puts global food production and supply in danger and leaves populations at risk for starvation. Today, India, Pakistan and sub-Saharan Africa already experience droughts, and experts predict precipitation could continue to dwindle in the coming decades. Estimates paint a dire picture. The Intergovernmental Panel on Climate Change suggests that by 2020, 75 to 250 million Africans may experience water shortages, and the continent's agricultural output will decrease by 50 percent [source: BBC]. 5. Disease Depending on where you live, you may use bug repellant to protect against West Nile virus or Lyme disease. But when was the last time you considered your risk of contracting dengue fever? Warmer temperatures along with associated floods and droughts are encouraging worldwide health threats by creating an environment where mosquitoes, ticks, mice and other disease-carrying creatures thrive. The World Health Organization (WHO) reports that outbreaks of new or resurgent diseases are on the rise and in more disparate countries than ever before, including tropical illnesses in once cold climates -- such as mosquitoes infecting Canadians with West Nile virus. More than 150,000 people die from climate change-related sickness each year, and everything from heat-related heart and respiratory problems to malaria is on the rise [source: The Washington Post]. Cases of allergies and asthma are also increasing. How is hay fever related to global warming?
Global warming fosters increased smog -- which is linked to mounting instances of asthma attacks -- and also advances weed growth, a bane for allergy sufferers. 4. Economic Consequences The costs associated with climate change rise along with the temperatures. Severe storms and floods combined with agricultural losses cause billions of dollars in damages, and money is needed to treat and control the spread of disease. Extreme weather can create extreme financial setbacks. For example, during the record-breaking hurricane year of 2005, Louisiana saw a 15 percent drop in income during the months following the storms, while property damage was estimated at $135 billion [source: Global Development and Environment Institute, Tufts University]. Economic considerations reach into nearly every facet of our lives. Consumers face rising food and energy costs along with increased insurance premiums for health and home. Governments suffer the consequences of diminished tourism and industrial profits, soaring energy, food and water demands, disaster cleanup and border tensions. And ignoring the problem won't make it go away. A recent study conducted by the Global Development and Environment Institute at Tufts University suggests that inaction in the face of global warming crises could result in a $20 trillion price tag by 2100 [source: Global Development and Environment Institute, Tufts University]. 3. Conflicts and War Declining amounts of quality food, water and land may be leading to an increase in global security threats, conflict and war. National security experts analyzing the current conflict in Sudan's Darfur region suggest that while global warming is not the sole cause of the crisis, its roots may be traced to the impact of climate change, specifically the reduction of available natural resources [source: Seattle Post-Intelligencer]. The violence in Darfur broke out during a time of drought, after two decades of little-to-no rain along with rising temperatures in the nearby Indian Ocean. Scientists and military analysts alike are theorizing climate change and its consequences such as food and water instability pose threats for war and conflict, suggesting that violence and ecological crises are entangled. Countries suffering from water shortages and crop loss become vulnerable to security trouble, including regional instability, panic and aggression. 2. Loss of Biodiversity Species loss and endangerment is rising along with global temperatures. As many as 30 percent of plant and animal species alive today risk extinction by 2050 if average temperatures rise more than 2 to 11.5 degrees F (1.1 to 6.4 degrees C) [sources: EPA, Scientific American]. Such extinctions will be due to loss of habitat through desertification, deforestation and ocean warming, as well as the inability to adapt to climate warming. Wildlife researchers have noted some of the more resilient species migrating to the poles, far north and far south to maintain their needed habitat; the red fox, for example, normally an inhabitant of North America, is now seen living in the Arctic. Humans also aren't immune to the threat. Desertification and rising sea levels threaten human habitats. And when plants and animals are lost to climate change, human food, fuel and income are lost as well. 1. 
Destruction of Ecosystems Changing climatic conditions and dramatic increases in carbon dioxide will put our ecosystems to the test, threatening supplies of fresh water, clean air, fuel and energy resources, food, medicine and other matters we depend upon not just for our lifestyles but for our survival. Evidence shows effects of climate change on physical and biological systems, which means no part of the world is spared from the impact of changes to land, water and life. Scientists are already observing the bleaching and death of coral reefs due to warming ocean waters, as well as the migration of vulnerable plants and animals to alternate geographic ranges due to rising air and water temperatures and melting ice sheets. Models based on varied temperature increases predict scenarios of devastating floods, drought, wildfires, ocean acidification and eventual collapse of functioning ecosystems worldwide, terrestrial and aquatic alike. Forecasts of famine, war and death paint a dire picture of climate change on our planet. Scientists are researching the causes of these changes and the vulnerability of Earth not to predict the end of days but rather to help us mitigate or reduce changes that may be caused by humans. If we know and understand the problems and take action through adaptation, the use of more energy-efficient and sustainable resources and the adoption of other green ways of living, we may be able to make some impact on the climate change process. More Great Links - "50 million on the run from deserts, warming?" MSNBC. 2007. http://www.msnbc.msn.com/id/19479607/ - Ackerman, Frank and Elizabeth Stanton. "Climate Change - the Costs of Inaction." Global Development and Environment Institute, Tufts University. 2006. http://ase.tufts.edu/gdae/Pubs/rp/Climate-CostsofInaction.pdf - "Americans warned of global warming 'dust bowl'" Australian Broadcasting Corporation. 2007. http://www.abc.net.au/news/newsitems/200704/s1891479.htm - "Arctic sea ice cover at record low." CNN. 2007. http://www.cnn.com/2007/TECH/science/09/11/arctic.ice.cover/index.html?iref=mpstoryview - "Basic Information." Climate Change. U.S. Environmental Protection Agency. http://www.epa.gov/climatechange/basicinfo.html - Biello, David. "State of the Science: Beyond the Worst Case Climate Change Scenario." Scientific American. 2007. http://www.sciam.com/article.cfm?id=state-of-the-science-beyond-the-worst-climate-change-case - "Billions face climate change risk." BBC News. 2007. http://news.bbc.co.uk/go/pr/fr/-/2/hi/science/nature/6532323.stm - "Changed climate, changed species." MSNBC. 2007. http://www.msnbc.msn.com/id/17861866/ - Charles, Dan. "Food & Climate: A Complicated but Optimistic View." NPR. 2007. http://www.npr.org/templates/story/story.php?storyId=15747012 - "Effects of Global Warming." National Geographic. http://science.nationalgeographic.com/science/environment/global-warming/gw-effects.html - Eilperin, Juliet. "Military Sharpens Focus on Climate Change." The Washington Post. 2007. http://www.washingtonpost.com/wp-dyn/content/article/2007/04/14/AR2007041401209.html - Faris, Stephan. "The Real Roots of Darfur." The Atlantic. 2007. http://www.theatlantic.com/doc/200704/darfur-climate - "Feeling the heat: Climate change and biodiversity loss." Nature. 2004.
http://www.nature.com/nature/links/040108/040108-1.html - "Frequent Questions - Effects." Climate Change. U.S. Environmental Protection Agency. 2008. http://www.epa.gov/climatechange/fq/effects.html - "Global Insecurity: Conflicts heat up." Seattle Post-Intelligencer. 2007. http://seattlepi.nwsource.com/opinion/320929_secured.html - "Global Warming." The New York Times. http://topics.nytimes.com/top/news/science/topics/globalwarming/index.html - "Global warming might hurt your heart." MSNBC. 2007. http://www.msnbc.msn.com/id/20607048/ - "Global Warming Would Foster Spread Of Dengue Fever Into Some Temperate Regions." ScienceDaily. 1998. http://www.sciencedaily.com/releases/1998/03/980310081157.htm - "Impacts of Climate Change on Washington's Economy." Washington State Department of Ecology. http://www.ecy.wa.gov/climatechange/economic_impacts.htm - Leahy, Stephen. "Grim Signs Mark Global Warming." Wired. 2004. http://www.wired.com/science/discoveries/news/2004/11/65654 - Loney, Jim. "Warming may reduce hurricanes hitting U.S." Reuters. 2008. http://www.reuters.com/article/environmentNews/idUSN2364087920080123?feedType=RSS&feedName=environmentNews&sp=true - Moon, Ban Ki. "A Climate Culprit in Darfur." The Washington Post. 2007. http://www.washingtonpost.com/wp-dyn/content/article/2007/06/15/AR2007061501857.html - Morell, Virginia. "Signs From Earth: Now What?" National Geographic. 2004. http://science.nationalgeographic.com/science/environment/global-warming/time-signs.html - "NPAA El Niño Page." U.S. Department of Commerce National Oceanic and Atmospheric Administration. http://www.elnino.noaa.gov/ - "People & Ecosystems." World Resources Institute. http://www.wri.org/node/4048 - Perlman, David. "Shrinking glaciers evidence of global warming." The San Francisco Chronicle. 2004. http://www.sfgate.com/cgi-bin/article.cgi?f=/c/a/2004/12/17/MNGARADH401.DTL - "Scientific Facts on Climate Change 2007 Update." GreenFacts. 2007. http://www.greenfacts.org/en/climate-change-ar4/index.htm - "Shrinking Glaciers." The New York Times. December 1, 2002. http://query.nytimes.com/gst/fullpage.html?res=9B00E7D91038F932A35751C1A9649C8B63 - Struck, Doug. "Climate Change Drives Disease To New Territory." The Washington Post. 2006. http://www.washingtonpost.com/wp-dyn/content/article/2006/05/04/AR2006050401931.html - "The Basics of Global Warming." Sierra Club. http://www.sierraclub.org/energy/overview/ - "The Top 100 Effects of Global Warming." Center for American Progress. 2007. http://www.americanprogress.org/issues/2007/09/climate_100.html - "Understanding climate change." USA Today. http://www.usatoday.com/weather/resources/climate/wclisci0.htm?loc=interstitialskip - Vergano, Dan. "Study links more hurricanes, climate change." USA Today. 2007. http://www.usatoday.com/weather/hurricane/2007-07-29-more-hurricanes_N.htm - Wax, Emily. "A Sacred River Endangered by Global Warming." The Washington Post. 2007. http://www.washingtonpost.com/wp-dyn/content/article/2007/06/16/AR2007061600461.html - "What is an El Niño?" U.S. Department of Commerce National Oceanic and Atmospheric Administration. http://www.pmel.noaa.gov/tao/elnino/el-nino-story.html - "What is Global Warming?" An Inconvenient Truth. http://www.climatecrisis.net/thescience/
http://dsc.discovery.com/tv-shows/curiosity/topics/worst-effects-global-warming.htm
Foraging is searching for and exploiting food resources. It affects an animal's fitness because it plays an important role in an animal's ability to survive and reproduce. Foraging theory is a branch of behavioral ecology that studies the foraging behavior of animals in response to the environment in which the animal lives. Behavioral ecologists use economic models to understand foraging; many of these models are a type of optimality model. Thus foraging theory is discussed in terms of optimizing a payoff from a foraging decision. The payoff for many of these models is the amount of energy an animal receives per unit time, more specifically, the highest ratio of energetic gain to cost while foraging. Foraging theory predicts that the decisions that maximize energy per unit time, and thus deliver the highest payoff, will be selected for and persist. Key words used to describe foraging behavior include: (1) resources, the elements necessary for survival and reproduction which are in limited supply; (2) predator, any organism that consumes others; and (3) prey, an organism that is eaten in part or in whole by another. Behavioral ecologists first tackled this topic in the 1960s and 1970s. Their goal was to quantify and formalize a set of models to test their null hypothesis that animals forage randomly. Important contributions to foraging theory have been made by:
- Eric Charnov, who developed the marginal value theorem to predict the behavior of foragers using patches;
- Sir John Krebs, with work on the optimal diet model in relation to tits and chickadees;
- John Goss-Custard, who first tested the optimal diet model against behavior in the field, using redshank, and then proceeded to an extensive study of foraging in the Common Pied Oystercatcher.
Factors influencing Foraging Behavior
Several factors affect an animal's ability to forage and acquire highly profitable resources. Learning is defined as an adaptive change or modification of a behavior based on a previous experience. Since an animal's environment is constantly changing, the ability to adjust foraging behavior is essential for maximization of fitness. Studies in social insects have shown that there is a significant correlation between learning and foraging performance. In nonhuman primates, young individuals learn foraging behavior from their peers and elders by watching other group members forage and by copying their behavior. Observing and learning from other members of the group ensures that the younger members of the group learn what is safe to eat and become proficient foragers. One measure of learning is 'foraging innovation'—an animal consuming new food, or using a new foraging technique, in response to its dynamic living environment. Foraging innovation is considered learning because it involves behavioral plasticity on the animal's part: the animal recognizes the need to come up with a new foraging strategy and introduces something it has never used before to maximize its fitness (survival). Forebrain size has been associated with learning behavior, and animals with larger forebrains are expected to learn better. A higher ability to innovate has been linked to larger forebrain sizes in North American and British Isle birds according to Lefebvre et al. (1997). In this study, bird orders that contained individuals with larger forebrain sizes displayed a higher amount of foraging innovation.
Examples of innovations recorded in birds include following tractors and eating the frogs and insects killed by them, and using swaying trees to catch prey. Foraging behavior can also be influenced by genetics. The genes associated with foraging behavior have been widely studied in honeybees with reference to the following: the onset of foraging behavior, task division between foragers and workers, and bias in foraging for either pollen or nectar. Honey bee foraging activity occurs both inside and outside the hive, for either pollen or nectar. Studies using Quantitative Trait Loci (QTL) mapping have associated the following loci with the corresponding functions: Pln-1 and Pln-4 with the onset of foraging age, Pln-1 and Pln-2 with the size of the pollen loads collected by workers, and Pln-2 and Pln-3 with the sugar concentration of the nectar collected.
Predation refers to the presence of predators while an animal is foraging. In general, foragers balance the risk of predation with their needs, thus deviating from the foraging behaviour that would be expected in the absence of predators. Similarly, parasitism can affect the way in which animals forage. Parasitism can affect foraging at several levels. Animals might simply avoid food items that increase their risk of being parasitized, as when the prey items are intermediate hosts of parasites. Animals might also avoid areas that would expose them to a high risk of parasitism. Finally, animals might effectively self-medicate, either prophylactically or therapeutically.
Types of Foraging
Foraging can be categorized into two main types. The first is solitary foraging, when animals forage by themselves. The second is group foraging. Group foraging includes cases in which animals forage together because it is beneficial for them to do so (called an aggregation economy) and cases in which it is detrimental for them to do so (called a dispersion economy).
Solitary Foraging
Solitary foraging is when animals find, capture and consume their prey alone. Individuals can manually exploit patches or they can use tools to exploit their prey. Animals may choose to forage on their own when resources are abundant, which can occur when the habitat is rich or when the number of conspecifics foraging is small. In these cases there may be no need for group foraging. In addition, foraging alone can result in less interaction with other foragers, which can decrease the amount of competition and dominance interactions an animal deals with. It also ensures that a solitary forager is less conspicuous to predators. Solitary foraging strategies characterize many of the phocids (the true seals), such as the elephant and harbor seals. An example of an exclusively solitary forager is the South American harvester ant, Pogonomyrmex vermiculatus.
Tool use in Solitary Foraging
Some examples of tool use include dolphins using sponges to feed on fish that bury themselves in the sediment, New Caledonian crows that use sticks to get larvae out of trees, and chimpanzees that similarly use sticks to capture and consume termites.
Solitary Foraging and Optimal Foraging Theory
The theory scientists use to understand solitary foraging is called optimal foraging theory. Optimal foraging theory (OFT) was first proposed in 1966, in two papers published independently, by Robert MacArthur and Eric Pianka, and by J. Merritt Emlen.
This theory argues that because of the key importance of successful foraging to an individual's survival, it should be possible to predict foraging behavior by using decision theory to determine the behavior that an "optimal forager" would exhibit. Such a forager has perfect knowledge of what to do to maximize usable food intake. While the behavior of real animals inevitably departs from that of the optimal forager, optimal foraging theory has proved very useful in developing hypotheses for describing real foraging behavior. Departures from optimality often help to identify constraints either in the animal's behavioral or cognitive repertoire, or in the environment, that had not previously been suspected. With those constraints identified, foraging behavior often does approach the optimal pattern even if it is not identical to it. In other words, optimal foraging theory tells us that animals are not foraging randomly even if their behavior doesn't perfectly match what is predicted by OFT.
Versions of OFT
There are many versions of optimal foraging theory that are relevant to different foraging situations. These models generally possess the following components, according to Stephens et al. (2007):
- currency: an objective function, what we want to maximize, in this case energy over time as a currency of fitness;
- decision: the set of choices under the organism's control, or the decisions that the organism exhibits;
- constraints: "an organism's choices are constrained by genetics, physiology, neurology, morphology and the laws of chemistry and physics."
Some of these versions include:
1. The optimal diet model, which analyzes the behavior of a forager that encounters different types of prey and must choose which to attack. This model is also known as the prey model or the attack model. In this model the predator encounters different prey items and decides whether to spend time handling or eating the prey. It predicts that foragers should ignore low-profitability prey items when more profitable items are present and abundant. The objective of this model is to identify the choice that will maximize fitness. How profitable a prey item is depends on ecological variables such as the time required to find, capture, and consume the prey, in addition to the energy it provides. It is likely that an individual will settle for a trade-off between maximizing its intake rate while eating and the search interval between prey.
2. Patch selection theory, which describes the behavior of a forager whose prey is concentrated in small areas known as patches, with a significant travel time between them. The model seeks to find out how much time an individual will spend on one patch before deciding to move to the next patch. To understand whether an animal should stay at a patch or move to a new one, think of a bear in a patch of berry bushes. The longer the bear stays at the patch of berry bushes, the fewer berries there are for it to eat. The bear must decide how long to stay, and thus when to leave that patch and move to a new patch. Movement depends on the travel time between patches and the energy gained from one patch versus another. This is based on the marginal value theorem.
2.1. Central place foraging theory is a version of the patch model. It describes the behavior of a forager that must return to a particular place to consume food, or perhaps to hoard food or feed it to a mate or offspring. Chipmunks are a good example of this model: as travel time between the patch and their hiding place increased, the chipmunks stayed longer at the patch.
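To make the logic of these two models concrete, here is a minimal formal sketch. It is not taken from the sources cited in this article; the symbols (e_i, h_i, lambda_i, g, t, tau) are our own shorthand, and the expressions are the standard textbook statements of the decision rules described in points 1 and 2 above.
% Notation (assumed for illustration only): e_i = net energy from one item of
% prey type i, h_i = handling time for type i, \lambda_i = encounter rate while
% searching, g(t) = cumulative energy gained after residence time t in a patch,
% \tau = average travel time between patches.
%
% Optimal diet ("prey") model, zero-one rule: rank prey types by profitability
% e_1/h_1 > e_2/h_2 > ...; add type j to the diet only if
\[
  \frac{e_j}{h_j} \;\ge\; \frac{\sum_{i<j}\lambda_i e_i}{1+\sum_{i<j}\lambda_i h_i},
\]
% i.e. only if attacking it at least matches the long-term rate obtainable from
% the more profitable types alone. This is why abundant high-value prey should
% make low-value prey drop out of the diet entirely.
%
% Marginal value theorem ("patch" model): leave a patch at the residence time
% t* at which the marginal gain rate falls to the best average rate for the
% habitat as a whole, travel time included:
\[
  g'(t^{*}) \;=\; \frac{g(t^{*})}{\tau + t^{*}}.
\]
% With diminishing returns (g concave), a larger travel time \tau yields a
% larger t*, which matches the chipmunk pattern described above.
Both rules follow from maximizing the same currency defined earlier: long-term energy gained per unit time.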
In recent decades, optimal foraging theory has often been applied to the foraging behavior of human hunter-gatherers. Although this is controversial, coming under some of the same kinds of attack as the application of sociobiological theory to human behavior, it does represent a convergence of ideas from human ecology and economic anthropology that has proved fruitful and interesting.
Group Foraging
Group foraging is when animals find, capture and consume prey in the presence of other individuals. In other words, it is foraging in which success depends not only on an animal's own foraging behaviors but on the behaviors of others as well. An important note here is that group foraging can emerge in two types of situations. The first situation, the one most frequently thought of, occurs when foraging in a group is beneficial and brings greater rewards, known as an aggregation economy. The second situation occurs when a group of animals forage together even though it may not be in an animal's best interest to do so, known as a dispersion economy. Think of a cardinal at a bird feeder as an example of a dispersion economy: we might see a group of birds foraging at that feeder, but it is not in the best interest of the cardinal for any of the other birds to be there too. The amount of food the cardinal can get from the feeder depends on how much it can take, but also on how much the other birds take as well.
Cost and benefits of group foraging
As already mentioned, group foraging brings both costs and benefits to the members of the group. The benefits of group foraging include being able to capture larger prey, being able to create aggregations of prey, being able to capture prey that are difficult or dangerous and, most importantly, a reduction of the predation threat. With regard to costs, however, group foraging results in competition for available resources among group members. Competition for resources can be characterized either by scramble competition, whereby each individual strives to get a portion of the shared resource, or by interference competition, whereby the presence of competitors prevents a forager's access to resources. Group foraging can thus reduce an animal's foraging payoff. Group foraging may also be influenced by the size of a group. In some species, like lions and wild dogs, foraging success increases with group size up to an optimal size, then declines once that size is exceeded. A myriad of factors affect group sizes in different species. For example, lionesses (female lions) do not make decisions about foraging in a vacuum. They make decisions that reflect a balance between obtaining food, defending their territory and protecting their young. In fact, lion foraging behavior does not maximize energy gain: lions do not behave optimally with respect to foraging because they have to defend their territory and protect their young, so they hunt in small groups to reduce the risk of being caught alone. Another factor that may influence group size is the cost of hunting: to understand the behavior of wild dogs and their average group size, we must incorporate the distance the dogs run.
Theorizing on hominid foraging during the Aurignacian
Blades (2001) described a forager as performing at optimal efficiency when the individual has weighed the costs of searching for and pursuing prey against the considerations of prey selection. In selecting an area to work within, the individual would also have had to decide the correct time to move to another location, based on the perceived yield remaining there and the potential yields of other available areas.
Group foraging and the Ideal Free Distribution
The theory scientists use to understand group foraging is called the ideal free distribution. This is the null model for thinking about what would draw animals into groups to forage and how they would behave in the process. This model predicts that animals will make an instantaneous decision about where to forage based on the quality (prey availability) of the patches available at that time and will choose the most profitable patch, the one that maximizes their energy intake. This quality depends on the starting quality of the patch and the number of predators already there consuming the prey.
Successful foraging is essential for the survival and reproduction of an organism. Foraging behavior is affected by many factors, and these likely differ across species. Foraging behavior is a phenotype, and thus it is determined by the genotype of the individual and its environment (the availability of resources and the presence of predators). It is important to understand how foraging behavior fits into the context of an organism's life history and how this in turn affects the foraging decisions the organism makes. In times of crisis, such as the depletion of resources, animals gain from having foraging innovation abilities. Since there is such a clear link between foraging behavior and fitness, it is easy to understand how behaviors that help an organism survive and reproduce will be selected for and passed on. For some organisms this might be the ability to use tools: without tools the individual might not be able to find the most profitable prey (New Caledonian crows). For others it might be the size of the pollen load an individual collects (honeybees). For others it might be creating a way to cooperatively hunt schools of fish in the dark ocean (spinner dolphins). Every species and every individual is different, but the main aim is to find a way to balance maximizing food intake with other aspects of life.
See also
- Danchin, E., Giraldeau, L., and Cezilly, F. (2008). Behavioural Ecology. New York: Oxford University Press. ISBN 978-0-19-920629-2.
- Hughes, Roger N., ed. (1989). Behavioural Mechanisms of Food Selection. London & New York: Springer-Verlag. p. v. ISBN 0-387-51762-6.
- Raine, N.E. and Chittka, L. (2008). "The correlation of learning speed and natural foraging success in bumble-bees". Proceedings of the Royal Society B: Biological Sciences 275 (1636): 803.
- Rapaport, L.G. and Brown, G.R. (2008). "Social influences on foraging behavior in young nonhuman primates: learning what, where and how to eat". Evolutionary Anthropology: Issues, News and Reviews 17 (4): 189–201.
- Dugatkin, Lee Alan (2004). Principles of Animal Behavior.
- Lefebvre, Louis; Whittle, Patrick; Lascaris, Evan and Finkelstein, Adam (1997). "Feeding innovations and forebrain size in birds". Animal Behaviour 53: 549–560. doi:10.1006/anbe.1996.0330.
- Hunt, G.J., et al. (2007). "Behavioral genomics of honeybee foraging and nest defense". Naturwissenschaften 94 (4): 247–267.
- Riedman, Marianne (1990). The Pinnipeds: Seals, Sea Lions, and Walruses. Berkeley: University of California Press. ISBN 0-520-06497-6.
- le Roux, Aliza; Cherry, Michael I. and Gygax, Lorenz (5 May 2009). "Vigilance behaviour and fitness consequences: comparing a solitary foraging and an obligate group-foraging mammal". Behavioral Ecology and Sociobiology 63: 1097–1107. doi:10.1007/s00265-009-0762-1.
- Torres-Contreras, Hugo; Olivares-Donoso, Ruby and Niemeyer, Hermann M. (2007). "Solitary Foraging in the Ancestral South American Ant, Pogonomyrmex vermiculatus. Is it Due to Constraints in the Production or Perception of Trail Pheromones?". Journal of Chemical Ecology 33 (2): 435–440. doi:10.1007/s10886-006-9240-7.
- Patterson, E.M. and Mann, J. (2011). "The Ecological Conditions That Favor Tool Use and Innovation in Wild Bottlenose Dolphins (Tursiops sp.)". PLoS ONE 6 (7). doi:10.1371/journal.pone.0022243.
- Rutz, C., et al. (2010). "The ecological significance of tool use in New Caledonian Crows". Science 329 (5998): 1523. doi:10.1126/science.1192053.
- Goodall, Jane (1964). "Tool-using and aimed throwing in a community of free-living chimpanzees". Nature 201 (4926): 1264–6. doi:10.1038/2011264a0. PMID 14151401.
- MacArthur, R.H. and Pianka, E.R. (1966). "On the optimal use of a patchy environment". American Naturalist 100 (916): 603–9. doi:10.1086/282454. JSTOR 2459298.
- Emlen, J.M. (1966). "The role of time and energy in food preference". American Naturalist 100: 611–617. doi:10.1086/282455. JSTOR 2459299.
- Stephens, D.W., Brown, J.S., and Ydenberg, R.C. (2007). Foraging: Behavior and Ecology. Chicago: University of Chicago Press.
- Packer, C.; Scheel, D. and Pusey, A.E. (1990). "Why lions form groups: food is not enough". American Naturalist.
- Benoit-Bird, Kelly and Au, Whitlow W.L. (January 2009). "Cooperative prey herding by the pelagic dolphin, Stenella longirostris". JASA 125.
- Creel, S. and Creel, N.M. (1995). "Communal hunting and pack size in African wild dogs, Lycaon pictus". Animal Behaviour.
- Blades, B.S. (2001). Aurignacian Lithic Economy: Ecological Perspectives from Southwestern France. Springer. ISBN 0306463342. Retrieved 2012-07-08.
- Caress, Badiday (2000). "The emergence and stability of cooperative fishing on Ifaluk Atoll". In Human Behavior and Adaptation: An Anthropological Perspective, edited by L. Cronk, N. Chagnon, and B. Irons, pp. 437–472.
http://en.wikipedia.org/wiki/Foraging
Holocaust Education & Archive Research Team
The Lodz Ghetto
Introduction to the Ghettos of the Holocaust
The Lodz Ghetto 1940 - 1944
The First Months of Occupation
The persecution of Jews began soon after Lodz was occupied by the Germans on 8 September 1939. The racist Nuremberg Laws of September 1935, which the Nazis had applied to German, Austrian and Czech Jews, were immediately enforced. From 9 November 1939 Lodz came under the authority of Gauleiter Artur Greiser, who was a resolute advocate of the rapid and total Germanisation of the areas under his command. In a short time, together with his subordinates Friedrich Übelhör, President of the Kalisz-Lodz region, and Leister, Commissioner of Lodz, Greiser enacted a series of drastic decrees. The Jews of Lodz were subjected to legal restrictions and various orders and bans, many of which were applied for the first time; the laws introduced earlier in Germany, Austria and Czechoslovakia were made more rigorous in Lodz. Chronologically, the first discriminatory measures directed against the Jews were orders of the Chief of Civil Administration, 8th Army, published on 18 September 1939, which restricted currency exchange and prohibited trade in leather and textile goods. The inclusion of Lodz in the Reich intensified the legal terror against the Jews. New instructions constituted typically anti-Jewish legislation. The occupation authorities aimed at the complete separation of Jews from the general population and at limiting their freedom of movement. The first of these regulations was announced on 31 October 1939 by the Chief of Police of Lodz – an order to mark every enterprise and shop with prominent signs indicating the owner's nationality, which made it easier for the Germans to pillage Jewish stores and workshops. On 14 November 1939 Friedrich Übelhör announced additional restrictive measures: "The Jews were to wear on their right arm, directly under the armpit, with no regard to age or sex, a 10 cm wide band of Jewish-yellow colour as a special sign. Those who violate this order are liable to face the death penalty." Übelhör also introduced a curfew, which prevented Jews from leaving their apartments from 5:00 pm till 8:00 am. Übelhör's order to mark the Jews was the first of its kind enacted in the Third Reich, having no basis in Nazi legislation. Heydrich's decree concerning the marking of Jews in the Reich was published on 1 October 1941; it did not apply to children under the age of six, and violations were not punished by death but with a fine of 150 RM or up to 6 weeks' arrest. Less than a month later Übelhör's directive was amended by a decree published on 11 December 1939 by Artur Greiser, the Gauleiter of the Wartheland: Jews were ordered to wear a yellow Star of David on the chest and back instead of armbands. Simultaneously, numerous restrictive police control measures were introduced. Jews were not allowed to walk along Piotrkowska Street or to enter city parks, and they were forbidden to use public transportation. All Jewish workers employed in "Aryan" enterprises were to be dismissed. Pillage of Jewish property proceeded on a wide scale. In the first days of the occupation the plunder was out of control. On the third day after Lodz had been occupied, armed soldiers and police began to invade Jewish apartments, workshops and stores. Many precious objects were stolen under the pretext of searching for weapons. These robberies often ended with serious assaults on the owners.
The process of plundering was actively pursued by the Germans living in Lodz, who chose to rob wealthier Jews of their best apartments and shops. Jews were often forced to pay a ransom during these actions. A little later the military, administrative and economic authorities of Lodz began to pursue a policy of officially organised pillage. As early as mid-September 1939 the military forces authorised the Association of Combing Mills to make an inventory of textile raw materials owned by Jewish merchants and to confiscate them. The plunder amounted to 1.8 million RM in goods. A few days later, on 29 September 1939, according to the decree enacted by the general commander of the Land Forces, all enterprises, workshops and real estate abandoned by their owners went into receivership. On 1 November 1939 the military authorities' rights regarding the confiscation and sale of enterprises not belonging to Germans were transferred to the General Receivership Bureau East (Haupttreuhandstelle Ost), established in October 1939. Pillage and confiscation of Jewish factories proceeded rapidly thereafter, and most confiscated raw materials were transferred to the Reich. The expropriation of Jewish property in Lodz accelerated in November and December 1939. From then on the expropriation of raw materials, half-finished goods and products was also carried out by the Lodz branch of General Bührmann's office and by the Lodz Trade Society, established in the second half of November, which aimed at taking over all the goods in stock from Jewish merchants. The Nazi authorities aimed at weakening the Jewish population by destroying its financial base. Thus, as well as the pillage of property, Jews were also ejected from economic life. Harry von Craushaar, Chief of the Military Administration of the 8th Army, issued an order on 18 September 1939 blocking all Jewish bank accounts, deposits and safes. Jews were not allowed to withdraw more than 500 zloty a week from their bank accounts, and no more than 250 zloty a week from their savings accounts. They were also forbidden to keep more than 2,000 zloty at home. On 13 October 1939 the same authority ordered all factory owners, shipping and transport companies and store owners to report all raw materials and goods produced after 10 September 1939 to the special receiver dealing with textile raw materials. All reported goods were confiscated, leaving the Jewish workshops and factories without raw materials for production. Five days later, on 18 October 1939, Jews were forbidden to trade in textile goods, leather goods and raw materials by order of the Border Guard Middle Section Commander. Failure to comply was subject to an unlimited fine, arrest or even the death penalty. As a result, Jewish handicrafts, which had flourished in Lodz, were seriously undermined and eventually destroyed. On 2 December 1939 the deputy president of Police issued an order which excluded Jews from work involving road transport, a restriction which prevented approximately a thousand Jews from earning a living. Intellectuals, lawyers, teachers, artists and doctors were also ejected from economic life. The boycott of Jewish doctors forced 40 physicians who lived in the city centre to move to the Jewish district – the Old Town – in January 1940. The Jewish community was thus imprisoned in its own apartments by the above measures and prevented from supporting itself. The Jews of Lodz, especially the poor, were left without the means to survive.
Already in the first months of the occupation, constant round-ups of Jews in the streets became a massive danger. Jews were violently dragged from houses and streetcars and forced to do hard labour by the Germans. Jews began to hide in cellars and attics. They sat there from dawn to dusk, often for several days, in constant fear of being caught and assaulted. The streets in the Jewish districts stood empty. To protect people from round-ups, the Jewish Congregation, which self-governed the Jewish community, offered co-operation with the German authorities regarding the recruitment of workers. The offer was accepted, and on 7 October 1939 a Labour Recruitment Office (Arbeitseinsatz 1) was established at 18 Pomorska Street – later at 10 Poludniowa Street. The office delivered contingents of labourers to the occupation authorities. At first the authorities were supplied with 600 workers a day – later the number increased to 2,000 a day. Jews received no payment for their work, although they were forced to do the heaviest labour. Until the ghetto was sealed off, tens of thousands suffered the degradations accompanying forced labour. From the very beginning of the occupation the Nazi authorities used terror against the Jewish population. Jewish politicians, social activists and intellectuals were seized according to lists prepared in advance, and imprisoned in a concentration camp created without delay in Glaser's factory in Radogoszcz. They were tortured and subsequently shot or transported to the Dachau and Mauthausen concentration camps. On 2 November 1939 the Germans executed in Lagiewniki Woods 15 men arrested a day earlier in the Astoria Café. Many others were savagely beaten and tortured. On 10 November, two Poles and a Jew named Radner were hanged in public; their bodies were left hanging for several days. On the same night all four great synagogues of Lodz were blown up and burned, and on 11 November the Jewish Kehillah premises were surrounded and nearly all members of the Council of Elders were arrested. Of the 30, only 6 councillors were released – the rest were tortured and shot in Lagiewniki Woods. Simultaneously with this organised terror, numerous individual murders and "spontaneous" pogroms of the Jews took place. One pogrom took place on 8 October 1939, carried out by the local Germans on the occasion of Joseph Goebbels' visit to Lodz. In September 1939 thousands of Lodz Jews decided to become refugees; some managed to escape to Russia, others, especially the wealthy, to neutral countries. Many Jews fled to the General-Government. On 12 December 1939 the occupation forces commenced the deportations to the General-Government, to fulfil the Nazi plans of removing Jews and Poles from the territories annexed into the Reich. These actions were carried out with extreme cruelty and the pillaging of property. Many Jews were shot and many froze to death; according to estimates, more than 71,000 Jews either left or were deported from Lodz during the first few months of the occupation.
The Beginning of the "Closed District"
The policy of building ghettos was set out in a secret memorandum of 21 September 1939 from Reinhard Heydrich to the commanders of the SD Einsatzgruppen and other central offices of the Third Reich. The letter contained general instructions on solving the "Jewish question" in two main stages.
The first stage was the concentration of all Jews in designated areas; the second stage was to be their total annihilation, camouflaged under the term "final goal" (Endziel). Rumours regarding the creation of a ghetto had spread through the city already at the end of September or the beginning of October 1939. The official position was detailed in a confidential circular from Friedrich Übelhör, who wrote on 10 December 1939 that, since a complete evacuation of Jews from Lodz was considered impractical at that moment in time, a ghetto would be established. The ghetto would be located in the northern part of the city, and this was settled by the beginning of February 1940. The order to establish an isolated district for Jews was announced by the Chief of Police Johann Schäfer in the Lodscher Zeitung on 8 February 1940. The notice contained a rough map of the ghetto and a detailed plan for resettling Jews from other districts. The ghetto was located in the most neglected part of northern Lodz – Baluty and the Old Town – with an area of 4.13 km². When the ghetto was first established, the streets that formed its boundaries were as follows: Goplanska – Zurawia – Wspolna – Stefana – Okopowa – Czarnieckiego – Sukiennicza – Marysinska – Inflancka – along the Jewish cemetery walls, and then Bracka – Przemyslowa – Srodkowa – Glowackiego – Brzezinska – Oblegorska – Chlodna – Smugowa – Nad Lodka – Stodolniana – Podrzeczna – Drewnowska – Majowa – Wrzesnienska – Piwna – Urzednicza – to Zgierska and Goplanska. A year later, in May 1941, the German authorities separated the area bounded by Drewnowska, Majowa and Jeneralska Streets from the ghetto. As a result the ghetto border was moved eastwards and the ghetto area shrank to 3.82 km². Excluded from the ghetto area were Nowomiejska – Zgierska and Limanowskiego Streets, cutting the ghetto into three parts. At first the traffic between them was directed through specially built gates, which were opened at certain hours. However, traffic moved very slowly through the gates, and in the summer of 1940 three wooden bridges for pedestrians were erected: over Zgierska Street near Podrzeczna Street and Lutomierska Street, and over Limanowskiego Street near Masarska Street. The final enclosure of the ghetto and its total isolation from the other parts of the city took place on 30 April 1940. Barriers and barbed wire entanglements were placed around the ghetto and along its two main isolated arteries – Nowomiejska and B. Limanowskiego. Telephones in the ghetto could be used only by administration officers, and only for official matters. On 13 July 1940 the conditions of mail exchange between the ghetto and the outside world were established: only postcards were allowed, which had to be written clearly and in German, and only personal news could be mentioned. Unofficial contacts between the Jews and the "Aryan" inhabitants of Lodz were made even more difficult by the fact that the city had a German minority of about 70,000, loyal to the new authorities. Houses next to the ghetto were demolished. Baluty and the Old Town had no sewage system, so it was not possible to get out of the ghetto through sewage canals. These factors made the Lodz ghetto "tighter" and easier to guard than the other ghettos in the General-Government, with tragic consequences for its inhabitants, since smuggling food and medicines into the ghetto was virtually impossible.
According to official records from 12 June 1940, 160,320 Jews were enclosed in the ghetto, of whom 153,849 were former inhabitants of Lodz and 6,471 came from the Warthegau area. A year and a half later, between 17 October and 4 November 1941, 19,722 Jews from Austria, Czechoslovakia, Luxemburg and Germany were deported to Lodz. In the period between 7 December 1941 and 28 August 1942, 17,826 Jews from the liquidated provincial ghettos in the Warthegau (Wloclawek, Glowno, Ozorkow, Strykow, Lask, Pabiance, Wielun, Sieradz, Zdunska Wola) were deposited in the ghetto. In sum, more than 200,000 Jews from the Warthegau and Western Europe went through the Lodz ghetto. Two isolated camps – a Zigeunerlager for Gypsies and a Polenjugendverwahrlager for Polish youths – were placed within the ghetto area. The 5,007 Gypsies from the Austrian-Hungarian border region (Burgenland) were transported to Lodz between 5 and 9 November 1941. A camp was established for them in the quarter of Brzezinska, Towianskiego, Starosikawska and Glowackiego Streets. Sanitary conditions were tragic, resulting in approximately 600 deaths from typhus and other diseases. The Gypsy camp existed until 16 January 1942, when its inhabitants were transported to the death camp at Chelmno on the Ner. The camp for Polish youth and children, located at Przemyslowa Street, began functioning on 1 December 1942. Inmates were children between the ages of 8 and 16 whose parents were in camps or prisons, children from orphanages and educational institutions, and homeless children. A large group of prisoners were minors accused of co-operation with the resistance movement, illegal trade, refusal to work and petty theft. In the years 1943/44 the camp held 1,086 boys and about 250 girls. Conditions in the camp were unhealthy, weakening the young inmates. The food rations were below starvation level, as in concentration camps. All children over the age of 8 were forced to work 10-12 hours a day. Many children died of hunger, disease, and from beatings administered by the German guards. The camp existed until the liberation of the city.
German Police in the Ghetto
The Gestapo had the leading role in supervising the ghetto and in the subsequent liquidation of its inhabitants. From the end of 1941 the Gestapo functioned as a unit carrying out the orders of the Reich Main Security Office (RSHA) regarding the final solution of the "Jewish question". The Gestapo post in the ghetto was opened at the end of April 1940, in two rooms of a building at the corner of Limanowskiego and Zgierska Streets occupied by the 6th Schupo District. Its personnel consisted of officers from the Jewish Department of the Lodz Gestapo, marked with the symbol IIB4, later IVB4. The Gestapo post participated with the criminal police station of the ghetto in pillaging Jewish property. The police station was located in a parish building at 8 and 10 Koscielna Street, known in the ghetto as the "Red House". In the first period of the ghetto's existence the criminal police was to fight smuggling and the black market; later its basic task was to discover and confiscate the ghetto inhabitants' belongings. The police station – the Red House – evoked terrible fear among the Jews. In its cellars, information about hidden property was obtained with the use of sophisticated torture. Many of the tortured people died; others were left crippled. The 6th Schupo District was also located in the ghetto, in the same reinforced building occupied by the Gestapo at the corner of Limanowskiego and Zgierska Streets.
Schupo posts located every 50 to 100 metres guarded the ghetto borders. Additionally, at strategic points sentry boxes were located at street intersections: I – Drewnowska and Zachodnia, II – Kilinskiego and Smugowa, III – Sporna and Boya, IV – Inflancka and Zagajnikowa, V – Okopowa and Franciszkanska. Soldiers from special battalions stood guard. The main task of the Schupo was to keep order within the ghetto. The policemen, together with Gestapo officers, searched new arrivals in the ghetto and confiscated their belongings, especially jewellery and hard currency. The members of the Schupo were particularly brutal and used their weapons frequently. Administratively, the ghetto was subordinated to the City Board. At first the Mayor, Karol Marder, separated a branch office from the City Board's Economy and Food Supplies Department. It was located at 11 Cegielniana Street; its manager until 5 May 1940 was Johann Moldenhauer, after which that function was taken over by Hans Biebow, a commissioned merchant from Bremen. On 10 October 1940 the branch office was changed into an independent department of the City Board under the name Gettoverwaltung (the Ghetto Management). The office reported directly to Oberbürgermeister Werner Ventzki. At first, the main task of the Gettoverwaltung was to supply the ghetto with food and medicines and to administer financial transactions between the Jewish district and the city. After October 1940 the office managed the process of transforming the ghetto into a labour camp, taking over its inhabitants' property and supervising the exploitation of the labour force. Beginning in January 1942 Hans Biebow and his deputies Joseph Hammerle and Wilhelm Ribbe participated in the selection of people brought from the liquidated provincial ghettos and in the deportations from the Lodz ghetto to the death camps. Hans Biebow soon came to be appreciated by the central and Warthegau authorities. He distinguished himself through his murderous exploitation of the ghetto's labour force and by imposing the idea of the financial self-sufficiency of the ghetto. Using his many years of business experience, Hans Biebow made himself useful and widely accessible to the local bosses as a dispatcher of plundered Jewish goods. Biebow sold valuable objects at considerably lowered prices and sent gifts to dignitaries such as Artur Greiser. These activities enlarged the circle of his protectors and allowed him to exercise almost absolute authority over the Lodz ghetto. The ghetto management team grew rapidly from only 24 employees in May 1940 to 216 clerks, officials and workers by the middle of 1942. The Jewish administrative body – the so-called Ältestenrat [Council of Elders] – reported to the Gettoverwaltung officials. It was particularly well developed in Lodz, because the ghetto economy was extremely centralised. In contrast with other ghettos in Poland, private enterprises were not allowed in the Lodz ghetto. The head of the Jewish administration carried the title of the Eldest of the Jews in Litzmannstadt Ghetto. Note: Litzmannstadt was the name the Germans gave to "now German" Lodz. The occupation authorities appointed Chaim Mordechaj Rumkowski to the position on 13 September 1939. The Council of Elders appointed by him was to perform the role of an advisory body, but in reality played no role at all. Rumkowski became the main go-between for the Jewish administration and the German authorities.
The Germans forced him to co-operate in transforming the ghetto into a labour camp and in stealing its inhabitants' belongings, using blackmail threats of further reductions in the food supplies for the ghetto. Rumkowski was given a great deal of independence in the inner organisation of the ghetto. He had authority in police and judicial matters, including the right to arrest and send people to prison. He managed the economy and the administration. He was also entitled to create new offices, departments, labour workshops and various agencies. The structure of the Jewish administration in the Lodz ghetto differed from the usual administration systems. The administration consisted of departments, headquarters, workshops and various posts, which constituted independent agencies with varying degrees of competence. Some were liquidated, some were created or reactivated, and some were transformed in response to actual needs. In the years 1940–1944 the administration numbered from 27 to 32 agencies, with 13 to 14 thousand employees. Chaim Rumkowski managed this complex structure through the Central Secretariat, commonly called "The Headquarters", which was located, as were most of the important agencies, in buildings and barracks at Balucki Rynek (Baluty Marketplace). The Rynek was separated from the rest of the ghetto and provided the place of contact with the German management of the ghetto. All supplies were delivered and unloaded at this location. From there, stolen Jewish belongings and part of the labour workshops' production were transported out. Only persons with special passes and those employed there were allowed to enter the square. The "Headquarters" opened on 7 May 1940. Rumkowski was assisted by Dora Fuchs, who exercised her duties without interruption until the liquidation of the ghetto in August 1944. Apart from handling current correspondence among the ghetto departments and workshops and with the authorities, the Headquarters had additional functions: it gathered reports on the activity of each agency; completed and kept Rumkowski's orders and circulars; was instrumental in handing over the stolen Jewish belongings to the Gettoverwaltung; and dealt with all cemetery matters. Several departments reported to the Headquarters. Most important was the Presidential Department, also called the Presidential Secretariat, originally located at 4 Plac Koscielny and subsequently at 1 Dworska Street. The Department prepared reports, edited and published Rumkowski's circulars and orders, distributed passes and food coupons, and issued identity cards. Other important organisational units of the Headquarters included the Personnel Department at 4 Dworska Street, dealing with employment issues, the Main Treasury at 4 Plac Koscielny, and Central Book-Keeping at 1 Dworska Street. From 1 October 1940 Balucki Rynek was also the location of the Central Office of Labour Workshops, directed by commissioner Aron Jakubowicz. This unit managed all production matters in the ghetto and employment in the workshops. Specifically, the office acted as an intermediary between the German management of the ghetto and the workshops when carrying out orders. More than 90% of production was on orders from state authorities – mainly military, police and paramilitary organisations.
The largest consumers of uniforms, coats, sheepskin coats, warm jackets, caps, shoes, straw boots, knapsacks, rucksacks etc. were the Wehrmacht-Beschaffungsamt in Berlin, the Heeresbekleidungsamt in Berlin and its branch office in Poznan, the Marinebekleidungsamt in Kiel and Wilhelmshaven, the Polizeibekleidungsamt in Poznan and the Poznan branch of Organisation Todt. Ghetto production was supervised by the Armed Inspection of the XXI Military District in Poznan and the Arms Headquarters in Lodz. Approximately 10% of the ghetto's production was ordered by well-known department stores and private enterprises, which were the main consumers of underwear, clothing, footwear, bags, textiles, furniture, lamps, lampshades etc. State and private orders in 1944 were completed by 114 factories and workshops, which employed nearly 70,000 workers. A profit of 2.2 million Reichsmarks was realised from these goods. The largest producers and their premises were as follows:
· Tailoring workshops – headquarters at 45 Lagiewnicka Street
· Shoemaking workshop – 27 Franciszkanska Street
· Straw Products workshop – 79 Brzezinska Street
· Metal Products workshop – 63 Lagiewnicka Street
· Carpentry workshop – 12 Drukarska Street
· Rubber Products workshop – 9 Zgierska Street
· Textile workshop – 77 Drewnowska Street
The Health Department
This department was created as early as 16 October 1939; its location in the ghetto was the building of the National Health Service Hospital at 34/36 Lagiewnicka Street. The Health Department was directed in succession by Dr Dawid Helman, Dr Leon Szykier and Dr Wiktor Millier. Thanks to their efforts, and with considerable help from a large group of doctors, they managed to organise an efficient health service. At the height of its development the ghetto had 7 hospitals, 7 pharmacies, 4 clinics, 2 emergency stations, 2 preventative clinics for children and 2 nursing homes for the elderly. 120 doctors worked in the hospitals. The hospitals functioned until mid-September 1942 and contained in total 2,600 places for patients. About 10 to 12 thousand patients went through the hospitals per month, and the four clinics served about 6,000 patients a month. The Jewish health service constantly fought epidemics and gave medical help to tens of thousands of Jews, to the hundreds of Gypsies imprisoned in the camp within the ghetto, and to the Polish children from the camp at Przemyslowa Street. The results of the health service's work were undermined by the lack of food and medicines. The ghetto hospitals were liquidated on 15 September 1942, after the so-called Allgemeine Gehsperre – the patients were either murdered in the hospitals or deported to the death camp at Chelmno, and the hospitals were turned into workshops. At the end of 1942, with the consent of the occupation authorities, two new hospitals were created – a surgical one at 7 Mickiewicza Street and an isolation hospital at 74 Dworska Street.
The Education Department
The Education Department was established by Chaim M. Rumkowski in October 1939 at 27 Franciszkanska Street. Elijahu Tabaksblat was appointed director, and in a short time a number of primary and secondary schools were created. In the year 1940/1941 the Lodz ghetto had 36 primary schools, 4 religious schools, 2 special schools, 2 grammar schools and one music school. The schools were attended by about 14,800 pupils. All schools had their own canteens and health care, which was free for poor children.
The Education Department also organised two-week summer camps at Marysin for 10,000 children and youths. In October 1941 the Education Department and the schools ceased to function, when 20,000 Jews from Western Europe were resettled in the ghetto. The school buildings were taken over to house Jews from Prague, Vienna, Berlin, Luxemburg and Frankfurt. From then on the education of children and youths was conducted partly in secret and in a limited form within the official programme of professional training for workers.
The Supplies Department
The Supplies Department was located at Balucki Rynek and began work in May 1940 under the management of H. Szczesliwy. The main tasks of this department included the distribution of food, fuel and medicines received from the German authorities, administered through a coupon system. The department was divided into 7 sections. The Kitchen Section and the Food Coupon Section tried to fight starvation in the ghetto by establishing 462 cheap public canteens and, from January 1941, 73 municipal and communal canteens. These measures managed to keep many of the ghetto inhabitants alive despite a constant lack of food. Thanks to the rationed distribution of food, the rate of deaths caused by starvation was 50% lower in Lodz than in the Warsaw ghetto, which was much wealthier and supported by smuggling.
The Accommodation Department
The Accommodation Department was established on 26 January 1940. It was initially managed by Henryk Naftalin; later its head was Baruch Praszkier. The department was located at 11 Lutomierska Street, later at 10/12 Rybna Street. It dealt with allocating flats to people resettled in the ghetto.
The Registration Department
The Registration Department at 4 Miodowa Street – later 4/6 Plac Koscielny – functioned from 10 May 1940. After January 1941 the office was combined with the Registry and the Statistics Department, managed by Henryk Naftalin. The office's task was registering the inhabitants and keeping up-to-date records and a current index of accommodation and addresses. Later, in 1941, the office began keeping record books of births, marriages and deaths, and statistics of population movements within the ghetto. An additional function of the department was to provide the addresses of ghetto inhabitants to the ghetto administration and to the German authorities.
The Rabbinical College
The Rabbinical College, under the leadership of Rabbi Szlomo Trajstman, functioned until mid-September 1942, when the office was closed by the occupation authorities. The rabbis were then sent to work in various administration agencies. The College was located in a building at 4 Plac Koscielny. Rabbinical decisions on religious matters and rituals such as marriages, divorces and circumcisions were among the responsibilities of the College. On 17 July 1940 Rumkowski designated the Rabbinical College a civil institution in order for it to co-operate with the Registration Department.
The Bank of Issue
The Bank of Issue was established on 26 June 1940, as ordered by Friedrich Übelhör. Pinchas Gierszowski was appointed managing director. The bank was located at 71 Marysinska Street, and a branch opened on 8 July 1940 at 56 Limanowskiego Street. The Bank printed and supervised the distribution of the ghetto currency – Markquittungen – commonly called "rumki" after Rumkowski. Rumki became the only legal tender in the Jewish quarter; this was another measure intended to block illegal trade with the "Aryan" part of the city.
The Bank also controlled the rates of exchange for German and foreign currency and ghetto notes.
The Bank for Purchasing Valuable Objects and Clothing
The Bank for Purchasing Valuable Objects and Clothing was established on 12 August 1940. It was located at 7 Ciesielska Street and its president was L. Szyffer. The bank purchased German marks, foreign currency, gold coins, jewellery, carpets, fur coats, clothes, postage stamps, collections and paintings from the ghetto inhabitants.
The Post Office
The post office in the ghetto opened on 15 March 1940 at 4/6 Plac Koscielny. Three weeks later, on 4 April, an additional branch and a parcel section began functioning at 1 Dworska Street. The mail exchange took place in a special barrack at Balucki Rynek. The ghetto mail service was managed, in succession, by Herbert Grawe, Maurycy Goldblum, Jakub Dawidowicz and Mosze Gumener. Despite considerable limitations and censorship, the postal service provided the opportunity for contact between the ghetto and the outside world. Through its doors went correspondence, parcels and postal orders from occupied Poland and from abroad. The post office also carried out all deliveries of private and official correspondence within the ghetto, using its own postage stamps worth 5, 10 and 20 ghetto marks, valid only for the internal service.
The Ghetto Trams Department
The Ghetto Trams Department was created in August 1941 in cooperation with the Municipal Tramways Enterprise. Wladyslaw Dawidowicz and Leopold Szreter were appointed as directors. In May 1943 the department merged with the Transportation Department, located at 7 Dworska Street under the direction of Marian Kleiman. Between October 1941 and March 1942 a tram line was built from Brzezinska Street through Marysinska and Jagiellonska Streets to the railway ramp at Lodz-Radogoszcz. Simultaneously, several side tracks were built from Franciszkanska to Jakuba Street and at 1/3, 32, 45 and 63 Lagiewnicka Street, which connected the most important labour workshops and the railway station at Radogoszcz. The trams in the ghetto were used initially for the transportation of goods, raw materials, food and fuel. The first passenger cars saw service in the second half of 1943, transporting people working at the Marysin labour workshops between the hours of 06:00-07:30 and 18:30-19:00.
The Welfare Department
The Welfare Department at 20 Dworska Street and the Community Centre at 3 Krawiecka Street are also worth mentioning. The latter performed an important role in keeping up the spirits of the ghetto's inhabitants. Music concerts were performed by a symphonic orchestra under the direction of the famous conductor Teodor Ryder. The popular music programmes and the amateur revue theatre were directed by Dawid Bajgelman; Bronislawa Rotsztajtowna, a famous violinist, and the tenor Nikodem Sztajman held recitals there. An important role in fighting crime and ensuring peace and order in the Lodz Ghetto was played by the High Chamber of Control, the Jewish police and the administration of justice.
The High Chamber of Control
The High Chamber of Control was established on 6 November 1940. Its first location was a building at 1 Dworska Street, later at 25 Lagiewnicka Street. It was directed by a presidium of 4 members, with Jozef Rumlowski as chairman. The Chamber was charged with fighting offences and crimes of all kinds, and with the prevention of potential corrupt practices.
As such, the Chamber controlled the activities of all departments, administration agencies and associations, and was entitled to remove clerks from their posts and to search people, their houses and their offices. The Chamber also had the right of temporary arrest of suspects until a decision on their cases was made by the Head of the Council of Elders or by the court. The Chamber was liquidated on 12 November 1942.
Courts and the Public Prosecutor's Office
Chaim Rumkowski established this office on 18 September 1940. It was headed by Stanislaw Jackobson and located at 20 and 22 Gnieznienska Street. Sentences were passed according to the rules of civil and criminal procedure enacted by the ghetto lawyers. On 11 March 1941 Rumkowski appointed the Summary Court, which was located at 27 Franciszkanska Street. Rumkowski also empowered himself to pass sentences without the participation of the prosecutor and the defence. The sentence was passed by a judge in the presence of two assessors, without a right of appeal. A juvenile court operated at 13 Lutomierska Street. Persons sentenced by the court were transported to the Central Prison, located in a block of buildings at 6-18 Czarneckiego Street. During the resettlement period the prison functioned as a transit camp, from which people were transported under escort to the railway ramp at Radogoszcz.
The Jewish Police
The Jewish Police (Ordnungsdienst) was created by Chaim Rumkowski on 1 May 1940. Its headquarters was located at 1 Lutomierska Street and its commander was Leon Rozenblat. The ghetto police, which numbered about 1,200 men in 1943, was charged with keeping order in the ghetto, fighting the black market and taking part in the deportations of the ghetto inhabitants to the death camps. The area of the ghetto was divided into five districts with police stations as follows:
· I – 27 Franciszkanska Street
· II – 56 Aleksandrowska Street
· III – 61 Lagiewnicka Street
· IV – 69 Marysinska Street
· V – 36 Zagajnikowa Street
The ghetto also had a Special Section of the Jewish police, with headquarters at 96 Lagiewnicka Street, which assisted the Germans in pillaging Jewish property. The commanders of the Special Section were Dawid Gertler and Marek Kligier.
Starvation and Diseases
The most grievous form of the indirect extermination of the ghetto inhabitants was starvation. In 1940 the daily caloric ration in the ghetto was equal to that for regular prisoners – about 1,800 kcal. By mid-1942 the ration fluctuated around 600 kcal. The caloric deficit increased from 40% to 80%, which in practice meant a feeling of constant hunger and subsequent wasting away, famine tremors and death from starvation. The death rate due to starvation increased in geometric progression. In 1940 starvation was the cause of only 206 deaths, but in 1942, 2,811 people died of starvation – more than a thirteen-fold increase. In subsequent years starvation was still the direct cause of 18% of all deaths. The general weakening of the body caused by starvation added to the danger of contracting diseases. Tuberculosis increased rapidly due to the lack of vitamins and malnutrition. Officially, TB was diagnosed in 20% of the population, but as many as 60% of the ghetto inhabitants may have been infected. In 1940, 589 persons died of tuberculosis; in 1942 this had risen to 2,182 persons. In the final days of the ghetto in 1944 tuberculosis was the cause of 39% of all deaths. Altogether 7,269 persons died of TB in the Lodz ghetto. Other contagious diseases were widespread as well.
An epidemic of dysentery broke out in 1940, killing 1,117 of the several thousand people infected. As many as 6,431 ghetto inhabitants contracted typhus; 320 died. The proportion of patients with diphtheria, diarrhoea, scarlet fever, trachoma, and meningitis was three times higher than in the years between the wars. Starvation, heavy labour, an uncertain future and constant fear of deportation had a great impact on the increase in the number of heart and circulatory system diseases. These were the most frequent cause of death until 1942; in mid-1942, 3,066 persons died from cardiovascular disorders. The death rate was particularly high among Jews from Western Europe. Often elderly and formerly wealthy, these people quickly lost their strength and health under the extremely hard conditions in the ghetto. During only seven months of their stay in the ghetto, between October 1941 and May 1942, 3,418 Western Jews died. The number of births decreased considerably in the ghetto. Many children were stillborn. Within the entire period of the ghetto's existence only 2,306 children were born, fewer than in any single year before the war.

Number of the deceased in the Lodz Ghetto 1940-1944

Terror and Extermination

Ten days after the ghetto had been sealed off, a draconian directive was enacted by the police chief of Lodz on 10 May 1941. From that day, the use of weapons without warning was permitted against any Jew trying to leave the ghetto area. That instruction became the basis for the "legal" hunting of Jews who approached the ghetto fence. Numerous eyewitness accounts report that people were occasionally shot for "sport" or out of boredom. Other incidents involved the use of weapons in the context of the so-called control of window black-outs: the windows of Jewish flats were shot at directly, supposedly as a punishment for a light being visible from the street. Another common event was the "hunting of humans" organised by German officers – the shooting of casual passers-by without any provocation. Numerous such shootings are recorded in the statistical reports in the files of the Head of the Council of Elders in the Lodz ghetto and in the periodical reports of the criminal police.

The Nazis carried out public executions in order to intimidate the Jews. In 1942 two such executions took place at the Plac Bazarowy. On 21 February 1942 Maks Hertz, a fugitive from Cologne, was hanged. On 7 September 1942 the Nazis hanged 17 Jews deported in August 1942 from Pabianice. They were accused of resistance against the German authorities. In March 1940 and July 1941 the mentally ill from the hospital at Wesola Street were murdered, and many handicapped people fell victim to the Germans.

The Germans carried out the most destructive of their actions against the ghetto inhabitants during the Allgemeine Gehsperre between 5-12 September 1942. In a speech on 4 September 1942, Chairman Rumkowski announced "that by order of the authorities, about 25,000 Jews under the age of 10 and over 65 must be resettled out of the ghetto". The children, the elderly and the sick from the hospitals were loaded into 5-ton trucks which transported them to a railroad station outside Lodz. The actual number deported was lower than the figure quoted in Rumkowski's speech: 15,681 children and elderly people were deported from the Lodz ghetto. The victims were loaded onto railroad cars and sent via Kutno to Kolo, from where they were taken to the death camp at Chelmno and gassed in specially constructed, stationary gas-vans.
The genocide of the Jews from Lodz began on 16 January 1942 with the first transport of Jews sent to the death camp in Chelmno. The action lasted, with breaks, until 15 May 1942. 57,064 Jews from the Lodz ghetto, including 10,943 Jews from Western Europe, were gassed at the Chelmno camp. The Jewish victims were described by the Nazis as "dole-takers", traders and criminals. By the autumn of 1942, 72,745 people defined as a "dispensable non-working element" had been exterminated – and thus the ghetto had been transformed into a labour camp.

Deportations to the Chelmno death camp began anew in the middle of June 1944. The liquidation of the ghetto was ordered by Heinrich Himmler, supposedly as early as the beginning of May 1944. Albert Speer, who was interested in war production in the ghetto, argued against its destruction, but he was not supported by Artur Greiser, and the liquidation went ahead. Between 23 June 1944 and 14 July 1944, ten transports with 7,196 people departed from Lodz to the death camp at Chelmno. Supposedly as a result of Albert Speer's intervention with Adolf Hitler, the liquidation action was halted on 15 July 1944.

When the Warsaw uprising erupted on 1 August 1944, Chaim Rumkowski was notified of the resumption of the evacuation of the Jewish people to the 'Altreich'. The appeals of Rumkowski and the German authorities for volunteers willing to go to the Reich went unheeded – only a few dozen people went to the gathering points. The German authorities began blocking the streets and organising round-ups; this action lasted 20 days. On 29 August 1944 the last transport of Jews departed from Lodz to the concentration camp at Auschwitz. Most of the deported were murdered in the gas chambers of Birkenau. The Lodz ghetto, which still numbered over 72,000 inhabitants in August 1944, ceased to exist. Only a cleaning commando and a handful of people in hiding, numbering 20-30 people, remained.

About 600 people were kept for a short time in collection camps at 36 and 63 Lagiewnicka Street. Those people included Aron Jakubowicz and Marek Kligier with their families, a large group of physicians, engineers, artisans, and employees from Balucki Rynek. These people were selected by Hans Biebow himself to be sent to labour camps in Konigswustenhausen near Berlin and to factories in Dresden. Chaim Mordechai Rumkowski and his family were transported to Auschwitz-Birkenau on 28 August 1944 and murdered.

Liquidation of the Jewish Population – Lodz Ghetto

* Note – About 72,000 Jews were deported to Auschwitz-Birkenau at that time. From that number 5,000 to 7,000 survived, according to Szmuel Krakowski.

The Lodz Ghetto 1940-1944 – Vademecum – Archiwum Panstwowe W Lodz & Bilbo – Lodz 1999
The Chronicle of the Lodz Ghetto 1941-1944, edited by Lucjan Dobroszycki, Yale University Press, New Haven and London, 1984

Dedicated to Arek Hersh (Herszlikowicz): "Born in Sieradz, Poland in 1928. At the age of 11 he was sent to a slave labour camp at Otoczno. Arek was liberated in Theresienstadt on 8 May 1945."
http://www.holocaustresearchproject.net/ghettos/Lodz/lodzghetto.html
13
29
Water scarcity involves water stress, water shortage or deficits, and water crisis. The concept of water stress is relatively new. Water stress is the difficulty of obtaining sources of fresh water for use, because of depleting resources. A water crisis is a situation where the available potable, unpolluted water within a region is less than that region's demand.

Some have presented maps showing the physical existence of water in nature to show nations with lower or higher volumes of water available for use. Others have related water availability to population. A popular approach has been to rank countries according to the amount of annual water resources available per person. For example, according to the Falkenmark Water Stress Indicator, a country or region is said to experience "water stress" when annual water supplies drop below 1,700 cubic metres per person per year. At levels between 1,700 and 1,000 cubic metres per person per year, periodic or limited water shortages can be expected. When water supplies drop below 1,000 cubic metres per person per year, the country faces "water scarcity". The United Nations' FAO states that by 2025, 1.9 billion people will be living in countries or regions with absolute water scarcity, and two-thirds of the world population could be under stress conditions. The World Bank adds that climate change could profoundly alter future patterns of both water availability and use, thereby increasing levels of water stress and insecurity, both at the global scale and in sectors that depend on water.

Another measurement, calculated as part of a wider assessment of water management in 2007, aimed to relate water availability to how the resource was actually used. It therefore divided water scarcity into 'physical' and 'economic' scarcity. Physical water scarcity is where there is not enough water to meet all demands, including that needed for ecosystems to function effectively. Arid regions frequently suffer from physical water scarcity. It also occurs where water seems abundant but where resources are over-committed, such as when there is overdevelopment of hydraulic infrastructure for irrigation. Symptoms of physical water scarcity include environmental degradation and declining groundwater; water stress harms living things because every organism needs water to live. Economic water scarcity, meanwhile, is caused by a lack of investment in water or insufficient human capacity to satisfy the demand for water. Symptoms of economic water scarcity include a lack of infrastructure, with people often having to fetch water from rivers for domestic and agricultural uses. Large parts of Africa suffer from economic water scarcity; developing water infrastructure in those areas could therefore help to reduce poverty. Critical conditions often arise for economically poor and politically weak communities living in already dry environments. India also suffers from serious water scarcity: in many villages and small towns, people – especially the poor – must travel long distances, sometimes as much as 40-50 km from their homes, to collect water from wells for domestic use, which makes daily life very difficult.
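To make the Falkenmark thresholds quoted above concrete, here is a minimal, illustrative Python sketch (not part of the original article) that classifies a region by its annual renewable freshwater per person. The function name and the sample figures are hypothetical; the 500 m³ cut-off for "absolute" scarcity is the commonly cited extension of the indicator and is not stated in the text above.

```python
def falkenmark_category(total_renewable_water_m3: float, population: int) -> str:
    """Classify water availability using Falkenmark indicator thresholds.

    total_renewable_water_m3: annual renewable freshwater available, in cubic metres
    population: number of people sharing that supply
    """
    per_capita = total_renewable_water_m3 / population  # m^3 per person per year
    if per_capita >= 1700:
        return "no stress"
    elif per_capita >= 1000:
        return "water stress"            # periodic or limited shortages expected
    elif per_capita >= 500:
        return "water scarcity"
    else:
        return "absolute water scarcity"  # commonly cited extension, assumed here


# Hypothetical example: 50 billion m^3 shared by 40 million people
print(falkenmark_category(50e9, 40_000_000))  # -> 'water stress' (1,250 m^3/person)
```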
Fifty years ago, when there were fewer than half the current number of people on the planet, the common perception was that water was an infinite resource. People were not as wealthy then as they are today, consumed fewer calories and ate less meat, so less water was needed to produce their food. They required a third of the volume of water we presently take from rivers. Today, the competition for water resources is much more intense. This is because there are now over seven billion people on the planet, their consumption of water-thirsty meat and vegetables is rising, and there is increasing competition for water from industry, urbanisation and biofuel crops. The total amount of available freshwater supply is also decreasing because of climate change, which has caused receding glaciers, reduced stream and river flow, and shrinking lakes. Many aquifers have been over-pumped and are not recharging quickly. Although the total fresh water supply is not used up, much has become polluted, salted, unsuitable or otherwise unavailable for drinking, industry and agriculture. To avoid a global water crisis, farmers will have to strive to increase productivity to meet growing demands for food, while industry and cities find ways to use water more efficiently.

The New York Times article "Southeast Drought Study Ties Water Shortage to Population, Not Global Warming" summarises the findings of Columbia University researchers on the droughts in the American Southeast between 2005 and 2007, published in the Journal of Climate. They say the water shortages resulted from population size more than from rainfall. Census figures show that Georgia's population rose from 6.48 to 9.54 million between 1990 and 2007. After studying data from weather instruments, computer models and measurements of tree rings, which reflect rainfall, the researchers found that the droughts were not unprecedented and resulted from normal climate patterns and random weather events. "Similar droughts unfolded over the last thousand years", the researchers wrote. Regardless of climate change, they added, similar weather patterns can be expected regularly in the future, with similar results. As the temperature increases, rainfall in the Southeast will increase, but because of evaporation the area may get even drier. The researchers concluded that rainfall arises from complicated internal processes in the atmosphere and is very hard to predict because of the large number of variables involved.

When there is not enough potable water for a given need, the threat of a water crisis is realised. The United Nations and other world organisations consider a variety of regions to have water crises of global concern. Other organisations, such as the Food and Agriculture Organisation, argue that there is no water crisis in such places, but that steps must still be taken to avoid one.

There are several principal manifestations of the water crisis.
- Inadequate access to safe drinking water for about 884 million people
- Inadequate access to water for sanitation and waste disposal for 2.5 billion people
- Groundwater overdrafting (excessive use) leading to diminished agricultural yields
- Overuse and pollution of water resources harming biodiversity
- Regional conflicts over scarce water resources sometimes resulting in warfare

Waterborne diseases and the absence of sanitary domestic water are among the leading causes of death worldwide. For children under age five, waterborne diseases are the leading cause of death. At any given time, half of the world's hospital beds are occupied by patients suffering from waterborne diseases.
According to the World Bank, 88 percent of all waterborne diseases are caused by unsafe drinking water, inadequate sanitation and poor hygiene.

The balance underlying a safe water supply is tenuous, and controllable factors such as the management and distribution of the water supply itself contribute to further scarcity. A 2006 United Nations report focuses on issues of governance as the core of the water crisis, saying "There is enough water for everyone" and "Water insufficiency is often due to mismanagement, corruption, lack of appropriate institutions, bureaucratic inertia and a shortage of investment in both human capacity and physical infrastructure". Official data also shows a clear correlation between access to safe water and GDP per capita. It has also been claimed, primarily by economists, that the water situation has arisen because of a lack of property rights, government regulations and subsidies in the water sector, causing prices to be too low and consumption too high.

Vegetation and wildlife are fundamentally dependent upon adequate freshwater resources. Marshes, bogs and riparian zones are more obviously dependent upon a sustainable water supply, but forests and other upland ecosystems are equally at risk of significant productivity changes as water availability is diminished. In the case of wetlands, considerable area has simply been taken from wildlife use to feed and house the expanding human population. But other areas have suffered reduced productivity from the gradual diminishing of freshwater inflow, as upstream sources are diverted for human use. In seven states of the U.S., over 80 percent of all historic wetlands had been filled by the 1980s, when Congress acted to create a "no net loss" policy for wetlands. In Europe, extensive loss of wetlands has also occurred, with a resulting loss of biodiversity. For example, many bogs in Scotland have been developed or diminished through human population expansion; one example is the Portlethen Moss in Aberdeenshire.

On Madagascar's highland plateau, a massive transformation occurred that eliminated virtually all of the heavily forested vegetation in the period 1970 to 2000. Slash-and-burn agriculture eliminated about ten percent of the country's total native biomass and converted it to a barren wasteland. These effects were driven by overpopulation and the necessity of feeding poor indigenous peoples, and the adverse effects included widespread gully erosion, which in turn produced heavily silted rivers that "run red" decades after the deforestation. This eliminated a large amount of usable fresh water and also destroyed much of the riverine ecosystems of several large west-flowing rivers. Several fish species have been driven to the edge of extinction, and some ecosystems, such as the disturbed Tokios coral reef formations in the Indian Ocean, are effectively lost.

In October 2008, Peter Brabeck-Letmathe, chairman and former chief executive of Nestlé, warned that the production of biofuels will further deplete the world's water supply.

Overview of regions suffering crisis impacts

There are many other countries of the world that are severely impacted with regard to human health and inadequate drinking water.
According to the California Department of Water Resources, if more supplies aren't found by 2020, the region will face a shortfall nearly as great as the amount consumed today. Los Angeles is a coastal desert able to support at most 1 million people on its own water; the Los Angeles basin is now the core of a megacity that spans 220 miles (350 km) from Santa Barbara to the Mexican border. The region's population is expected to reach 41 million by 2020, up from 28 million in 2009. The population of California continues to grow by more than two million a year and is expected to reach 75 million in 2030, up from 49 million in 2009. But water shortage is likely to surface well before then.

Water deficits, which are already spurring heavy grain imports in numerous smaller countries, may soon do the same in larger countries, such as China and India. Water tables are falling in scores of countries (including northern China, the US, and India) owing to widespread overpumping with powerful diesel and electric pumps. Other countries affected include Pakistan, Iran, and Mexico. This will eventually lead to water scarcity and cutbacks in the grain harvest. Even with the overpumping of its aquifers, China is developing a grain deficit. When this happens, it will almost certainly drive grain prices upward. Most of the 3 billion people projected to be added worldwide by mid-century will be born in countries already experiencing water shortages. Unless population growth can be slowed quickly, it is feared that there may not be a practical non-violent or humane solution to the emerging world water shortage.

After China and India, there is a second tier of smaller countries with large water deficits – Algeria, Egypt, Iran, Mexico, and Pakistan. Four of these already import a large share of their grain; the remaining one, with a population expanding by 4 million a year, will likely also soon turn to the world market for grain.

According to a UN climate report, the Himalayan glaciers that are the sources of Asia's biggest rivers – the Ganges, Indus, Brahmaputra, Yangtze, Mekong, Salween and Yellow – could disappear by 2035 as temperatures rise. It was later revealed that the source used by the UN climate report actually stated 2350, not 2035. Approximately 2.4 billion people live in the drainage basins of the Himalayan rivers. India, China, Pakistan, Bangladesh, Nepal and Myanmar could experience floods followed by droughts in the coming decades. In India alone, the Ganges provides water for drinking and farming for more than 500 million people. The west coast of North America, which gets much of its water from glaciers in mountain ranges such as the Rocky Mountains and Sierra Nevada, would also be affected.

By far the largest part of Australia is desert or semi-arid land, commonly known as the outback. In June 2008 it became known that an expert panel had warned of long-term, possibly irreversible, severe ecological damage for the whole Murray-Darling basin if it did not receive sufficient water by October. Water restrictions are currently in place in many regions and cities of Australia in response to chronic shortages resulting from drought.
The Australian of the Year 2007, environmentalist Tim Flannery, predicted that unless it made drastic changes, Perth in Western Australia could become the world's first ghost metropolis, an abandoned city with no more water to sustain its population. However, as of September 2009, Western Australia's dams had reached 50% capacity for the first time since 2000, after heavy rains brought relief to the region. Nonetheless, the following year, 2010, Perth suffered its second-driest winter on record and the water corporation tightened water restrictions for spring.

Effects on climate

Aquifer drawdown or overdrafting and the pumping of fossil water increase the total amount of water within the hydrosphere that is subject to transpiration and evaporation processes, thereby causing an accretion in water vapour and cloud cover, the primary absorbers of infrared radiation in the earth's atmosphere. Adding water to the system has a forcing effect on the whole earth system, of which no accurate estimate has yet been quantified.

Construction of wastewater treatment plants and reduction of groundwater overdrafting appear to be obvious solutions to the worldwide problem; however, a deeper look reveals more fundamental issues in play. Wastewater treatment is highly capital intensive, restricting access to this technology in some regions; furthermore, the rapid increase in the population of many countries makes this a race that is difficult to win. As if those factors were not daunting enough, one must consider the enormous costs and skill sets involved in maintaining wastewater treatment plants even if they are successfully developed. Reduction in groundwater overdrafting is usually politically very unpopular and has major economic impacts on farmers; moreover, this strategy will necessarily reduce crop output, which is something the world can ill afford given the present population level.

At more realistic levels, developing countries can strive to achieve primary wastewater treatment or secure septic systems, and carefully analyse wastewater outfall design to minimise impacts on drinking water and on ecosystems. Developed countries can share technology better, including cost-effective wastewater and water treatment systems and expertise in hydrological transport modeling. At the individual level, people in developed countries can look inward and reduce the overconsumption that further strains worldwide water use. Both developed and developing countries can increase protection of ecosystems, especially wetlands and riparian zones. These measures will not only conserve biota, but also make more effective the natural water-cycle flushing and transport that keep water systems healthy for humans.

A range of local, low-tech solutions are being pursued by a number of companies. These efforts center around the use of solar power to distill water at temperatures slightly beneath that at which water boils. By developing the capability to purify any available water source, local business models could be built around the new technologies, accelerating their uptake.

Global experiences in managing water crisis

It is alleged that the likelihood of conflict rises if the rate of change within a basin exceeds the capacity of institutions to absorb that change.
Although the water crisis is closely related to regional tensions, history shows that acute conflicts over water have been far less common than instances of cooperation. The key lies in strong institutions and cooperation. The Indus River Commission and the Indus Waters Treaty survived two wars between India and Pakistan despite their hostility, proving to be a successful mechanism for resolving conflicts by providing a framework for consultation, inspection and exchange of data. The Mekong Committee has likewise functioned since 1957 and survived the Vietnam War. In contrast, regional instability results when institutions for regional collaboration are absent, as with Egypt's plan for a high dam on the Nile. There is currently no global institution in place for the management of trans-boundary water sources, and international co-operation has happened through ad hoc collaborations between agencies, like the Mekong Committee, which was formed through an alliance between UNICEF and the US Bureau of Reclamation. The formation of strong international institutions seems to be a way forward: they enable early intervention and management, preventing the costly dispute-resolution process.

One common feature of almost all resolved disputes is that the negotiations had a "need-based" rather than a "right-based" paradigm. Irrigable land, population, and the technicalities of projects define "needs". The success of a need-based paradigm is reflected in the only water agreement ever negotiated in the Jordan River Basin, which focuses on needs, not on the rights of riparians. In the Indian subcontinent, the irrigation requirements of Bangladesh determine water allocations of the Ganges River. A need-based, regional approach focuses on satisfying individuals' needs for water, ensuring that minimum quantitative needs are met. It removes the conflict that arises when countries view a treaty purely from a national-interest point of view, moving away from a zero-sum approach towards a positive-sum, integrative approach that equitably allocates the water and its benefits.

See also
- 1998 Klang Valley water crisis
- Arable land
- California Water Wars
- Chinese water crisis
- Consumptive water use
- Deficit irrigation
- Drought rhizogenesis
- Green Revolution
- Irrigation in viticulture
- Life Saver bottle
- Living Water International
- Ogallala Aquifer
- Peak water
- Seawater Greenhouse
- Spragg Bag
- Sustainable development in an urban water supply network
- Water conflict
- Water contamination
- Water footprint
- Water resource policy
- Water resources
- Water scarcity in Africa
- Water scarcity in India
- WaterPartners International
http://en.wikipedia.org/wiki/Water_crisis
13
27
The value of money can be divided into two pairs of concepts, as follows:
a. Nominal value and intrinsic value. Nominal (face) value is the value written on the currency itself. Intrinsic value is the value of the materials used to make the currency.
b. Internal value and external value. Internal value is the purchasing power of money: the quantity of goods or services that a unit of money can buy. Internal value is a real value, in that it can be measured by the number of goods that indicate the purchasing power of money. External value is the value of a currency as measured in foreign currencies, which is expressed as the exchange rate.

Currently most money is made from paper. Paper money has virtually no intrinsic value, but the public accepts it because of its confidence in the money itself. Since paper money circulates on the basis of trust, it is called fiduciary money.

The value of one country's currency differs from the value of another country's currency, so the currency of one country cannot be exchanged for the currency of another country in equal amounts. To exchange currencies, a foreign exchange market is needed. The foreign exchange rate is the price of the domestic currency expressed in a foreign currency, or the price of a foreign currency expressed in the domestic currency.

Some concepts related to changes in the value of money:
a. Inflation. If too much money is in circulation, the result is inflation: the value of money falls because the money supply is out of proportion to the flow of goods and services. If inflation cannot be controlled by the government, hyperinflation can occur.
b. Deflation. If the amount of money circulating in the community is smaller than the flow of goods and services, the result is deflation: the value of money rises and the prices of goods fall.
c. Devaluation. Devaluation is a government policy that reduces the currency's value against foreign currencies. In a country that devalues, the prices of its export goods (in foreign markets) become cheaper, so foreign demand for its goods increases, and domestic purchasing power for home-produced goods grows stronger.
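As a purely illustrative aid (not part of the original article), the short Python sketch below expresses the internal and external value of money numerically: internal value as the inverse of a price index, and external value as an exchange rate, with a hypothetical devaluation making an export good cheaper in foreign-currency terms. All figures, names, and currencies are invented for the example.

```python
# Illustrative sketch of internal vs. external value of money (hypothetical numbers).

def internal_value(price_index: float, base_index: float = 100.0) -> float:
    """Purchasing power relative to a base period: higher prices -> lower internal value."""
    return base_index / price_index


def export_price_foreign(price_domestic: float, exchange_rate: float) -> float:
    """Price of a domestically priced good in foreign currency.

    exchange_rate: units of foreign currency per unit of domestic currency
    (a proxy for the currency's external value).
    """
    return price_domestic * exchange_rate


# Inflation: the price index rises from 100 to 125, so internal value falls by 20%.
print(internal_value(125.0))                  # 0.8

# Devaluation: the external value drops from 0.050 to 0.040 foreign units per
# domestic unit, so a good priced at 200 domestic units becomes cheaper abroad.
print(export_price_foreign(200.0, 0.050))     # 10.0 foreign units before devaluation
print(export_price_foreign(200.0, 0.040))     # 8.0 foreign units after devaluation
```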
http://dailytape.com/tag/internal-value-money/
13
20
The African Presence in New Spain, c. 1528-1700 By Dr. Rhonda M. Gonzales Associate Professor of History University of Texas at San Antonio The ubiquitous border that represents the boundary between the United States of America and Mexico has been the physical meeting point of histories important both to nation-state interests as well as to the histories of generations of people who have made their homes within the lands of that area. In this section, we are interested in the latter, beginning with the first half of the sixteenth century, a pivotal era for the region. It was then that the nascent, shifting, and fluid milieus of New Spain became the geographic crucible in which the earliest generations of eventual Mexicans and Americans, whose descendants form the core of the two nation’s citizenry today, rooted themselves. The people involved in that rooting process were diverse, and hailed from many different areas of the world. Besides those who came from distant lands, key among the actors implicit in this historical period included many peoples whose ancestors were long indigenous to the areas that the Spanish Empire claimed for itself. The diverse languages and cultures of those indigenous populations had for millennia before 1528 unfolded within this landscape. Their histories are only part of the period’s historical narrative that will be discussed here. In 1519, a fundamental change began in what is now known as southern and central Mexico. That change started when foreigners arriving from great distances across the Atlantic Ocean intruded upon indigenous populations living in the regions of the Yucatán and Veracruz. From the start, newly arriving immigrants were heterogeneous in their lands of origin, languages, and cultures. Some of them hailed from places that spanned the Iberian world, in the lands of then emerging Spanish and Portuguese Empires. In addition to Iberians, many of the newcomers arrived from the west and west-central shores and hinterland regions of the African continent. The native histories of each of these “Old World” regions and peoples are as multifaceted as any. However, those who ended up in “New World” lands found historical experiences unlike any they could have had in their homelands. Upon their arrival in the New World, they were likely viewed by their indigenous hosts simply and collectively as newcomers, without much regard, at least initially, for their diverse backgrounds and their motives for visiting. Likewise, the newcomer views of the indigenous people was apt to be comparably monolithic. In their respective minds, however, none of these populations likely saw themselves in such simplistic terms. The Iberian and African immigrant populations developed along historical trajectories in New Spain, throughout the Americas, and neighboring islands, spawning diverse experiences. However, much about their particular histories in those lands, especially in the earliest periods, are not evenly represented in historical texts. Specifically, we are interested in charting the broad histories of populations of African ancestry – those whom we will collectively term Afro-Mexicanos – who traversed and lived within the broad region that today encompasses Southern Texas and Mexico between the years 1528 and 1700, when the entire region was under the domain of New Spain (colonial Mexico). 
We do this to address gaps in historical narratives that commonly underplay, or overlook entirely, the presence and the roles people from Africa and their eventual American born descendants played in the histories of both Mexico and the United States during those nations’ formative years. But to do this, we must first establish a background. In the Beginning: Spanish Expansion to the Americas In the late fifteenth century, when Spain expanded its efforts to grow an empire under its dominion, their plan was not the result of an impulsive idea. For more than seven hundred years, the Iberian world had been engaged in attempts to throwback Muslim populations whose forebears had seized the Iberian Peninsula’s southern regions in the eighth century. Beyond that, they also strove to establish a direct hand in the long-distance trade that Muslim merchants had long held in northern Africa and Asia. Thus, since that period’s inception, regional Iberian governments strategized to regain control on these fronts. As evidence of that effort, such fortified kingdoms as Aragon, Castile, and Portugal, among others, maneuvered to recapture the regions by establishing the government’s stronghold, but for years had little success. Their failures had less to do with inability than it did with the reality that by the twelfth century, the expansive North African Almoravid Dynasty had established solid roots in Andalusia and Granada. Their dominion guaranteed a formidable and enduring Muslim presence. Iberian Christian governments, never abandoning their desire to expunge the Muslim presence, made eventual headway. In 1469, Queen Isabella of Castile and King Ferdinand of Aragon married, and their union resulted in the unification of their respective kingdoms. Once together, they moved forward with their efforts to consolidate extant regional kingdoms. More than twenty years later, in 1492, at a battle in Granada, they successfully flattened Muslim dominance in the long-contested southern areas. That defining moment witnessed the eventual annexation of the southern Iberian Peninsula to the nascent Spanish Empire. The same year as the Muslim defeat at Granada, 1492, marked the year that Spain commissioned Christopher Columbus to seek out and bring into consideration new lands for the Empire and for the Catholic Church. While their original goal was to land in Asia, by forging a westward navigation route that would curtail the need to circumvent Africa or pass eastward overland, the planned route did not manifest as they had envisioned. With a certain level of unanticipated fortune, however, Columbus’s voyage managed to pave the way for worldwide interconnections and realignments among people who originally belonged to both the Old and New Worlds. That happened when he landed in what he called the Indies, because he believed that he had landed in India. Once it was learned that in fact they had not reached India, they explored the region to determine what of value, particularly gold and other mineral wealth, they might find. Over the long haul, the ultimate and long-ranging effects of those developments led to both increased material wealth for Spain and her New World representatives, but the negative outcome was subjugation and death for Indigenous peoples and enslaved African populations. The enduring implications have been the intertwined histories shared among Europe, Africa, and the Americas that have continued until the present. By 1516, Spanish successes remained limited in the Indies. 
They fell far short in the agricultural production of sugar cane or other items that would generate meaningful wealth. They also increasingly came to understand that they had not arrived in Asia at all, yet they held on to hopes that they might eventually find an overland route that would reach the Far East continent. To do this, they initiated multi-pronged expeditions into the interior hoping that such efforts might lead them to riches in gold and more along the way. In 1517, Cuba’s governor, Diego Velazquez, sent expeditions to find and enslave Indians for work on the islands. Some of the expeditions led explorers along the coasts of Florida, Central America, and South America. One explorer, Francisco Hernandez de Cordóba discovered the Yucatan Peninsula, where he and his men battled Mayan Indians who defended their lands. A wounded Cordóba returned to Cuba and, before dying from his injuries, reported to Velazquez that there was gold, silver, and cotton cloth, along with other sources of wealth among the populations he encountered. Because Spaniards had not found signs of such abundance in more than twenty-five years in the Indies, this information renewed their enthusiasm and inspired new efforts to move farther into the mainland. Two years later, in 1519, following Cordóba’s news, Velázquez commissioned Hernán Cortes to undertake further explorations. Cortes assembled a diverse party of sailors and soldiers that included Spaniards, Indigenous Cubans, and Africans totaling approximately 550 men organized into eleven companies. His eventual objective was to conquer the wealthy Aztec empire. After engaging in protracted battles and strategy building, the fall of Tenochtitlán, the seat of the empire, came to pass on August 13, 1521. In that decisive episode, at least six black men were among Spain’s forces. One identifiable by name was African born -- likely from Morocco -- Juan Garrido. Garrido was enslaved and fought in the Caribbean as early as 1503, but when he participated in Spain’s founding of New Spain he did so as a free man. Among his accomplishments as a contingent in the conquest, he is credited with being the first person to grow wheat in New Spain. Later in his life, Garrido established a family and lived in Mexico City. An additional instance of an early African presence in the conquest period is found in the story of Juan Valiente, a slave in Mexico City, who in 1533 negotiated with his owner for permission to participate in wars of conquest south of New Spain on the condition that he share any wealth he amassed with his owner. Valiente eventually traveled to Guatemala, Peru, and Chile, successfully earning a position of captaincy and an encomienda, a Spanish grant that provided him access to native labor and paid tribute to him. Afro-Mexican conquistadors continued to be part of Spanish companies in New Spain, but their participation after conquest was typically done as dependents and auxiliaries whose status was marked by their attachments to Spaniards. And although they were often promised rewards for their service, the reality was they experienced many disappointments, and after conquest were often prevented from continued duty and the rewards that came with being part of the military. Some of them, however, did manage to shift from the status of enslaved to free status through their military service. But the precedent of African participation in the conquest had later implications in New Spain. 
By the late sixteenth and early seventeen centuries their involvement in the military was significant, especially during emergencies and when the stability of the state was at risk. In this way, they played an integral role in the sustainability and structure of the Spanish Empire. The Rise of African Enslavement in New Spain The consequences of Old World and New World populations intermingling were devastating to Indigenous communities. Biologically, because indigenous populations had not previously been exposed to the communicable diseases transported by Europeans and Africans, the Indigenous people endured tremendous casualties because of their inability to recover from contracting the newly imported diseases. This situation was compounded by the harsh conditions and treatment they withstood at the hands of Spaniards. Combined, those conditions led to an abysmal decline in the Indigenous population, thought, at the time of conquest to number 5-10 million. However, it is estimated that only 1 million Indigenous persons remained within one hundred years of Spanish arrival. Even with such tremendous human loss, the Indigenous population remained the demographic majority throughout the colonial era. Yet, the decrease in their populations did lead to labor shortages for the agricultural, domestic, mining, and transportation jobs needed to grow and sustain Spain’s budding colony. Iberians and West Africans already had well-established relationships when the demand for an imported labor force arose in New Spain. By the early sixteenth century, an Iberian presence along the West African coast had life. The relationships among West African and West Central African populations and Iberian peoples had initiated in the late fifteenth century, when Iberians, backed by new navigation technologies, looked to curtail the power held by Trans-Saharan African Muslim traders. The Iberians sought to establish a foothold in trade that stemmed from the Gold Coast, home to the lucrative West African gold fields that fed into inter-continental economies. With that objective, Iberians had begun, with the cooperation of African traders along the coast, to service the diverse demands along the coast between what is today Angola and northern Africa. Among the various products traded included the trafficking of human bodies between various West African communities, who themselves used slave labor. This became an aspect of commercial relations between Africans and Iberians. Eventually, Iberians began to use African slave labor for sugar production on Sao Tome and Principe, islands off the shores of West Africa, a move that inspired the eventual transport of enslaved African people to New Spain. Essentially, that expanded system of human trade grew from an already established system. In the period of African enslavement most concerning New Spain, it is apparent that African populations were largely taken from homelands located in West and West Central Africa, a vast region comprised of a great diversity of populations. The majority of those people likely originated from the areas of modern day Senegal, Gambia, and Guinea-Bissau in the sixteenth century, while in the seventeenth century the majority of enslaved peoples likely came from West Central Africa, primarily from the regions of modern-day Angola. Enslaved African laborers were present in New Spain by 1521. From the inception, mixtures of African descended, Indigenous, and Spanish citizens formed intimate unions that resulted in mixed children. 
The emerging complexity of racial mixtures within New Spain was almost immediate. Based on historical records, the Afro-Mexicano population in 1570 stood at 24,235, while those deemed African (that is, having parents who were both singularly identified as African) is estimated to have been 20,569. If we compare the African descended population with that of Spaniards, by 1570, the African population and their descendants comprised approximately 0.67 percent of the population, while Spaniards accounted for only 0.2 percent. When we examine a delimited period in the sixteenth century, between 1521 and 1594, the data indicates that approximately 36,500 Africans had been brought to New Spain. If we turn only to urban areas, according to a 1595 census, Afro-Mexicans outnumbered Spanish and Mestizos (persons of Indian and Spanish mixed-descent) in urban towns. By 1646, the numbers increased to 116,529 for Afro-Mexicans and 35,089 for those African identified. It is clear that the number of children from mixed unions accounted for the much of the growth. African descended populations thus comprised 8.8 percent, compared to Spaniards and their descendents, who comprised 0.8 percent in 1646. During the seventeenth century the number of people of African ancestry who had been born in Africa and lived in New Spain had reached 110,000 people. Taken as a whole, the late sixteenth century through the late seventeenth was the clear high point of New Spain’s involvement in the slave trade to the colony. By the midpoint of the seventeenth century, the majority of Afro-Mexicans had been either born in New Spain or originated from the circum-Caribbean world, but by the end of the century the number from Africa declined significantly. During the seventeenth century, New Spain was home to the second highest number of slaves and largest free African-descended populations in the Americas. Examining the available numbers of Afro-Mexican populations recovers a narrative of the clear presence they had in New Spain. But those populations cannot be reduced to mere numbers. They lived their lives in New Spain. Afro-Mexicans Work and Communities African people enslaved during the conquest established roots in New Spain, but they did so in diverse locales and with some variation in status. The majority of Africans arrived at the start of the colonial period and lived as slaves, though over the course of their lives, some of them managed to attain the status of “free” persons. A number of possibilities paved the path for this outcome. For instance, some of them found a way to pay their owners for their freedom, while in other cases, owners sometimes manumitted free status. This commonly happened upon an owner’s death when such instructions were noted in the deceased’s wills. Another possibility for holding free status occurred when a child was born to a mother whose status was free. By law, this happened even if the child’s biological father was himself enslaved. At the same time, even though free status may have been accorded them, Africans, like Indigenous populations, were always subject to hegemonic Spanish institutions, government and ecclesiastical entities, as well. So, free status did not equal absolute liberty. For the most part, the largest concentration of Africans and their descendants were heavily represented in urban areas because Spanish culture was entrenched with a preference for urban living, and that inclination was carried over to New Spain. 
By one estimate, in 1574 approximately 18,000 people, or 30% of the colony's Spanish population, lived in Mexico City. The African presence in the city was inseparable from the Spanish presence, because a hefty part of a successful Spaniard's image included the presence of a few domestic slaves in one's home. But Afro-Mexicanos also played other labor roles throughout New Spain. Outside of urban areas, Africans were represented in, and perhaps best known for, their skilled work in mining -- especially gold, but also silver -- in ranching, and in small factories. Additionally, a good number of Afro-Mexicanos lived in free settlements (palenques) established by runaway slaves (cimarrones).

Far and away the most detail known about how Afro-Mexicanos lived and worked in the aforementioned milieus comes from government records describing life in urban zones. In those spaces, Africans and Spaniards typically lived in close quarters within the traza, the city center, which comprised thirteen square blocks in Mexico City. It was common, for example, for Africans to live on the ground floors of multi-storied buildings, while Spaniards lived on upper floors, away from the foul odors that characterized cities where dense populations and the lack of effective sanitation systems were everyday realities. Indigenous people, because of laws that required them to live in regions of distinct populations, usually lived in areas on the perimeter and beyond, away from Spanish and African residents. Such rules of segregation were also common to workplaces and hospitals, though as previously noted, the restrictions were largely ineffective in preventing intimate unions among Spaniards and Africans.

In the city, the typical Afro-Mexicano was enslaved for domestic purposes. As domestic laborers they were subject to the whims of their owners within spaces that were typically private, but their work often required a mobility that did not confine them to the indoors. Afro-Mexicanos and Spaniards lived and socialized in the midst of a bustling, constricted urban setting. Indeed, some have suggested that in such situations Afro-Mexicanos held a limited degree of control over their lives and labor. In navigating the city streets, Afro-Mexicano vendors peddled goods associated with elite Spanish populations, especially along the Calle de San Francisco. Many worked in skilled jobs as leatherworkers, weavers, tailors, carpenters, and candlemakers. While seemingly adept at these jobs and no doubt more, Afro-Mexicans were often systematically excluded from participation in such trades. For example, in 1570, they were prevented from practicing the prestigious skilled craft of silk weaving and from joining the associated guilds, perhaps because Spanish guilds feared competition from Afro-Mexicanos.

But Afro-Mexicanos' lives involved more than work; they also included time for developing social, political, and cultural networks. To do this they carved out niches, usually against the will of Spanish officials, where they could self-determine their relationships with one another, even across racial categories. In fact, many Afro-Mexicanos, in an effort to gain access to perceived advantages associated with identification in another ethnic category, may have changed their identities after marriage across race.
For instance, Indian women who married either Mestizos or free people of African ancestry often refused to pay taxes due to native chiefs on the grounds that they had acquired new identities, and they often pointed to new styles of dress as evidence. But Spanish law attempted to curtail this possibility when marriage did not justify it. For instance, women of various castas (social castes) were not permitted to wear Indigenous-styled dress unless they were married to an Indian man. If a woman's dress was deemed inappropriate, her items could be confiscated. However, regardless of government attempts to keep African, Indigenous, and Spanish populations separated, relationships across these distinctions were prevalent throughout the seventeenth century, and by then the capital was a particularly mixed place. However, because of the often-precarious nature of their social gatherings, the associations they developed were typically kept quiet, since much of what was undertaken might be interpreted as plots to contest or weaken Spanish political authority by nurturing solidarity. Some of the most prevalent places they met and socialized included taverns, city markets, servants' quarters, and cofradias (mutual aid societies). Cofradias were important among the community networks Afro-Mexicanos created and sustained. Such organizations had been characteristic of cultural life in Seville, Spain, and the tradition was brought to the colony. Furthermore, the formation of black cofradias in Spain before their presence in New Spain paved the way for their expression among Afro-Mexican communities in New Spain. Interestingly, because the cofradia had been an institution associated with the church, it was uniquely viewed in positive terms by Spanish elites -- when Afro-Mexicanos established them -- as an attempt to participate in church life. However they were perceived and received, they served as niches in which Afro-Mexicanos could exchange information and offer one another support. For example, the cofradia was the first place they turned for help in arranging burials or for contesting violent situations they encountered under their masters' dominion. Also important in the history of city life is that these societies were thought to be places where Africans organized themselves politically. Reportedly, among their agendas were sometimes strategies for overthrowing or overruling the political institutions in places like Mexico City. For instance, in Mexico City as early as 1537 there were allegations of a plotted slave revolt, and then again in 1540, leading to two uprisings. Later, in 1609 and 1612, authorities became concerned when cofradias elected their own kings and queens, seeing in those elections attempts to overthrow the government. In the seventeenth century, the number of cofradias in New Spain reached a high point, with participation found in such cities as Taxco, Zacatecas, San Luis Potosí, Veracruz, Mexico City, and Michoacan. Cofradia members collected alms to support one another and organized ritual ceremonies, public penance processions, lavish processions, parties, and more. There is little doubt that the cofradia was a platform from which Afro-Mexican people could express their cultures, and that this cultural expression was entwined with Afro-Mexicano political organizing, which, though sometimes embedded within the robes of the church, held sway and made for unease among New Spain's officials.
And because of the groups’ effectiveness, Spanish officials often responded with violence and intimidation as a means of quashing resistance and attempts to overthrow the government. One such purported effort, in 1612, led to the execution of thirty-five Afro-Mexicans. Another case occurred in 1611 when 1,500 Afro-Mexicans schemed to hold a demonstration in front of the viceregal palace and the Office of the Inquisition in response to what they charged was the death of an African woman that resulted from abuse by her owner. In an act of dissension, they carried her body in a solemn procession in front of both the palace and Inquisition office. Attempts to diminish their ability to protest also came with the creation of laws, which were usually ignored until incidents occurred. Such laws included: forbidding the carrying of arms, curfews between 8pm and 5am, requiring Afro-Mexicans to live with “known masters” who were imbued with the power to give permission for their travel, and the banning of gatherings of more than four people. While Spanish attempts to control and intimidate were real, they were likely inefficient in milieus that relied on the mobility of their servants, which inherently led to opportunities to intermingle in the city. Outside of the city, gold and silver mines throughout New Spain were primary locations in which Afro- Mexicanos, free and enslaved, lived and worked alongside Indigenous populations. At major mining sites, Afro- Mexicanos typically did not comprise more than 15% of the population. There is far less detail about the day-to- day lives of those who toiled in the mines, which were scattered far from urban centers and created a wide dispersal of Afro-Mexicanos throughout expansive areas of New Spain. As early as 1540, in Zacatecas, which lay about 150 miles northwest of Mexico City, Spaniards – largely cattlemen and miners – had started businesses and settled in the region. The height of Zacatecas mining would come between 1550-1650. By 1550, there was silver mining in Guanajuato, Parral, and Zacatecas, where slave labor was needed. In 1560, Guanajuato Viceroy Don Luis de Velasco claimed lawless Afro-Mexicans roamed the hillsides. In 1569, Taxco mines employed 800 African slaves, who worked along side Indigenous laborers. A mid-seventeenth century example is instructive of an individual’s journey to the mines. Apparently, Juan de Moraga, the teenage son of a Spanish priest and an African woman, was sold to an accountant in Mexico City, who then sold him to a mine owner for work in Zacatecas. Indeed, the great demand for laborers created a level of competition among the owners. As a consequence, though many of the mine workers were not enslaved, the great majority ended up permanent residents after incurring debts as a result of taking advances of food, clothing, and shelter. A reason for the desire of an Afro-Mexican presence in the mining efforts in the sixteenth century New World, beyond the need for laborers generally, might well have had to do with an understanding that many of the regions of West Africa from which they originated would have been home to abundant gold mines that fed the economies of the time. Possibly, some of the African slaves in New Spain were descendants of West African lands and may have had direct or indirect mining skills and knowledge that transferred with them to New Spain. 
Likewise, the same may have been true when it came to textile production, for which Africans in Cholula were known. Another area that saw the predominance of Africans was agricultural production. African knowledge of and skill in agriculture was anchored in practices their forebears had handed down over generations. In West Africa, many people would have had adept knowledge and practice in rice and yam cultivation, both of which had been staples in diets of the region. In fact, the ability of African farmers to produce topnotch crops in New Spain (wheat, sugar) and raise European livestock was at times a matter of concern to Indigenous populations, who sometimes brought complaints to the Spanish courts because their produce was often passed over for that which Afro-Mexicans brought to market or, alternatively, because Spanish citizens often went searching in Afro-Mexican vicinities for particular items. Tied to this agricultural knowledge was their ability to, from time to time, escape the confines of enslavement, climb into remote terrain, and establish sustainable cimarron communities that challenged and circumvented Spanish authority. Highlighted in the literature on such communities throughout the Americas was their ability to evade and defend themselves from capture. Legendary in Mexico is an African known as Yanga, who founded a cimarron community that was reported to have attained autonomy within the mountainous regions of Veracruz and that led, in 1609, to the establishment of what is thought to be the first free black town in the Americas, San Lorenzo de los Negros. This town still exists, though in 1932 it was renamed Yanga in honor of its founder. In addition to its longevity, it is the site of festivals that celebrate this determined man's savvy in negotiating with colonial officials and getting them to agree to his demands that the people in his settlement be declared free and that the town be given an official charter. Although in the end Yanga negotiated with New Spain's government, it was in part his and others' military defense skills that paved the way for that possibility. But Afro-Mexicano involvement in resistance efforts in urban milieus and cimarron communities was not limited to New Spain's valley regions. From early times, and increasingly so as time passed, Afro-Mexicanos and their descendants also moved north, away from the areas nearest the seat of the Spanish Viceroy, and this posed additional worries for Spanish officials. The threat of African movement toward the north, where the Spanish administration was weakest, was particularly worrisome to the government because officials feared, and rightfully so, that cimarrones in central Mexico were particularly inclined to form alliances with Indigenous peoples, as well as with non-Catholic Europeans who eventually settled the frontier zones. The Northern Frontier: The Eventual US/Mexico Border and the African Presence Mexico's valley region was far and away the most densely populated part of New Spain. Nevertheless, the northern frontier zones of the empire – the lands that later became known as Northern Mexico and the American Southwest – had their share of newcomers, who moved into areas where long-rooted indigenous populations already lived.
The most ambitious of the early newcomers to New Spain's northern zones arrived early in the sixteenth century and continued to found new settlements in the seventeenth century, though never on a grand or extensive scale. In the earliest periods, the expansion of the Spanish and African presence resulted from moves made by ambitious private individuals, who made enterprising decisions to explore new lands in hopes of procuring land and mineral wealth for themselves. Those moves were often spawned by rumors of such wealth throughout the sixteenth century. In fact, early on, many of the men who ventured out (sailors and soldiers) to claim lands are said to have deserted their companies the first chance they had in order to capture a piece of land for private gain. These individuals were often viewed as rogues by elite Spaniards, for many of them did not uphold the same standards of social distance elites thought beneficial; in contrast, they often intermingled with populations of Mestizos and mulattoes. This sort of individual land acquisition changed in the latter seventeenth and eighteenth centuries, when the Spanish government carried out efforts to ensure the security of its empire by drawing Spanish settlers to the northern frontier zones. Key among its desires, too, was the hope that lucrative mineral deposits would be found. The result was that over time the arid, rugged northern frontier maintained regular though sparse settlements of populations who participated in mining and ranching until the eighteenth century. Detailed accounts of the Africans who settled the frontier zones are rare, but as with the original conquest, there is every indication that the earliest people in the northern zones included, at minimum, an African man – Esteban, a Moroccan slave who was a member of the disastrous 1528 Pánfilo de Narvaez expedition to Florida that eventually led to the exploration of Texas, New Mexico, and Arizona. Esteban accompanied Alvar Núñez Cabeza de Vaca, Andrés Dorantes de Carranza, and Alonso Castillo Maldonado, and is thought to be the first black man to set foot in Texas and the Southwest. However, there are few direct references to specific African populations and individuals who inhabited the eventual Mexico/Texas border areas throughout the sixteenth and seventeenth centuries. By the last quarter of the seventeenth century, areas of Southern and Southwestern Chihuahua were beginning to see civilian populations move in, along with greater administrative control. One historian writes, "miners, ranchers, farmers, Spaniards, and Mestizos, more or less tamed Indians, and a sprinkling of Africans, [were] introduced for some of the heavier work in the mines." In 1691 an Afro-Mexican bugler accompanied Domingo Teran as part of the second Spanish missionary expedition to visit the Indigenous populations of East Texas. Then, in the eighteenth century, Afro-Mexicans were among the garrisons forming permanent settlements. The Spanish found New Spain -- from the valley of Mexico to the Spanish borderlands -- to be a unique zone, full of complex identities, government presences, and cultural milieus. Between 1528 and 1700, this complexity was on a steady march northward.
On the borderlands could be found a hodgepodge of people -- black slaves, Mestizos, mulattos -- who fled oppressive institutions and used the borderlands to become roving miners, peddlers, and even participants in criminal activity as horse and cattle thieves, murderers, and otherwise violent figures. There, rather than racial enclaves, they formed mixed societies that gathered in cimarrones as a means of resisting oppressive circumstances as social persons who shared common aspirations. Indeed, there has long been a presence of Afro-Mexicanos along the Mexico and Texas borders. - Bannon, John Francis, ed. The Spanish Borderlands Frontier 1513-1821. New York: Holt, Rinehart and Winston, Inc., 1970. - Barr, Alwyn. Black Texans: A History of African-Americans in Texas, 1528-1995. Oklahoma City: University of Oklahoma Press. - Beltrán, Gonzalo Aguirre. La Población Negra de México. Tercera ed. México, D.F.: Fondo de Cultura Económica, 1989. - Bennett, Herman L. Africans in Colonial Mexico: Absolutism, Christianity, and Afro-Creole Consciousness, 1570-1640. Bloomington: Indiana University Press, 2003. - Bristol, Joan Cameron. Christians, Blasphemers, and Witches: Afro-Mexican Ritual Practice in the Seventeenth Century. Albuquerque: University of New Mexico Press, 2007. - Chipman, Donald E. Spanish Texas 1519-1821. Austin: University of Texas Press, 1992. - Cope, R. Douglas. The Limits of Racial Domination: Plebeian Society in Colonial Mexico City, 1660-1720. Madison: The University of Wisconsin Press, 1994. - Germeten, Nicole Von. Black Blood Brothers: Confraternities and Social Mobility for Afro-Mexicans. Gainesville: University Press of Florida, 2006. - Jackson, Robert H. "Some Common Threads on the Northern Frontier of Mexico." In New Views of Borderlands History, edited by Robert H. Jackson. Albuquerque: University of New Mexico Press, 1998. - Lane, Kris. "Africans and Natives in the Mines of Spanish America." In Beyond Black and Red, edited by Matthew Restall. Albuquerque: University of New Mexico Press, 2005. - Lewis, Laura A. Hall of Mirrors: Power, Witchcraft, and Caste in Colonial Mexico. Durham: Duke University Press, 2003. - Menchaca, Martha. Recovering History Constructing Race: The Indian, Black, and White Roots of Mexican Americans. Austin: University of Texas Press, 2001. - Meyer, Michael C., William L. Sherman, and Susan M. Deeds. The Course of Mexican History. Sixth ed. New York: Oxford University Press, 1999. - Palma, Norma Angelica Castillo, and Susan Kellogg. "Conflict and Cohabitation in Central Mexico." In Beyond Black and Red, edited by Matthew Restall. Albuquerque: University of New Mexico Press, 2005. - Vinson, Ben III, and Matthew Restall. "Meanings of Military Service in the Spanish American Colonies." In Beyond Black and Red, edited by Matthew Restall. Albuquerque: University of New Mexico Press, 2005. - Seed, Patricia. To Love, Honor, and Obey in Colonial Mexico: Conflicts over Marriage Choice, 1574-1821. Stanford: Stanford University Press, 1988. - Stern, Peter. "Marginals and Acculturation in Frontier Society." In New Views of Borderlands History, edited by Robert H. Jackson. Albuquerque: University of New Mexico Press, 1998. - Teja, Jesús de la. "Spanish Colonial Texas." In New Views of Borderlands History, edited by Robert H. Jackson. Albuquerque: University of New Mexico Press, 1998. - Thornton, John. Africa and Africans in the making of the Atlantic world, 1400-1800. Second ed. Cambridge: Cambridge University Press, 1998. - Weber, David J.
Bárbaros: Spaniards and Their Savages in the Age of Enlightenment. New Haven: Yale University Press, 2005. - ———, ed. New Spain's Far Northern Frontier: Essays on Spain in the American West, 1540-1821. Albuquerque: University of New Mexico Press, 1979. - David J. Weber, Bárbaros: Spaniards and Their Savages in the Age of Enlightenment (New Haven, 2005). 16. Weber makes this argument regarding the eighteenth century, but I believe is applicable to earlier periods. While European occupiers understood that domestic rivalries existed among communities of people on the ground, they were all indios in their views. - Michael C. Meyer, William L. Sherman, and Susan M. Deeds, The Course of Mexican History, Sixth ed. (New York, 1999). 91. Columbus actually landed on an island indigenously named Guanahani, which was later named San Salvador. The exact island is not known with absolute certainty. - Martha Menchaca, Recovering History Constructing Race: The Indian, Black, and White Roots of Mexican Americans (Austin, 2001). 40-1, John Thornton, Africa and Africans in the making of the Atlantic world, 1400-1800, Second ed. (Cambridge, - Meyer, Sherman, and Deeds, The Course of Mexican History. 92. - Ibid. 93-93. At a point Velazquez became dubious of sending Cortes, and he ended up moving to stop the expedition, but not before Cortes was able to set sail. - Gonzalo Aguirre Beltrán, La Población Negra de México, Tercera ed. (México, D.F., 1989). 20, Meyer, Sherman, and Deeds, The Course of Mexican History. 120-122, 205. - Beltrán, La Población Negra de México. 19-20, Ben Vinson III and Matthew Restall, "Meanings of Military Service in the Spanish American Colonies," in Beyond Black and Red, ed. Matthew Restall (Albuquerque, 2005), 18. - Restall, "Meanings of Military Service in the Spanish American Colonies," 18. - Ibid., 18-19. - Ibid., 19. - Meyer, Sherman, and Deeds, The Course of Mexican History. 92, Patricia Seed, To Love, Honor, and Obey in Colonial Mexico: Conflicts over Marriage Choice, 1574-1821 (Stanford, 1988). 22. - Joan Cameron Bristol, Christians, Blasphemers, and Witches: Afro-Mexican Ritual Practice in the Seventeenth Century (Albuquerque, 2007). 11. - Beltrán, La Población Negra de México. 20. - R. Douglas Cope, The Limits of Racial Domination: Plebeian Society in Colonial Mexico City, 1660-1720 (Madison, 1994). 13. - Thornton, Africa and Africans in the making of the Atlantic world, 1400-1800. 138-9. - Herman L. Bennett, Africans in Colonial Mexico: Absolutism, Christianity, and Afro-Creole Consciousness, 1570-1640 (Bloomington, 2003). 1, Bristol, Christians, Blasphemers, and Witches: Afro-Mexican Ritual Practice in the Seventeenth - Bristol, Christians, Blasphemers, and Witches: Afro-Mexican Ritual Practice in the Seventeenth Century. 4. - Laura A. Lewis, Hall of Mirrors: Power, Witchcraft, and Caste in Colonial Mexico (Durham, 2003). 16. - Bennett, Africans in Colonial Mexico: Absolutism, Christianity, and Afro-Creole Consciousness, 1570-1640. 1. - Meyer, Sherman, and Deeds, The Course of Mexican History. 205. - Cope, The Limits of Racial Domination: Plebeian Society in Colonial Mexico City, 1660-1720. 10. - Meyer, Sherman, and Deeds, The Course of Mexican History. 205. - Cope, The Limits of Racial Domination: Plebeian Society in Colonial Mexico City, 1660-1720. 9-11. - Ibid. 16. - Meyer, Sherman, and Deeds, The Course of Mexican History. 205. - Thornton, Africa and Africans in the making of the Atlantic world, 1400-1800. 178. 
- Cope, The Limits of Racial Domination: Plebeian Society in Colonial Mexico City, 1660-1720. 9. - Ibid. 10, 21, Peter Stern, "Marginals and Acculturation in Frontier Society," in New Views of Borderlands History, ed. Robert H. Jackson (Albuquerque, 1998), 162. - Stern, "Marginals and Acculturation in Frontier Society," 162. - Cope, The Limits of Racial Domination: Plebeian Society in Colonial Mexico City, 1660-1720. 17. - Norma Angelica Castillo Palma and Susan Kellogg, "Conflict and Cohabitation in Central Mexico," in Beyond Black and Red, ed. Matthew Restall (Albuquerque, 2005), 119. - Cope, The Limits of Racial Domination: Plebeian Society in Colonial Mexico City, 1660-1720. 16. - Palma and Kellogg, "Conflict and Cohabitation in Central Mexico," 115-8. - Cope, The Limits of Racial Domination: Plebeian Society in Colonial Mexico City, 1660-1720. 11. - Ibid. 39. - Thornton, Africa and Africans in the making of the Atlantic world, 1400-1800. 202-3. - Nicole Von Germeten, Black Blood Brothers: Confraternities and Social Mobility for Afro-Mexicans (Gainesville, 2006). 14. - Cope, The Limits of Racial Domination: Plebeian Society in Colonial Mexico City, 1660-1720. 17. - Ibid. 17-18, Thornton, Africa and Africans in the making of the Atlantic world, 1400-1800. 202-3. Cope writes 1608 and 1612. - Germeten, Black Blood Brothers: Confraternities and Social Mobility for Afro-Mexicans. 11, 14. - Cope, The Limits of Racial Domination: Plebeian Society in Colonial Mexico City, 1660-1720. 17-18. - Ibid. 18. - Ibid. 13, Kris Lane, "Africans and Natives in the Mines of Spanish America," in Beyond Black and Red, ed. Matthew Restall (Albuquerque, 2005), 173. - David J. Weber, ed., New Spain's Far Northern Frontier: Essays on Spain in the American West, 1540-1821 (Albuquerque, - Lane, "Africans and Natives in the Mines of Spanish America," 173. - Weber, ed., New Spain's Far Northern Frontier: Essays on Spain in the American West, 1540-1821. 184. - Patrick Carol in Beyond Black - Lane, "Africans and Natives in the Mines of Spanish America," 174. - Ibid., 160. - Ibid., 173. - Palma and Kellogg, "Conflict and Cohabitation in Central Mexico," 116. - Thornton, Africa and Africans in the making of the Atlantic world, 1400-1800. 131. - Ibid. 269. - Insert Yanga reference. - Stern, "Marginals and Acculturation in Frontier Society," 163. - Weber, ed., New Spain's Far Northern Frontier: Essays on Spain in the American West, 1540-1821. viii, xi. - Stern, "Marginals and Acculturation in Frontier Society," 159. - Robert H. Jackson, "Some Common Threads on the Northern Frontier of Mexico," in New Views of Borderlands History, ed. Robert H. Jackson (Albuquerque, 1998), 227. - Jesús de la Teja, "Spanish Colonial Texas," in New Views of Borderlands History, ed. Robert H. Jackson (Albuquerque, 1998), - John Francis Bannon, ed., The Spanish Borderlands Frontier 1513-1821 (New York, 1970). 77. - Barr, Black Texans: A History of African-Americans in Texas, 1528-1995. 3. - Stern, "Marginals and Acculturation in Frontier Society," 157. - Ibid., 158. - Restall, "Meanings of Military Service in the Spanish American Colonies," 37, Stern, "Marginals and Acculturation in Frontier Society," 163, 167, 170. |“Africans in Mexico left their cultural and genetic imprint everywhere they lived. In states such as Veracruz, Guerrero, and Oaxaca, the descendants of Africa's children still bear the evidence of their ancestry. 
No longer do they see themselves as Mandinga, Wolof, Ibo, Bakongo, or members of other African ethnic groups; their self identity is Mexican, and they share much with other members of their nation-state." – Historian Colin Palmer, "A Legacy of Slavery"
http://www.tbhpp.org/africans_newspain.html
Lesson 1: Budget Basics
In this "Plan, Save, Succeed!" lesson, students use sample student monthly expense and income information to understand how a budget is created and how it can be analyzed using percentages. This exercise is designed to encourage students to consider the role saving plays in financial planning.
- Students will understand how a budget is created and how it can support good financial decision making. (financial literacy)
- Students will understand that mastery of fractions, decimals, and percentages can help address real-world situations. (financial literacy and math)
- Students will begin to consider the role saving plays in financial planning. (financial literacy)
TIME REQUIRED: 20 minutes, plus additional time for worksheet
1. Ask students how much money a middle school student needs to "live" each month. Record responses on the board. Ask students to identify how they spend money (answers may include clothing, entertainment, savings, etc.). Finally, ask students how they obtain the money they spend. Answers may include allowance from parents, chores, jobs, gifts, etc.
2. Write the following sample student monthly expense and income information on the board (examples can be modified as appropriate for your class): Entertainment $15 (one $10 movie/month plus $5 popcorn), Monthly Allowance $40, Music/Game Downloads $20 (16/month @ $1.25), Snacks $10, Pay from walking neighbor's dog $10 (four ten-minute walks per month).
3. Ask if this student has enough money to meet the monthly expenses. (Yes.) Ask how this can be determined. (Identify and group together income items and expense items, calculate totals, and compare the totals.) Indicate that the student has income of $50 per month and expenses of $45. Indicate that the difference of $5 can be categorized as "savings."
4. Next rewrite the income and expense items in the form of a monthly budget:
|Allowance $40||Entertainment $15|
|Dog Walking Pay $10||Music $20|
| ||Snacks $10|
|Total Income $50||Total Expenses $45|
5. Ask students how to show the $5 difference between income and expenses. (Show as "savings" under expenses and change "total expenses" to $50, equal to income.)
6. Indicate that this is called a budget. Ask students why it might be useful to keep a budget. (Answers might include: keeping track of expenses, making sure expenses don't exceed income, helping set financial goals, etc.) To demonstrate, ask the class how this student could increase monthly savings for a large purchase in the future. Answers will vary but should include increasing income and/or cutting expenses.
7. Ask students what percentage of monthly expenses is savings (5/50 = 10%). Demonstrate how to calculate percentage if necessary. Ask the percentage of expenses for snacks (20%), music (40%), and entertainment (30%). Demonstrate to students that the expense categories add up to 100%.
8. Ask the class whether or not the dog walking income is money the student can count on. (No, the family might go on vacation, decide to walk the dog themselves, etc.) Then ask what would happen if the family paying for the dog walking moved away and there was now no dog walking income. (Answers might include: find another family that wants its dog walked, cut expenses, etc.) What would happen if a second family wanted its dog walked and dog walking pay increased to $20? (Answers might include that the student could spend and/or save more.)
9. Distribute Worksheet 1 to students, then review answers with the class.
10. Point out the poster front.
Discuss with students what types of decisions involving money can help a person "plan, save, and succeed."
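For teachers who want to check the arithmetic in steps 3-7, here is a minimal Python sketch of the same calculations. It uses the sample budget figures above; the variable names are illustrative and the code is not part of the original lesson materials.

    # Sample budget figures from steps 2-4.
    income = {"Allowance": 40, "Dog walking pay": 10}
    expenses = {"Entertainment": 15, "Music/game downloads": 20, "Snacks": 10}

    total_income = sum(income.values())       # $50
    total_expenses = sum(expenses.values())   # $45
    expenses["Savings"] = total_income - total_expenses  # the $5 difference (step 5)

    # Each category as a percentage of the $50 budget (step 7).
    for category, amount in expenses.items():
        print(f"{category}: ${amount} ({amount / total_income:.0%})")
    # Entertainment 30%, Music 40%, Snacks 20%, Savings 10% -- together 100%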
http://www.scholastic.com/browse/lessonplan.jsp?id=1561
First Opium War (part of the Opium Wars). Infobox summary: the Nemesis destroying Chinese war junks during the Second Battle of Chuenpee, 7 January 1841; belligerents were the United Kingdom and the Qing Dynasty, with roughly 19,000 British troops facing some 200,000 Qing men. The First Anglo-Chinese War (1839–42), known popularly as the First Opium War or simply the Opium War, was fought between the United Kingdom and the Qing Dynasty of China over their conflicting viewpoints on diplomatic relations, trade, and the administration of justice. Chinese officials wished to end the spread of opium, and confiscated supplies of opium from British traders. The British government, although not officially denying China's right to control imports, objected to this seizure and used its military power to violently enforce redress. In 1842, the Treaty of Nanking—the first of what the Chinese later called the unequal treaties—granted an indemnity to Britain, the opening of five treaty ports, and the cession of Hong Kong Island, thereby ending the trade monopoly of the Canton System. The failure of the treaty to satisfy British goals of improved trade and diplomatic relations led to the Second Opium War (1856–60). The war is now considered in China as the beginning of modern Chinese history. From the inception of the Canton System by the Qing Dynasty in 1756, trade in goods from China was extremely lucrative for European and Chinese merchants alike. However, foreign traders were only permitted to do business through a body of Chinese merchants known as the Thirteen Hongs and were restricted to Canton (Guangzhou). Foreigners could only live in one of the Thirteen Factories, near Shameen Island, Canton, and were not allowed to enter, much less live or trade in, any other part of China. Tea and silver trade There was an ever-growing demand for tea in the United Kingdom, while China accepted only silver in payment for tea, resulting in large, continuous trade deficits; the trade imbalance that came into being was highly unfavourable to Britain. The Sino-British trade was dominated by high-value luxury items such as tea (from China to Britain) and silver (from Britain to China), to the extent that European specie metals became widely used in China. Britain had been on the gold standard since the 18th century, so it had to purchase silver from continental Europe and Mexico to supply the Chinese appetite for silver. Attempts by the British (Macartney in 1793), the Dutch (Van Braam in 1794), the Russians (Golovkin in 1805), and the British again (Amherst in 1816) to negotiate access to the China market were vetoed by the Emperors, each in turn. By 1817, the British hit upon counter-trading in a narcotic, Indian opium, as a way to both reduce the trade deficit and finally gain profit from the formerly money-losing Indian colony. The Qing administration originally tolerated the importation of opium because it created an indirect tax on Chinese subjects, while allowing the British to double tea exports from China to England, which profited the Qing imperial treasury and its agents through their monopoly on tea exports. Opium was produced in traditionally cotton-growing regions of India under British East India Company monopoly (Bengal) and in the Princely states (Malwa) outside the company's control. Both areas had been hard hit by the introduction of factory-produced cotton cloth, which used cotton grown in Egypt.
The opium was sold on the condition that it be shipped by British traders to China. Opium as a medicinal ingredient was documented in texts as early as the Tang dynasty but its recreational use was limited and there were laws in place against its abuse. But opium became prevalent with the mass quantities introduced by the British (motivated, as noted above, by the equalisation of trade). British sales of opium in large amounts began in 1781[verification needed] and between 1821 and 1837 sales increased five fold. East India Company ships brought their cargoes to islands off the coast, especially Lintin Island, where Chinese traders with fast and well-armed small boats took the goods for inland distribution. However, by 1820 the planting of tea in the Indian and African colonies along with accelerated opium consumption reversed the flow of silver, just when the Imperial Treasury needed to finance suppression of rebellions against the Qing. The Qing government attempted to end the opium trade, but its efforts were complicated by local officials (including the Viceroy of Canton), who profited greatly from the bribes and taxes. A turning point came in 1834. Free trade reformers in England succeeded in ending the monopoly of the British East India Company, leaving trade in the hands of private entrepreneurs. Americans introduced opium from Turkey, which was of lower quality but cheaper. Competition drove down the price of opium and increased sales. In 1839, the Daoguang Emperor appointed Lin Zexu as the governor of Canton with the goal of reducing and eliminating the opium trade. On his arrival, Lin Zexu banned the sale of opium, demanded that all opium be surrendered to the Chinese authorities, and required that all foreign traders sign a 'no opium trade' bond the breaking of which was punishable by death. Lin also closed the channel to Canton, effectively holding British traders hostage in Canton. The British Superintendent of Trade in China, Charles Elliot, got the British traders to agree to hand over their opium stock with the promise of eventual compensation for their loss from the British government. (This promise, and the inability of the British government to pay it without causing a political storm, was an important cause for the subsequent British offensive). Overall 20,000 chests (each holding about 55 kg) were handed over and destroyed beginning 3 June 1839. Following the collection and destruction of the opium, Lin Zexu wrote a "memorial" (折奏/摺奏) to Queen Victoria in an unsuccessful attempt to stop the trade of opium, as it had poisoned thousands of Chinese civilians (the memorial never reached the Queen). Kowloon incident (July 1839) After the chest seizure in April, the atmosphere grew tense and at the end of June the Chinese coast guard in Kowloon arrested the commodore of the Carnatic, a British clipper. On Sunday, 7 July 1839, a large group of British and American sailors, including crew from the Carnatic, was ashore at Kowloon, a provisioning point, and found a supply of samshu, a rice liquour, in the village of Chien-sha-tsui (Tsim Sha Tsui). In the ensuing riot the sailors vandalised a temple and killed a man named Lin Weixi. Because China did not have a jury trial system or evidentiary process (the magistrate was the prosecutor, judge, jury and would-be executioner), the British government and community in China wanted "extraterritoriality", which meant that British subjects would only be tried by British judges. 
When the Qing authorities demanded the men be handed over for trial, the British refused. Six sailors were tried by the British authorities in Canton (Guangzhou), but they were immediately released once they reached England. Captain Charles Elliot's authority was in dispute; the British government later claimed that without authority from the Qing government he had no legal right to try anyone, although according to the British Act of Parliament that gave him authority over British merchants and sailors, 'he was expressly appointed to preside over ' Court of Justice, with Criminal and Admiralty Jurisdiction, for the trial of offences committed by His Majesty's subjects in the said Dominions or on the high seas within one hundred miles of the coast of China'". The Qing authorities also insisted that British merchants not be allowed to trade unless they signed a bond, under penalty of death, promising not to smuggle opium, agreed to follow Chinese laws, and acknowledged Qing legal jurisdiction. Refusing to hand over any suspects or agree to the bonds, Charles Elliot ordered the British community to withdraw from Canton and prohibited trade with the Chinese. Some merchants who did not deal in opium were willing to sign the bond, thereby weakening the British position. Opium War (1839–42) In late October the Thomas Coutts arrived in China and sailed to Guangdong. This ship was owned by Quakers who refused to deal in opium, and its captain, Smith, believed Elliot had exceeded his legal authority by banning trade. The captain negotiated with the governor of Canton and hoped that all British ships could unload their goods at Chuenpee, an island near Humen. In order to prevent other British ships from following the Thomas Coutts, Elliot ordered a blockade of the Pearl River. Fighting began on 3 November 1839, when a second British ship, the Royal Saxon, attempted to sail to Guangdong. Then the British Royal Navy ships HMS Volage and HMS Hyacinth fired a warning shot at the Royal Saxon. The official Qing navy's report claimed that the navy attempted to protect the British merchant vessel and also reported a great victory for that day. In reality, they were out-classed by the Royal Naval vessels and many Chinese ships were sunk. Elliot reported that they were protecting their 29 ships in Chuenpee between the Qing batteries. Elliot knew that the Chinese would reject any contacts with the British and there would be an attack with fire boats. Elliot ordered all ships to leave Chuenpee and head for Tung Lo Wan, 20 miles (30 km) from Macau, but the merchants liked to harbour in Hong Kong. In 1840, Elliot asked the Portuguese governor in Macau to let British ships load and unload their goods at Macau and they would pay rents and any duties. The governor refused for fear that the Qing Government would discontinue to supply food and other necessities to Macau. On 14 January 1840, the Qing Emperor asked all foreigners in China to halt material assistance to the British in China. In retaliation, the British Government and British East India Company decided that they would attack Guangdong. The military cost would be paid by the British Government. Lord Palmerston, the British Foreign Secretary, initiated the Opium War in order to obtain full compensation for the destroyed opium. The war was denounced in Parliament as unjust and iniquitous by young William Ewart Gladstone, who criticised Lord Palmerston's willingness to protect an infamous contraband traffic. 
Outrage was expressed by the public and the press in the United States and United Kingdom as it was recognised that British interests may well have been simply supporting the opium trade. In June 1840, an expeditionary force of 15 barracks ships, 4 steam-powered gunboats, and 25 smaller boats with 4000 marines reached Guangdong from Singapore. The marines were headed by James Bremer. Bremer demanded the Qing Government compensate the British for losses suffered from interrupted trade. Following the orders of Lord Palmerston, a British expedition blockaded the mouth of the Pearl River and moved north to take Chusan. Led by Commodore J.J. Gordon Bremer in Wellesley, they captured the empty city after an exchange of gunfire with shore batteries that caused only minor casualties. The next year, 1841, the British captured the Bogue forts which guarded the mouth of the Pearl River, the waterway between Hong Kong and Canton, while at the far west in Tibet the start of the Sino-Sikh war added another front to the strained Qing military. By January 1841, British forces commanded the high ground around Canton and defeated the Chinese at Ningbo and at the military post of Dinghai. By the middle of 1842, the British had defeated the Chinese at the mouth of their other great riverine trade route, the Yangtze, and were occupying Shanghai. The war finally ended in August 1842, with the signing of China's first Unequal Treaty, the Treaty of Nanking. The ease with which the British forces had defeated the numerically superior Chinese armies seriously affected the Qing Dynasty's prestige. The success of the First Opium War allowed the British to resume the opium trade. It also paved the way for the opening of the lucrative Chinese market to other commerce and the opening of Chinese society to missionary endeavors. Among the most notable figures in the events leading up to military action in the Opium War was the man whom the Daoguang Emperor assigned to suppress the opium trade: Lin Zexu, known for his superlative service under the Qing Dynasty as "Lin the Clear Sky". Although he had some initial success, with the arrest of 1,700 opium dealers and the destruction of 2.6 million pounds of opium, he was made a scapegoat for the actions leading to British retaliation, and was blamed for ultimately failing to stem the tide of opium import and use in China. Nevertheless, Lin Zexu is popularly viewed as a hero of 19th-century China, and his likeness has been immortalised at various locations around the world. The First Opium War was the beginning of a long period of weakening of the state and civil revolt in China, and long-term depopulation. Contemporaneous Qing Dynasty wars: - Sino-Sikh war (1841–1842) - Le Pichon, Alain (2006). China Trade and Empire. Oxford University Press. pp. 36–37. ISBN 0-19-726337-2. - Martin, Robert Montgomery (1847). China: Political, Commercial, and Social; In an Official Report to Her Majesty's Government. Volume 2. James Madden. pp. 81–82. - Tsang, Steve (2007). A Modern History of Hong Kong. I.B.Tauris. pp. 3–13, 29. ISBN 1-84511-419-1. - Tsang 2004, p. 29 - Stockwell, Foster (2003). Westerners in China: A History of Exploration and Trade, Ancient Times Through the Present. McFarland. p. 74. ISBN 0-7864-1404-9. - Janin, Hunt (1999). The India–China Opium Trade in the Nineteenth Century. McFarland. p. 207. ISBN 0-7864-0715-8.
- Alain Peyrefitte, The Immobile Empire-- The first great collision of East and West -- the astonishing history of Britain's grand, ill-fated expedition to open China to Western Trade, 1792-94 (New York: Alfred A. Knopf, 1992), p. 520 - Peyrefitte 1993, pp. 487-503 - Peyrefitte, 1993, p. 520 - Peter Ward Fay, The Opium War, 1840-1842: Barbarians in the Celestial Empire in the Early Part of the Nineteenth Century and the Way by Which They Forced the Gates Ajar (Chapel Hill, North Carolina: University of North Carolina Press, 1975). - "China: The First Opium War". John Jay College of Criminal Justice, City University of New York. Retrieved 2 December 2010. Quoting British Parliamentary Papers, 1840, XXXVI (223), p. 374 - "Foreign Mud: The opium imbroglio at Canton in the 1830s and the Anglo-Chinese War," by Maurice Collis, W. W. Norton, New York, 1946 - Poon, Leon. "Emergence Of Modern China". University of Maryland. Retrieved 22 Dec. 2008. - "Opiates". University of Missouri. Retrieved 22 Dec. 2008. - Letter to Queen Victoria, 1839. From Chinese Repository, Vol. 8 (February 1840), pp. 497–503; reprinted in William H. McNeil and Mitsuko Iriye, eds., Modern Asia and Africa, Readings in World History Vol. 9, (New York: Oxford University Press, 1971), pp. 111–118. The text has been modernized by Prof. Jerome S. Arkenberg, Cal. State Fullerton. This text is part of the Internet Modern History Sourcebook. - Fay, Peter Ward (1975). The Opium War 1840-1842. New York: W.W. Norton & Co. p. 71. ISBN 0-393-00823-1. - Hanes, W. Travis III, PhD and Frank Sanello, 'The Opium Wars; the Addiction of One Empire and the Corruption of Another', New York: Barnes & Noble, 2002. - The London Gazette: . 15 December 1840. - Lin Zexu Encyclopædia Britannica - Opium War - East Asian Studies - Monument to the People's Heroes, Beijing - Lonely Planet Travel Guide - Lin Zexu Memorial - Lin Zexu Memorial Museum Ola Macau Travel Guide
http://en.m.wikipedia.org/wiki/First_Opium_War
Severe weather refers to any dangerous meteorological phenomena with the potential to cause damage, serious social disruption, or loss of human life. Types of severe weather phenomena vary, depending on the latitude, altitude, topography, and atmospheric conditions. High winds, hail, excessive precipitation, and wildfires are forms and effects of severe weather, as are thunderstorms, downbursts, lightning, tornadoes, waterspouts, tropical cyclones, and extratropical cyclones. Regional and seasonal severe weather phenomena include blizzards, snowstorms, ice storms, and duststorms. Meteorologists generally define severe weather as any aspect of the weather that poses risks to life, property or requires the intervention of authorities. A narrower definition of severe weather is any weather phenomena relating to severe thunderstorms. According to the World Meteorological Organization (WMO), severe weather can be categorized into two groups: general severe weather and localized severe weather. Nor'easters, European wind storms, and the phenomena that accompany them form over wide geographic areas. These occurrences are classified as general severe weather. Downbursts and tornadoes are more localized and therefore have a more limited geographic effect. These forms of weather are classified as localized severe weather. The term severe weather is technically not the same phenomenon as extreme weather. Extreme weather describes unusual weather events that are at the extremes of the historical distribution for a given area. Organized severe weather occurs from the same conditions that generate ordinary thunderstorms: atmospheric moisture, lift (often from thermals), and instability. A wide variety of conditions cause severe weather. Several factors can convert thunderstorms into severe weather. For example, a pool of cold air aloft may aid in the development of large hail from an otherwise innocuous appearing thunderstorm. However, the most severe hail and tornadoes are produced by supercell thunderstorms, and the worst downbursts and derechos (straight-line winds) are produced by bow echoes. Both of these types of storms tend to form in environments high in wind shear. Floods, hurricanes, tornadoes, and thunderstorms are considered to be the most destructive weather-related natural disasters. Although these weather phenomena are all related to cumulonimbus clouds, they form and develop under different conditions and geographic locations. The relationship between these weather events and their formation requirements are used to develop models to predict the most frequent and possible locations. This information is used to notify affected areas and save lives. Severe thunderstorms can be assessed in three different categories. These are "approaching severe", "severe", and "significantly severe". Approaching severe is defined as hail between 1⁄2 to 1 inch (13 to 25 mm) diameter or winds between 50 and 58 M.P.H. (50 knots). In the United States, such storms will usually warrant a Significant Weather Alert. Severe is defined as hail 1 inch (25 mm) diameter or larger, winds 58 M.P.H. or stronger, or a tornado. Significant severe is defined as hail 2 inches (51 mm) in diameter or larger, winds 75 M.P.H. 
(65 knots) or stronger, a tornado of strength EF2 or stronger, the occurrence of flash flooding caused by heavy precipitation, or extreme temperatures. Both severe and significant severe events warrant a severe thunderstorm warning from the United States National Weather Service (excluding flash floods), Environment Canada, the Australian Bureau of Meteorology, or the Meteorological Service of New Zealand if the event occurs in those countries. If a tornado is occurring (a tornado has been seen by spotters) or is imminent (Doppler weather radar has observed strong rotation in a storm, indicating an incipient tornado), the severe thunderstorm warning will be superseded by a tornado warning in the United States and Canada. A severe weather outbreak is typically considered to have occurred when there are ten or more tornadoes, some of them likely long-tracked and violent, along with many reports of large hail or damaging wind. Severity is also dependent on the size of the geographic area affected, whether it covers hundreds or thousands of square kilometers. High winds High winds are known to cause damage, depending upon their strength. Wind speeds as low as 23 knots (43 km/h) may lead to power outages when tree branches fall and disrupt power lines. Some species of trees are more vulnerable to winds. Trees with shallow roots are more prone to uproot, and brittle trees such as eucalyptus, sea hibiscus, and avocado are more prone to branch damage. Wind gusts may cause poorly designed suspension bridges to sway. When wind gusts harmonize with the natural frequency of the swaying bridge, the bridge may fail, as occurred with the Tacoma Narrows Bridge in 1940. Hurricane-force winds, caused by individual thunderstorms, thunderstorm complexes, tornadoes, extratropical cyclones, or tropical cyclones, can destroy mobile homes and structurally damage buildings with foundations. Winds of this strength due to downslope winds off terrain have been known to shatter windows and sandblast paint from cars. Once winds exceed 135 knots (250 km/h) within strong tropical cyclones and tornadoes, homes completely collapse, and significant damage is done to larger buildings. Total destruction of man-made structures occurs when winds reach 175 knots (324 km/h). The Saffir-Simpson scale for cyclones and the Enhanced Fujita scale (TORRO scale in Europe) for tornadoes were developed to help estimate wind speed from the damage they cause. A tornado is a dangerous rotating column of air in contact with both the surface of the earth and the base of a cumulonimbus cloud (thundercloud) or, in rare cases, a cumulus cloud. Tornadoes come in many sizes but typically form a visible condensation funnel whose narrowest end reaches the earth and is surrounded by a cloud of debris and dust. Tornado wind speeds generally average between 40 miles per hour (64 km/h) and 110 miles per hour (180 km/h). They are approximately 250 feet (76 m) across and travel a few miles (kilometers) before dissipating. Some attain wind speeds in excess of 300 miles per hour (480 km/h), may stretch more than a mile (1.6 km) across, and maintain contact with the ground for dozens of miles (more than 100 km). Tornadoes, despite being among the most destructive weather phenomena, are generally short-lived. A long-lived tornado generally lasts no more than an hour, but some have been known to last for 2 hours or longer (for example, the Tri-State Tornado). Due to their relatively short duration, less is known about the development and formation of tornadoes.
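As a rough illustration of the hail, wind, and tornado thresholds quoted above, the short Python sketch below assigns a severity label to a single storm report. It is only a hedged sketch of the classification described in the text -- the function name, parameters, and encoding are illustrative rather than an official National Weather Service algorithm, and the flash-flood and extreme-temperature criteria are omitted for simplicity.

    def classify_storm(hail_in=0.0, wind_mph=0.0, tornado_ef=None):
        """Label a storm report using the thresholds given in the text.

        hail_in    -- largest hail diameter in inches
        wind_mph   -- strongest wind gust in miles per hour
        tornado_ef -- Enhanced Fujita rating (0-5) if a tornado occurred, else None
        """
        if hail_in >= 2.0 or wind_mph >= 75 or (tornado_ef is not None and tornado_ef >= 2):
            return "significant severe"
        if hail_in >= 1.0 or wind_mph >= 58 or tornado_ef is not None:
            return "severe"
        if hail_in >= 0.5 or wind_mph >= 50:
            return "approaching severe"
        return "below severe limits"

    # Example: 1.25 in hail with 60 mph gusts would warrant a severe thunderstorm warning.
    print(classify_storm(hail_in=1.25, wind_mph=60))   # -> "severe"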
Downbursts are created within thunderstorms by significantly rain-cooled air which, upon reaching ground level, spreads out in all directions and produces strong winds. Unlike winds in a tornado, winds in a downburst are not rotational but are directed outwards from the point where they strike land or water. "Dry downbursts" are associated with thunderstorms with very little precipitation, while wet downbursts are generated by thunderstorms with large amounts. Microbursts are very small downbursts and macrobursts are large-scale downbursts. The heat burst is created by vertical currents on the backside of old outflow boundaries and squall lines where rainfall is lacking. Heat bursts generate significantly higher temperatures due to the lack of rain-cooled air in their formation. Derechos are longer, usually stronger forms of downburst winds characterized by straight-line windstorms. Downbursts create vertical wind shear and microbursts, which are dangerous to aviation. They can also cause tornado-like damage on the ground and, depending on the size of the downburst, can generate winds at speeds of up to 168 miles per hour (270 km/h). Downbursts also occur much more frequently than tornadoes, with ten downburst damage reports for every one tornado. Squall line A squall line is an elongated line of severe thunderstorms that can form along or ahead of a cold front. The squall line typically contains heavy precipitation, hail, frequent lightning, strong straight-line winds, and possibly tornadoes or waterspouts. Severe weather in the form of strong straight-line winds can be expected in areas where the squall line forms a bow echo, in the farthest portion of the bow. Tornadoes can be found along waves within a line echo wave pattern (LEWP) where mesoscale low pressure areas are present. Some summer bow echoes are called derechos, and move quickly over large territories. A wake low or a mesoscale low pressure area forms behind the rain shield (a high pressure system under the rain canopy) of a mature squall line and is sometimes associated with a heat burst. Squall lines often cause severe straight-line wind damage, and most non-tornadic wind damage is caused by squall lines. Although the primary danger from squall lines is straight-line winds, some squall lines also contain weak tornadoes. Tropical cyclone Very high winds can be caused by mature tropical cyclones (called hurricanes in the United States and Canada and typhoons in eastern Asia). A tropical cyclone's heavy surf, created by such winds, may cause harm to marine life close to or upon the surface of the water, such as coral reefs. Coastal regions may receive significant damage from a tropical cyclone while inland regions are relatively safe from the strong winds, due to their rapid dissipation over land. However, severe flooding can occur even far inland because of high amounts of rain from tropical cyclones and their remnants. Waterspouts are not known for inflicting much damage because they do not commonly move over land, but they are capable of doing so. Some waterspouts are known to produce hurricane-strength winds and are capable of producing equivalent damage. Vegetation, weakly constructed buildings, and other infrastructure may be destroyed by waterspouts. Automobiles may be lifted by advancing waterspouts. Heavy precipitation may also occur, developed from the water raised by the wind currents.
Waterspouts do not generally last long over terrestrial environments, as the friction produced easily dissipates the winds. Strong horizontal winds cause waterspouts to dissipate, destroying the concentration of the updrafts. While not generally as dangerous as "classic" tornadoes, waterspouts can overturn boats, and they can cause severe damage to larger ships. Strong extratropical cyclones European windstorms are severe local windstorms that develop from winds off the North Atlantic. These windstorms are commonly associated with destructive extratropical cyclones and their low pressure frontal systems. European windstorms occur mainly in autumn and winter. A synoptic-scale extratropical storm along the East Coast of the United States and Atlantic Canada is called a Nor'easter. Nor'easters are so named because their winds come from the northeast, especially in the coastal areas of the Northeastern United States and Atlantic Canada. More specifically, the term describes a low pressure area whose center of rotation is just off the East Coast and whose leading winds in the left forward quadrant rotate onto land from the northeast. Nor'easters may cause coastal flooding, coastal erosion, and hurricane-force winds. Dust storm A dust storm is an unusual form of windstorm characterized by the presence of large quantities of sand and dust particles carried by moving air. Dust storms frequently develop during periods of drought or over arid and semi-arid regions. Dust storms present numerous hazards and are capable of causing deaths. Visibility may be reduced dramatically, raising the risk of vehicle and aircraft crashes. Additionally, the particulates may reduce oxygen intake by the lungs, potentially resulting in suffocation. Damage can also be inflicted upon the eyes due to abrasion. Dust storms can produce many issues for agricultural industries as well. Soil erosion is one of the most common hazards and decreases arable lands. Dust and sand particles can cause severe weathering of buildings and rock formations. Nearby bodies of water may be polluted by settling dust and sand, killing aquatic organisms. Decreased exposure to sunlight can affect plant growth, and decreased infrared radiation can lower temperatures. Wildfire The most common cause of wildfires varies throughout the world. In the United States, Canada, and Northwest China, lightning is the major source of ignition. In other parts of the world, human involvement is a major contributor. For instance, in Mexico, Central America, South America, Africa, Southeast Asia, Fiji, and New Zealand, wildfires can be attributed to human activities such as animal husbandry, agriculture, and land-conversion burning. Human carelessness is a major cause of wildfires in China and in the Mediterranean Basin. In Australia, the source of wildfires can be traced to both lightning strikes and human activities such as machinery sparks and cast-away cigarette butts. Wildfires have a rapid forward rate of spread (FROS) when burning through dense, uninterrupted fuels. They can move as fast as 10.8 kilometers per hour (6.7 mph) in forests and 22 kilometers per hour (14 mph) in grasslands. Wildfires can advance tangentially to the main front to form a flanking front, or burn in the opposite direction of the main front by backing.
Wildfires may also spread by jumping or spotting as winds and vertical convection columns carry firebrands (hot wood embers) and other burning materials through the air over roads, rivers, and other barriers that may otherwise act as firebreaks. Torching and fires in tree canopies encourage spotting, and dry ground fuels that surround a wildfire are especially vulnerable to ignition from firebrands. Spotting can create spot fires as hot embers and firebrands ignite fuels downwind from the fire. In Australian bushfires, spot fires are known to occur as far as 10 kilometers (6 mi) from the fire front. Since the mid-1980s, earlier snowmelt and associated warming have also been associated with an increase in the length and severity of the wildfire season in the Western United States. Hailstorm Any form of thunderstorm that produces precipitating hailstones is known as a hailstorm. Hailstorms are generally capable of developing in any geographic area where thunderclouds (cumulonimbus) are present, although they are most frequent in tropical and monsoon regions. The updrafts and downdrafts within cumulonimbus clouds cause water molecules to freeze and solidify, creating hailstones and other forms of solid precipitation. Once the hailstones grow large enough, they become too heavy for the cloud's updrafts to support and fall towards the ground. The downdrafts in cumulonimbus clouds can also increase the speed of the falling hailstones. The term "hailstorm" is usually used to describe the presence of significant quantities or sizes of hailstones. Hailstones can cause serious damage, notably to automobiles, aircraft, skylights, glass-roofed structures, livestock, and crops. Rarely, massive hailstones have been known to cause concussions or fatal head trauma. Hailstorms have been the cause of costly and deadly events throughout history. One of the earliest recorded incidents occurred around the 12th century in Wellesbourne, Britain. The largest hailstone in terms of maximum circumference and length ever recorded in the United States fell in 2003 in Aurora, Nebraska. The hailstone had a diameter of 7 inches (18 cm) and a circumference of 18.75 inches (47.6 cm). Heavy rainfall and flooding Heavy rainfall can lead to a number of hazards, most of which are floods or hazards resulting from floods. Flooding is the inundation of areas that are not normally under water. It is typically divided into three classes: river flooding, which relates to rivers rising outside their normal banks; flash flooding, the process by which a landscape, often in urban and arid environments, is subjected to rapid floods; and coastal flooding, which can be caused by strong winds from tropical or non-tropical cyclones. Meteorologically, excessive rains occur within a plume of air with high amounts of moisture (also known as an atmospheric river) which is directed around an upper-level cold-core low or a tropical cyclone. Flash flooding frequently occurs with slow-moving thunderstorms and is usually caused by the heavy liquid precipitation that accompanies them. Flash floods are most common in densely populated urban environments, where fewer plants and bodies of water are present to absorb and contain the extra water. Flash flooding can be hazardous to small infrastructure, such as bridges, and to weakly constructed buildings. Plants and crops in agricultural areas can be destroyed by the force of raging water. Automobiles parked in affected areas can also be displaced.
Soil erosion can occur as well, raising the risk of landslides. Like all forms of flooding, flash flooding can also spread waterborne and insect-borne diseases caused by microorganisms. Flash flooding can be caused by the extensive rainfall released by tropical cyclones of any strength or by the sudden thawing effect of an ice dam. Seasonal wind shifts lead to long-lasting wet seasons which produce the bulk of annual precipitation in areas such as Southeast Asia, Australia, Western Africa, eastern South America, and Mexico. Widespread flooding occurs if rainfall is excessive, which can lead to landslides and mudflows in mountainous areas. Floods cause rivers to exceed their capacity, with nearby buildings becoming submerged. Flooding may be exacerbated if there were fires during the previous dry season, which can cause sandy or loamy soils to become hydrophobic and repel water. Government organizations help their residents deal with wet-season floods through floodplain mapping and information on erosion control. Mapping is conducted to help determine areas that may be more prone to flooding. Erosion control instructions are provided through outreach over the telephone or the internet. Flood waters that occur during monsoon seasons can host numerous protozoan, bacterial, and viral microorganisms. Mosquitoes and flies lay their eggs within the contaminated bodies of water. These disease agents may cause foodborne and waterborne infections. Diseases associated with exposure to flood waters include malaria, cholera, typhoid, hepatitis A, and the common cold. Trench foot infections may also occur when people are exposed to flooded areas for extended periods of time. Tropical cyclone A tropical cyclone is a storm system characterized by a low pressure center and numerous thunderstorms that produce strong winds and flooding rain. A tropical cyclone feeds on the heat released when moist air rises, resulting in condensation of the water vapor contained in the moist air. Tropical cyclones may produce torrential rain, high waves, and damaging storm surge. Heavy rains produce significant inland flooding. Storm surges may produce extensive coastal flooding up to 40 kilometres (25 mi) from the coastline. Although cyclones take an enormous toll in lives and personal property, they are also important factors in the precipitation regimes of the areas they impact. They bring much-needed precipitation to otherwise dry regions. Areas in their path can receive a year's worth of rainfall from a single tropical cyclone passage. Tropical cyclones can also relieve drought conditions. They also carry heat and energy away from the tropics and transport it toward temperate latitudes, which makes them an important part of the global atmospheric circulation mechanism. As a result, tropical cyclones help to maintain equilibrium in the Earth's troposphere. Non-convective flooding Non-convective flooding refers to flooding that is not directly caused by excessive precipitation. This type of flooding usually occurs near rivers as a result of blockage upstream. It could be caused by the sudden clearing of an ice jam, the breaking of a dam, or the intentional release of reservoir water. Though not the same as the precipitation-related flood mechanisms, non-convective floods can cause the same types of damage as typical floods.
Severe Winter Weather Heavy snowfall When extratropical cyclones deposit heavy, wet snow with a snow-water equivalent (SWE) ratio of between 6:1 and 12:1 and a weight in excess of 10 pounds per square foot (~50 kg/m2) onto trees or electricity lines, significant damage may occur on a scale usually associated with strong tropical cyclones. An avalanche can occur with a sudden thermal or mechanical impact on snow that has accumulated on a mountain, which causes the snow to rush downhill suddenly. Preceding an avalanche is a phenomenon known as an avalanche wind, caused by the approaching avalanche itself, which adds to its destructive potential. Large amounts of snow which accumulate on top of man-made structures can lead to structural failure. During snowmelt, acidic precipitation which previously fell into the snow pack is released and harms aquatic life. Lake-effect snow is produced in the winter in the shape of one or more elongated bands. This occurs when cold winds move across long expanses of warmer lake water, providing energy and picking up water vapor which freezes and is deposited on the lee shores. Blizzard Conditions within blizzards often include large quantities of blowing snow and strong winds which may significantly reduce visibility. Reduced visibility may leave people on foot exposed to the blizzard for extended periods and increase the chance of their becoming lost. The strong winds associated with blizzards create wind chill that can result in frostbite and hypothermia. The strong winds present in blizzards are also capable of damaging plants and may cause power outages, frozen pipes, and cut-off fuel lines. Strong extratropical cyclones The precipitation pattern of Nor'easters is similar to that of other mature extratropical storms. Nor'easters can cause heavy rain or snow, either within their comma-head precipitation pattern or along their trailing cold or stationary front. Nor'easters can occur at any time of the year but are mostly known for their presence in the winter season. Severe European windstorms are often characterized by heavy precipitation as well. Ice storm An ice storm is also known as a silver storm, referring to the color of the freezing precipitation. Ice storms are caused by liquid precipitation which freezes upon cold surfaces and leads to the gradual development of a thickening layer of ice. The accumulation of ice during the storm can be extremely destructive. Trees and vegetation can be destroyed and in turn may bring down power lines, causing the loss of heat and communication. Roofs of buildings and automobiles may be severely damaged. Gas pipes can become frozen or even damaged, causing gas leaks. Avalanches may develop due to the extra weight of the ice present. Visibility can be reduced dramatically. The aftermath of an ice storm may include severe flooding due to sudden thawing, with large quantities of displaced water, especially near lakes, rivers, and other bodies of water. Heat and Drought Another form of severe weather is drought, which is a prolonged period of persistently dry weather (that is, absence of precipitation). Although droughts do not develop or progress as quickly as other forms of severe weather, their effects can be just as deadly; in fact, droughts are classified and measured based upon these effects. Droughts have a variety of severe effects; they can cause crops to fail, and they can severely deplete water resources, sometimes interfering with human life.
A drought in the 1930s known as the Dust Bowl affected 50 million acres of farmland in the central United States. In economic terms, droughts can cost many billions of dollars: a drought in the United States in 1988 caused over $40 billion in losses, exceeding the economic totals of Hurricane Andrew, the Great Flood of 1993, and the 1989 Loma Prieta earthquake. In addition to their other severe effects, the dry conditions caused by droughts also significantly increase the risk of wildfires. Heat Waves Although official definitions vary, a heat wave is generally defined as a prolonged period of excessive heat. Although heat waves do not cause as much economic damage as other types of severe weather, they are extremely dangerous to humans and animals: according to the United States National Weather Service, the average annual number of heat-related fatalities is higher than the combined total of fatalities from floods, tornadoes, lightning strikes, and hurricanes. In Australia, heat waves cause more fatalities than any other type of severe weather. As in droughts, plants can be severely affected: heat waves, which are often accompanied by dry conditions, can cause plants to lose their moisture and die. Heat waves are often more severe when combined with high humidity.
Between 1200 and 1535 AD, the Inca population lived in the part of South America extending from the Equator to the Pacific coast of Chile. The beginning of Inca rule started with the conquest of the Moche culture in Peru. The Inca were warriors with a strong and powerful army. Because of their fierceness and their hierarchical organization, they became the largest Native American society. The height of their reign in the 15th century came to a brutal end in 1535 when the Spanish conquistadors took over their territory. Their cities and fortresses were mostly built on the highlands and on the steep slopes of the Andes Mountains. The architecture of the Inca cities still amazes and puzzles most scientists. Steps lead up to the tops of the cities, which consist of stone houses and religious buildings. Some of the stone blocks weigh several tons and are fit together so tightly that not even a razor blade can fit between them. Inca society was arranged in a strict hierarchical structure. There were many different levels, with the Sapa Inca, the high priest, and the army commander at the top. Family members served as councilors to the Sapa, and even women had authority in the Inca hierarchy. The temple priests, architects, and army commanders were next. The two lowest classes consisted of artisans, army captains, farmers, and herders. Farmers provided most of the subsistence for the rest of the population; they also had to pay tax in the form of gold, which was distributed to the higher classes. The Inca understanding of how irrigation can benefit agriculture is evident in their expansion into the highland areas. They developed drainage systems and canals to expand their crops; potatoes, tomatoes, cotton, and coca were among the many crops grown by the Inca. Llamas were used for meat and transportation, and there were enough resources available for everyone, which led to a rapid growth in population. As the population increased and Inca organization grew stronger, they needed protection. They built enormous fortresses on the tops of steep mountains that enabled them to see their enemies and to defend themselves. One of the most famous Inca fortresses, Sacsayhuaman, is located in Cuzco, the Inca Empire's capital. Even though the Inca never had access to the wheel, they built a very sophisticated road system to connect their villages; the roads were paved with flat stones and had barriers to protect the chasqui (messengers) from falling off the cliffs. The highest point in an Inca village was reserved for religious purposes. This point was the closest to the sun, which represented their major god, Inti, the Sun God. The major gods of the Inca represented the moon, the sun, the earth, thunder and lightning, and the sea. Pachamama is the earth goddess, who is the mother of all humans. The Inca had shamans who believed in animal spirits living on earth. Heaven was depicted by the condor, the underworld by the anaconda, and the earthly realm by the puma. The Sun Temple, located at Machu Picchu, Peru, served as a religious calendar that marked the winter and summer solstices. The Inca were not only fierce conquerors; they also had a violent punishment system. Anyone who stole, murdered, or had sex with a Sapa wife or a Sun Virgin could be thrown off a cliff, have their hands cut off or eyes gouged out, or be hung up to starve to death. Prisons were of no use because punishment usually consisted of death. Recent excavations of Inca sites have revealed mummified bodies of Inca royalty.
They have been preserved by ice on the peaks of the Andes. The Incas had an army of 40,000 men. The Spanish force in the Americas, commanded by Francisco Pizarro, had only about 180 men. How could an army of only 180 defeat an army of 40,000? There are three main reasons: 1. Much of the Incan army died as a result of smallpox, which was carried to them by the Spanish. 2. The Spanish conquistadors were able to convince other tribes, already under Incan rule, to side with them and overthrow the Incan Empire. 3. The weapons used by Incan warriors, though effective in tribal warfare, were no match for Spanish arms. By 1535, Inca society was completely overthrown and Pizarro moved the capital from Cuzco to Lima. Chilca Valley: The Chilca Valley lies on the western coast of Peru, between the Pacific Ocean and the Andes Mountains, rising into what is called the puna zone. The puna zone is a barren, wind-swept area of open fields and rolling grasslands at an altitude of about 4000 m. The Chilca Valley was an important traveling route for coastal inhabitants heading to the highlands. Two major sites have been discovered within this area, Tres Ventanas and Kiqche. These two sites are located on the rim of the puna zone. The people of the Andean culture who inhabited this region were hunters and gatherers. They lived in this region from the Early Archaic Period through the Formative Period (8000-4500 BP). Animals such as deer and camelids, which were native to the Andes Mountains, were the prime sources of protein and fat for these Middle Archaic peoples. It wasn't until about 10,000 years ago that this culture began to change into a gardening community. This evolution took place primarily because of the domestication of the camelid and the dwindling population of hunted animals. Other artifacts found at such sites include clam shells and various sea shell objects. These artifacts suggest that the Chilca Valley was a vital travel route from the Pacific coast to the Andes Mountains, implying trade between surrounding communities. Ref: Jennings, Jesse D. Ancient South Prehistoric Hunters of the High Andes. Academic Press, 1990. Chavin de Huantar: Chavin de Huantar was located in Peru and developed around 900 BC, late in the Initial Period. At an elevation of 3,150 m, Chavin de Huantar was situated at the bottom of the Cordillera Blanca's eastern slopes, halfway between the tropical forests and the coastal plains. Sitting at the intersection of major routes, it was in a position to control those routes, increase its exchange with others, and receive goods that were not available in its own area. Chavin de Huantar was an agricultural society, home to a very large population. The Old Temple was built during the late Initial Period and was the "center of supernatural power and authority." It was a U-shaped platform opening to the east with a circular courtyard in the center. The Old Temple also had numerous passageways and chambers underground called galleries. These were used as storage chambers, for religious rites, and possibly for temporary or permanent living quarters for small groups working with temple activities. The Lanzon Gallery is located at the very center of the Old Temple, and it is where the sculpture of the Lanzon was found. The Lanzon, the supreme deity of Chavin de Huantar, is anthropomorphic. With its feline head and human body, it intertwines the feline deity of Chavin de Huantar with the shaman of the pre-Chavin period.
In the pre-Chavin period, the object of worship was the feline, but this gradually changed; by the time of Chavin de Huantar the deity was anthropomorphic. During this time, it was believed that priests could become jaguars and interact with the supernatural forces. This was achieved by taking hallucinogenic drugs as part of rituals at the Old Temple. Many sculptures that decorate the Old Temple depict the transformation of the priests. Mortars, pestles, conch-shell trumpets, and many other items with the anthropomorphic design have been found and are thought to be associated with Chavin rituals. The Chavin culture existed in what is known today as Peru, an area comprising three types of landforms: the Pacific coast, the Andes Mountains, and the Amazon lowlands. These are highly contrasting environments condensed into a small area. Despite these conditions, it is thought that the Chavin civilization arose because of the coexistence of the peoples of these areas. Early Peruvian civilization can be traced back to Asiatic hunters who crossed the Bering land bridge well before 15,000 years ago. It is believed that between 10,000 and 14,000 years ago, some of these people began occupying the Peruvian Andes. About 10,000 years of development took place before any civilization was organized. The best known of these early Peruvian prehistoric civilizations is the Chavin. Chavin is one of Peru's oldest civilizations and laid the cultural foundations of all later Peruvian cultures. It flourished from 900 BCE to 200 BCE, although many elements of the culture can be traced back to about 1,000 years before its start. It is evident that the people of Chavin held religious beliefs, given the many artifacts relating to religious ceremonies that have been excavated. Several objects are believed to have been used in the ceremonial ingestion of hallucinogens, such as small mortars that were used to grind vilca, a kind of hallucinogenic snuff. Other objects used for these purposes are bone tubes and spoons. Those items were decorated with impressions of wild animals associated with shamanistic transformation. The Chavin culture is known for its beautiful art and design, but Chavin was also innovative in metallurgy and textile production. Cloth production was revolutionized during the time of the Chavin. New techniques and materials became popular, including the use of camelid hair, textile painting, the dyeing of camelid hair, and a "resist" painting style similar to modern-day tie-dye. Advances in metallurgy also occurred during the Chavin's time in Peru, such as joining pieces of preshaped metal sheet to form both objects of art and objects for practical use. Soldering and temperature control were also advanced during this time. The Moche lived along the northern Peruvian coastline, where they were relegated to life within the lower river valleys. This environment was rich with clay and metals and gave the cultures of northern Peru the tools to create extensive artistic traditions. Unfortunately, artistic expression is the main way archaeologists have been able to interpret and understand Moche culture, as the people kept no written records and had no written language. The Moche occupation of northern Peru occurred after the gradual demise of the Chavin culture. The demise of Chavin culture ended several centuries of political unification in northern Peru.
As the small states broke away from the unified government and their citizens turned toward a more structured lifestyle, each state that branched off began to develop its own artistic style. Soon, each had created its own huaca, or temple center, around which all city life flourished. These city states were run through a centralized theocratic government system. As the style of the Moche spread and evolved throughout northern Peru, it became the predominant medium of all the states; it lasted for seven centuries, from 100 AD to 800 AD, and underwent five phases of development. The main historical and cultural record of the Moche lies within their expressive artistic style. It represents the ceremony, mythology, and daily life of the Moche people. It depicts everything from sexual acts to ill humans, and even anthropomorphized warriors, deities, and humans. The ceramic work of the Moche took on a highly standardized form. The emphasis on hierarchy and the ceremonial themes of Moche pottery indicate that the people partook of human sacrifice and sexually explicit rites. Moche artisans were also renowned for their use of silver, copper, and gold. Like modern metallurgical styles, they used turquoise inlay techniques as well as simple wax castings. These techniques aided the Moche in making chisels, spear points, fish hooks, digging tools, tweezers, and many other metal goods. The decline of Moche culture came abruptly with the rise of the Chimu culture. However, Moche culture remains a meaningful precursor to many of the other ceramic and artistic forms found throughout South America, and it eventually led to the rise of the great Incan civilization and its artistic endeavors. At first, the Nazca people lived in "oases", terraced hillsides suitable for the construction of irrigation systems. They also drew on the Nazca River, and agriculture flourished in these formerly dry lands. The capital of Nazca, Cahuachi, was 31 miles inland on the south bank of the Nazca River. This place was also farmland, but it became a sacred place and the ceremonial center because of its natural springs. In this sacred place, people gathered to honor their ancestors at a certain time of the year. Mounds, shrines, and at least six pyramids and courts were constructed. Several buildings had walls made of small conical and loaf-shaped adobes and were built of canes bound together and covered with mud. Nazca art emphasizes masked ritual performances, and the rituals are associated with rain, water, and fertility. Human heads were also regarded as valuable trophies. The burial tradition was practiced at Cahuachi. Huddled bodies, some wrapped in cloth, were buried in circular pits or in great deep adobe-lined tombs, both generally covered with logs. Cahuachi was a large cemetery and a place for votive offerings. The Nazca lines are the most striking feature of this culture. These large "geoglyphs" (drawings on the earth's surface) make no sense from the ground; the figures can only be recognized from the air. There are several kinds of figures: birds, fish, monkeys, a whale, spiders, and plants. The lines cover more than 800 miles of ground, and some extend as long as 12 miles. Because these lines are on a flat surface and the climate is extremely dry, nearly all the geoglyphs remain completely intact. Such geoglyphs are not found only at Nazca, but also in other coastal areas: the Zana, Santa, and Sechin Valleys, Pampa Canto Grande, the Sihuas Valley, and northern Chile.
The purpose of the drawings is uncertain, but it is believed to be connected to Nazca beliefs and economic systems. According to anthropologist Johan Reinhard, the Nazca people believed that mountain gods protected humans and controlled the weather. These gods also affected water sources and land fertility, since they are associated with lakes, rivers, and the sea. Each figure might have had a different meaning for the Nazca people. The straight lines, as sacred paths from Nazca to the Andean highlands, are still used to bring water. Today, these lines are maintained for the religious merit of the people. The triangles and trapezoids were made for the flow of water and are placed near the river; people often hold ceremonies beside the flowing water. The spiral figures depict seashells and the ocean, and the zigzag figures illustrate lightning and the river. The bird figures, representing a heron, pelican, or condor, are believed to be signs of faithfulness to the mountain gods. Other sea birds are associated with the ocean. Monkeys and lizards represent the hope for water. Shark or killer whale motifs show the success of fishing. Spiders, millipedes, and plants are associated with rain. Even though the Nazca River was located near this cultural area, its water was not enough to support their agricultural needs. Some questions are still debated among specialists: Why were so many lines necessary? How and why did the people draw such large figures on the ground without any aerial view or aerial equipment? The true meaning of the Nazca lines may never be understood; however, pieces of the traditional Andean people's belief system can be deciphered from these great geoglyphs. Richard F. Townsend, ed., The Ancient Americas. Art Institute of Chicago, 1992. Christopher Scarre, Ancient Civilizations. New York: Longman, 1997. G. H. S. Bushnell, Peru (Ancient Peoples and Places). New York: Frederick A. Praeger, 1957. Luis G. Lumbreras, The Peoples and Cultures of Ancient Peru. Smithsonian Institution Press, 1974. Machu Picchu is a city located high in the Andes Mountains in Peru. It lies 43 miles northwest of Cuzco at the top of a ridge, hidden from the Urubamba Gorge below. The ridge lies between a block of highland and the massive Huayna Picchu, around which the Urubamba River takes a sharp bend. The surrounding area is covered in dense bush, some of it covering pre-Columbian cultivation terraces. Machu Picchu, which means "Old Peak", was most likely a royal estate and religious retreat. It was built between 1460 and 1470 AD by Pachacuti Inca Yupanqui, an Incan ruler. The city sits at an altitude of 8,000 feet, high above the Urubamba River canyon cloud forest, so it likely did not have any administrative, military, or commercial use. After Pachacuti's death, Machu Picchu became the property of his ayllu, or kinship group, which was responsible for its maintenance, administration, and any new building. Machu Picchu comprises approximately 200 buildings, most of them residences, although there are also temples, storage structures, and other public buildings. They have polygonal masonry, typical of the late Inca period. About 1,200 people lived in and around Machu Picchu, most of them women, children, and priests. The buildings are thought to have been planned and built under the supervision of professional Inca architects. Most of the structures are built of granite blocks cut with bronze or stone tools and smoothed with sand.
The blocks fit together perfectly without mortar, although none of the blocks are the same size and many have numerous faces; some have as many as 30 corners. Another unique feature of Machu Picchu is the integration of the architecture into the landscape. Existing stone formations were used in the construction of structures, sculptures were carved into the rock, water flows through cisterns and stone channels, and temples hang on steep precipices. The houses had steep thatched roofs and trapezoid-shaped doors; windows were unusual. Some of the houses were two stories tall; the second story was probably reached by ladders, likely made from rope since there weren't many trees at this altitude. The houses were gathered around a communal courtyard or aligned on narrow terraces connected by narrow alleys. At the center were large open squares; livestock enclosures and terraces for growing maize stretched around the edge of the city. One of the most important things found at Machu Picchu is the Intihuatana, a column of stone rising from a block of stone the size of a grand piano. Intihuatana literally means "for tying the sun", although it is usually translated as "hitching post of the sun". As the winter solstice approached and the sun seemed to disappear a little more each day, a priest would hold a ceremony to tie the sun to the stone to prevent it from disappearing altogether. The other intihuatanas were destroyed by the Spanish conquistadors, but because the Spanish never found Machu Picchu, this one remained intact. Mummies have also been found there; most of the mummies were women. Machu Picchu was rediscovered in 1911 by Hiram Bingham, a professor from Yale. Bingham was searching for Vilcabamba, the undiscovered last stronghold of the Incan Empire. When he stumbled upon Machu Picchu he thought he had found it, although most scholars now believe that Machu Picchu is not Vilcabamba. Hiram Bingham was an American historian from Yale University searching for one of the last Inca cities to resist the Spanish invasion. The historian was driven by the desire to find the last city of the Incas, and he had also heard rumours from the North American rector of Cuzco's university about the existence of uncovered ruins in the Urubamba jungle. Bingham conducted extensive research in the regions of the Urubamba and Vilcabamba, and he made his astonishing discovery on July 24, 1911, when he met a group of Quechuans who were living at Machu Picchu and still using the agricultural terraces there. He was led to the site of the ruins of the six-centuries-old Inca city by a group of locals whom he met in the area. Bingham conducted a survey of the area and completed archaeological studies. Photographs were taken of the ruins, which were still covered with dense vegetation. The vegetation covered all of the buildings; many buildings had collapsed, but most of them were intact. The roofs, of course, were gone because they were made of easily perishable materials, like wood and grass. Even for a while after he discovered Machu Picchu, Bingham thought that it was Vitcos. The excavations started in the second half of 1911 and went on for years until the city we see today in pictures was uncovered. Specialists believe that there are still parts of the city hidden in the ground and vegetation. An ancient Incan cemetery is believed to lie hidden in the ground and vegetation not far from the entrance to the site.
After discovering all three cities, Vitcos, Vilcabamba, and Machu Picchu, Bingham understood that they were three different sites and that Machu Picchu had the highest value to archaeology. He was so passionate about the "Old Peak" (the meaning of Machu Picchu in Quechua) that he almost completely forgot about the other archaeological sites he had come across. Hiram Bingham removed many precious artifacts from Machu Picchu, which is why Peru still makes legal efforts to get back the many thousands of objects removed from its most important archaeological site and tourist attraction. Most of the types of objects taken by Bingham remain unknown. It is not known exactly what economic value they could have or whether their value is purely artistic and historic. After Bingham had discovered Machu Picchu, he had over 5,000 archaeological pieces removed from the site and transported many of them to Yale University. In late 2005, Peru said it would sue Yale in order to retrieve as many archaeological pieces as possible. For hundreds of years, many have searched the Andes to discover the "Lost City of the Incas", otherwise known as Paititi. Some thought, as Bingham did at the beginning, that Machu Picchu was it. But no gold or silver was ever found there. The artifacts removed by Bingham were only copper, stone, and other objects of little material value. However, it is important to note that Machu Picchu was isolated for centuries, so such objects could have been found there. Archaeologically, culturally, and artistically, these are priceless treasures. Perhaps the most important "treasure" was the intact Intihuatana Stone, proof that the conquistadors never found the city (all such stones were destroyed when the Spanish came across them). Hiram Bingham conducted excavations through 1915, and his team cleared most of the ruins. Many articles were written in the 1910s and 1920s about this important discovery, among them "The Lost City of the Incas" and the 1913 issue of National Geographic magazine that was entirely dedicated to Bingham's discovery of Machu Picchu. Many experts say that there are still things yet to be discovered at the site and that tourists are hindering their efforts. For many years, people have speculated that Machu Picchu could have been some sort of secret city of the Incas where they hid their treasures from the Spaniards. This is a false belief, a confusion with the Paititi myth and the El Dorado myth, which themselves differ significantly from each other. Machu Picchu must have been a religious sanctuary or imperial residence. What if Hiram Bingham wasn't first? Some disagree with Bingham's priority claim. Simone Waisbard claims that Enrique Palma, Gabino Sanchez, and Agustin Lizarraga were the ones who discovered the ancient city, leaving their names on one of the rocks there on July 14, 1901. The words "Machu Picchu" actually refer to the peak, not the city, whose real name is unknown. Machu Picchu appears in documents several centuries earlier, but the name refers to the mountain, not to what it is called today. It is also important to remember that Machu Picchu wasn't as remote as we think; it was locals who led Bingham to the site. It is very likely that there were other visitors in the past who either did not recognize or give importance to the site, or who looted it and got away with precious objects.
Hiram Bingham certainly wasn't the first, but he was the only specialist of the day able to recognize the importance of Machu Picchu and let the world know about it. The ruins of Vitcos, one of the last cities held by the Incas, are located close to Vilcabamba, which was the last stronghold held by the Incas. In fact, both are found in the Vilcabamba Valley, and some may confuse the two. The Last Stronghold of the Incas: There are several Vilcabambas in South America that must not be confused. One of them is the Vilcabamba Valley in southwestern Ecuador, where a city with the same name was founded by Luis Fernando de la Vega on September 1, 1756; this place is 42 km from the city of Loja. The other two are in Peru: one is the Vilcabamba Valley and the other a ruined city within it. The Vilcabamba Inca city is the subject of many tales and myths about hidden treasures. Constructed in 1539, the city was crushed by the Spanish army only in 1572, signalling the end of Inca resistance to Spanish rule. At that time, the Inca ruler was Tupac Amaru, who was eventually caught and killed by the conquistadors. The Incas raised over 100,000 troops, but soon found themselves on the run through the valleys towards the deep Amazonian jungle, where they could not fight the invaders. The Spaniards reached the stronghold of Vilcabamba and destroyed it with ease. The city was burned, and soon its location was forgotten for centuries. The ruins of the city were discovered by Hiram Bingham in 1911 when he was searching for another lost city called Vitcos. Bingham failed to notice the importance of Vilcabamba, thinking that Machu Picchu was the "Lost City of the Incas". The location was explored later by Antonio Santander and Gene Savoy in 1960 and later still by Vincent Lee and John Hemming. Today, not much can be seen where Vilcabamba once stood except for a few rocks. The legends of hidden Inca treasure are based on real events. When the Spaniards invaded Peru, the Incas were literally ripped off: their temples were looted and their buildings demolished. Pizarro's greedy soldiers were only looking for gold, silver, and other precious objects, and when they laid their hands on them, they melted them down into pieces that were easier to transport and sell; the artistic value was lost forever. The Incas had every reason to hide their treasures from the conquistadors, and so they did. When Atahualpa was a prisoner, he offered Pizarro enough gold and silver to fill a large room. Pizarro lied to him about letting him go after the treasures arrived. Instead of releasing him, he asked him where the treasures came from. Atahualpa was smart enough not to reveal the location, but rather pointed out places where smaller amounts could be found, never the larger ones. Even worse, the Spaniards mixed the gold with less precious metals such as iron; the alloys did not have as much value as pure gold, and it was difficult for them to sell. Atahualpa was not released but murdered by Pizarro's men. The Spaniards never got the large quantities of gold, no matter how much they searched. Today, many adventurers, explorers, and specialists believe that the treasures are still out there somewhere in the Andes, hidden by the Incas in some remote secret city. The "El Dorado" Legend: Spanish conquistador Francisco Pizarro was thrilled by the idea of finding "El Dorado", the "Land of Gold", a place of unimaginable riches hidden in the Andes. The legend of El Dorado was spread by the Spanish and referred to lands within what are now Colombia and Venezuela.
Some mix the myth of El Dorado with the stories about the Peruvian Inca treasures. The El Dorado legend originated with the Muisca people, who lived in what today is Colombia and Venezuela. These people have a legend about a golden person ("El Dorado"), who had golden skin and took large amounts of gold with him to a lake. That lake is Lago Guatavita, a crater lake in the Colombian Andes. The Spanish took the Muisca legend literally and tried to drain the lake and collect the gold in it. They chopped out part of the crater's wall in a V-shape which can still be seen today. So El Dorado is not an Inca legend about treasures, nor does it have any connection with the conquest of Peru. The El Dorado story was spread by conquistador Vasco Nunez de Balboa, who had heard the Muisca legend; the story spread further and captured Pizarro's imagination.
Jewish settlements in Ukraine can be traced back to the 8th century. During the period of the Khazar kingdom, Jews lived on the banks of the River Dnieper and in the east and south of Ukraine and the Crimea. The kingdom was considered one of the most influential of the medieval period because of its economic and diplomatic standing. The Khazars, an ancient nomadic Turkic people who reached the lower Volga region in the 6th century, were held in high esteem by the pope and other national leaders and played a major role in solving the region's conflicts. The Khazar Empire, at its height between the 8th and 10th centuries, extended from the northern shores of the Black Sea and the Caspian Sea as far west as Kiev. Jewish refugees from Byzantium, Persia, and Mesopotamia, fleeing persecution by Christians throughout Europe, settled in the kingdom because the Khazars allowed them to practice their own religion. Over time, Jews integrated into the society and married Khazar inhabitants. At first, Khazars from royal families converted to Judaism, but other citizens from throughout the kingdom soon followed suit, adopting Jewish religious practices including reading the Torah, observing the Sabbath, keeping kosher, and switching to Hebrew as the official written system. At a time of religious intolerance, the Jews of Khazaria contributed to building a powerful nation while living in peace. The Jews of Khazaria may have been among the founders of the Jewish community of Poland and of other communities in Eastern Europe. In 965 A.D., however, the Khazar Empire suffered a blow when the Russians ransacked its capital. In the middle of the 13th century (1241), the region was overrun by the Mongol invasion, an invasion that also devastated all of Poland. To rebuild the country and defend its cities, Poland recruited immigrants from the west, mainly Germany, promising to help them settle in villages and towns. German Jews, many of whom had been massacred by Christian crusaders in the 1200s and devastated by the Black Death in the 1300s, immigrated to Poland. Jews in Poland shared a heritage with the new immigrants, but not a language. To communicate with one another, Jews in Poland created a common language, Yiddish. Made up of a combination of Middle German, Hebrew, Polish, and German-Hebrew elements, Yiddish became the Ashkenazi national Jewish language. Later, Jews from the western provinces of Poland moved to Ukraine because of the economic opportunities created by Poland's expanding influence, which increased even more in the 16th century with the consolidation of Poland-Lithuania over the region. By the end of the 15th century, between 20,000 and 30,000 Jews were living in 60 communities throughout Poland-Lithuania, most of them in cities. Ukraine became the center of Jewish life in Poland-Lithuania. The first Jews arrived in the territory of modern Poland in the 10th century. Travelling along the trade routes leading eastwards to Kiev and Bukhara, Jewish merchants (known as Radhanites) crossed the areas of Silesia. One of them, a diplomat and merchant from the Moorish town of Tortosa in Spanish Al-Andalus, known under his Arabic name of Ibrahim ibn Jakub, was the first chronicler to mention the Polish state under the rule of prince Mieszko I. The first actual mention of Jews in Polish chronicles occurs in the 11th century.
It appears that Jews were then living in Gniezno, at that time the capital of the Polish kingdom of the Piast dynasty. The first permanent Jewish community is mentioned in 1085 by the Jewish scholar Jehuda ha-Kohen in the city of Przemyśl. From the Middle Ages until the Holocaust, Jews comprised a significant part of the Polish population. The Polish-Lithuanian Commonwealth, known as a "Jewish paradise" for its religious tolerance, attracted numerous Jews who fled persecution in other European countries, even though, at times, discrimination against Jews surfaced as it did elsewhere in Europe. Poland was a major spiritual and cultural center for Ashkenazi Jewry, and Polish Jews made major contributions to Polish cultural, economic, and political life. At the start of the Second World War, Poland had the largest Jewish population in the world (over 3.3 million), the vast majority of whom were killed by the Nazis in the Holocaust during the German occupation of Poland, particularly through the implementation of the "Final Solution" mass extermination program. Only 369,000 (11%) survived. After massive postwar emigration, the current Polish Jewish population stands at somewhere between 8,000 and 20,000. LIST OF CITIES Bukowsko (Bikofsk), 49°31′10″ North, 22°02′30″ East. Bukowsko [buˈkɔfskɔ] (Yiddish: בוקאווסק Bikofsk, Russian: Буковско) is a village in Sanok County, Subcarpathian Voivodeship, Poland. It is in the Bukowsko Upland mountains, a parish in loco, located near the towns of Medzilaborce and Palota (in northeastern Slovakia). During the Polish–Lithuanian Commonwealth it lay in the Lesser Poland prowincja. Bukowsko is the administrative and cultural centre of the Gmina Bukowsko. It is crossed by the railroad connecting it with Slovakia. The private sector and service industries in particular are developing rapidly at this time. It is home to the Uniwersytet Ludowy, opened in 2005, which contains many artworks and objects inspired by folk handicrafts. Bukowsko is situated in the poorest region of Poland. Settled in prehistoric times, the southeastern Polish region that is now Podkarpacie was overrun in pre-Roman times by various tribes, including the Celts, Goths, and Vandals (Przeworsk culture). After the fall of the Roman Empire, of which most of southeastern Poland was part (all parts below the San), the area was invaded by Hungarians and Slavs. The region subsequently became part of the Great Moravian state. Upon the invasion of the Hungarian tribes into the heart of the Great Moravian Empire around 899, the Lendians of the area declared their allegiance to the Hungarian Empire. The region then became a site of contention between Poland, Kievan Rus, and Hungary starting in at least the 9th century. The area was mentioned for the first time in 981 (by Nestor), when Volodymyr the Great of Kievan Rus took it over on his way into Poland. In 1018 it returned to Poland, in 1031 it went back to Rus, and in 1340 Casimir III of Poland recovered it. In historical records the village was first mentioned in 1361. During 966-1018, 1340-1772 (Ruthenian Voivodeship), and 1918-1939 Bukowsko was part of Poland, while during 1772-1918 it belonged to the Austrian Empire, later the Austro-Hungarian Empire after the dual monarchy was introduced in Austria. This part of Poland was controlled by Austria for almost 150 years. At that time the area (including the west and east of the Subcarpathian Voivodship) was known as Galicia. It was given Magdeburg law in 1768.
In 1785 the village lands comprised 6.5 km2 (2.5 sq mi). There were 700 Catholics. In 1864 Rabbi Shlomo Halberstam was appointed as rabbi of the Jewish community of Bukowsko. He held this position until 1879. After the Nazis had captured the town, Jewish homes and shops were robbed by Ukrainians from neighbouring towns. In the spring of 1942, 804 Jews of Bukowsko and over 300 of the surrounding villages were put into a ghetto. Out of that number over 100 were shot at the local Jewish cemetery. The rest were transported to the Zwangsarbeitslager (forced labour camp) in Zaslaw. None of the prayer houses survived the war. Only a few matzevahs remained in the cemetery. Bukowsko also had a labour camp, which existed from August to October 1942. The Jews, 60 on average, carried out road construction. The village was burned down in January, March and November 1946. Only over a dozen years after the war did the village begin to rebuild. 49´15´´94 North, 25´70´´47 East Budaniv (Ukrainian: Буданів, Polish: Budzanów) is a village in Ternopil Oblast, Western Ukraine, near Chortkiv and Buchach. Population: 1,634 (2005). The settlement was founded in 1549 on the banks of the Seret River. The village was named after a Polish nobleman, Jakub Budzanowski, Halitz voevode. The mountainous terrain of the region always attracted new settlers, and about 1550 a wooden castle was built on the peak of one of the hills. The castle was rebuilt at the beginning of the 17th century. The castle was ruined by the Turks in 1675. In 1765 Maria Potocka, a Polish countess, founded a Catholic church on the castle's ruins. Leżajsk, ליזשענסק , Lizhensk [ˈlɛʐai̯sk] 50.16 North, 22.26 East Full name The Free Royal Town of Leżajsk, Polish: Wolne Królewskie Miasto Leżajsk, Yiddish: ליזשענסק-Lizhensk is a town in southeastern Poland with 14,127 inhabitants (as of 2 June 2009). It has been situated in the Subcarpathian Voivodship since 1999 and is the capital of Leżajsk County. Leżajsk is famed for its Bernardine basilica and monastery, built by the architect Antonio Pellacini. The basilica contains a highly regarded pipe organ from the second half of the 17th century, and organ recitals take place there. Leżajsk is also home of the Leżajsk brewery. The Jewish cemetery in Leżajsk is a place of pilgrimage for Jews from all over the world, who come to visit the tomb of Elimelech, the great 18th century Orthodox rabbi. The town is crossed by a forest creek, ‘Jagoda’. Lviv, Lwów, Lvov, and Lemberg, Львів 49º 51 North, 24º 01' East. - Sister cities Corning, Freiburg, Grozny, Kraków, Novi Sad, Przemyśl, Saint Petersburg, Whitstable, Winnipeg, Rochdale - Website http://lviv.travel/en/index (English) - http://www.city-adm.lviv.ua (Ukrainian) The city is regarded as one of the main cultural centres of today's Ukraine and historically has also been a major Polish and Jewish cultural center, as Poles and Jews were the two main ethnicities of the city until the outbreak of World War II and the following Holocaust and Soviet population transfers. The historical heart of Lviv, with its old buildings and cobblestone roads, survived World War II and the ensuing Soviet presence largely unscathed. The city has many industries and institutions of higher education such as the Lviv University and the Lviv Polytechnic. Lviv is also home to many world-class cultural institutions, including a philharmonic orchestra and the famous Lviv Theatre of Opera and Ballet. The historic city centre is on the UNESCO World Heritage List. 
Lviv celebrated its 750th anniversary with a son et lumière in the city centre in September 2006. Lviv was founded in 1256 in Red Ruthenia by King Danylo Halytskyi of the Ruthenian principality of Halych-Volhynia, and named in honour of his son, Lev. Together with the rest of Red Ruthenia, Lviv was captured by the Kingdom of Poland in 1349 during the reign of Polish king Casimir III the Great. Lviv belonged to the Crown of the Kingdom of Poland 1349-1772, the Austrian Empire 1772–1918 and the Second Polish Republic 1918–1939. With the Invasion of Poland at the outbreak of the Second World War, the city of Lviv, together with the adjacent lands, was annexed and incorporated into the Soviet Union, becoming part of the Ukrainian Soviet Socialist Republic from 1939 to 1941. Between July 1941 and July 1944 Lviv was under German occupation and was located in the General Government. In July 1944 it was captured by the Soviet Red Army and the Polish Home Army. According to the agreements of the Yalta Conference, Lviv was again integrated into the Ukrainian SSR. After the collapse of the Soviet Union in 1991, the city remained a part of the now independent Ukraine, for which it currently serves as the administrative centre of Lviv Oblast, and is designated as its own raion (district) within that oblast. On 12 June 2009 the Ukrainian magazine Focus assessed Lviv as the best Ukrainian city to live in. Its more Western European flavor lends it the nickname the "Little Paris of Ukraine". Bratslav (Breslov) Nemirov, Nemyriv, Немирів, Peace Island City Nemyriv is one of the oldest cities in Vinnytsia Oblast, Ukraine. It was founded by Prince Nemyr in 1390. It is a minor industrial center with a current estimated population of around 10,000. Nemyriv was built on the site of the ancient Scythian settlement Myriv, destroyed during the Mongol invasion of Rus. It was first mentioned under its modern name in 1506. 49´27 N, 27´25 East Medzhybizh is first mentioned in chronicles as an estate in Kievan Rus. It was given to Prince Svyatoslav by the prince of Kiev in the year 1146. In 1148, ownership transferred to Rostyslav, the son of Yuri Dolgoruky. The wooden fortress that stood there was destroyed in 1255. After the Mongol incursion, by 1360, the town and surrounding territory passed into the hands of the Lithuanians. The town suffered from numerous attacks by the Tatars in 1453, 1506, 1516, 1546, 1558, 1566, and 1615. In 1444 the town was incorporated into lands administered by Poland. In the 16th century, the territory was controlled by the Sieniawski and Potocki Polish noble families. In 1511 work began to replace the wooden palisades with massive stone fortifications, many of which can still be seen today. A dam was built across the Southern Bug river to provide a defensive lake, and a rhomboid Medzhybizh Castle with four towers was built. The state-of-the-art fortifications made Medzhybizh one of the strongest military sites in the region and led to the rise of its prosperity in the next three centuries. In 1571 a census was recorded, listing the population as being made up of 95 Ruthenians, 35 Jews, and 30 Poles. In 1593 Adam Sieniawski gave the town Magdeburg rights. In the mid-16th century the Zasławski family, a Polish noble family, turned Medzhybizh into an impregnable fortress. The Zaslavskys used Medzhybizh as their base from which to defend the southern borders from the incursions of the Ottoman Turks and Crimean Tatars. 
Jewish history and culture: Medzhybizh was the center of Jewish culture in its region in Ukraine. The first records of Jews in Medzhybizh date back to the early 16th century. These records state that various Jews were granted special privileges by the Polish kings, including a proclamation in 1566 by King Sigismund II Augustus that the Jews of Medzhybizh were exempt from paying taxes in perpetuity. The earliest known burial in the Jewish cemetery dates from 1555. Many key rabbinic leaders lived in Medzhybizh during the 17th through 20th centuries. The earliest important rabbi to make Medzhybizh home was Rabbi Joel Sirkes (1561–1640), a key figure in Judaism at that time. He lived in Medzhybizh from 1604 to 1612. The most important rabbi from Medzhybizh was Rabbi Israel ben Eliezer, the Baal Shem Tov or Besht (1698–1760), the founder of Hasidism. He lived in Medzhybizh from about 1742 until his death in 1760. His grave can be viewed today in the Medzhybizh old Jewish cemetery. The Baal Shem Tov is considered one of the key Jewish personalities of the 18th century who shaped Judaism into what it is today. His work led to the founding of the Hasidic movement, established by his disciples, some of whom also lived in Medzhybizh, but most of whom traveled from all over Eastern Europe, sometimes from great distances, to visit and learn from him. In Medzhybizh, the Baal Shem Tov was also known as a "doktor" and healer to both Jews and non-Jews. He was known to have been given a special tax-free dispensation by the Czartoryski family, and his house shows up on several town censuses. There were two fundamentally different groups of rabbinic leaders in the town, those who were Hasidic and those who were not. In general, both groups got along, but the followers of the Hasidic leaders believed they had a special connection with God and were cult-like in their devotion to their "rebbe". The non-Hasidic leaders tended to follow a scholarly path and were more responsible for the Jewish institutions, such as observance of kashrut, the social structure of the town, liaison with the town's nobles, and control of the Jewish court. Hasidic leaders included Rabbi Boruch of Medzhybizh (1757–1811), the Baal Shem Tov's grandson. Rabbi Boruch was notable for his principle of malkhus ("royalty") and conducted his court accordingly. He was also known for his "melancholy", and he had a fiery temper. Many of his grandfather's disciples and the great Hasidic leaders of the time regularly visited Rabbi Boruch, including the Magid of Chernobyl, the Magid of Mezritch, Rabbi Shneur Zalman of Liadi (founder of the Chabad Hasidic movement), and others. In an attempt to remedy Rabbi Boruch's melancholy, his followers brought in Hershel of Ostropol as a "court jester" of sorts. Hershel was one of the first documented Jewish comedians, and his exploits are legendary within both the Jewish and non-Jewish communities. Hershel is also buried in the old Jewish cemetery in Medzhybizh, though his grave is unmarked. One legend has it that in a fit of rage Rabbi Boruch himself was responsible for Hershel's death. Rabbi Nachman of Breslav (1772–1810), the Baal Shem Tov's great-grandson, was born in Medzhybizh but left at an early age. He became the founder of the Breslover Hasidim. Another Hasidic leader, Rabbi Avraham Yehoshua Heshel of Apta, the Apter Rov (1748–1825), made Medzhybizh his home from 1813 until his death in 1825. The Apter Rov is also buried in the old Jewish cemetery in Medzhybizh, very close to the Baal Shem Tov's grave. 
The Heshel family became one of the foremost Hasidic rabbinic dynasties, and various descendants remained in Medzhybizh well into the 20th century. The non-Hasidic rabbinic leadership of Medzhybizh was controlled by the Rapoport-Bick dynasty, the most important of all the non-Hasidic rabbinic dynasties of Medzhybizh. Rabbi Dov Berish Rapoport (d. 1823) was the first to make Medzhybizh his home. He was the grandson of Rabbi Chaim haCohen Rapoport of Lviv (d. 1771), a notable sage during the mid 18th century who was also involved in the Frankist debates. Dov Berish Rapoport's grave can be seen today at the old Jewish cemetery in Medzhybizh. Other rabbis of this dynasty include Rabbi Isaac Bick (1864–1934), who immigrated to America in 1925 and founded a synagogue in Rhode Island. Rabbi Chaim Yekhiel Mikhel Bick (1887–1964) was the last known rabbi to reside in Medzhybizh. He left Medzhybizh for New York in 1925. It is not known whether Medzhybizh had another rabbi when it served as a Jewish ghetto in World War II. The Rapoport dynasty traces its roots back to Rabbi Jacob Emden (1697–1776), who was involved in the Frankist debates, and his father Rabbi Tsvi Hirsh Ashkenazi, known as the Chacham Tsvi (1660–1718). The Rapoports themselves are a long distinguished rabbinic family who trace their roots back to Central Europe and Northern Italy in the 15th century. Rabbi Dov Berish became the head of the Jewish court (Av Beth Din) and leader of the entire Jewish community of Medzhybizh. However, after a dispute around the year 1800 with Rabbi Moshe Chaim Ephraim, the Baal Shem Tov's grandson, the non-Hasidic and the Hasidic communities separated into two leadership groups. The Rapoport/Bick family continued to control the town's Jewish religious court. The Hasidic community at the time chose Rabbi Issachar Dov-Ber Landa to represent them in official matters. Interestingly, both Rabbis Rapoport and Landa are buried side by side in the Medzhybizh Jewish cemetery, just a few steps away from the Baal Shem Tov's grave. Jewish institutions in Medzhybizh: Medzhybizh was home to at least two synagogue buildings and numerous small minyanim. One synagogue still stands today but is used for other purposes. It was the synagogue of R. Avraham Yehoshua Heshel, the Apter Rov. In early 2008, it was bought by the Ohalei Zaddikim organization and is slated for reconstruction. The other synagogue, the Baal Shem Tov's old wooden synagogue, was torn down for firewood during World War II. It has recently been rebuilt according to plan. Medzhybizh also contains two Jewish cemeteries. The old Jewish cemetery contains the grave of the Baal Shem Tov and other famous and notable Jews. It has turned into something of a tourist attraction, a magnet for Hasidic Jews from all over the world. The new Jewish cemetery has graves from the early 19th century through to the 1980s. A Nazi mass killing site outside of town holds the graves of almost 3,000 Jews in three different trenches. 48°45′N, 30°13′E Uman is a city located in the Cherkasy Oblast (province) in central Ukraine, to the east of Vinnytsia. The city rests on the banks of the Umanka River and serves as the self-governing administrative center of the Umanskyi Raion (district). 
Stanislawczyk, Stanisławczyk, Stanislawczy 49°44′ N, 22°51′E / 49.733, 22.85 Stanisławczyk is a village in Poland located in the Subcarpathian Voivodeship, in Przemyśl County, in the Gmina Przemyśl. History: The locality was founded as a town towards the end of the seventeenth century by the castellan of Lviv, Jan Stanisław Fredro. Stanisławczyk did not have suitable conditions for development. It was also repeatedly damaged by floods of the river Wiar. As a result, already by the end of the eighteenth century the settlement had been reduced to the status of a village. In 1914 it was razed to the ground during preparations for the defence of Przemyśl. Dmytro Karwanskyj came from Stanisławczyk. Tarnopol, Ternopil, Тернопіль, Тернополь, 49' 34 N, 25' 36' E Tarnopol is a city in western Ukraine, located on the banks of the Seret River. Ternopil is one of the major cities of Western Ukraine and the historical region of Galicia. The city was founded in 1540 by Jan Amor Tarnowski as a Polish military stronghold and a castle. In 1544 the Ternopil Castle was constructed and repelled its first Tatar attacks. In 1548 Ternopil was granted city rights by king Sigismund I of Poland. In 1567 the city passed to the Ostrogski family. In 1575 it was plundered by Tatars. In 1623 the city passed to the Zamoyski family. In the 17th century the town was almost wiped from the map in the Khmelnytsky Uprising, which drove out or killed most of its Jewish residents. Ternopil was almost completely destroyed by Turks and Tatars in 1675 and rebuilt by Aleksander Koniecpolski, but did not recover its previous glory until it passed to Marie Casimire, the wife of king Jan III Sobieski, in 1690. The city was later sacked for the last time by Tatars in 1694, and twice by Russians in the course of the Great Northern War in 1710 and the War of the Polish Succession in 1733. In 1747 Józef Potocki invited the Dominicans and founded the beautiful late baroque Dominican Church (today the Cathedral of the Immaculate Conception of The Blessed Virgin Mary of the Ternopil-Zboriv eparchy of the Ukrainian Greek Catholic Church). The city was thrice looted during the Confederation of Bar (1768–1772), by the confederates themselves, by the king's army and by Russians. In 1770 it was further devastated by an outbreak of smallpox. In 1772 the city came under Austrian rule after the First Partition of Poland. At the beginning of the 19th century the local population put great hope in Napoleon Bonaparte; in 1809 the city came under Russian rule, which created the Ternopol krai there. In 1815 the city (then with 11,000 residents) returned to Austrian rule in accordance with the Congress of Vienna. In 1820 Jesuits expelled from Polatsk by Russians established a gymnasium in the town. In 1870 a rail line connected Ternopil with Lviv, accelerating the city's growth. At that time Ternopil had a population of about 25,000. 
After the dissolution of the Austro-Hungarian Empire, the city was proclaimed part of the West Ukrainian People's Republic on 11 November 1918. During the Polish-Ukrainian War it was the country's capital from 22 November to 30 December, after Lviv was captured by Polish forces. After the act of union between the Western-Ukrainian Republic and the Ukrainian People's Republic (UPR), Ternopil formally passed under the UPR's control. On 15 July 1919 the city was captured by Polish forces. In 1920 the exiled Ukrainian government of Symon Petlura accepted Polish control of Ternopil and of the entire area in exchange for Polish assistance in the restoration of Petlura's government in Kyiv. This effort ultimately failed, and in July and August 1920 Ternopil was captured by the Red Army in the course of the Polish-Soviet War and served as the capital of the Galician Soviet Socialist Republic. By the terms of the Treaty of Riga that ended the Polish-Soviet War, Soviet Russia recognized Polish control of the area. From 1922 to September 1939, it was the capital of the Tarnopol Voivodeship, which consisted of 17 powiats. The policies of the Polish authorities, especially the assimilationist ethnic policies, affected all spheres of public life. In 1939 it was a city of 40,000; 50% of the population was Polish, 40% Jewish and 10% Ukrainian. During the Polish Defensive War it was annexed by the Soviet Union and attached to the Ukrainian Soviet Socialist Republic. The Soviets continued the campaign against the Organization of Ukrainian Nationalists, aided by information given to them by the former Polish authorities. The Soviets also carried out mass deportations of the Polish part of the population to Kazakhstan. In 1941 the city was occupied by the Germans, who continued exterminating the population by murdering the Jews and sending others to forced labour in Germany. In April 1944 the city was retaken by the Red Army, the remaining Polish population having been previously expelled. During the Soviet reoccupation in March and April 1944, the city was encircled and completely destroyed. In March 1944 the city had been declared a fortified place by Adolf Hitler, to be defended until the last round was shot. The stiff German resistance caused extensive use of heavy artillery by the Red Army, resulting in the complete destruction of the city and the killing of nearly all German defenders (55 survivors out of 4,500). Unlike many other occasions, where the Germans had practised a scorched earth policy during their withdrawal from territories of the Soviet Union, the devastation was caused directly by the hostilities. After the war, Ternopil was rebuilt in typically Soviet style. Only a few buildings were reconstructed. Polish Jews settled in Ternopil beginning at its founding and soon formed a majority of the population. During the 16th and 17th centuries there were 300 Jewish families in the city. The Great Synagogue of Ternopil was built in Gothic Survival style between 1622 and 1628. Among the towns destroyed by Bohdan Khmelnytsky during his march from Zolochiv through Galicia was Tarnopol, the large Jewish population of which carried on an extensive trade. Shortly afterward, however, when the Cossacks had been subdued by John III of Poland, the town began to prosper anew, and its Jewish population exceeded all previous figures. It may be noted that Hasidism at this time dominated the community, which opposed any introduction of Western culture. 
During the troubled times in the latter part of the eighteenth century, the city was stormed (1770) by the adherents of the Confederacy of Bar, who massacred many of its inhabitants, especially the Jews. After the second partition of Poland, Ternopil came under Austrian domination and Joseph Perl was able to continue his efforts to improve the condition of the Jews there, which he had begun under Russian rule. In 1813 he established a Jewish school which had as its chief object the instruction of Jewish youth in German as well as in Hebrew and various other subjects. The controversy between the traditional Hasidim and the modernising Maskilim which this school caused resulted four years later in a victory for the latter, whereupon the institution received official recognition and was placed under communal control. Starting in 1863, the school policy was gradually modified by Polish influences, and very little attention was given to instruction in German. The Tempel für Geregelten Gottesdienst, opened by Perl in 1819, also caused dissensions within the community, and its rabbi, S. J. Rapoport, was forced to withdraw. This dispute also was eventually settled in favour of the Maskilim. As of 1905, the Jewish community numbered 14,000 in a total population of 30,415. The Jews controlled the active import and export trade with Russia through the border city of Podwoloczyska. In 1941, 500 Jews were murdered on the grounds of Ternopil's Christian cemetery by local inhabitants using weapons borrowed from a German army camp. According to interviews conducted by a Roman Catholic priest, Father Patrick Desbois, some of the bodies were decapitated. One woman described how her mother would "finish off" wounded Jews with a shovel blow to the head before burying them. Trembowla, Terebovlia, Теребовля, Terebovlya 49´16´60 North, 25'41'60 East In 1929 there were 7,015 people (mostly Polish, Ukrainian and Jewish). Prior to the Holocaust the city was home to 1,486 Jews. Most of the local Jews (1,100) were shot by Germans in the nearby village of Plebanivka on April 7, 1943. Terebovlia (English: Trembovl) is one of the oldest cities in present western Ukraine. The city is quite ancient, and during the Red Ruthenia times it was the center of the Terebovlia principality. It was called Terebovl (Polish: Trembowla). The Terebovlia principality included lands of the whole south east of Galicia, Podolia and Bukovyna. The city was first mentioned in chronicles in the year 996. When the Polish King Casimir III the Great became the suzerain of Halych after the death of his cousin, Boleslaw-Yuri II of Galicia, the city became part of the Polish domain, but it became fully incorporated into Poland only in 1430 under king Władysław II Jagiełło, while his son Casimir IV Jagiellon granted the town limited Magdeburg Rights. After the construction of a castle in 1366, Poland's Podole Voivodeship administered Terebovlia, and it became part of the system of border fortifications of the Polish Kingdom and later the Polish-Lithuanian Commonwealth, against Moldavian and Wallachian incursions and later also against constant Crimean Tatar, Turkish and Zaporozhian Cossack invasions from the south and south-east. That is why Terebovlya castle, monastery and churches were all designed as defensive structures. This was the seat of the famous starost and most successful 16th century anti-Tatar Polish commander, Bernard Pretwicz, who died there in 1563. In 1594, the Ukrainian Cossack rebel Severyn Nalyvaiko sacked the town. 
During the Khmelnytsky Uprising Terebovl became one of the centers of the struggle in the Podolia lands. The city was frequently raided by the Crimean Tatars and Turks and their erstwhile allies, the Zaporozhian Cossacks. In 1675 the Ottoman Army destroyed the town, but the castle was held by a small group of defenders (80 soldiers and 200 townsmen) until their king Jan III Sobieski arrived to relieve them, an episode known as the Battle of Trembowla. The castle was destroyed during the final Turkish invasion of 1688. The Bar Confederacy was declared here in 1768. After the first partition of Poland (1772) Terebovlia became part of the Austrian Empire (until 1918); then, after the Polish-Ukrainian War and the Polish-Soviet War, it was again part of Poland (1918–1939); then the Soviet Union held the city, along with eastern Poland, until the German invasion in 1941; the Soviet Union took the town over again (1944–1991), when it became part of Soviet Ukraine; and in 1991 Terebovl finally became part of an independent Ukraine. Horodenka 48.40.00 North, 25.30.00 East The current estimated population is around 9,800 (as of 2001). During World War II the Jewish population of Horodenka, comprising about half of the town's population, was shot and killed in a mass grave by the Nazis. About a dozen Jews survived and formed a partisan combat unit which fought against the Nazis and hid in the forests. Famous people from Horodenka - Nicholas Charnetsky (1884–1959), Ukrainian Catholic bishop and martyr. - Salo Flohr, chess grandmaster - Marie Ljalková, sniper in the Soviet army - Leonard Lyons, U.S. newspaper columnist - Morris Orodenker, U.S. music critic with an Americanized version of the surname "Horodenka" - Rabbi Nachman of Horodenka, a disciple of the Baal Shem Tov and grandfather of Rebbe Nachman of Breslov - Aleksander Topolski, soldier, architect, and writer, author of "Without Vodka" - Alexander Granach (Jessaja Szajko Gronish), leading stage and film actor in Weimar Germany, died at 52 while establishing himself in Hollywood and on Broadway. Author of the autobiography There Goes an Actor [new edition: From the Shtetl to the Stage: the Odyssey of a Wandering Actor.] - Elias Jubal (born Benno Neumann, 12 January 1901), theatre director and founder of the Kellertheater "Theater für 49" in Vienna. The Villages Around Horodenka by Dov Mossberg. Translated by Yehudis Fishman In a general way, I am considered a son of the city of Horodenka, rather than of the surrounding areas, since I spent most of my youth in that city and returned there after the First World War. However, I was born in one of the surrounding villages called Semenovka, and I spent my childhood years in another nearby village called Stetseva. I have memories of the home of my father and grandfather, who were both villagers most of their lives. Now that the chapter of the history of Judaism in Galicia has been sealed, the chapter about the Jewish villagers and settlers and how they struck roots in a strange environment among the Ukrainian population is worth recreating with some of the impressions and experiences of a young Jewish village boy. The district of Horodenka encompassed forty-eight villages, and in them the Ukrainian population was about twenty thousand people. In almost all of these villages there lived several Jewish families, who with great strength in their souls uprooted themselves from the city centers to seek their livelihood in the villages. 
However, it wasn't easy for Jews to give up the conveniences, security, and warmth of being with the city folk to take upon themselves the loneliness and alienation of living among a primitive and envious folk. This isolation intensified the feeling of exile, and they acquired the taste of exile within exile. However, only a select few were able to continue their lives year after year to maintain the genealogy of the Jewish villager, who represented a special type of person in the chapter of life of Jews in exile. The first Jew to come to the village was generally someone who rented a saloon. This way of earning a livelihood was harsh and bitter and demanded constant interaction with the non-Jews who gathered there during their holidays. They often became intoxicated, and more than once, fistfights broke out among them. There were also threats directed toward Jews, whom they hated intensely. The loneliness that oppressed the Jews during weekdays was intensified sevenfold on the Sabbaths and holidays. On those days, the villager was forced to forgo being able to pray in a congregation, to hear kedusha and barchu (prayers which can't be said alone [trans.]) from the cantor, and had to be satisfied with an “orphaned” and grieving prayer. And who can describe the great pain of raising children in the village! There were two possibilities open to the individual Jewish villager in educating his children, and both involved great expense: to send his children to a nearby city to attend the cheder there, or to hire a teacher, whom he would have to pay, to come to his home. Under these harsh conditions, the villager had to be satisfied with a very minimal education for his children – to be able to read Hebrew from the prayer book. The father had to watch with a painful heart as his son grew up to be an am haaretz, an ignorant person. We also cannot minimize the effect of the non-Jewish environment to which both their sons and daughters were drawn. More than once, this attraction ended up tragically, with children changing their religion, leaving their parents' home, and casting a stain upon the entire family: fathers did not want to forgive the child who betrayed her family. Sometimes an outside Jew would drop into this special environment. He might have been a wanderer going from city to city knocking on the doors of philanthropists. I still recall one who was graciously made welcome in the home of a Jewish villager, and honored with a wholesome meal and a place to stay over. The entire household would then try to get close to him and drink his words with thirst. Sometimes he would bring regards from the host's relatives or friends, whom he came across in his meanderings. Other times, he would just convey news and information about what was happening in nearby villages or in the “big wide world.” Often the traveler would be a Torah scholar, who transmitted a God-fearing air. In the middle of conversing with him, the host might remember something from his childhood learning, and would hold tight to a brief teaching or story from the guest that he had never heard before. In the morning, after prayer and breakfast, the traveler would go on his way with a generous donation from his host, who would bless him for the pleasure that the guest had provided with his visit. He considered this visit to be “live regards” from the larger Jewish community to which he belonged and with which his soul yearned to connect. 
However, these visits were relatively rare, and for the rest of the days of the year the Jewish villager remained in his sad state of loneliness. Only when the Days of Awe came would he leave his house and property, and entrust, or perhaps practically abandon, his property to the hands of non-Jews – the house with its furniture, the field whose crop had not yet been gathered – and travel with his family to the city, to spend the holy days together with all the house of Israel, to pour out his conversation before the creator just like everyone else and to absorb the atmosphere of the shul that was so far away during the rest of the year. And when the holidays were finished, he went back to his village, cleansed and purified from materiality, and filled with hope and faith that his prayer had been accepted, and that the new year would bring only good on its wings, for him and his family and for all Israel. This description of the lonely Jewish villager was actually known to me only by hearsay. During my childhood, there were about 30 Jewish families in our village of Stetseva. This was just enough to mitigate the harsh loneliness. Most of the families in the village were related to each other by marriage. On Sabbaths and holidays they would gather for communal prayer, with a minyan. They did not travel to the city for the Days of Awe, but, because they all wanted to fulfill the directive of “The glory of the King is in a large populace,” two adjoining groups would gather together for one minyan, and they would summon a cantor with a distinguished appearance and a pleasant voice to help them celebrate the holiday in all its details. The relationship between the Jewish villager and the general population was decent enough. In spite of the vast difference in religion, in their way of life, and in external appearance, neighborly feelings existed between them, which were based on shared daily experiences. These connections were closer among those Jews who were actually involved in working the land. This joint activity and their common concerns even forged a common language. Though these similarities were not enough to uproot mistrust or to diminish the embedded hatred toward the Jew, they were enough to enable proper neighborly relations during stable times. There was one special village near Horodenka, the village of Chernovitz. Most of its residents were Jews, and most of them were engaged in farming. This village also stood out for its communal activities; for a certain period after World War One, it even had a Hebrew school. The landowners occupied a special place among the villagers. In all the neighboring villages, a substantial amount of land belonged to one family, generally a wealthy Polish family who worked the land with peasant villagers who were supervised by a foreman. Sometimes the land was leased out to tenants who had to pay a portion of the crops to their landlords. This was a residue of the lifestyle of the feudal system, when the land generally remained with the rulers, and the farmers got a very meager portion of the produce, usually just enough to sustain life. At the beginning of the twentieth century, the farmers were free and independent and owned their own portions of land. Still, the primary landowner retained his position. Since most business was in the hands of the Jews, all the landowners needed Jews. Many Jews thus succeeded in winning the trust of the Polish landowners and were involved in their daily business dealings. Many Jews took supervisory jobs. 
With the passage of time, many plots of land were transferred to wealthy Jews, and a class of Jewish landowners arose. Most of these lived in the village, in an estate that was in the courtyard of the farm. But there were also those who lived in the city and ran their farm through supervisors. Also in Stetseva, the village where my father lived during his last years and where I spent my childhood, there was a Jewish landowner named Yehudah Cohen, a very learned and educated Jew. His brother Dr. Cohen was a lawyer in Horodenka, who also stood out for his knowledge of Hebrew and his Zionistic leanings, unlike most Jewish lawyers, who were usually assimilated. Only in the years after World War One did groups of the Jewish intelligentsia get involved in political life. Besides Yehuda Cohen, there were several other Jewish landowners in the villages around Horodenka: Yossel Zeidman in Serafince, Bezner in Potoczysk, Nota Goldberg in Strel'Cheye, the Baron family in Semenuvka and Rakovets, and the Ruble family in Kornev. One of the citizens of Horodenka also joined the landowner class in the last years before 1914, when he purchased the estate in Czerniatyn. This was Berel Shpierer, who reached a level of affluence in a few short years, and was for a time also the communal head in Horodenka. He was a modern Jew, and in his youth was a member of the Maskilim group in the city. The Zionists, who supported his choice, also accepted him. The other estate holders did not participate in communal matters. They didn't turn to Zionism but also were not assimilationists. In the years after World War One, these estates sometimes served as training camps for pioneers. The differences between the village Jews and the city residents were great. The village Jew in his coarse and simple garments, with his primitive customs and lack of culture, often served as a target for sarcastic darts thrown by the city Jew, who emphasized his superiority at every available opportunity. However, these Jews were bound with every fiber of their being to the collective Jewish nation. They rejoiced in community happiness, and were the first to suffer when a troublesome time came. They had a special merit, these folk who survived by picking food from the ground, and most of them physically fulfilled the historic destiny: “By the sweat of your brow shall you eat bread.” Ostroh, Острог, Ostrog, Ostróg 50'20 North 26'31 East Ostroh is a historic city in Rivne Oblast (province) of western Ukraine, located on the Horyn River. Ostroh is the administrative center of the Ostroh Raion (district) and is itself designated as a town of special administrative subordination within the oblast. The current estimated population is around 14,801 (as of 2001). Tulchin, Тульчин, Tul’chyn, Tulczyn, Tulcin 48´40´28 N, 28´50´59 East Tulchin is a small city in the Vinnytsya Oblast (province) of western Ukraine. It is the administrative center of the Tulchynsky Raion (district), and was the chief centre of the Southern Society of the Decembrists. The city is also known for being home to the Ukrainian composer Mykola Leontovych, who produced several of his choral masterpieces when he lived here. An important landmark of the city is the palace of the Potocki family. The current estimated population is around 13,500 (as of 2005). 
Daleszowa, Daleshevo, Daleshova 48.47.10 North, 25.29.00 East, Ukraine, 252.3 miles WSW of Kyyiv THE JEWISH COMMUNITY - Medieval Ukrainian lands were a loosely knit group of principalities. By the late 1300s, most Ukrainian lands were controlled by either the Grand Duchy of Lithuania or the Mongolian-Tatar Golden Horde. In 1569, the Kingdom of Poland and the Grand Duchy of Lithuania became the Polish-Lithuanian Commonwealth. Poland controlled western Ukrainian lands while eastern Ukraine was controlled by the Ottoman Empire. In 1772, Russia, Prussia, and Austria partitioned the Polish-Lithuanian Commonwealth, at which time several Ukrainian areas became part of Galicia, a province of Austria. By 1795, Austria controlled western Ukraine and Russia controlled eastern Ukraine. By the end of WWI, Ukrainian territory was divided among the Ukrainian Soviet Socialist Republic (Ukrainian SSR), Poland, Czechoslovakia, and Romania. During the 1930s, all of western Ukraine was governed by either Poland or Czechoslovakia. In 1939 the Jewish population of Ukraine was 1.5 million (1,532,776), or 3% of the total population of Ukraine. One half to two thirds of the total Jewish population of Ukraine were evacuated, killed or exiled to Siberia. Ukraine lost more population per capita than any other country in the world in WWII. After WWII, the borders of the Ukrainian SSR expanded west, including those Ukrainian areas of Galicia. At the collapse of the USSR in 1991, Ukraine became an independent state. JewishGen's ShtetlSeeker references border changes of a given town, with more information at JewishGen ShtetLinks for Ukrainian towns. [February 2009] Ukraine SIG facilitates research of former Russian Empire guberniyas now in Ukraine: Podolia, Volhynia, Kiev, Poltava, Chernigov, Kharkov, Kherson, Taurida and Yekaterinoslav. [February 2009] HISTORY: Wikipedia article "History of the Jews of Ukraine" and The Virtual Jewish History Library - Ukraine [February 2009] US Commission for the Preservation of America's Heritage Abroad, 1101 Fifteenth Street, Suite 1040, Washington, DC 20005. Telephone 202-254-3824. Executive Director: Joel Barries. The US Commission for the Preservation of America's Heritage Abroad supplied most of the Ukraine information. The data is alphabetical by the name of the town. Historical Research Center for Western Ukrainian communities in all countries: "ZIKARON" Ukraine Jewish community. Jewish Cemeteries in Ukraine Report, Winter 1997-98 Ukraine's turbulent past saw sovereignty pass between Poland, Russia and other nations, but the country has a rich Jewish history: a Crimean tribe converting to Judaism in the eighth century, the first shtetls built by Jews working for Polish aristocrats (18th century), and the rise of Hasidism. The Germans murdered 1.4 million of the two million Jews. Communism then suppressed the religious life of those who survived. Despite this, Ukraine is now home to one of the largest Jewish communities in Europe (100,000-300,000). Some 1,500 Jewish heritage sites were published by the United States Commission for the Preservation of America's Heritage Abroad (2005). BOOKS ABOUT UKRAINE: - Chelm, M. Bakalczuk-Felin, 1954, in Yiddish. - Dnepropetrovsk-Yekaterinoslav, Harkavy and Goldburt, 1973, in Hebrew. - Pinkas Hakehillot Poland, Volumes I-VII. - Frank, Ben G. A Travel Guide to Jewish Russia & Ukraine. Paperback (October 1999) Pelican Pub Co; ISBN: 1565543556 - Gitelman, Zvi. 
Chapter "The Jews of Ukraine and Moldova," published in Miriam Weiner's Jewish Roots in Ukraine and Moldova (see below), online. - Goberman, D. Jewish Tombstones in Ukraine and Moldova. Image Press, 1993 (ISBN 5-86044-019-7); shows many interesting styles. - Greenberg, M. Graves of Tsadikim Justs in Russia. Jerusalem, 1989. 97 pages, illustrated, Hebrew and English. S2 89A4924. Notes: Rabbis tombstone restoration, no index, arranged by non-alphabetical town names. - Gruber, Ruth Ellen. Jewish Heritage Travel: A Guide to Eastern Europe. Washington: National Geographic, 2007. - Ostrovskaya, Rita (Photographer), Southard, John S. and Eskildsen, Ute (Editor). Jews in the Ukraine: 1989-1994: Shtetls. Distributed Art Publishers; ISBN: 3893228527 - Weiner, Miriam. Jewish Roots in Ukraine and Moldova: Pages from the Past and Archival Inventories (The Jewish Genealogy Series). Routes to Roots Foundation/YIVO Institute; ISBN: 0965650812. See Routes to Roots Foundation, Inc. ISRAEL: Tragger, Mathilde. Printed Books on Jewish cemeteries in the Jewish National and University Library in Jerusalem: an annotated bibliography. Jerusalem: The Israel Genealogical Society, 1997. BOOKS ABOUT CRIMEA: - Chwolson, D. Corpus inscriptionum hebraicarum (All the Hebrew Inscriptions). Hildesheim, 1974 (1st print: St. Petersburg, 1882). 527 pages, Latin title and German text. SB74B2774. Notes: 194 tombstones, 9th-15th centuries, scripture analysis based on Firkowiz's book. - Chwolson, D. Achtzehn hebraische Grabschriften aus der Krim (Eighteen Hebrew grave inscriptions from Crimea). St. Petersburg, 1985, in "Mémoires de l'Académie Impériale de St. Pétersbourg", 7ème série, volume IX, no. 7, III XVIII, 528 pages, illustrated [translation of the author's Russian book s29V5256]. German text and Hebrew inscriptions. PV255, series 7, book 9, no. 7. Notes: 18 tombstones, 6-960, scripture analysis based on Firkowiz's book. - Firkowiz, A. Y. Avnei zikaron behatsi ha'i krim, besela hayehudim bemangup, besulkat ubekapa (Jewish memorial stones in the Crimean peninsula and in the towns of Mangup, Sulkat and Kapa [Theodesia]). Vilnius, 1872. 256 pages, illustrated, Hebrew. 29V4818. Notes: 564 tombstones, 3-1842. - Harkavy, A.L. Alte juedische Denkmaeler aus der Krim (The old Jewish monuments in Crimea). St. Petersburg, 1876, X, 288 pages. German and Hebrew inscriptions. PV255, VII, 24/1. Notes: 261 inscriptions, 604-916?, scripture analysis based on Firkowiz's book. Nadwirna (Надвірна), Nadworna 48° 38′ 1″ North, 24° 34′ 5″ East Nadwirna (Ukrainian: Надвірна; Russian: Надворная/Nadwornaja, Polish: Nadwórna) is a small town in western Ukraine with 20,932 inhabitants (2001 census). It is the centre of the raion of the same name. The town lies on the banks of the Bystrytsia river and has a rail connection. It is located about 37 rail or road kilometres south-southwest of the oblast centre Ivano-Frankivsk. Nadwirna sits at the foot of the Carpathians and was granted town status in 1939. Economy and industry: In Nadwirna the Ukrainian oil and gas company НАК "Нафтогаз України"/Naftohas Ukrajiny operates a refinery ("Надвірнанафтогаз"/Nadwirnanaftohas) through its oil subsidiary (ВАТ "Укрнафта"/Ukrnafta). Transport: One of the few important road and rail connections across the Carpathians, leading to Rakhiv (Zakarpattia Oblast), runs from Nadwirna. Local transport is handled by buses and route taxis (marshrutkas). 
Railway lines: Ivano-Frankivsk–Nadwirna–Yaremche–Vorokhta–Rakhiv–Sighetu Marmației and Nadwirna–Delyatyn. History: Nadworna largely shares the history of Ukraine and of Galicia/Poland. From 1349 it belonged to Poland-Lithuania. After the First Partition of Poland in 1772 the town fell to Austria, after the end of the First World War in 1919 to Poland, and in 1939 to the Ukrainian Soviet Republic. In the 1930s there was increased activity by the Ukrainian nationalist movement OUN under Stepan Bandera in the region. From 1941 to 1944 Nadworna was occupied by the German Wehrmacht and afterwards fell to the Soviet Union again. Historical note, from Meyers Konversationslexikon of 1888: "Nadworna, market town in Galicia, in rough mountain country, on the Bystrica, seat of a district administration and a district court, has sawmills, a timber trade and (1880) 6,707 inhabitants (of whom 4,190 are Jews). Nearby is an old castle of the Potocki family." (Note: Pniv Castle, today preserved only as a ruin.) - Twin towns: Krnov, Czech Republic - Prudnik, Poland - Web links: http://www.kresy.co.uk/nadworna.html - Soviet map (as of 1990) - Nadworna. In: Meyers Konversations-Lexikon. 4th edition. Vol. 11, Bibliographisches Institut, Leipzig 1885–1892, p. 975 Administrative divisions of Ivano-Frankivsk Oblast, Ukraine Dolyna · Halych · Horodenka · Kalush · Kolomyia · Kosiv · Nadvirna · Rohatyn · Rozhniativ · Sniatyn · Tlumach · Tysmenytsia · Verkhovyna Cities Oblast subordinated cities Bolekhiv · Ivano-Frankivsk · Kalush · Kolomyia · Yaremche · Burshtyn · Dolyna · Halych · Horodenka · Kosiv · Nadvirna · Rohatyn · Sniatyn · Tlumach · Tysmenytsia Bytkiv · Bilshivtsi · Bohorodchany · Broshniv-Osada · Bukachivtsi · Chernelytsya · Delatyn · Hvizdets · Kuty · Lanchyn · Lysets · Obertyn · Otynia · Perehinske · Pechenizhyn · Rozhniativ · Solotvyn · Verkhovyna · Voynyliv · Vorokhta · Vyhoda · Yabluniv · Yezupil · Zabolotiv
http://www.geni.com/projects/Cities-in-Ukraine/3960
Slavery In America Facts, information and articles about Slavery In America, one of the causes of the Civil War Slavery In America summary: Slavery in America began in the early 17th century and continued to be practiced for the next 250 years by the colonies and states. Slaves, mostly from Africa, worked in the production of tobacco crops and, later, cotton. With the invention of the cotton gin in 1793, along with the growing demand for the product in Europe, the use of slaves in the South became a foundation of its economy. In the late 18th century, the abolitionist movement began in the North, and the country began to divide over the issue between North and South. In 1820, the Missouri Compromise banned slavery in the new western territories north of the 36°30′ line, which Southern states saw as a threat to the institution of slavery itself. In 1857, the Supreme Court decision known as the Dred Scott Decision said that slaves who escaped to the North were not free but remained the property of their owners in the South, antagonizing abolitionists. With the election in 1860 of Abraham Lincoln, who ran on an anti-slavery platform, the South felt that slavery was sure to be abolished, causing many Southern states to secede from the Union. This touched off a bloody four-year conflict known as the Civil War. During the war, Abraham Lincoln issued his famous Emancipation Proclamation, ostensibly freeing all slaves in the Confederate states. But it wasn’t until the Union had actually won the war and the subsequent passage of the Thirteenth Amendment to the Constitution that the American slaves were officially freed. Articles Featuring Slavery In America From History Net Magazines Timeline: The Abolition of the Slave Trade By Andrea Curry It had been decades since the first mention of the issue in Parliament. In 1791, 163 Members of the Commons had voted against abolition. Very few MPs dared to defend the trade on moral grounds, even in the early debates. Instead, they called attention to the many economic and political reasons to continue it. Those who profited from the trade made up a large vested interest, and everyone knew that an end to the slave trade also jeopardized the entire plantation system. “The property of the West Indians is at stake,” said one MP, “and, though men may be generous with their own property, they should not be so with the property of others.” Abolition of the British trade could also give France an economic and naval advantage. Before the parliamentary debates, Englishmen like John Locke, Daniel Defoe, John Wesley and Samuel Johnson had already spoken against slavery and the trade. At a stuffy party at Oxford, Dr. Johnson once offered the toast, “Here’s to the next insurrection of the Negroes in the West Indies.” Amid such scattered protests, the Quakers were the first group to organize and take action against slavery. Quakers on both sides of the Atlantic faced expulsion from the Society if they still owned slaves in 1776. In 1783 the British Quakers established the antislavery committee that played a huge role in abolition. The committee began by distributing pamphlets on the trade to both Parliament and the public. Research became an important aspect of the abolitionist strategy, and Thomas Clarkson’s investigations on slave ships and in the trade’s chief cities provided ammunition for abolition’s leading parliamentary advocate, William Wilberforce. 
Mockingly, and sometimes respectfully, others called Wilberforce and his friends “the Saints,” for their Evangelical faith and championing of humanitarian causes. The Saints worked to humanize the penal code, advance popular education, improve conditions for laborers and reform the “manners” or morals of England. Abolition, however, was the “first object” of Wilberforce’s life, and he pursued it both in season and out. May 12, 1789, was clearly out of season for abolition. Sixty members of the West Indian lobby were present, and the trade’s supporters had already called abolition a “mad, wild, fanatical scheme of enthusiasts.” Wilberforce spoke for more than three hours. Although the House ended by adjourning the matter, the Times reported that both sides thought Wilberforce’s speech was one of the best that Parliament had ever heard. Wilberforce had concluded with a solemn moral charge: “The nature and all the circumstances of this trade are now laid open to us. We can no longer plead ignorance.” Having failed to obtain a final vote, the abolitionists redoubled their efforts to lay open the facts of the trade before the British people. So far, the public had easily ignored what it could not see, and there had been no slaves in England since 1772. English people saw slave ships loading and unloading only goods, never people. Few knew anything of the horrors of the middle passage from Africa. Over time, it became more and more difficult for anyone to plead ignorance of this matter. William Cowper’s poem “The Negro’s Complaint” circulated widely and was set to music. Thoughts and Sentiments on the Evil and Wicked Traffic of the Slavery and Commerce of the Human Species, by an African man named Ottabah Cugoano, also became popular reading. Thomas Clarkson and others toured the country and helped to establish local antislavery committees. These committees in turn held frequent public meetings, campaigned for a boycott of West Indian sugar in favor of East Indian sugar, and circulated petitions. When, in 1792, Wilberforce again gave notice of a motion, 499 petitions poured in. Although few MPs favored immediate abolition, this public outcry was hard to ignore. An amendment inserting the word “gradual” into the abolition motion eventually carried the day. While in theory a victory of conscience, the bill as it then stood came to nothing. The abolitionist cause endured disappointments and delays each year following until 1804; and each year, British ships continued to carry tens of thousands of Africans into slavery in the Western Hemisphere. Anxiety about the bloody aftermath of the French Revolution contributed to Parliament’s conservative, gradualist decision in 1792; and the next year brought war with France. Wartime England lost her fervor for the cause. Although Wilberforce stubbornly brought his motion in Parliament each year until 1801, only two very small measures on behalf of the oppressed Africans succeeded in the first decade of the war. Respect for Wilberforce and his ilk turned to annoyance, and many seconded James Boswell’s sentiments: Go W— with narrow skull, Go home and preach away at Hull… Mischief to trade sits on your lip. Insects will gnaw the noblest ship. Go W—, begone, for shame, Thou dwarf with big resounding name. The state of affairs in France also brought abolitionist ideals under suspicion. One earl thundered: “What does the abolition of the slave trade mean more or less in effect, than liberty and equality? What more or less than the rights of man? 
And what is liberty and equality; and what are the rights of man, but the foolish fundamental principles of this new philosophy?” Even so, after more than a decade, the war with France began to lose its sense of urgency, however much the future of the world might—and did—hang in the balance. Slowly, public opinion began to reawaken and assert itself against the trade. Conditions in Parliament also became more favorable. Economic hardship and competition with promising new colonies weakened the position of the old West Indians. In 1806 abolitionists in Parliament managed to secure the West Indian vote on a bill that destroyed the three-quarters of the trade that was not with the West Indies. This bill, though in the West Indians’ competitive interest, also did much to pave the way for the 1807 decision. On the night of the decisive 283-16 vote for total abolition of the trade in 1807, the House of Commons stood and cheered for the persistent Wilberforce, who for his part hung his head and wept. The bill became law on March 25, and was effective as of January 1, 1808. At home after the great vote, Wilberforce called gleefully to his friend Henry Thornton, “Well, Henry, what shall we abolish next?” Thornton replied, “The lottery, I think!”—but the more obvious answer was the institution of slavery itself. For the next century, England fought diplomatic battles on many fronts to reduce the foreign slave trade. British smugglers were stopped in their tracks by the 1811 decision that made slaving punishable by deportation to Botany Bay. Smuggling under various flags threatened to continue the Atlantic trade after other nations had abolished it, and the British African Squadron patrolled the West African coast until after the American Civil War. In 1833 slavery was abolished throughout the British Empire. This radical break was possible partly through an “apprenticeship” system, and a settlement to the planters amounting to 40 percent of the government’s yearly income. The news reached Wilberforce two days before his death. “Thank God that I should have lived to witness a day in which England is willing to give 20 millions Sterling for the abolition of slavery,” he said. Some time before, Wilberforce had said, “that such a system should so long have been suffered to exist in any part of the British Empire will seem to our posterity almost incredible.” He was right. It is bittersweet, 200 years later, to commemorate the end of one of the most atrocious crimes in history. Yet the dismantling of an immensely profitable and iniquitous system, over a relatively short period of time and in spite of many obstacles, is certainly something to commemorate. This article by Andrea Curry was originally published in the May 2007 issue of British Heritage Magazine. Why Cotton Got To Be King By Robert Behre More than two decades before the Civil War, a planter in Edgefield, South Carolina, contemplated the languishing cotton prices and the plummeting value of his slaves—which by some accounts were worth less than a third of their value before the Panic of 1837. “Every day, I look forward to the future with more anxiety,” James Henry Hammond confided to his diary in 1841. “Cotton is falling, falling never to rise again.” But his fortunes did improve, and Hammond earned the admiration of influential South Carolinians, who eventually sent him to the U.S. 
Senate—where, during an 1858 debate with William Seward of New York, Hammond argued the South’s agricultural riches could bring the world to its feet in event of war with the North. “What would happen if no cotton was furnished for three years?” Hammond asked. “England would topple headlong and carry the whole civilized world with her. No, you dare not make war on cotton! No power on earth dares make war upon it. Cotton is King.” But if your image of the South’s plantation economy is a refined, agrarian ideal that changed little in two-and-a-half centuries, before the loathsome Yankees put an end to it, well, that moonlight and magnolias picture isn’t quite right. In fact, the only constant in the South’s plantation economy was change—as reflected in the way Hammond’s fortunes varied over those 17 years. The South’s crops evolved—from tobacco and indigo to rice and sugar and then, only relatively late in the game, to cotton. The lands being farmed evolved—from coastal plains linked by rivers and bays, to interior regions connected by rail and canals. The states with the most promising crops evolved—from the old Atlantic seaboard states of the Carolinas and Virginia, west and south to Georgia, Alabama, Mississippi, Louisiana and eastern Texas. And the labor evolved—from a situation where enslaved blacks and whites essentially were both pioneers struggling to eke out an existence in a new world, to a system of chattel slavery in which the slaves were as much an asset as the land. As England struggled for its own foothold in the New World, one of the few surviving settlers from the London Company planted a powerful seed around 1612 in what is now Virginia’s coastal plain. The tobacco John Rolfe planted wasn’t the harsh strain grown by the natives, but a milder seed that Spanish colonists were growing in the Caribbean and South America. Bad relations with the American Indians had plagued the colonists, who were struggling simply to keep themselves fed—much less earn the riches they had hoped to earn in this new land. Rolfe’s seed would change all that. When the first of Rolfe’s new tobacco crop was sold in London, the essential framework of the Southern plantation economy was put in place. The building blocks included colonists and planters eager for riches, seeds of crops from other places, a wealthy European market and a complicated gumbo of human relations that would breed both invention and cruelty. Rolfe found good ways to grow and cure the Spanish tobacco, possibly with advice from his new bride, Pocahontas. Seven years after Rolfe first planted his tobacco, Jamestown had exported 10 tons of it to Europe. This luxury crop eventually gave colonists needed income to buy African slaves. The tobacco not only increased the colonists’ wealth, but the crown got its cut as well—a steady stream of income as the plant grew in popularity in London and beyond. At times, the colony had to force its residents to plant food. Within three decades, Jamestown was shipping 750 tons of tobacco back across the Atlantic, making tobacco the largest export in the American colonies. But the crop wore out the soil, so there was a scramble across the Chesapeake Bay waterways for fresh, suitable lands. England’s foothold was now secure, nonetheless, as the South learned that great prosperity could be gained through the cultivation of the right cash crop. The story of Southern agriculture isn’t confined to the South. 
Not only were European markets essential; precedents in the Caribbean colonies influenced its development. French and Spanish colonists established sugar plantations on several islands, and English colonists got in on the action in Barbados. By the 1640s, the small island was divided into large plantations. To do the demanding work, colonists imported African slaves in such numbers that there were three for every one planter, as wealthy planters eclipsed the poorer ones, some of whom would leave for a new colony called Carolina. As the Virginia colonists were establishing wealth with tobacco, another English ship came ashore farther South in 1670 to create a new colony that eventually would surpass Virginia in cultivation of cash crops. The ship Carolina arrived via Barbados, and unlike the first settlers in Virginia, the colonists arrived with African slaves, though they were more like indentured servants. The Barbadian notion that a white planter considered all persons in his household as family helped shape the colony’s early slavery practices. In his definitive history of South Carolina, author Walter Edgar writes, “Everyone, white and black, was a pioneer.” The colonists tried tobacco first, without much luck—partly because the European market was saturated, forcing prices down. But by 1685, the Carolina colonists found a different crop that made many of them fortunes a few decades later: rice. The slaves’ knowledge of growing rice in their native Africa is increasingly understood to be an important part of the rice crop’s success. South Carolina planters valued slaves from rice-growing regions; Henry Laurens, a merchant slave trader and one of the wealthiest men in all of the American colonies, distinguished between slaves based on skills they learned in their native lands. “The Slaves from the River Gambia are preferred to all others with us, save the Gold Coast,” he would write. “Gold Coast or Gambia are best…next to them the Windward Coast are prefer’d to Angolas.” The kind of wealth Lowcountry planters could amass is illustrated by the case of Peter Manigault, a planter, lawyer and legislator, born in 1731, who eventually held a 1,300-acre plantation west of Charles Towne, a 2,476-acre plantation in Port Royal and more than 2,000 acres of rice plantations along the Santee River, and another working plantation outside Columbia. His son Gabriel would inherit about 25,000 acres, enough wealth to allow him to pursue architecture and design some of Charleston’s most imposing buildings at the dawn of the 19th century. There was always a scramble for the next big crop. Eliza Lucas Pinckney of Charles Towne loved to experiment with crops—including indigo, a blue dye now commonly used for jeans but created a rare and valuable color in the 18th century; so valuable England was willing to subsidize its production. The indigo market—and subsidy—effectively ended with the Revolutionary War, but rice would survive and find lucrative markets in Europe. After all, people can do without smoke or blue-colored garments, but everyone needs to eat. In Louisiana, French and Spanish settlers had moderate success with sugar, but indigo also was the major crop there in the late 18th century, before the region was part of the United States. The balance started to shift after a French nobleman, Etienne de Bore, returned to his native Louisiana. 
At his plantation about six miles north of New Orleans, de Bore became frustrated by insects gobbling up his indigo, so he began tinkering with sugar cane and in 1795, pioneered production of granulated sugar. He was helped by the expertise of other sugar makers who moved to Louisiana after the bloody 1791 slave uprising in Saint-Domingue, now Haiti. At the peak in the early 19th century, Louisiana planters got yields from 16 to 20 tons of cane per acre and harvested 300,000 tons of sugar per year, helping support half a million people. Had a South Carolinian kept his promise in 1793 to pay a Yale-educated tutor 100 guineas a year, the Southern economy as most know it today might have looked a whole lot different. But the deal fell through, and Eli Whitney headed south to Savannah instead, accepting an offer from the widow of Revolutionary War General Nathaniel Greene to stay at her plantation and continue his studies. A handful of planters produced cotton in Georgia, but extracting the valuable lint from the worthless seed was a time-consuming chore that could easily wipe away any meaningful profit. Greene’s plantation manager, Phineas Miller—also a Yale alumnus—was familiar with the difficulties of processing cotton. At their urging, Whitney concocted a series of wires to hold the seed while a drum with hook-shaped wires pulled the fiber out and a rotating brush cleaned the lint off the hooks. Cotton was by no means a new crop: Planters had grown Sea Island cotton, a long-staple variety, in the sandy soils along the South Carolina and Georgia coast since the early 1700s. But like tobacco, it depleted the soil and often was challenging to market. Whitney’s gin changed the game; the market for it spread faster than he could control or profit from. Demand for cotton, including the short-staple variety, exploded as England and France built new textile mills that craved the raw material. By 1804, Southern cotton production ballooned eight-fold from the decade before. Unlike Sea Island cotton, short-staple cotton could grow in upland areas, giving planters in vast swaths of the South a chance at riches previously confined to the coast. The War of 1812 disrupted trade with England, but entrepreneurial Northerners stepped into the breach. While a few cotton and wool spinning mills had been built in Rhode Island, Massachusetts and Connecticut by 1805, scores more sprang up in the following decade. The number of mills within a 30-mile radius of Providence, R.I., doubled between 1812 and 1815, spurred by the same hopes of riches that induced Southerners to plant the cotton. The revolution was on. The lucrative short-staple cotton trade helped create two Souths: An upper South of Virginia, Maryland, Kentucky, Tennessee and North Carolina that began moving away from the plantation model, selling their slaves to owners in the lower South—states like Georgia, Alabama and Mississippi, where cotton planters desperately needed the labor. Land planted with cotton or tobacco and nothing else eventually was exhausted, and planters pushed west in search of fresh land and profits. South Carolina planter Wade Hampton is just one example. Hampton first journeyed west as an Army colonel and quickly saw the potential there, University of South Carolina history professor Lacy Ford notes. By 1812, he had acquired 38,000 acres and 285 slaves in Louisiana and Mississippi. “Ultimately, he produced more cotton there than he did in South Carolina,” Ford says. 
“He became one of the four or five richest people in the South on the basis of his extensive holdings.” William Hamilton, a North Carolina native and one of Hampton’s top aides, wrote to his family about the new areas’ possibilities: “An acre of ground, well prepared, can yield 2,000 pounds of sugar and one good negro can make five bales of cotton worth $500 and 40 prime field hands can till 200 acres and produce $10,000 of cotton annually,” a huge fortune then. Ford says there was both a push and pull in the move west: The push was spent fields in the Carolinas and Virginia, and the pull was the promise of riches on new land. “A lot of people didn’t pick up and go, but a lot of people did,” he notes. In 1801, South Carolina produced half of the nation’s cotton. By 1821, the margin had dropped to 29 percent. Half of all whites born in South Carolina between 1800 and 1860 eventually left the state. Natchez, Miss., became a new boomtown, and New Orleans soon overtook Charleston in shipping and population. The Cotton kingdom extended into eastern Texas and hundreds of miles up the Mississippi River. The flight west also created a big political problem as the abolition movement geared up and the nation quarreled over which new states should be permitted to have slaves and which should not. Big bucks were on the line. The South produced about three-fourths of the cotton that fed the textile mills in England and France. By one estimate, more than 20 percent of England’s economy depended in some way on the textile industry, and the United States’ domestic textile plants produced about $100 million worth of cloth each year; its ships transported cotton and cotton products across the globe. By 1860, two-thirds of the world’s supply of cotton came from the states that would soon constitute the Confederacy. Cotton didn’t receive its coronation easily. Richard Porcher and Sarah Fick note in The Story of Sea Island Cotton, a recent history, the turbulence Lowcountry planters faced. “Even with skilled slaves and the special care taken with the plants, sea island cotton was an uncertain crop, succumbing to unseasonable rains, storms, insect pests and a fluctuating market,” they write. The drive west meant a second “Middle Passage” for many slaves. After Congress outlawed the international slave trade in 1808, the only way planters could get new slaves was to buy them on the domestic market, and the push west meant thousands of slaves were sold and relocated—and often torn away from their families. Meanwhile, pressure built to free the slaves—and it wasn’t coming only from the North. Two of Charleston’s elite, Angelina and Sarah Grimke, became abolitionists in 1830. With Angelina’s husband, Theodore Weld, they published Slavery As It Is: The Testimony of a Thousand Witnesses. The book included published excerpts from Southern newspapers that spoke to the institution’s cruelty: In The Raleigh Standard, they found this bit from Nash County, N.C., slave owner Micajah Ricks: “Ran away, a negro woman and two children, a few days before she went off, I burnt her face with a hot iron, on the left side of her face, I tried to make a letter M.” Slaves also were always resisting, trying to find ways to lighten their workload, get better provisions and more autonomy. As industrialization seemed increasingly likely, Southerners began to debate whether slaves or freed men should work in their emerging factories. 
Planters who owned large numbers of slaves produced most goods for export, but the South had many more small farmers, mostly whites, who farmed the upland areas—the lucky ones producing a small surplus of cotton for market while managing to feed their families. In 1850, the average South Carolina farm covered 541 acres, and that would drop to 488 acres by 1860. There were 33,171 farms, some wealthy but many less so. Meanwhile, cotton prices oscillated wildly over the decades: prices were high until 1819 and then fell, rose again until the crisis of 1837 brought a new low, climbed back in 1848, and dipped again in 1851. Some worried that cotton was dominating to the South’s detriment. Cotton “starves everything else,” warned W.J. Grayson, an outspoken unionist. “The farmer curtails and neglects all other crops. He buys from distant places not only the simplest manufactured articles, his brooms and buckets, but farm products, grain, meat, ham, butter, all of which he could make at home.” And there soon would be a need to make them. Ford, author of Deliver Us From Evil: The Slavery Question in the Old South, says the single biggest misunderstanding about the Southern plantation economy is how diverse and ever-changing it was. By the beginning of the Civil War, the cotton gin had been around only as long as computers have been today. “It was a much more dynamic economy than most people realize,” he says. “It was newer, fresher and under constant strain. Cotton production was only in its third generation when the Civil War came, and there were many people still alive who could remember when the first meaningful amount of cotton was grown in the South. “In the larger sense, in dealing with the lower Cotton South as a whole, it’s always important to remember the upper South—North Carolina, Virginia, Tennessee and parts of Kentucky—is really a different kettle of fish. They planted almost no cotton, and they’re having a very different experience and reacting to circumstances differently as well.” Some Southerners might have accepted the coming Civil War because they had little land, slaves or anything else to lose, but others had significant sums at stake. “Throughout the lower south as a whole, the cotton boom of the 1850s and the prosperity that the boom created for the region gave it the self-confidence and belief that it had a system that would work,” Ford says, “and that probably enhanced people’s willingness to secede.” And though the coming war eventually would end slavery and the plantation economy it supported, the South would continue to plant and profit from cotton, rice, sugar and tobacco well into the 20th century. Of course, it wasn’t the same, but on the other hand, the South’s plantation economy never stood still. 
http://www.historynet.com/slavery-in-america
13
37
The Croats are believed to be a purely Slavic people who migrated from Ukraine and settled in present-day Croatia during the 6th century. After a period of self-rule, Croatians agreed to the Pacta Conventa in 1091, submitting themselves to Hungarian authority. By the mid-1400s, concerns over Ottoman expansion led the Croatian Assembly to invite the Habsburgs, under Archduke Ferdinand, to assume control over Croatia. Habsburg rule proved successful in thwarting the Ottomans, and by the 18th century, much of Croatia was free of Turkish control. In 1868, Croatia gained domestic autonomy while remaining under Hungarian authority. Following World War I and the demise of the Austro-Hungarian Empire, Croatia joined the Kingdom of Serbs, Croats, and Slovenes, which became Yugoslavia in 1929. Yugoslavia changed its name once again after World War II. The new state became the Socialist Federal Republic of Yugoslavia and united Croatia and several other states together under the communist leadership of Marshal Tito. After the death of Tito and with the fall of communism throughout eastern Europe, the Yugoslav federation began to crumble. Croatia held its first multi-party elections since World War II in 1990. Long-time Croatian nationalist Franjo Tudjman was elected President, and one year later, Croatians declared independence from Yugoslavia. Conflict between Serbs and Croats in Croatia escalated, and one month after Croatia declared independence, civil war erupted. The UN mediated a cease-fire in January 1992, but hostilities resumed the next year when Croatia fought to regain a third of the territory lost the previous year. A second cease-fire was enacted in May 1993, followed by a joint declaration the next January between Croatia and Yugoslavia. However, in September 1993, the Croatian Army led an offensive against the Serb-held Republic of Krajina. A third cease-fire was called in March 1994, but it, too, was broken in May and August 1995 after Croatian forces regained large portions of Krajina, prompting an exodus of Serbs from this area. In November 1995, Croatia agreed to peacefully reintegrate Eastern Slavonia, Baranja, and Western Sirmium under terms of the Erdut Agreement. In December 1995, Croatia signed the Dayton peace agreement, committing itself to a permanent cease-fire and the return of all refugees. The death of President Tudjman in December 1999, followed by the election of a new coalition government and President in early 2000, brought significant changes to Croatia. Croatia's new government, under the leadership of Prime Minister Racan, has progressed in implementation of the Dayton Peace Accords, regional cooperation, refugee returns, national reconciliation and democratization. Following World War II, rapid industrialization and diversification occurred within Croatia. Decentralization came in 1965, allowing growth of certain sectors, like the tourist industry. Profits from Croatian industry were used to develop poorer regions in the former Yugoslavia. This, coupled with austerity programs and hyperinflation in the 1980s, contributed to discontent in Croatia. Privatization and the drive toward a market economy had barely begun under the new Croatian Government when war broke out in 1991. As a result of the war, the economic infrastructure sustained massive damage, particularly the revenue-rich tourism industry. From 1989 to 1993, GDP fell 40.5%. Following the close of the war in 1995, tourists reemerged, and the economy briefly recovered. 
The solid growth that began in the mid-1990s halted in 1999. A recession, which was caused primarily by weak consumer demand and decrease in industrial production, led to a 0.9% contraction of GDP that year. Furthermore, inflation and unemployment rose, and the kuna fell, inciting fears of devaluation. In the second half of 2000, the tourism industry once again contributed to a recovery, helping Croatia grow 3.7% that year. This trend continued in 2001, when the economy expanded by 4.3% aided by an approximately 6% increase in industrial production, 12% growth in tourism--which generated about $3.7 billion in revenue--a stringent fiscal policy, and continued remittances from the Croatian diaspora. Unfortunately, forecasts for 2002 are less positive, with growth projected at 3.0%-3.5%. A decline in export markets and a decrease in foreign investment are predicted to temporarily slow the growth of the Croatian economy in 2002. However, the planned privatization of the national insurance, oil, and gas companies, and an expansion of telecommunication services, during 2002-03 should stimulate foreign investment and boost revenue over the near term. The Government has pursued economic reforms including privatization, public sector reductions, anticorruption legislation, and reforms of banking and commercial laws. In June the Government adopted a development strategy to transform socialist-era structures into a functioning market economy. An interim association agreement with the European Union was signed in October and was scheduled to enter into effect in January 2002. During the year, the economy overcame the effects of the 1998-1999 recession and banking sector crisis. The population of the country is 4,677,000 and per capita GDP in 2000 was approximately $4,600 (39,500 kuna). During the year, real GDP rose an estimated 4.2 percent over the previous year. The exchange rate and prices remained stable. Income from tourism increased an estimated 20 percent over 2000, reaching prewar levels. While retail price inflation was 7.4 percent in 2000, by the end of the third quarter of the year, inflation had fallen to 3.8 percent. Croatia's unemployment rate was 15.3 percent during the first half of the year, measured by International Labor Organization (ILO) methodology. (Due to improved methodology, this figure is not directly comparable to 2000's reported unemployment rate of 22.4 percent. Year-end data suggests that the unemployment level remained constant or fell slightly during the year.) INCIDENCE OF CRIME The crime rate in Croatia is low compared to industrialized countries. An analysis was done using INTERPOL data for Croatia. For purpose of comparison, data were drawn for the seven offenses used to compute the United States FBI's index of crime. Index offenses include murder, forcible rape, robbery, aggravated assault, burglary, larceny, and motor vehicle theft. The combined total of these offenses constitutes the Index used for trend calculation purposes. Croatia will be compared with Japan (country with a low crime rate) and USA (country with a high crime rate). According to the INTERPOL data, for murder, the rate in 2000 was 5.96 per 100,000 population for Croatia, 1.10 for Japan, and 5.51 for USA. For rape, the rate in 2000 was 2.68 for Croatia, compared with 1.78 for Japan and 32.05 for USA. For robbery, the rate in 2000 was 16.76 for Croatia, 4.08 for Japan, and 144.92 for USA. For aggravated assault, the rate in 2000 was 21.09 for Croatia, 23.78 for Japan, and 323.62 for USA. 
For burglary, the rate in 2000 was 348.87 for Croatia, 233.60 for Japan, and 728.42 for USA. The rate of larceny for 2000 was 229.79 for Croatia, 1401.26 for Japan, and 2475.27 for USA. The rate for motor vehicle theft in 2000 was 52.86 for Croatia, compared with 44.28 for Japan and 414.17 for USA. The rate for all index offenses combined was 678.01 for Croatia, compared with 1709.88 for Japan and 4123.97 for USA. TRENDS IN CRIME Between 1995 and 2000, according to INTERPOL data, the rate of murder decreased from 8.23 to 5.96 per 100,000 population, a decrease of 27.6%. The rate for rape increased from 1.65 to 2.68, an increase of 62.4%. The rate of robbery decreased from 198.56 to 16.76, a decrease of 91.6%. The rate for aggravated assault decreased from 23.02 to 21.09, a decrease of 8.4%. The rate for burglary increased from 326.31 to 348.87, an increase of 6.9%. The rate of larceny increased from 157.77 to 229.79, an increase of 45.6%. The rate of motor vehicle theft increased from 13.19 to 52.86, an increase of 300.8%. The rate of total index offenses decreased from 728.73 to 678.01, a decrease of 7%. The Ministry of Interior oversees the civilian national police, and the Ministry of Defense oversees the military and military police. The national police have primary responsibility for internal security but, in times of disorder, the Government and President may call upon the army to provide security. Civilian authorities generally maintained effective control of the security forces. Security forces committed a few abuses. The Constitution prohibits torture, mistreatment, or cruel or degrading punishment, and the authorities generally observed these prohibitions in practice; however, police apathy regarding societal crimes against Roma was a problem. Unlike in the previous year, there were no reports that police occasionally abused prisoners. Societal intimidation and violence against Serbs continued in war-affected areas during the year. In the Danubian region (Eastern Slavonia), senior Interior Ministry authorities removed several police commanders who were responsible for fomenting tensions between ethnic Serb and ethnic Croat police officers as well as for discouraging ethnic Serbs from reporting incidents to police. There were periodic reports of ethnic tensions between ethnic Serb and Croat police officers in the Danubian region. The Government undertook a major reform of the police during the year, cutting nearly 15 percent of the police workforce. In undertaking this sensitive downsizing, the Government committed itself to honoring its obligations under the 1995 Erdut Agreement to maintain "proportionality" in the numbers of ethnic Serb and Croat police officers in Eastern Slavonia; however, full compliance with these obligations was not yet achieved by year's end. Continuing problems in the police included poor police investigative techniques, acute social sensitivity to ethnic issues, indecisive middle management in the police, and pressure from hard-line local politicians. These factors continued to impede development of local police capability. There were no reports of political killings during the year by the Government or its agents. During the year, eight persons were killed in landmine incidents, most caused by landmines laid by Croatian and Serb forces during the 1991-95 war. 
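The index and trend figures quoted above are simple arithmetic on the INTERPOL rates: the combined index is the sum of the seven offense rates, and each trend is the percentage change from the 1995 rate to the 2000 rate. A minimal Python sketch, using only the numbers quoted in this report (the variable names and the choice of language are illustrative, not part of the source), reproduces the quoted figures, for example the 678.01 combined Croatian index for 2000 and the 300.8% rise in motor vehicle theft:

```python
# Per-100,000 INTERPOL rates for Croatia quoted above (2000 and 1995).
# Offense order follows the FBI index: murder, rape, robbery, aggravated
# assault, burglary, larceny, motor vehicle theft.
rates_2000 = {
    "murder": 5.96, "rape": 2.68, "robbery": 16.76, "assault": 21.09,
    "burglary": 348.87, "larceny": 229.79, "vehicle theft": 52.86,
}
rates_1995 = {
    "murder": 8.23, "rape": 1.65, "robbery": 198.56, "assault": 23.02,
    "burglary": 326.31, "larceny": 157.77, "vehicle theft": 13.19,
}

# Combined index = sum of the seven offense rates.
index_2000 = sum(rates_2000.values())
index_1995 = sum(rates_1995.values())
print(f"2000 index: {index_2000:.2f}")   # 678.01, as quoted
print(f"1995 index: {index_1995:.2f}")   # 728.73, as quoted

# Trend = percentage change from the 1995 rate to the 2000 rate.
for offense in rates_2000:
    change = (rates_2000[offense] - rates_1995[offense]) / rates_1995[offense] * 100
    print(f"{offense}: {change:+.1f}%")  # e.g. vehicle theft: +300.8%
```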
The Croatian Center for Demining reported that from 1991 through the end of the year, 1,350 land mine incidents were recorded in which 418 persons were killed. The Constitution prohibits arbitrary arrest and detention; however, the Government did not always respect this right in practice. Police normally obtain arrest warrants by presenting evidence of probable cause to an investigative magistrate. Police may make arrests without a warrant if they believe a suspect might flee, destroy evidence, or commit other crimes; such cases of warrantless arrest are not uncommon. The police then have 24 hours to justify the arrest to a magistrate. Detainees must be given access to an attorney of their choice within 24 hours of their arrest; if they have none and are charged with a crime for which the sentence is over 10 years' imprisonment, the magistrate appoints counsel. The magistrate must, within 48 hours of the arrest, decide whether to extend the detention for further investigation. Investigative detention generally lasts up to 30 days, but the Supreme Court may extend the period in exceptional cases (for a total of not more than 6 months, or 12 months in serious corruption/organized crime cases). Once the investigation is complete, detainees may be released on their own recognizance pending trial unless the crime is a serious offense or the accused is considered a public danger, may influence witnesses, or is a flight risk. However, lengthy pretrial detention remained a serious problem, particularly for ethnic Serbs accused of war crimes. Suspects generally are held in custody pending trial, and there have been several cases of suspects held in pretrial detention for several months on weak evidence. In March the Supreme Court ordered two Bosnian Croat suspects freed in the investigation of the 1993 Ahmici massacre in central Bosnia. The two were arrested in Zadar in September 2000, and the court freed them after they had been detained for the legal maximum of 6 months without charges being brought. The option of posting bail after an indictment is available but not commonly exercised. The Government improved its record of applying the 1996 Amnesty Law (which amnestied acts of rebellion by ethnic Serbs), and appropriately granted amnesty to several individuals during the year, particularly returning ethnic Serb refugees. However, in October 2000, the state prosecutor directed local prosecutors to reopen old war crimes cases and execute dormant arrest warrants, although there appeared to be no new evidence to justify the arrests. Arrests of ethnic Serbs for war crimes continued but decreased in frequency throughout the year. From October 2000 to May 2001, over 50 persons were arrested, 28 of whom were refugees. In some of these cases, the subject was released in a few days after the Amnesty Law was applied or charges were dropped; however, in other cases, persons were detained for long periods. In January authorities in Pozega arrested Natasa Jankovic on war crimes charges; she remained in detention until June, when a judge threw out the case because Jankovic was not the person named in the indictment. Several ethnic Serb defendants convicted in absentia or at nontransparent, politicized trials conducted by the previous regime continued to be held in detention for extended periods as their cases progressed slowly through the overburdened judicial system. 
In April a domestic court convicted a Serb police officer from the Danubian region of war crimes; the police officer had been arrested in 1999 and was sentenced to 13 years in prison. There was no further information regarding the case of four ethnic Serb members of the Croatian police who were arrested and detained in 2000 despite being cleared by the Ministry of Interior of involvement in war crimes. In October 2000, 13 Serbs were arrested and detained in Baranja on war crimes charges based on 1996 indictments from the Osijek county court, despite the fact that these indictments had little or no supporting evidence; 7 of the Serbs eventually were released but 6 remained in detention at year's end. Evidentiary hearings began in September and continued at year's end. NGO and international observers in the Danubian region noted that police occasionally called ethnic Serbs to police stations for "voluntary informative talks," which amounted to brief warrantless detentions intended to harass Serb citizens. The Constitution prohibits forced exile of citizens, and the Government does not employ it. The Constitution provides for an autonomous and independent judiciary; however, the judiciary continued to suffer from some political influence, a backlog of over 1.1 million cases, and funding and training shortfalls. The judicial system consists of municipal and county courts, an administrative court, and the Supreme Court. In May Ivica Crnic--a former non-party Justice Minister and labor law expert known for his independence--became the new president of the Supreme Court. The independent Constitutional Court determines the constitutionality of laws, governmental acts, and elections, and serves as the court of final appeal for individual cases. In March, pursuant to constitutional amendments, the Constitutional Court was expanded from 11 to 13 justices. The three new justices are respected professionals and were chosen in a transparent process; the rest of the Court judges were appointed under the former Tudjman regime. Justices of the Constitutional Court are elected for 8-year terms by Parliament, while all other judges are appointed for life. A parallel commercial court system adjudicates commercial and contractual disputes. The State Judicial Council (consisting of 11 members serving 8-year terms) is a body independent of both the judiciary and the Ministry of Justice. It is charged with the appointment and discipline, including removal, of judges. In the past, the State Judicial Council was criticized for the politicization of its decisions. In July the State Judicial Council was reconstituted pursuant to legislative amendments modifying the Council's authority with the goal of depoliticizing the Council and judicial appointments and, by extension, improving the quality of sitting judges. In July Parliament passed a new law designed to contribute to transparency and reduce politicization of the Prosecutor's offices, which creates a similar council for public prosecutors. This legislation enabled Chief State Prosecutor Radovan Ortynski to begin to renominate or replace the chiefs of municipal and county prosecutors' offices. Similarly a new Law on the Courts, passed in December 2000 and implemented during the year, introduced reforms in the appointment of court presidents of the various municipal, county, commercial, and misdemeanor courts. 
The law was designed to depoliticize the positions while streamlining administrative oversight; however, it has been criticized by some observers as giving too much control over judicial appointments to the Justice Ministry. By year's end, some of the county court presidents were being either renominated or replaced; the municipal court judges are to be addressed next. Judges are prohibited constitutionally from being members of any political party. Over the past 2 years, the judiciary has been subject to far less political influence than under the Tudjman regime, although there continued to be reports of political influence at the local level. The politicization of hard-line judges appointed by the previous Government, who at times made decisions in a nontransparent manner seemingly at odds with the evidence or the law, also continued to be a problem. The greatest problems facing the judiciary are outmoded procedural codes and court rules, inexperienced judges and staff, bureaucratic inefficiencies, and funding shortfalls, which have created a massive backlog of over 1 million cases, some dating back 30 years or more. The inexperience of young and newly appointed judges continued to be a problem, and there continued to be areas of the country without a permanent judge. Although the Constitution provides for the right to a fair trial and a variety of due process rights in the courts, at times citizens were denied these rights. Excessive delays in trials remained a problem. Courts tried and convicted in absentia persons for war crimes. Courts convicted persons in mass trials and in trials with weak supporting evidence, particularly in Eastern Slavonia. In January authorities in Pozega arrested Natasa Jankovic on war crimes charges while she was entering the country from Bosnia; she had been convicted in absentia in 1996 for inhumane treatment of prisoners while she purportedly worked as a guard in a prison camp. Jankovic was unaware of the charges and had entered the country seven times previously before being arrested. At two hearings in April, dozens of witnesses stated that Jankovic had been in Bosnia the entire time she was alleged to have been a camp guard. No prosecution witnesses identified her as being at the camp, and at least one confirmed that her case was one of mistaken identity. However, the prosecutor refused to drop the charges and Jankovic remained in detention until June, when a judge threw out the case for lack of evidence. In March mass trials in the "Babska group" and "Tompojevci group" cases resulted in absentia convictions for 11 and 10 ethnic Serbs respectively. In a long-standing pattern, armed activities that should have qualified for amnesty under the 1996 Law on General Amnesty were classified mistakenly and prosecuted as common crimes or war crimes. Particularly for those who previously exhausted their appeal procedures, there is no mechanism to review these cases. Nevertheless, domestic courts continued to adjudicate war crimes cases arising from the 1991-95 conflicts in Bosnia and Croatia; courts opened and reopened several outstanding allegations involving Croatian forces and took steps to depoliticize cases against ethnic Serbs. For example, by midyear the chief State Prosecutor had initiated a case-by-case review of war crimes cases and sought to limit sharply the use of in absentia proceedings. Instructions were issued to county prosecutors not to initiate criminal proceedings or in absentia proceedings without consultation with the state prosecutor. 
In the past, in cases where courts have made decisions on property claims, courts have overwhelmingly favored ethnic Croats over ethnic Serbs, particularly in the Danubian region. Prison conditions generally meet international standards. Jails are crowded, but not excessively so, and family visits and access to counsel generally are available to prisoners. Men and women are housed separately, juveniles are held separately from adults, and pretrial detainees are held separately from convicted prisoners. The Government permits visits by independent human rights monitors, and such visits occurred during the year by both international organizations and domestic NGO's. Although the Government collected only limited statistics on the problem, credible NGO observers have reported that violence against women, including spousal abuse, remained a widespread and underreported problem. Alcohol abuse and poor economic circumstances were cited as contributing factors. Rape and spousal rape are illegal under the Penal Code; however, NGO's report that many women do not report rape or spousal rape. There is only one women's shelter, in Zagreb. In 2000 the Government revoked 1997 Penal Code amendments that removed domestic violence from the categories of crimes to be prosecuted automatically by the state attorney. As a result, a domestic violence case can be initiated by persons other than the victim; for example, cases can be initiated on the basis of suspicions of health care workers or police rather than requiring the victim to press charges. Legislation passed in autumn 2000 created a specific Penal Code provision for family violence to replace inadequate existing provisions, and to direct that perpetrators of family violence, in addition to being punished, be placed under supervision and receive psychiatric treatment. Amendments to the Law on Misdemeanors passed in 2000 are designed to protect victims by extending detention (for up to 30 days) of perpetrators of family violence, even during the defendant's appeal. The country is a transit route as well as a lesser country of origin and destination country for trafficking in women for the purposes of sexual exploitation. Workplace sexual harassment is a violation of the Penal Code's section on abuse of power but is not specifically included in the employment law. NGO's reported that in practice, women who were sexually harassed often did not resort to the Penal Code for relief for fear of losing their jobs. The labor law prohibits gender discrimination; however, in practice women generally held lower paying positions in the work force. Government statistics from previous years showed that, while women constituted an estimated 48 percent of the work force, they occupied few jobs at senior levels, even in areas such as education and administration where they were a clear majority of the workers. Considerable anecdotal evidence gathered by NGO's suggested that women hold the preponderance of low-level clerical, labor, and shopkeeping positions. Women in these positions often are among the first to be laid off in times of corporate restructuring. NGO's and labor organizations continued to report a practice in which women received short-term work contracts renewable every 3 to 6 months, creating a climate of job insecurity for them. While men occasionally suffered from this practice, it was used disproportionately against women to dissuade them from taking maternity leave. 
This practice has become less common since 1999 legislation limited the use of short-term work contracts to a maximum of 3 years. The Labor Code authorizes 1 full year of maternity leave, although changes enacted in October reduced the 3-years' leave for multiple births to 1 year. Government efforts on gender equality improved during the year. In March the Parliament created a Committee for Gender Equality, chaired by Gordana Sobol (SDP). The committee met several times during the year to review pending legislation for compliance with gender equality criteria, and to offer amendments and modifications. In September the Government established a new human rights office, an existing office on gender equality within the Labor Ministry was upgraded and attached to this human rights office. Among its ongoing tasks were the implementation of the 2001-05 National Action Plan on gender equality and the coordination of tasks among ministries, parliamentary offices, unions, and the NGO community to promote gender equality. The Government ratified the U.N. "Convention on the Elimination of All Forms of Discrimination Against Women" (CEDAW) in 1991, and in March the Government ratified the "Optional Protocol" to the convention. This ratification represents implementation of the final element of the previous year's "Beijing Plus Five" platform on international legal instruments on women. While there is no national organization devoted solely to the protection of women's rights, many small, independent groups were active in the capital and larger cities. The Government is generally committed to the welfare of children. Education is free and mandatory through grade 8 (generally age 14). Schools provide free meals for children. The majority of students continue their education to the age of 18, with Roma being the only notable exception. Romani children face serious obstacles in continuing their schooling, including discrimination in schools and a lack of family support. An estimated 10 percent of Croatian Romani children begin primary school, and of these only 10 percent go on to secondary school. There were only an estimated 50 Romani students in secondary school throughout the country during the year. Nearly all Roma children drop out of school by grade 8. In Medjumorje County, local officials operate segregated classrooms for Romani children, reportedly with less-qualified staff and fewer resources. Subsidized daycare facilities are available in most communities even for infants. Medical care for children is free. While there is no societal pattern of abuse of children, NGO's operating hotlines for sexual abuse victims reported numerous cases of abuse of children. TRAFFICKING IN PERSONS The law does not specifically prohibit trafficking in persons, although other existing laws may be used to prosecute traffickers; trafficking in women was a problem. Little statistical information on trafficking exists, although U.N. officials tracking the issue regionally and local research indicate that Croatia is primarily a transit country for women trafficked to other parts of Europe for prostitution, as well as a lesser country of origin and destination country for trafficked women. 
Police failure to identify trafficked women among illegal aliens smuggled into the country and shortcomings in the readmission agreement with Bosnia, which puts police under pressure to process and repatriate illegal migrants within 72 hours after their initial arrest, resulted in a significant underestimation of the trafficking problem in the country. Women from Hungary, Ukraine, Romania, Bulgaria, Slovakia, and other countries reportedly were trafficked through Bosnia-Herzegovina and Yugoslavia to Croatia, where some remained to work as prostitutes or were trafficked to other destinations. Women are transported through the country by truck or boat. In addition women from Albania, Bosnia, Bulgaria, Hungary, Macedonia, Moldova, Romania, Slovenia, and Yugoslavia were detained in incidents of illegal entry into the country; some of these women were believed to be victims of trafficking. Anecdotal information indicates that international organized crime groups are responsible for trafficking. Although there is no law specifically prohibiting trafficking in persons, trafficking can be prosecuted under laws prohibiting slavery, the illegal transfer of persons across state borders, international prostitution, or procurement or pimping. However, police awareness of the problem is low, and the police are not trained or encouraged to identify and document possible cases of trafficking. Police are reluctant to acknowledge that trafficking in persons might occur in the country. Victims are not encouraged to take legal action against their traffickers. According to the Ministry of the Interior, from 1998-2000 the Government prosecuted 5 persons under the law prohibiting slavery and 21 persons under the law prohibiting international prostitution. However, no data is available regarding the final disposition of the cases. Public awareness of trafficking is low, and there were no government or NGO programs to deal with the prevention of trafficking during the year. There have been no trafficking awareness campaigns in the country. While government officials, international missions, and NGO's are working to develop an antitrafficking strategy, progress has been slow. The Government appointed an official from the Interior Ministry as the national coordinator for trafficking issues, who was engaged in the issue by year's end. In November the Government hosted a ministerial-level conference for Stability Pact participants to coordinate regional antitrafficking approaches; however, there was little publicity for the event and no broad substantive discussion of the problem occurred during the brief conference. There were no support services available for trafficking victims. Trafficking victims typically are detained for illegal entry and voluntarily deported. Victims generally are detained at a Zagreb detention facility on immigration violations. Detention may last several days or several weeks. Foreign embassies usually do not organize repatriation for their citizens, and victims typically are returned to their countries of origin by train organized by the Croatian Government. There is one women's shelter that occasionally helps trafficked women. With the consolidation of peace in the region, the Government of Croatia (GOC) opened several border crossing points with northern Bosnia, and regularized the status of border crossing points with western Bosnia and Serbia. 
Interior Ministry (MUP) officials have commented that they lack funds to purchase equipment (particularly equipment to x-ray truck traffic) to adequately search traffic from Serbia at the busiest crossing at Bajakovo. The volume of traffic transiting the Zagreb/Belgrade highway increased dramatically over the year, although the GOC maintained adequate customs controls along the Serbian border. There was a large increase in the volume of cocaine transshipped through the Dalmatian seaports, particularly the port of Rijeka. There has also been an upsurge in drugs transiting Bosnia, particularly in conjunction with traffic in stolen vehicles. Domestic organized crime gangs allegedly are cooperating with Kosovar and Albanian traffickers to move narcotics through Croatia to Europe and possibly onward to the US. Internet research assisted by Phy Long Ngov
http://www-rohan.sdsu.edu/faculty/rwinslow/europe/croatia.html
13
17
Starting in the early 19th century the United States underwent an industrial revolution. The work that many people did changed as they moved from farms and small workshops into larger factories. They tended to buy things in stores, rather than make them at home or trade with their neighbors. They used machines, and purchased the products of machines, more than they ever had. [Images: LEFT: Spinning wheel, possibly for flax. Courtesy of the National Museum of American History, Washington, D.C. RIGHT: Mechanized Spinner from The Progress of Cotton, 1835-40. Courtesy of Slater Mill Historic Site, Pawtucket, RI.] The small-scale centers of textile production discussed in Unit 1 lasted well into the 19th century. But the manufacture of textiles began to change dramatically, starting as early as the 1790's, as these traditional sources were first joined, and then replaced, by a new material, a new kind of agriculture, and a new kind of factory. The material processed changed, from linen and wool to cotton; the way that cotton was grown and prepared changed, with the invention of the cotton gin and the reinvention of the plantation; new machines, invented to process the cotton, found a new setting in larger and more complex factories. Together, these changes added up to an industrial revolution. This textile revolution did not happen everywhere in the United States at the same time, and its effects were quite different in different areas. Perhaps the largest change came in the South, where the new demand for cotton was supplied by plantations based on slave labor and mechanized processing of the cotton by the cotton gin. ("Gin" is short for "engine.") The Northeastern United States changed dramatically as home spinning and weaving, and small-scale carding and fulling mills gave way to large integrated mills where a new kind of worker used new machines to produce cotton cloth on a scale previously unimagined. Smaller mills remained, and would remain for the rest of the century, but for the most part, only in areas of low population far from the commercial markets of the Northeast. This account of the American Industrial Revolution is different from the usual one found in textbooks. Many textbooks claim, for example, that the Industrial Revolution did not occur until the end of the 19th century, with the coming of massive steel mills and the end of small-scale production. And they omit the mechanization and reorganization of Southern plantations, on the grounds that agricultural production is not part of the history of industry. While this traditional story is not wrong, it leaves out an important part of the story. It also leaves out many people who participated in and whose lives were changed by industrialization. To focus on factories, which have traditionally employed native white and immigrant workers, and from which African Americans were kept by racial prejudice, leaves out a large group whose story is a key element of American history. Slaves produced the cotton that made possible Northern factories, a piece of history often slighted in favor of stories about those factories. In this curriculum we have widened our point of view to include Southern cotton production as part of textile history. So slavery, and later sharecropping, becomes an important part of the story of Northern textile mills; African Americans become part of the history of technology; and technology becomes part of African American history. 
Such an inclusionary view should help students of color imagine themselves as people who, like their ancestors, use and control technology.

[Image: William Aiken Walker, The Sunny South, 1881. Photo courtesy of Robert M. Hicklin Jr., Inc., Spartanburg, SC.]

It is right to start the story of the industrialization of the textile industry in the South, because that is where the story of cotton starts. Southern plantations underwent an industrial revolution of a sort: one of the key new technologies of the textile revolution, the cotton gin, made possible a new, much larger scale of production, and that increased scale demanded new organization.

Before the American Revolution, tobacco, rice, and indigo were the major crops produced for market in the South. Cotton was not produced for market because it was so hard to remove the sticky seeds from inside each cotton boll; it could not be done fast enough to make cotton profitable. (The only exception was long-staple cotton, which could only be grown on the seacoast.) People around the world had used roller gins to speed the cleaning of cotton since the first millennium B.C. The saw gin, patented in 1793, made processing cotton even easier, faster, and cheaper.

[Image: Eli Whitney's cotton gin, demonstration model, 1973. Courtesy of the National Museum of American History.]

Although it was based on an ancient technology, the introduction of the saw gin at the end of the 18th century changed the nature of American cotton cultivation. Developed just as the worldwide demand for raw cotton was skyrocketing because of the expansion of textile mills in Britain and the United States, the machine removed the principal bottleneck to cotton production. Even the early machines allowed one person to clean the seeds from fifty pounds of green-seed cotton in one day. Soon cotton became the most important market crop in the South. Production went from 3,000 bales in 1790 to 1 million bales in 1835.

With the opportunity to make a good profit from cotton came dramatic changes in Southern agriculture: increased size of plantations and, to work them, increased numbers of slaves. African slaves had been used in Southern agriculture almost from the beginning of European settlement. Tobacco planters had used slaves since the 17th century; slaves were critical to the rice cultivation that developed in the 18th century. Plantations, large farms using slave labor to grow a single crop, were created to make a profit for the owners before technology made cotton a cash crop and before slavery was the only labor system. But plantations were adapted to produce cotton in the 19th century, and by then many of them employed only slaves. Planters became wealthy by exploiting the labor of Africans in America, men and women who could not choose another way of life.

The growth of cotton as a cash crop in the 19th century meant the growth of slavery throughout the South. Slavery, which had been in decline, became an integral part of the new agriculture. It might seem odd that a new labor-saving machine like the cotton gin meant an increase in the size of the labor force. But the lower price meant an enormous increase in cotton production, and even with the cotton gin, cotton production still required an enormous amount of labor. Cotton demanded large plantations; it made money only when plantation owners could put more workers in the field. From an investor's point of view, slaves were a capital investment, comparable to the machinery a northern factory owner might purchase. (The student essay "Why a Plantation?" addresses the issues of plantation size and management.)
The cotton gin was one of those inventions that brought about an enormous change in the way people lived and worked, and even in their politics, and so it is appropriate that much of the southern section of this Unit focuses on the gin. The exercise on "Inventing the Cotton Gin" raises issues about the nature of invention. The exercise on fixing a gin raises questions about technological skills. Both of these include a discussion of race and technology.

[Image: Bleaching, from The Progress of Cotton, 1835-40. Courtesy of Slater Mill Historic Site, Pawtucket, RI.]

The invention of the cotton gin was only one of the technological innovations that propelled the growth of cotton as a cash crop. The other important innovation was the development of new machines for the manufacture of cloth, which lowered the price and increased the speed of production. These technological developments were crucial in the growth of cotton as a commodity crop, as was a commercial and market revolution that created a growing demand for cotton. The new technology and new demand meant changes in northern industry every bit as extensive as those in the South.

As discussed in Unit 1, textile manufacturing in the 18th century occurred mostly in homes. Farm women worked hard to turn raw wool into finished cloth, first picking and breaking it, then spinning, and then weaving. For some farm women this work was a full-time winter job; for others, a job done between other chores. Toward the end of the century, small water-powered carding and fulling mills became increasingly common, and so some of this work was industrialized. Women might take the wool to the local carding mill for cleaning and carding, then take it home for spinning. They might do the weaving themselves, or perhaps take the yarn to a professional weaver. Finally, they would take the cloth to a mill to be fulled and finished. This division of labor brought some industrial work into the home, but, for the most part, women worked alone, in control of the details of their own time and pace. They used machinery, but very simple machinery. Hand-powered, individually controlled, and highly dependent on the skill of the user, home textile production made use of mechanisms that were more tools than machines.

[Image: Samuel Slater's 1793 Mill. Courtesy of Slater Mill Historic Site, Pawtucket, RI.]

Mills like that of Almy, Brown, and Slater were soon found throughout southern New England, especially after the 1809 Embargo on shipping with England led to an enormous increase in American textile production. (In 1815, there were almost 170 mills just in the area of Providence, RI.) These mills changed the lives of their thousands of workers, who had to learn a new time discipline, and of their workers' families. They also changed the lives of those who bought the cloth the workers produced: cloth became cheaper and became part of a system of commercial exchange. And they changed the lives of those who lived nearby: the dams required to provide water power to the mills flooded farmers' fields and stopped fish from their annual migration. Industry did not coexist easily with traditional ways of life. (These early mills are discussed in the student essay "Why a Factory?", the game "Industrial Life," the exercises on water power and factory ecology, and the video.)

A different sort of textile industry developed in the cities, which had always been centers of manufacturing.
Philadelphia, which became the largest producer of textiles, pioneered a style of production quite different from that found in New England. Philadelphia's textile industry was quite diverse. There were a few large mills that used water or steam power to drive machinery, but, for the most part, Philadelphia's textiles were produced in small shops or home-based operations. The workers who produced them, many of them British immigrants, tended to be highly skilled, able to undertake a variety of jobs. The management too was highly skilled, not only knowledgeable about the machines and processes under its control, but skilled in rapid shifts of resources, product, and market. The products tended to be of high quality. The machines reflected what historian Philip Scranton has called "productive flexibility," their complexity and ease of adjustment allowing a broad range of products.

[Image: A View of Lowell, 1840. Courtesy of The Library of Congress.]

It is worthwhile to look more deeply into the story of Lowell, for it was, at its start, a unique industrial city--a city that raises key questions about the nature of American industrialization. Designed as an explicitly American style of industry, the mills at Lowell were unique in their utopian aims, their workforce, their managerial style, and their machines. Not that the Lowell mills had no predecessors--they had a heritage that stretched back to the English mills of Arkwright and Robert Owen, to the many industrial experiments that were part of the American attempts to win economic independence from England, and to the small textile mills of New England--but the Lowell mills combined all of these, and at a scale so much larger as to be something new in industrial history.

In developing this experimental city, the developers of Lowell broke new ground. They had to solve a new set of problems, and they solved them differently than anyone else had. The problem was simple: how to make money in manufacturing in a nation unused to manufacturing. That is, in a country without skilled machine makers, without skilled workers--for that matter, without many workers available at all--without a great deal of capital or a tradition of manufacturing--indeed, with a strong philosophical bent against manufacturing, and against managerial prerogative. Many of the beliefs, ideas, skills, and machines that modern industrialists take for granted, or assume they can purchase, were missing in Lowell. So too were most of the economic assets. The solutions found by the owners and managers of Lowell--solutions managerial, technological, social, cultural, and political in nature--took a large step toward the modern industrial style. Lowell was to become one of the places where American manufacturing and managerial traditions were born. By 1840, more than 50,000 people worked in the cotton textile industry in New England.

Perhaps the most important tradition to which the founders of Lowell contributed was that of managerial expertise, separate from engineering or technological or financial expertise. Managers at Lowell were hired for their skills as managers, not their technical abilities or their financial prowess. The textile mills of Lowell drew on traditions of authority that existed elsewhere--in schools, in prisons, on ships--and reshaped them to the needs of industry. Managers at Lowell brought a new rationality of production, a new precision of understanding of processes, and also a new way of looking at the men, women, machines, and materials of industry.
This was the beginning of the abstraction from reality into information that is essential to modern economic life, the origin of a new belief in the efficacy of numbers as a means of control. (For some of the technical decisions Lowell mill managers had to make, see the "What's in a Factory?" exercise.)

[Image: The Three Cassidy Sisters, 1877. Courtesy of The Pollard Memorial Library, Lowell, MA.]

The social, cultural, and technological innovations of the New England corporations were important elements in the industrialization of the United States. This was not so much because these mills set the style for other industries; the Lowell mills, with their millgirls and "moral" boardinghouses, were not widely copied outside of northern New England. Nor did they last long; by 1860 the so-called "golden age" of Lowell was over, and the utopian dreams of its founders had disappeared as Lowell became simply another textile city. But, as the first large-scale industrial experiments, the mills at Lowell brought with them much that would be found in later factories--not only large-scale production, but also profound opposition to industrialization.

The opposition was in part theoretical and in part practical. The theoretical aspects were based on philosophical beliefs about the nature of American democracy. America would only stay a republic, Thomas Jefferson and others of his period believed, as long as it stayed agricultural. "God forbid," wrote Zachariah Allen, an American mill owner, "that there may arise a counterpart of Manchester [England] in the New World." Thomas Mann put the same sentiments in even stronger terms. A sometime mill worker and teacher, he made up in feeling what he lacked in poetic ability in his Portrait of a Factory Village:

For liberty our fathers fought
Which with their blood, they dearly bought,
The Factory system sets at naught.
A slave at morn, a slave at eve,
It doth my inmost feelings grieve;
The blood runs chilly from my heart,
To see fair Liberty depart;
And leave the wretches in their chains,
To feed a vampyre from their veins.
Great Britain's curse is now our own;
Enough to damn a King and Throne.

Another mill worker was less poetic and more straightforward. Jabez Hollingworth, an English immigrant who had worked in several American mills, combined in one sentence the two comparisons that came to a textile mill operative's mind when management was oppressive: "Management breeds lords and Aristocrats, poor men and slaves." Some of the Lowell millgirls felt the same way. The rhetoric of their early strikes showed the influence of radical democratic ideas: the millgirls called themselves "daughters of freemen" and feared that the "oppressing hand of avarice would enslave us."

The mention of slavery brings us back to the beginning of this essay. Clearly, the use of the word "slavery" by radical Northern workers was more rhetorical than real; industrial work, as hard and unpleasant as it might have been, was not slavery. Workers were not property, as slaves were. They were, in principle anyway, always free to leave. But some comparisons are useful. Both factory workers and slaves were treated, to some extent, as cogs in a larger machine. Northern factory owners and Southern plantation owners alike used similar imagery in describing their operations as machines.
Andrew Ure, a British scientist, wrote in his Philosophy of Manufactures (1835): "The main difficulty [of inventing the factory was] in training human beings to renounce their desultory habits of work, and to identify themselves with the unvarying regularity of the complex automaton." A plantation owner echoed Ure when he wrote a few years later: "A plantation might be considered as a piece of machinery, to operate successfully, all of its parts should be uniform and exact, and the impelling force regular and steady." Both systems of labor demanded strict adherence to the rules set down by those in charge.

New systems of rules, along with new machines and new products, are the legacy of the Industrial Revolution of the early 19th century, both for the second Industrial Revolution of the later part of the century and, to a large degree, for today. We have come to accept the necessity of technology, products, and hierarchy. The exercises in this unit suggest that they came about not out of necessity, but as the product of specific historical circumstances.

In her 1931 history of the early New England textile industry, historian Caroline Ware asked: "Could political democracy encompass industrial autocracy, could it harbor a working class and a moneyed power and survive? . . . These problems which New England faced before 1860 have confronted other American communities as one by one they have experienced the process of industrialization. Their solution still lies in the future." The path to that solution might be the key question for students to take from this Unit. Why a plantation? Why a factory? What were the relations of labor and management, of machines and people? Why were they that way? And how did the factories and machines, the owners and the workers of the factories, change American culture?
http://invention.smithsonian.org/centerpieces/whole_cloth/u2ei/u2materials/eiTessay.html